
Why AI Stalls After the Pilot: Drexel Hill, PA Workshop Insights + Industry Data

  • Writer: JR
  • Feb 5
  • 9 min read
What Leaders Want, What’s Blocking Scale

Executive Summary


  • Leaders in the Drexel Hill, PA workshop sample mostly want AI to drive revenue growth (48%), then cost reduction (29%) and customer experience (24%).

  • The biggest blocker is not tools. It is talent and skills (52%), with smaller but persistent friction from data quality, compliance, tech stack constraints, budget, and leadership buy-in (each 10%).

  • Most organizations are still operating with pilot gravity: 57% reported zero pilots reaching production in the last 12 months, and 67% have no KPIs tied to AI.

  • Ownership is uneven: 43% have a functional leader accountable and 19% name the CEO/GM; the remaining 38% split between “no clear owner” and “a working group with no single owner.” That mix predicts whether AI becomes a capability or a side project.

  • Across industries, adoption is rising, but the differentiator is governed execution: clear owners, real data access, enforceable safety rules, and measurable outcomes.

  • The workshop feedback was extremely strong (Quality of Content 5.0/5, Delivery 4.86/5, Applicability 4.88/5, 100% would recommend), and the main “ask” was more visibility into how the AI agents are built, maintained, and governed.

  • The path forward is straightforward: appoint an AI Owner, pick 1–2 use cases with KPIs, establish basic guardrails, and run short cycles that ship into production, not just demos.


What the Survey Reveals About AI Readiness


This section is based on 21 survey responses from the “Outsell, Outgrow, and Outsmart with AI” workshop. The goal is not to generalize to every company. It is to treat the responses as a clear snapshot of what leaders are trying to do right now, and what is getting in their way.


Outcomes leaders want


Across the group, AI is being judged on business outcomes, not novelty:

  • Revenue growth (48%) leads, which usually shows up as faster sales cycles, better targeting, more consistent follow-up, and improved conversion performance.

  • Cost reduction (29%) is close behind, often meaning fewer hours spent on repetitive internal work, fewer rework loops, and less operational drag.

  • Customer experience (24%) comes through as responsiveness, consistency, and less friction in service delivery.


The takeaway: leaders are not asking for “AI strategy decks.” They want AI to change the throughput of the business.


What’s blocking progress


The top blocker is clear:

  • Talent/skills (52%) is the dominant constraint. It includes technical skills, but also operational fluency: how to scope a use case, define success, govern risk, and ship something without derailing the business.

  • A second tier of blockers all tie at 10% each: leadership buy-in, regulation/compliance, tech stack/tools, data quality, and budget.


This distribution matters. When the skills gap is the top blocker, training alone is not enough. You need a repeatable operating model and practical delivery support that turns learning into shipped work.


The ownership gap and why it matters


When asked who is responsible for AI and automation outcomes:

  • 43% report a functional leader (Sales/Ops/IT) is accountable.

  • 19% say CEO/GM is named and accountable.

  • 19% say there is no clear owner.

  • 19% say there is a working group, but no single owner.


This split shows why many AI efforts stall. Working groups can generate momentum, but they rarely resolve trade-offs. A named owner can.


The rest of the readiness signals in the data reinforce this:

  • Speed to change: 67% would need a month or a quarter to make a production change after a 15% KPI drop; only 10% could move the same day.

  • Data readiness: 62% report scattered or siloed exports, and only 5% report having a clean, labeled dataset with access controls ready to go.

  • Safety rules: only 10% report strong controls (blocking sensitive data and logging/review). The rest are split across partial enforcement (43%), informal habits (24%), and no protections yet (24%).

  • Shipping rate: 57% shipped 0 pilots to production in the last 12 months.

  • Measurement: 67% have no KPIs tied to AI today.


And yet, confidence is not low. The average confidence score is 7/10 for being competitive by 2027, and 90% want to learn more about developing an internal AI leader.

That combination is a signal: leaders believe they can get there, but the system around them is not built to convert intent into shipped capability.


Industry Intelligence: How 5 Sectors Are Responding to AI Right Now


Below are five sector mini-briefs drawn from industries represented in the survey (grouped by professional category), paired with current external data so the survey results can be read against what is actually happening in the market.


1) Financial services and insurance


What’s changing: Financial services has moved from experimentation to operational integration, especially in customer service, fraud, compliance, and internal productivity.


Where AI is being applied

  • Contact center automation with supervised workflows

  • Fraud detection and transaction monitoring

  • Document-heavy work: credit memos, policy summaries, regulatory reporting

  • Claims triage and underwriting support in insurance


Common pitfalls

  • “Pilot islands” that never connect to core systems

  • Weak governance that creates compliance and model risk exposure

  • Underestimating data permissions and auditability needs


Current data points

  • IBM reported that in 2024 only 8% of banks were developing genAI systematically, while 78% were taking a tactical approach.

  • McKinsey estimated genAI’s annual value potential in banking at $200B–$340B (about 9–15% of operating profits) if use cases are implemented at scale.

  • Deloitte reported 76% of insurers surveyed had implemented genAI in at least one business function (with variation by line of business).

  • Gartner reported 59% of finance leaders were using AI in their finance function in 2025.


How this relates to the survey: If your blocker is “skills,” finance shows the real requirement: not just prompting skill, but controls, audit trails, and clear ownership so AI outputs can safely influence decisions.


2) Life sciences and health products


What’s changing: Life sciences is adopting AI both as an innovation accelerator and as an operations multiplier, but credibility and regulatory expectations are rising alongside adoption.


Where AI is being applied

  • Drug discovery and candidate screening

  • Clinical operations support (site selection, document workflows, monitoring)

  • Quality, safety, and regulatory documentation assistance

  • Commercial operations: medical information and field enablement (with guardrails)


Common pitfalls

  • Treating AI outputs as “answers” instead of “evidence to review”

  • Poor model credibility documentation for regulated use

  • Limited data access or permissions slowing pilots


Current data points

  • Deloitte’s 2026 Life Sciences Executive Outlook notes strong momentum: for example, 53% of medtech executives cited investments in AI-enabled platforms as a key growth driver, and 82% pointed to health IT and AI-enhanced workflow solutions as near-term revenue drivers.

  • McKinsey estimates genAI could unlock $60B–$110B per year for pharmaceutical and medical products industries.

  • The FDA released guidance outlining a risk-based credibility assessment framework for AI used to support regulatory decision-making for drugs and biologics.

  • A peer-reviewed analysis found AI-discovered molecules showed an 80–90% Phase I success rate in its reviewed sample, higher than historic averages (interpret cautiously, but the direction is notable).


How this relates to the survey: This is the clearest example of why “rules for safe AI use” cannot stay informal. In regulated environments, governance is not overhead. It is the pathway to adoption.


3) Retail and consumer products


What’s changing: Retail and consumer products are pushing AI into supply chain and merchandising because small efficiency gains compound quickly across volume.


Where AI is being applied

  • Demand forecasting and inventory allocation

  • Supply chain visibility and exception management

  • Product content, catalog enrichment, and customer support

  • Marketing operations and creative throughput (with brand controls)


Common pitfalls

  • Fragmented data across commerce, inventory, and customer systems

  • ROI assumptions that ignore change management and process redesign

  • Over-automation of customer-facing content without QA


Current data points

  • Deloitte reported 30% of retailers surveyed use AI for supply chain visibility, expected to rise to 41% within the next year; 59% expected positive ROI from AI-driven supply chain initiatives within 12 months.

  • Deloitte also reported 25% of retail and 15% of consumer products organizations identify themselves as leading in genAI.


How this relates to the survey: Retail’s lesson matches the workshop data: “scattered exports” is not a minor inconvenience. It is the difference between scalable pilots and stalled initiatives.


4) Legal services


What’s changing: Legal work is text- and research-heavy, which makes it naturally compatible with genAI. But the sector is also acutely exposed to accuracy, confidentiality, and professional responsibility risk.


Where AI is being applied

  • Document review, summarization, and research assistance

  • Drafting memos, briefs, and correspondence with human review

  • Internal knowledge management and matter intake support


Common pitfalls

  • Using consumer tools without confidentiality protections

  • Weak training and inconsistent policy enforcement

  • Measuring “time saved” without tracking quality, risk, and client impact


Current data points

  • Thomson Reuters Institute reported 26% of legal professionals were already using genAI in early 2025 (up from 14% the prior year), with 72% of users turning to it at least weekly and over 40% daily.

  • The same summary notes only 20% were measuring ROI, and many lacked policy and training foundations.


How this relates to the survey: Legal mirrors the workshop’s governance signal: partial enforcement and informal habits are common early, but they become the ceiling if not addressed.


5) Construction and engineering


What’s changing: Construction is adopting AI as part of a broader digital push: BIM, analytics, cloud management, and connected project delivery. The pressure is coming from labor shortages, cost volatility, and schedule risk.


Where AI is being applied

  • Project risk prediction and schedule variance detection

  • Document control and submittal workflows

  • Safety reporting and incident analysis

  • Estimating support and procurement intelligence


Common pitfalls

  • Too many disconnected data environments

  • Adoption without training, which creates shadow processes

  • AI used for “insights” that never reach operational decisions


Current data points

  • Deloitte Access Economics (commissioned by Autodesk) reported 37% of surveyed construction businesses were using AI and machine learning, up from 26% in 2023.

  • The report found the average firm had adopted 6.2 technologies (up from 5.3), and the median number of data environments used was 11, creating duplication and training cost.

  • Leaders estimated a more uniform data environment could save about 10.5 hours per week, and each additional technology adopted was associated with a 1.14% increase in expected revenue (association, not guaranteed outcome).


How this relates to the survey: Construction shows a practical translation of the workshop’s “data readiness” problem: the issue is not “having data,” it is having usable, governed data flows.


What High-Performing Organizations Are Doing Differently


Across sectors, the gap between “AI activity” and “AI advantage” is an operating model gap. McKinsey’s work on AI high performers consistently points to management practices that separate experimentation from value capture, including strategy, talent, operating model, technology, data, and adoption/scaling.


In plain terms, high performers do a few unsexy things extremely well:

  1. They name an accountable owner with authority to prioritize, approve, and stop work.

  2. They treat data access like product infrastructure, with permissions and traceability designed up front.

  3. They build governance into the workflow, not as a separate committee that reviews after the fact.

  4. They measure outcomes, then keep or kill use cases quickly.

  5. They ship in small cycles, which forces integration with real systems and reveals constraints early.

  6. They plan for drop-off, because many genAI projects get abandoned after proof-of-concept when data, risk, cost, or value clarity is missing.


This maps directly to the survey signals: ownership inconsistency, scattered data, partial safety rules, and pilots not making it into production.


Recommendations Informed by the Workshop Data


Below are recommendations tied directly to the survey’s dominant themes (outcomes, blockers, ownership). I’ve labeled quick wins vs deeper changes, but both matter.


Quick wins (do these in the next 30 days)


  1. Appoint an AI Owner and publish a one-page charter. Tie this to your top outcome (revenue, cost, or CX). This directly addresses the “no clear owner / working group” pattern.

  2. Pick one use case per outcome and define a KPI that a CFO would accept. Given that 67% report no AI KPIs today, start with one measurable target and a weekly review cadence.

  3. Run a “data readiness inventory” for the selected use case only. With 62% reporting scattered exports, stop trying to fix enterprise data all at once. Identify the minimum viable dataset, its owner, and access rules (a sketch of this inventory follows this list).

  4. Establish a basic AI safety policy that is enforceable, not aspirational. The survey shows most teams are in partial or informal enforcement. Define what data is prohibited, approved tools, review expectations, and logging requirements (a minimal policy check is also sketched below).
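
To make quick win 3 concrete, here is a minimal sketch of a per-use-case data readiness inventory, written in Python. The fields, the example use case, and every name in it are illustrative assumptions, not a standard; the point is that one dataset, one owner, and one access rule get written down before anyone builds.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    name: str                  # what the dataset is
    owner: str                 # the single person accountable for it
    source_system: str         # where the data actually lives today
    access_rule: str           # who may use it, and how access is granted
    contains_sensitive: bool   # feeds the safety policy check below
    known_gaps: list[str] = field(default_factory=list)

# Hypothetical inventory for one "faster sales follow-up" use case.
inventory = [
    DatasetRecord(
        name="CRM contact and activity history",
        owner="Sales Ops lead",
        source_system="CRM weekly CSV export",
        access_rule="Sales team read-only; exports approved by owner",
        contains_sensitive=True,
        known_gaps=["no lead-quality labels", "duplicates across regions"],
    ),
]

for rec in inventory:
    flag = "needs access review" if rec.contains_sensitive else "ready"
    print(f"{rec.name} | owner: {rec.owner} | {flag} | gaps: {', '.join(rec.known_gaps)}")
```

And for quick win 4, a minimal sketch of what “enforceable, not aspirational” can look like. The approved tool names and the prohibited-data patterns are assumptions for illustration; the principle is that every AI request passes through one check that blocks prohibited data and logs the decision.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_policy")

# Assumption: your organization's approved-tool list lives here.
APPROVED_TOOLS = {"internal-assistant", "vendor-copilot"}

# Illustrative prohibited-data patterns; extend for your own data types.
PROHIBITED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-shaped strings
    re.compile(r"\b\d{13,16}\b"),          # card-number-shaped strings
]

def check_request(tool: str, prompt: str) -> bool:
    """Return True if the request may proceed; log every decision for review."""
    if tool not in APPROVED_TOOLS:
        log.warning("BLOCKED unapproved tool: %s", tool)
        return False
    for pattern in PROHIBITED_PATTERNS:
        if pattern.search(prompt):
            log.warning("BLOCKED prohibited data (pattern %s)", pattern.pattern)
            return False
    log.info("ALLOWED tool=%s prompt_chars=%d", tool, len(prompt))
    return True

# Route every outbound AI call through the same gate.
check_request("internal-assistant", "Summarize last week's support tickets")
```

Once every call runs through one gate like this, “partial enforcement” shows up in the logs instead of hiding in habits.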


Deeper changes (60–180 days)


  1. Build a repeatable “pilot-to-production” pathway. Since 57% shipped zero pilots to production, you need a standard: intake, risk review, KPI definition, data access, rollout plan, and post-launch monitoring (see the portfolio sketch after this list).

  2. Create an internal capability plan to close the skills blocker. Because the skills gap is the top blocker (52%), do not rely on “send people to training.” Define roles (AI Owner, domain lead, data steward, security/risk), and a practical curriculum linked to current use cases.

  3. Adopt governance that matches your risk level and industry. In regulated or high-liability contexts (life sciences, insurance, legal), align governance to external expectations like model credibility and auditability.

  4. Design workflows where AI produces drafts, options, or summaries, not final decisions. This is how you get speed without sacrificing control. It also makes it easier to train teams and reduce risk.

  5. Treat AI as a product portfolio, not a collection of experiments. Maintain a simple portfolio view: use case, owner, KPI, status, risk tier, and next milestone (the sketch after this list shows one way to keep that view honest).

  6. Develop the “internal AI Leader” role with real authority. With 90% expressing interest, convert that interest into a role definition: accountability, decision rights, and a mandate to standardize practices.
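
The sketch below, again in Python with illustrative names and numbers, ties deeper changes 1 and 5 together: the pilot-to-production pathway expressed as ordered stages, and a one-row-per-use-case portfolio carrying owner, KPI, status, risk tier, and next milestone.

```python
from dataclasses import dataclass

# Deeper change 1: the standard pathway every use case must walk.
STAGES = ["intake", "risk_review", "kpi_defined", "data_access",
          "rollout_plan", "in_production", "monitoring"]

# Deeper change 5: one portfolio row per use case.
@dataclass
class UseCase:
    name: str
    owner: str
    kpi: str            # a target a CFO would accept
    stage: str          # must be one of STAGES
    risk_tier: str      # e.g. "low", "medium", "high"
    next_milestone: str

# Hypothetical portfolio for a weekly review.
portfolio = [
    UseCase("Lead follow-up drafts", "Sales Ops lead",
            "+10% response rate in 90 days", "kpi_defined", "low",
            "confirm CRM data access"),
    UseCase("Claims intake triage", "Ops lead",
            "-20% handling time in two quarters", "risk_review", "high",
            "compliance sign-off"),
]

# Surface anything that has not yet reached production.
for uc in portfolio:
    step = STAGES.index(uc.stage) + 1
    print(f"{uc.name} [{uc.risk_tier} risk] owner={uc.owner} | "
          f"stage {step}/{len(STAGES)} ({uc.stage}) | next: {uc.next_milestone}")
```

A weekly pass over a list like this shows which use case is stuck at which gate, which is the “57% shipped zero pilots” problem made visible.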


Ready to build internal AI capability?


If your team wants hands-on support building the AI Owner charter, governance framework, and pilots that actually ship to production, the GPS Summit transforms high-potential employees into AI Systems Generalists in three days.


Applied frameworks. Operational intelligence. Measurable ROI.



Revenue growth requires internal leaders who execute. Will you build that capability?
