
AI Is Not the Bottleneck. Ownership Is: What Vancouver Leaders Told Us (and What the Data Confirms)

  • Writer: JR
  • Feb 13
  • 8 min read

Executive Summary

  • In this Vancouver workshop sample (n=18), leaders want AI most for cost reduction (44%), then revenue growth (22%) and customer experience (22%), with risk/compliance (11%) also present.

  • The biggest blockers are talent/skills (50%) and leadership buy-in (28%), not tool selection.

  • The defining readiness gap is accountability: 67% report no clear owner for AI and automation outcomes.

  • Execution maturity is early: 67% report zero pilots making it into production in the last 12 months; 78% have no AI KPIs tied to initiatives.

  • Governance is uneven: 56% report no protections for safe AI use, which increases risk and slows scaling once pilots touch real workflows.

  • Industry signals across construction, aviation, retail, hospitality, and safety/security show the same lesson: organizations getting value are moving from “pilots” to operating model change (ownership, data foundations, governance, measurement).


What the Survey Reveals About AI Readiness

Outcomes leaders want


The strongest theme is practical value, not experimentation. Vancouver participants most often chose cost reduction (44%) as the primary AI outcome, followed by revenue growth (22%) and customer experience (22%). A smaller group prioritized risk and compliance (11%).


This combination matters because it implies AI is being evaluated as a business lever across multiple functions. Cost reduction and customer experience typically live in frontline workflows (support, sales, operations). Revenue growth often requires cross-functional alignment (marketing, sales, pricing, product). Risk/compliance demands governance before scale. When outcomes span functions, unclear ownership becomes a structural problem, not a project management inconvenience.


What’s blocking progress


The blockers are consistent with a market that has moved past curiosity and into operational friction:

  • Talent/skills (50%): Leaders feel the internal capability gap.

  • Leadership buy-in (28%): Competing priorities, uncertainty about ROI, and risk concerns slow commitment.

  • Regulation/compliance (11%), plus smaller mentions of budget (6%) and tech stack/tools (6%).


This maps to broader Canadian signals: AI adoption is growing, but many businesses still report no plans or uncertainty, and adoption varies sharply by sector.


The ownership gap and why it matters


The loudest signal in this dataset: 67% reported no clear owner accountable for AI and automation outcomes. Only 11% reported a CEO/GM-level accountable owner, while 17% pointed to a functional leader and 6% to a working group without a single accountable owner.


That ownership gap shows up downstream in two places that determine whether AI becomes real:

  • Production delivery: 67% report zero pilots reaching production in the last 12 months.

  • Measurement discipline: 78% report no KPIs tied to AI, and only 6% have at least one use case with a KPI, named owner, and review cadence.


It also shows up in risk posture. 56% report no safe-use protections today. In open comments, concerns often clustered around “how do we use AI without leaking sensitive information,” and “how do we know if it’s right.” Those are governance and workflow questions, not model questions.


Finally, confidence is cautious. Average competitiveness confidence by 2027 is 5.3/10 (with responses spread from 1 to 10). In a small sample, that kind of distribution usually means leaders are watching the market move quickly but aren’t yet sure their internal execution system can keep up.


Industry Intelligence: How 5 Sectors Are Responding to AI Right Now


Below are five sector mini-briefs grounded in (1) what appeared in the Vancouver survey industries and (2) credible external research.


Construction and building materials


What’s changing: Construction is under pressure to build faster with tighter margins and ongoing labor constraints. The shift is from “digital tools for coordination” to AI-supported execution: reducing rework, compressing schedules, and managing risk.


Where AI is being applied (realistic use cases)

  • Document intelligence for RFIs, submittals, change orders

  • Schedule risk prediction and constraint management

  • Safety guidance, inspection support, quality tracking

  • Estimating support and bid/no-bid decisioning


Common pitfalls

  • Data fragmentation across many systems

  • Pilots that don’t survive handoff to field workflows

  • Governance gaps when AI touches contract-adjacent documents


Key stats and sources

  • McKinsey estimates construction productivity grew only 0.4% annually from 2000–2022, with a decline from 2020–2022, which explains why “rework reduction” and “cycle time” use cases attract attention.

  • Deloitte/Autodesk reported 37% of surveyed construction businesses using AI/ML (APAC study), and highlighted the burden of multiple data environments, which directly limits scalable AI.

  • In Canada, overall AI use among businesses rose to 12.2% in Q2 2025, suggesting momentum, but sector-level adoption remains uneven.


Aviation and commercial operations


What’s changing: Airlines are treating AI as a resilience tool: preventing disruption, scaling operations to demand swings, and hardening cybersecurity. This is less about “innovation theatre” and more about reliability and recovery.


Where AI is being applied (realistic use cases)

  • Disruption prediction and irregular operations decision support

  • Customer communications and agent assist

  • Cybersecurity monitoring and response

  • Data platforms that enable cross-stakeholder coordination


Common pitfalls

  • Siloed operational data across partners

  • Heavy governance requirements that slow deployment

  • AI that improves response speed without improving root-cause operations


Key stats and sources

  • SITA’s North American airline IT insights report shows 100% of surveyed airlines implementing AI for cybersecurity, with North America ahead in scaling operations and disruption prediction capabilities.

  • A SITA press release notes that AI is a top investment priority for North American airlines, with cybersecurity and AI elevated in IT investment planning.

  • SITA reporting also highlights the sector’s focus on data platforms as foundations for AI, reinforcing that “data-first” is not optional in high-reliability industries.


Retail


What’s changing: Retail AI is moving from personalization experiments to supply chain, availability, and margin protection. The competitive bar is rising because AI is influencing both cost structure and how customers discover products.


Where AI is being applied (realistic use cases)

  • Demand forecasting and inventory positioning

  • Supply chain visibility and exception management

  • Pricing and promotion optimization

  • Customer service automation and content operations


Common pitfalls

  • Poor product data quality and inconsistent taxonomy

  • Measuring activity (outputs) instead of business impact (margin, conversion, turns)

  • Point solutions that don’t integrate into merchandising workflows


Key stats and sources

  • Deloitte reports 30% of retailers surveyed use AI for supply chain visibility, expected to rise to 41% within a year, and 59% anticipate positive ROI from AI-driven supply chain initiatives within 12 months.

  • McKinsey estimates gen AI could unlock $240B–$390B in annual value for retail, largely through productivity and customer-facing impact, but only when integrated into operating routines.

  • In Canada, expected AI use over the next 12 months remains limited overall, reinforcing that competitive gaps may widen between fast executors and cautious adopters.


Hospitality and restaurants


What’s changing: Hospitality shows a split reality: large operators are adopting AI inside core workflows, while official adoption metrics remain low across the broader sector. Customer expectations are rising regardless.


Where AI is being applied (realistic use cases)

  • Customer experience enhancements (service personalization, support)

  • Inventory management and waste reduction

  • Labor scheduling and forecasting

  • Loyalty program optimization


Common pitfalls

  • Fragmented tooling without governance

  • Underinvestment in frontline enablement

  • AI deployed for speed without service quality controls


Key stats and sources

  • Statistics Canada reported AI use was lowest in accommodation and food services (1.5%) in Q2 2025, underscoring the sector’s slower baseline adoption.

  • Deloitte’s restaurant survey reporting indicates 63% of surveyed respondents use AI daily for customer experience enhancements and 55% use it daily for inventory management (with additional groups piloting).

  • Deloitte also reports broad intent to increase AI investment among restaurant executives, signaling momentum even where baseline maturity varies.


Safety, security services, and mission-critical communications


What’s changing: Security-oriented sectors are facing two simultaneous changes: AI is improving defense and detection, while genAI is also accelerating adversarial tactics. That raises the value of strong governance, auditability, and clear safe-use rules.


Where AI is being applied (realistic use cases)

  • Threat detection and incident response support

  • Anomaly detection in logs and communications

  • Policy enforcement and access control monitoring

  • Physical security analytics (emerging, uneven adoption)


Common pitfalls

  • Deploying AI tools without assessing their security

  • Data leakage through poorly governed AI usage

  • No clear accountability for risk decisions and exceptions


Key stats and sources

  • World Economic Forum reports that 66% of organizations expect AI to have the most significant impact on cybersecurity, yet only 37% have processes to assess the security of AI tools before deployment (2025 outlook).

  • WEF also highlights elevated cyber risk and genAI-enabled adversarial concern, reflecting why governance maturity is now a competitive requirement, not a compliance checkbox.

  • IBM’s Cost of a Data Breach report shows organizations making extensive use of security AI and automation had lower average breach costs than those with little or none ($3.84M vs $5.72M, a $1.88M saving), strengthening the business case for operationalizing security AI responsibly.


What High-Performing Organizations Are Doing Differently


Across sectors, the most consistent difference is not model choice. It is operating discipline.

  1. A single accountable owner with decision rights

    High performers name an owner for outcomes, prioritization, and trade-offs. Committees may advise, but they don’t substitute for accountability.

  2. Workflow-first design

    They start with a workflow where cost, delay, or risk is visible and measurable. They define handoffs, review steps, and exceptions before deploying automation.

  3. Data foundations treated as a product

    They standardize a small set of “decision datasets” (definitions, access, controls) and reduce manual exports. This is where many pilots fail quietly.

  4. Governance that enables speed

    High performers clarify what’s allowed, what’s restricted, and what requires review. They log usage where needed. Rules are enforced, not aspirational.

  5. Value measurement built in from day one

    They track a few business KPIs (cycle time, cost-to-serve, conversion, uptime, error rate) and review them on a cadence. If it does not move a number, it does not scale.


Recommendations Informed by the Workshop Data


These recommendations map directly to the survey’s themes: outcomes (cost, growth, CX, risk), blockers (skills, buy-in), and the ownership gap.


Quick wins (next 2–4 weeks)


  1. Name an AI Owner for 90 days, with a narrow scope and real authority. This addresses the 67% “no clear owner” gap and prevents diffusion into tool trials.

  2. Create a one-page AI KPI scoreboard tied to outcomes leaders selected. If cost reduction is the priority, choose measures like cost-to-serve, cycle time, and rework. If CX is the priority, choose response time and satisfaction proxies. This directly addresses the 78% with no AI KPIs.

  3. Pick one pilot and write a “pilot-to-production checklist” before building. Make production readiness explicit: data owner, workflow owner, review steps, logging plan, rollback plan, and success threshold. This targets the 67% with no pilots reaching production.

  4. Publish “minimum viable safe-use rules” and enforce them. Given 56% report no protections, start with: restricted data types, approved tools, required human review for customer-facing outputs, and logging expectations for sensitive workflows.


Deeper changes (next 60–120 days)


  1. Build internal capability as a small bench, not a hero. Talent/skills is the top blocker (50%). Identify three roles: AI product owner (workflow + KPI), data steward (definitions + access), and risk lead (policy + exceptions).

  2. Standardize two “decision datasets” that unlock most pilots. The dataset shows 50% rely on scattered exports and 22% have nothing accessible. Choose datasets closest to outcomes (service tickets, production exceptions, sales pipeline, inventory positions).

  3. Solve leadership buy-in by shrinking the decision and increasing visibility. For the 28% citing buy-in, frame the program as “one workflow, one owner, one KPI, one quarter.” Reduce perceived risk and make progress legible.

  4. Treat governance as a speed system, not a brake. Security and compliance concerns are real, especially as genAI risk grows. Define what must be reviewed, what must be logged, and who signs off. This is how organizations scale without fear.

  5. Make vendors prove integration and measurement, not features. Where “tech stack/tools” showed up as a blocker, the root issue is often workflow fit, access patterns, and KPI ownership. Require proof of those before purchase.


A practical note on leadership development


The survey also shows high interest in capability-building: 83% want to learn more about developing an internal AI leader. If your goal is to convert interest into execution, choose a structured path that results in (1) a named owner, (2) a measurable pilot, and (3) a governance baseline.


Implications for Future Workshops and Initiatives


What resonated


  • Practical framing: Participants responded to concrete, outcome-linked thinking, especially around cost and CX.

  • Benchmarking: Confidence and maturity signals suggest leaders value knowing where they stand compared to peers.

  • Safety concerns: Comments reflect a desire for clarity on what is acceptable, especially with sensitive information.


Your Next Move: Build the AI Systems Generalist Who Drives Revenue


The data is clear: talent and skills block AI scale, not tools or budget.


Your organization needs the internal AI leader who moves from pilot to production, from experiment to measurable ROI.



Transform your high-potential employee into an AI Systems Generalist in 3 days. Applied frameworks. Operational intelligence. Force multiplier results.


See how GPS Summit delivers faster time to competency than MIT, Stanford, or Harvard programs.


Revenue growth requires internal capability. Will you build it?
