The Ownership Gap: What Vancouver Leaders Told Us About AI Readiness
- JR

- Feb 12
- 9 min read

Executive Summary
In a Vancouver workshop survey (16 responses), leaders across industries largely want AI to drive growth, but most are still building the basics: ownership, data readiness, and governance discipline.
The dominant desired outcome was revenue growth (56%), while the most common blocker was skills and capability gaps (50%), suggesting ambition is outpacing internal capacity.
Ownership is the clearest fault line: only 38% report a single accountable AI owner at the CEO/GM level, while 56% report either no clear owner or a committee with no single accountable person.
Execution maturity is low: 56% report zero AI/automation pilots reaching production in the past year, and 50% say they aren’t tracking AI performance measures at all.
Governance is uneven: only 6% describe strong, enforced rules for safe AI use; most report informal or inconsistently followed guidelines.
Industry data shows adoption is accelerating, but the advantage is shifting to organizations that can operationalize AI: integrate data, redesign workflows, and measure outcomes, not just experiment.
What the Survey Reveals About AI Readiness
Outcomes leaders want
Even with a small sample, the outcome pattern is consistent: leaders are not asking for novelty. They want business movement.
Growth as the headline: 56% (9/16) chose revenue growth as their primary AI outcome.
Efficiency as the second wave: 19% (3/16) prioritized cost reduction, and 13% (2/16) prioritized automation of business processes.
A smaller set is thinking strategically: a few respondents framed AI as a way to build competitive advantage or avoid future disruption rather than as a direct cost lever.
This is a pragmatic mix. Growth-oriented leaders are implicitly asking for AI that touches pricing, sales execution, customer retention, and speed-to-market, not just back-office savings.
What’s blocking progress
The blockers skew heavily toward capability and execution constraints rather than lack of interest.
Talent and skills gap: 50% (8/16) identified capability as the primary barrier.
Budget pressure: 19% (3/16) cited constrained investment capacity.
Data readiness issues: 13% (2/16) pointed to data quality or availability.
Security/compliance concerns: 13% (2/16) cited risk and safe-use constraints.
Time and bandwidth: 6% (1/16) named lack of time as the main constraint.
This mix is important because it suggests many organizations are past the “should we care?” stage. They are stuck at “how do we build this without breaking things or distracting the team?”
The ownership gap and why it matters
When asked who is responsible for AI and automation outcomes, the responses split into four camps:
CEO/GM accountable owner: 38% (6/16)
Working group, no single accountable owner: 31% (5/16)
No clear owner at all: 25% (4/16)
Functional leader owner: 6% (1/16)
That gap shows up again in measurement discipline. Asked which statement best describes their AI metrics today:
50% (8/16) are not tracking AI KPIs
31% (5/16) track something but no one owns the number
19% (3/16) have clear KPI ownership
In practice, “no single owner” usually means:
pilots that do not graduate into real operations,
unclear prioritization (“everything is a use case”),
risk controls that are either absent or so strict they block progress,
and a measurement vacuum where teams can’t prove value.
The rest of the survey reinforces this: 56% report zero pilots reaching production in the last 12 months. Data readiness is also a work in progress: only 19% say they have clean, structured data ready for a 30-day pilot, while most report raw data that could be structured (44%) or siloed data requiring export and cleanup (38%).
Put simply: many leaders in this group are ready to move, but the operating system underneath them is not.
Industry Intelligence: How 5 Sectors Are Responding to AI Right Now
1) Construction and building trades
What’s changing: Construction is moving from “digital tools as paperwork” toward data-driven operations: field productivity, change order control, schedule risk prediction, and safety management. The winners are building systems that connect project data end to end.
Where AI is being applied (realistic use cases)
Schedule and risk forecasting from project signals
Document intelligence for RFIs, submittals, and change orders
Safety monitoring and quality inspection workflows
Estimating support and bid/no-bid decisioning
Common pitfalls
Fragmented data environments that prevent reliable insights
AI experiments that never connect to jobsite workflows
Over-reliance on tools without changing how decisions get made
Current data points
In a 2025 construction digital adoption study, 37% of surveyed construction businesses reported using AI/ML.
Construction firms reported a median of 11 data environments for project information, a structural barrier to scalable AI.
Respondents estimated they could save 10.5 hours per week if data were managed in one environment, underscoring the productivity value of integration before advanced AI.
Survey “voice of the market” connection: Several Vancouver participants came from construction-adjacent sectors and described the same tension: pressure to move faster, but an operating reality where data and accountability are split across roles and tools.
2) Manufacturing and specialty materials
What’s changing: Manufacturing AI is shifting from isolated “smart factory” pilots to broader adoption across planning, quality, maintenance, and workforce enablement. Generative AI is increasingly used as a layer that helps people access procedures, troubleshoot issues, and move information faster.
Where AI is being applied (realistic use cases)
Predictive maintenance and downtime reduction
Quality inspection and defect detection
Demand planning and inventory optimization
Shop-floor copilots: SOP retrieval, troubleshooting guidance, training support
Common pitfalls
Weak data foundations (sensor coverage gaps, inconsistent master data)
“Model-first” projects without a clear operational owner
Underestimating change management on the plant floor
Current data points
In Deloitte’s 2025 smart manufacturing survey, 78% of manufacturers said they had begun exploring generative AI, but only 24% had deployed it, showing a wide “interest-to-production” gap.
The same survey reports 29% are using AI/ML, and 82% expect AI/ML to be “very important” in the next three years, indicating momentum but uneven maturity.
Survey “voice of the market” connection: Participants from manufacturing-related sectors in the Vancouver group echoed this gap: real appetite to use AI for throughput and cost control, but limited internal skill depth and uneven data readiness.
3) Telecommunications and network services
What’s changing: Telecom is under constant cost pressure while networks grow more complex. AI is being used to cut operational load (network operations and customer service), and generative AI is being positioned as a productivity layer for engineers and frontline teams.
Where AI is being applied (realistic use cases)
Network optimization and anomaly detection
Field service routing and maintenance planning
Customer support automation and agent assist
Site planning and deployment decision support
Common pitfalls
Legacy systems blocking integration
Inconsistent data governance across network, customer, and billing systems
Using AI to answer tickets faster without addressing root causes
Current data points
A 2024 NVIDIA telecom survey reported nearly 90% of telecom respondents are using AI, with 41% deploying AI and 48% piloting it.
In a telecom industry survey, 67% cited data integration challenges and 52% pointed to legacy IT systems as barriers to effective AI.
The share using AI for new-site planning was reported at 31%, with an expectation to rise to 63% by 2027, signaling rapid operational adoption where data and systems allow it.
Survey “voice of the market” connection: Even outside telecom, the Vancouver responses mirror the same blocker: “we have data, but it isn’t connected enough to drive fast decisions.”
4) Insurance and risk management
What’s changing: Insurance is adopting AI where it can reduce cycle time and improve consistency: claims triage, underwriting support, fraud signals, and customer interactions. But the sector’s risk sensitivity makes governance and auditability non-negotiable.
Where AI is being applied (realistic use cases)
Claims intake, summarization, and triage
Underwriting document extraction and decision support
Fraud pattern detection
Customer communications with tighter controls and disclosure
Common pitfalls
Deploying tools without clear accountability for errors
Inadequate privacy and data-use controls
Overconfidence in output quality without human review
Current data points
Deloitte reports 76% of surveyed insurance organizations have implemented generative AI in at least one function, showing rapid movement from experimentation to use.
The same research indicates only 45% agreed that the benefits of generative AI outweigh the risks, reflecting real internal tension between value and exposure.
In Canada, expected AI usage in finance and insurance increased from 17.9% (Q3 2024) to 31.5% (Q3 2025), indicating rising intent even as governance expectations grow.
Survey “voice of the market” connection: In the workshop data, security and safe-use concerns were not the top blocker overall, but they were present and often paired with a lack of formal rules.
5) Agriculture and equipment ecosystems
What’s changing: Agriculture is adopting technology in a cost-sensitive environment where input prices and weather volatility force careful ROI decisions. AI is most effective when embedded in precision agriculture, maintenance, forecasting, and decision support, often through equipment and platform ecosystems.
Where AI is being applied (realistic use cases)
Precision agriculture decisioning (variable rate, yield optimization)
Predictive maintenance for equipment fleets
Supply chain forecasting and inventory planning
Advisory tools that translate agronomic signals into actions
Common pitfalls
Data fragmentation across equipment, farms, and vendors
High change friction during busy seasons
“Tool overload” without a single operating rhythm
Current data points
McKinsey notes 20–30% of farmers have adopted precision agriculture hardware and software, with additional adoption expected, indicating steady but not universal uptake.
In a global survey of 4,400 farmers, McKinsey highlights the pressure environment shaping tech decisions, including elevated concern around costs and productivity.
Canada’s expected AI usage in agriculture, forestry, fishing and hunting remained low at 4.9% (Q3 2025), a reminder that adoption is uneven by sector and often mediated through equipment providers and partners.
Survey “voice of the market” connection: Participants tied to equipment and operational ecosystems emphasized the same theme as other sectors: without internal capability and clean data flows, pilots stall.
What High-Performing Organizations Are Doing Differently
Across sectors, strong AI performance has less to do with picking the perfect use case and more to do with building a repeatable operating model. The high performers tend to share five traits:
They name an accountable owner and give them decision rights. Not “a committee that meets monthly,” but a single role responsible for outcomes, prioritization, and trade-offs.
They treat data readiness as a product, not a project. They standardize critical datasets, define ownership, and reduce tool sprawl before expecting advanced AI to work reliably.
They build governance that enables speed, not fear. Rules are clear, practical, and enforced, especially around sensitive data, customer communications, and vendor tools. In Canada, privacy expectations and public trust dynamics make this especially important.
They redesign workflows, then add AI. AI gets embedded into how work happens: who reviews what, what triggers a decision, and how exceptions are handled.
They measure a small set of “business truth” outcomes. Cycle time, cost-to-serve, win rate, conversion, retention, uptime, quality yield. If AI doesn’t move a number, it’s not a program, it’s a demo.
Recommendations Informed by the Workshop Data
Below are recommendations tied directly to the survey’s outcome themes (growth and efficiency), blocker themes (skills, budget, data, risk), and ownership patterns.
Quick wins
Appoint a single “AI Owner” for the next 90 days, even if temporary. This directly addresses the “no owner / committee” pattern (56% combined). Give them authority to say “no,” choose priorities, and own results.
Pick one growth use case and one efficiency use case, then define success in one sentence each. With revenue growth leading (56%), pair it with a cost or automation use case (32% combined) and set measurable targets.
Write a two-page “safe-use” rule set and make it real. Most respondents report informal or inconsistently followed AI rules. Start with: what data can’t be used, what requires review, what must be disclosed, and which tools are approved. Use government-style principles as a reference point for practical safeguards (fairness, accountability, security, transparency, education, relevance).
Create a “pilot-to-production checklist” before starting the pilot. This directly counters the 56% reporting zero pilots reaching production. Require: dataset owner, workflow owner, review step, audit/logging plan, and a decommission plan if value is not proven.
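As a minimal sketch of that checklist idea (the field names below are illustrative, not from the survey), the gate can be expressed as a simple readiness check that blocks any pilot missing an owner or a plan:

```python
# Minimal pilot-to-production readiness gate (illustrative field names).
# Each item mirrors the checklist above: every pilot must name its owners
# and plans before it starts, or it is rejected up front.

REQUIRED_FIELDS = [
    "dataset_owner",       # who is accountable for the data the pilot uses
    "workflow_owner",      # who owns the process the pilot changes
    "review_step",         # where a human checks outputs before they ship
    "audit_logging_plan",  # how inputs/outputs are logged for later audit
    "decommission_plan",   # how the pilot winds down if value isn't proven
]

def pilot_ready(pilot: dict) -> tuple[bool, list[str]]:
    """Return (ready, missing_fields) for a proposed pilot."""
    missing = [f for f in REQUIRED_FIELDS if not pilot.get(f)]
    return (not missing, missing)

# Example: a proposal with no audit or decommission plan is blocked.
proposal = {
    "dataset_owner": "Ops analytics lead",
    "workflow_owner": "Claims manager",
    "review_step": "Weekly human QA of 20 sampled outputs",
}
ready, missing = pilot_ready(proposal)
# ready is False; missing names the two absent plans.
```

The point of the gate is timing: the questions get answered before the pilot starts, not after it stalls.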
Deeper changes
Build internal capability through a small, formal “AI Leader bench,” not one hero. With skills as the top blocker (50%) and 100% expressing interest in developing internal AI leadership, treat capability as a program: training, templates, and weekly execution support.
Reduce data fragmentation by selecting 1–2 “decision datasets” to standardize first. Most respondents say their data is either siloed or raw-but-structurable. Choose the datasets closest to outcomes: pipeline and customer activity for growth, work order and cycle time for operations.
Put AI KPIs on the operating dashboard and assign ownership. Half aren’t tracking AI measures and another third have no KPI owner. Track only 3–5 KPIs that map to business outcomes, plus one risk KPI (privacy incidents, customer complaints, or model error rate).
Treat governance as enablement: privacy, security, and IP review baked into the workflow. Canada’s regulatory landscape is evolving, and organizations should not assume “no law yet” means “no exposure.” Even federal guidance stresses risk evaluation, stakeholder involvement, and avoiding sensitive data leakage.
Fund AI with a portfolio approach, not one big bet. Budget is a top blocker (19%). Allocate small amounts across 2–3 pilots with shared components (data prep, governance templates, change management) instead of one expensive, bespoke experiment.
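To make the KPI-ownership recommendation concrete, here is an illustrative sketch (the KPI names, owners, and targets are hypothetical, not from the survey) of a register that enforces one accountable owner per number:

```python
# Hypothetical AI KPI register: 3-5 business-outcome KPIs plus one risk
# KPI, each with exactly one accountable owner, per the guidance above.
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    owner: str                     # one accountable person, not a committee
    target: float
    current: float
    higher_is_better: bool = True  # risk KPIs usually want lower numbers

    def on_track(self) -> bool:
        if self.higher_is_better:
            return self.current >= self.target
        return self.current <= self.target

dashboard = [
    KPI("Win rate (%)", "VP Sales", target=25.0, current=22.0),
    KPI("Quote cycle time (days)", "Ops manager", target=5.0, current=6.5,
        higher_is_better=False),
    KPI("Customer retention (%)", "CS lead", target=90.0, current=91.0),
    KPI("Privacy incidents / quarter", "Compliance lead", target=0.0,
        current=0.0, higher_is_better=False),  # the one risk KPI
]

# Surface the numbers that need attention at the next operating review.
off_track = [k.name for k in dashboard if not k.on_track()]
```

Keeping the list this short is deliberate: a handful of owned numbers on the operating dashboard beats a sprawling report no one is accountable for.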



