
The Quiet AI Divide in Vancouver: Teams With Owners Are Shipping, Everyone Else Is Stuck

  • Writer: JR
  • Feb 13
  • 8 min read

Executive Summary

  • This analysis is based on a small dataset (n=11) from a Vancouver “Outsell, Outgrow, and Outsmart with AI” workshop, so findings should be read as directional.

  • Leaders’ top AI outcomes are cost reduction (36%), revenue growth (27%), and customer experience (27%). A smaller slice framed the “outcome” as closing a skills gap (9%), which is revealing in itself.

  • The #1 blocker is clear: talent and skills (64%), followed by data quality (27%) and budget (9%).

  • Ownership is less chaotic than in many organizations, but still uneven: 64% say a functional leader (Sales/Ops/IT) owns AI outcomes; 18% report no clear owner; 18% cite a CEO/GM as the accountable owner.

  • Delivery maturity is split: 55% report 1–2 pilots in production in the last 12 months, while 36% report zero and 9% report 3+.

  • Governance is the biggest operational risk: 0% report strongly enforced safe-use rules; 55% report no protections, and 45% report rules that are only partly enforced.

  • Industry data across construction, manufacturing, distribution, financial services, and medtech points to the same conclusion: AI advantage is shifting from experimenting to operating. Organizations that clarify ownership, improve data readiness, and enforce governance are the ones converting pilots into durable performance. (Statistics Canada)
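
The headline percentages above are rounded shares of the same 11 responses; a quick sketch of that arithmetic (labels abbreviated):

```python
# Raw workshop counts (n = 11) behind the summary percentages.
n = 11
counts = {
    "outcome: cost reduction": 4,
    "outcome: revenue growth": 3,
    "outcome: customer experience": 3,
    "outcome: skills gap": 1,
    "blocker: talent and skills": 7,
    "blocker: data quality": 3,
    "blocker: budget": 1,
}

for label, c in counts.items():
    share = round(100 * c / n)  # nearest whole percent
    print(f"{label}: {c}/{n} = {share}%")
# e.g. 4/11 rounds to 36%, 7/11 to 64%, 3/11 to 27%, 1/11 to 9%
```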


What the Survey Reveals About AI Readiness

Outcomes leaders want


Even in a small Vancouver sample, the intent is practical: reduce cost, grow revenue, and improve the customer experience.

  • Cost reduction (4/11) signals leaders are looking for automation that removes recurring effort and reduces rework, not just “insight dashboards.”

  • Revenue growth (3/11) suggests demand for better targeting, better conversion, faster quoting, stronger follow-up, and cleaner handoffs between teams.

  • Customer experience (3/11) points to response speed, consistency, and fewer service failures.

  • One respondent listed “talent/skills gap” as the desired “outcome.” That’s not a typical business outcome, but it reflects a reality many leaders are now confronting: capability is becoming a first-order constraint, not a nice-to-have.


A useful takeaway: AI programs in this group are being evaluated on business impact, but they will rise or fall on whether teams can build repeatable capability.


What’s blocking progress


The blocker ranking is unusually concentrated:

  1. Talent and skills (7/11)

  2. Data quality (3/11)

  3. Budget (1/11)


That ordering is important. It suggests that for most participants, “money” is not the first obstacle. The obstacles are:

  • not enough internal skill to choose and run the right pilots, and

  • not enough trustworthy data to operationalize them.


This matches broader Canadian signals that AI use is increasing but uneven, and that adoption often requires workflow redesign and staff training, not just tooling. (Statistics Canada)


The ownership gap (AI Owner Status patterns) and why it matters


Your Vancouver responses show more ownership than many workshops, but still not enough executive-level accountability:

  • Functional leader owner (Sales/Ops/IT): 7/11

  • No clear owner: 2/11

  • CEO/GM accountable owner: 2/11

  • Working group without a single owner: not the dominant pattern in this dataset (though it often appears in other cohorts).


This ownership pattern explains two other findings:


1) Pilot-to-production progress is real, but uneven. A majority reported 1–2 pilots reaching production in the last 12 months, but over a third reported zero. That split often occurs when some teams have an operator who can drive delivery end-to-end, while others stall in evaluation mode.


2) Measurement discipline is mixed.

  • 4/11: no KPIs tied to AI yet

  • 3/11: track results occasionally, but no one owns the KPI

  • 4/11: at least one use case has a KPI, named owner, and review cadence


In other words, Vancouver has a “two-speed” AI reality: some teams are starting to operate, others are still experimenting without a scoreboard.


The biggest risk is governance. Not one organization reported strongly enforced safe-use rules. That becomes a scaling problem the moment AI touches customer communications, pricing, regulated data, or any workflow where errors create real cost.


Industry Intelligence: How 5 Sectors Are Responding to AI Right Now


Below are five mini-briefs aligned to industries represented in your survey. Each includes what’s changing, realistic use cases, pitfalls, and verifiable data points.


Construction


What’s changing: Construction is moving from “digital documentation” to AI-supported execution: estimating, risk, quality, and schedule control. The driver is not hype. It’s margin pressure and labor scarcity.


Where AI is being applied

  • Document intelligence for RFIs, submittals, and change orders

  • Schedule and risk forecasting

  • Safety and quality inspection support

  • Estimating assistance and cost variance analysis


Common pitfalls

  • Fragmented data across many tools and job sites

  • AI pilots that do not survive real field workflows

  • Poor governance for contract-adjacent documentation


Stats

  • Deloitte/Autodesk reports 37% of surveyed construction businesses use AI and machine learning, up from 26% in 2023. (Deloitte)

  • The same report notes the average construction business adopted 6.2 technologies (out of 16), a sign that tool sprawl and integration are now core constraints. (Deloitte)

  • McKinsey estimates construction productivity grew only ~0.4% annually from 2000–2022, reinforcing why execution-focused AI use cases (rework reduction, cycle time) matter more than experimentation. (McKinsey & Company)


Workshop reality check (Vancouver)

Your participants’ emphasis on cost reduction maps directly to construction’s biggest value pools: fewer repeats, faster approvals, tighter scheduling decisions.


Food manufacturing


What’s changing: Food manufacturers are applying AI where it reduces waste, increases throughput, and improves consistency. GenAI is showing up as an enablement layer: SOP support, maintenance assistance, faster documentation, and quicker problem-solving.


Where AI is being applied

  • Yield optimization and quality inspection

  • Predictive maintenance

  • Demand planning signals and production scheduling

  • Operator support: troubleshooting, training, and SOP retrieval


Common pitfalls

  • Data quality issues (labels, lot tracking, inconsistent definitions)

  • Pilots that never connect to operational routines

  • Overfitting “model performance” without measuring plant KPIs


Stats

  • Deloitte’s 2025 smart manufacturing survey reports 29% using AI/ML at facility or network level, and 24% have deployed generative AI at similar scale. (Deloitte)

  • In IFT’s Technology Trends Survey, 50% of industry professionals said their companies plan to invest in AI in 2025. (ift.org)

  • McKinsey estimates genAI can increase the economic impact of traditional AI by 15–40% for CPG, unlocking $160B–$270B in additional annual profit (EBITDA) globally. (McKinsey & Company)


Workshop reality check (Vancouver)

Your #1 blocker (skills) is consistent with food manufacturing’s reality: operational value is available, but it takes multidisciplinary execution (plant ops + data + governance).


Food distribution


What’s changing: Distribution is adopting AI to make planning less manual and more responsive: inventory, demand signals, routing, labor scheduling, and exception handling. The gap is no longer “can we do it?” The gap is “do we have the operating system to scale it?”


Where AI is being applied

  • Demand forecasting and inventory optimization

  • Warehouse productivity and slotting

  • Order exceptions and service-level management

  • Supplier and replenishment decisions


Common pitfalls

  • Chasing short-term ROI with disconnected pilots

  • Treating AI as analytics instead of workflow redesign

  • Underinvesting in data governance and KPI ownership


Stats

  • Gartner found only 23% of supply chain leaders report having a formal AI strategy (survey of leaders who deployed AI in the prior 12 months). (Gartner)

  • Gartner predicts 70% of large organizations will adopt AI-based supply chain forecasting by 2030. (Gartner)

  • McKinsey reports AI-driven forecasting can reduce errors by 20–50%, with lost sales and product unavailability reduced by up to 65% in some contexts. (McKinsey & Company)


Workshop reality check (Vancouver)

Your participants’ “data quality” blocker shows up here immediately. Forecasting and inventory AI is only as good as product, customer, and order data definitions.


Insurance and benefits consulting


What’s changing: The sector is deploying genAI faster than many expected, but with a persistent tension: value is clear in claims, underwriting support, and customer workflows, yet governance and reputational risk are non-negotiable.


Where AI is being applied

  • Claims triage, intake summarization, and workflow routing

  • Underwriting support: document extraction and decision assistance

  • Customer support and agent assist

  • Fraud detection and anomaly identification


Common pitfalls

  • Scaling without guardrails for sensitive data

  • KPI ambiguity (faster does not always mean better)

  • Overconfidence in output quality without human review


Stats

  • Statistics Canada reports 30.6% of businesses in finance and insurance used AI over the prior 12 months (Q2 2025). (Statistics Canada)

  • Deloitte reports 76% of surveyed insurers have implemented genAI in one or more business functions. (Deloitte)

  • Gartner reports 59% of finance leaders are using AI in their finance function (2025 survey). (Gartner)

Workshop reality check (Vancouver)

Your governance result (0% strong enforcement) is the red flag for any business tied to financial services. This is where “partly enforced rules” typically break first.


Medical technologies


What’s changing: AI-enabled devices are proliferating, and scrutiny is rising alongside adoption. The near-term shift is toward stronger monitoring, clearer evidence expectations, and more explicit governance around algorithm behavior.


Where AI is being applied

  • Medical imaging and diagnostic assistance

  • Surgical navigation and decision support

  • Monitoring devices and alerting

  • Administrative automation around clinical documentation (adjacent but influential)


Common pitfalls

  • Weak post-market monitoring and unclear accountability for model performance

  • Misalignment between “regulatory clearance” and real-world reliability

  • Insufficient governance for model updates over time


Stats

  • Reuters reports at least 1,357 AI-enabled medical devices are authorized by the FDA, about double the number through 2022. (Reuters)

  • Reuters also cites research finding 60 FDA-authorized AI devices linked to 182 product recalls, with 43% of recalls occurring within a year of approval. (Reuters)

  • The FDA’s AI-enabled medical device list notes it is not comprehensive and is updated periodically, reinforcing that transparency and monitoring are still evolving. (U.S. Food and Drug Administration)


Workshop reality check (Vancouver)

Even for non-medtech firms, this is the cautionary tale: scaling AI without governance and monitoring creates operational and reputational risk.


What High-Performing Organizations Are Doing Differently


Across these sectors, the pattern is consistent: high performers focus on operating mechanics.

  1. They make ownership explicit and durable. A single owner with decision rights (prioritization, resourcing, measurement) beats “shared responsibility.”

  2. They build around a workflow, not a tool. They define where AI sits in the process, who reviews outputs, what exceptions look like, and what triggers a decision.

  3. They treat data quality as a business product. They standardize definitions and access controls for the datasets that drive the most value.

  4. They enforce governance at the speed of work. Rules are not a PDF. They are embedded: approved tools, restricted data, required reviews, logging, and clear escalation paths.

  5. They measure outcomes that a CFO and an operator both believe: cycle time, cost-to-serve, conversion, error rate, uptime, customer churn, rework. They review on a cadence, not when something breaks.


Recommendations Informed by the Workshop Data


Each recommendation ties back to the dominant themes: outcomes (cost, growth, CX), blockers (skills, data quality), and ownership patterns.


Quick wins (next 2–4 weeks)


  1. Turn the “functional owner” into a real operating role with a charter. Most respondents already have a functional owner. Formalize decision rights, KPIs, and a weekly operating cadence.

  2. Create a two-page KPI scoreboard and baseline it before the next pilot. Your split KPI maturity suggests a simple intervention: pick 3–5 KPIs that match the chosen outcome and capture the baseline now.

  3. Run a “data readiness triage” for one high-value workflow. With 64% citing skills and 27% citing data quality, reduce scope: identify one workflow and define exactly which fields must be clean and consistent.

  4. Publish minimum viable safe-use rules and enforce them immediately. With 0% reporting strong rules, start with: restricted data types, approved tools, human review requirements, and logging for sensitive workflows.
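
Minimum viable safe-use rules can live as a machine-checkable policy rather than a PDF. A minimal sketch, assuming a simple allow-list model; the tool and data-type names are illustrative placeholders, not survey data:

```python
# Minimal safe-use policy: approved tools, restricted data, required review.
# All names here are illustrative placeholders, not from the workshop data.
POLICY = {
    "approved_tools": {"approved_llm", "approved_doc_ai"},
    "restricted_data": {"customer_pii", "pricing", "regulated_records"},
    "require_human_review": True,
}

def check_request(tool: str, data_types: set[str]) -> list[str]:
    """Return a list of violations for a proposed AI use; empty means allowed."""
    violations = []
    if tool not in POLICY["approved_tools"]:
        violations.append(f"tool '{tool}' is not approved")
    blocked = data_types & POLICY["restricted_data"]  # set intersection
    if blocked:
        violations.append(f"restricted data: {sorted(blocked)}")
    return violations

# Example: an unapproved tool touching pricing data fails both checks.
print(check_request("random_chatbot", {"pricing", "order_history"}))
```

The point of the sketch is the shape, not the tooling: an explicit allow-list, an explicit restricted-data list, and a check that runs before work happens, with logging and escalation layered on top.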


Deeper changes (next 60–120 days)


  1. Build internal capability as a bench, not a single expert. Treat “skills gap” as a system problem: train operators, analysts, and leaders together around one workflow and one KPI.

  2. Adopt a pilot-to-production gate that forces operational readiness. Your production results are promising (55% shipped 1–2 pilots), but over a third still shipped none. Require: data owner, workflow owner, risk review, training plan, KPI review cadence.

  3. Improve decision speed by designing for change frequency. Most teams reported they can only change production “within a month/quarterly” (64%). If AI is supposed to create advantage, it must compress decision cycles, not add another layer of work.

  4. Make governance a scaling tool, not a brake. Medtech shows what happens when monitoring and accountability are weak. Build monitoring, auditability, and escalation early, not after incident response.



