Produced by Vector Labs
HOW TO USE THIS ASSESSMENT
Score each statement from 0 to 2:
- 0 = Not in place
- 1 = Partially in place / in progress
- 2 = Fully in place
Answer all 30 questions: the core sections (1–3), the two sections in your chosen industry track (4–5), and the strategy section (6).
Total your score out of 60. Find your maturity stage and recommendations at the end.
Section 1 — Data Readiness
Q1. Data Availability & Quality
We have structured, labelled datasets relevant to our target AI use cases. Our data quality is actively monitored and documented.
Q2. Data Governance & Accessibility
We have documented data governance policies. Our data is programmatically accessible to analytics and ML teams without manual extraction requests.
Q3. Data Pipeline Maturity
We have active data pipelines (not one-off exports). Data flows automatically from source systems into a centralised platform or data lake in near-real time or on a defined schedule.
Q4. Consent & Privacy Architecture
We understand exactly which data assets have appropriate consent or legal basis for AI use. We have a clear process for ensuring AI systems only operate on permissioned data.
Q5. Data Volume & Representativeness
Our datasets are large enough to train reliable models. They represent the real-world diversity of cases our AI will need to handle (e.g., different subscriber types, patient demographics, fraud typologies).
Section Total: 0 / 10
Section 2 — Infrastructure
Q6. Cloud & Compute Infrastructure
We have a cloud-native or hybrid infrastructure capable of supporting model training, inference, and deployment at scale. We can access GPU or managed AI services when needed.
Q7. MLOps & Model Lifecycle Management
We have tooling and processes for versioning models, monitoring their performance in production, triggering retraining, and rolling back failed deployments. Our models are not "set and forget."
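The "version, monitor, roll back" loop this question describes can be sketched as a minimal registry. Everything here is illustrative: the `ModelRegistry` class, the AUC metric, and the 0.05 degradation threshold are assumptions, not a reference implementation.

```python
# Minimal sketch of a model registry that keeps the previous version available
# and rolls back when live performance degrades past a tolerance.

class ModelRegistry:
    def __init__(self):
        self.versions = []   # ordered history of deployed versions
        self.active = None

    def promote(self, version, metrics):
        """Deploy a new version, keeping the prior one available for rollback."""
        self.versions.append({"version": version, "metrics": metrics})
        self.active = version

    def check_and_rollback(self, live_auc, threshold=0.05):
        """Roll back if live AUC falls too far below the validated AUC."""
        current = self.versions[-1]
        degraded = current["metrics"]["auc"] - live_auc > threshold
        if degraded and len(self.versions) > 1:
            self.versions.pop()
            self.active = self.versions[-1]["version"]
            return True
        return False

registry = ModelRegistry()
registry.promote("churn-v1", {"auc": 0.81})
registry.promote("churn-v2", {"auc": 0.84})
rolled_back = registry.check_and_rollback(live_auc=0.70)  # 0.14 drop exceeds 0.05
```

A production setup would add audit logging of each promotion and rollback, which is what makes the lifecycle reviewable rather than "set and forget."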
Q8. Integration Architecture
Our AI outputs can connect to the business systems that need to act on them (CRM, EHR, fraud management platform, content management system) via APIs or standard integration patterns — not manual exports.
Q9. Testing & Staging Environments
We have dedicated staging environments where AI models can be tested against real-world conditions before production deployment.
Q10. Security & Access Controls
Our AI infrastructure has appropriate security controls: data encryption at rest and in transit, role-based access, audit logging, and protection against model inversion or data extraction attacks.
Section Total: 0 / 10
Section 3 — Talent & Organisation
Q11. Internal AI/ML Capability
We have internal data scientists or ML engineers who can own the design, build, and iteration of AI models — not solely dependent on external vendors for every change.
Q12. AI Leadership & Sponsorship
A senior leader (CDO, CTO, or equivalent) actively owns the AI programme, has board visibility, and can make resourcing decisions. AI is not treated as an IT project.
Q13. AI Literacy Across the Business
Key stakeholders who will use or be affected by AI outputs (e.g., marketing teams, clinicians, compliance officers) understand what AI can and cannot do, and can interpret model outputs without requiring a data scientist in the room.
Q14. Change Management Capability
We have a defined approach to managing the organisational change that AI introduces: communicating with affected teams, redefining roles, and managing resistance.
Q15. External Partner Readiness
We have a clear process for scoping, selecting, and managing external AI partners. We know how to write an AI brief, evaluate proposals, and measure delivery.
Section Total: 0 / 10
TRACK A — MEDIA & PUBLISHING
Section 4 — Subscription AI Readiness
Q16. Subscriber Behavioural Data
We capture and store granular subscriber behaviour: content engagement by article/topic/format, visit frequency, session depth, device, and time-of-day patterns. This data is linked to subscription status and payment history at an individual level.
Q17. Churn Prediction Capability
We have a working model (or structured process) that generates a real-time or near-real-time churn probability score per subscriber — not just a retrospective report of who cancelled last month.
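The per-subscriber score this question asks about is typically a probability from a trained classifier. As a toy illustration only: the feature names, weights, and bias below are invented; a real model would learn them from labelled cancellation data.

```python
import math

# Hypothetical logistic churn scorer. Positive weights push churn risk up
# (e.g. days of inactivity, failed payments); negative weights push it down.
WEIGHTS = {"days_since_last_visit": 0.15,
           "articles_last_30d": -0.08,
           "payment_failures": 0.9}
BIAS = -1.2

def churn_probability(subscriber: dict) -> float:
    """Map behavioural signals to a 0..1 churn probability via the logistic function."""
    z = BIAS + sum(WEIGHTS[k] * subscriber.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

engaged = {"days_since_last_visit": 1, "articles_last_30d": 25, "payment_failures": 0}
at_risk = {"days_since_last_visit": 21, "articles_last_30d": 2, "payment_failures": 1}
assert churn_probability(at_risk) > churn_probability(engaged)
```

The operational point of the question is less the model itself than that this score exists per subscriber and refreshes continuously, rather than arriving as a monthly cancellation report.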
Q18. Personalisation Infrastructure
We have a Customer Data Platform (CDP) or equivalent that unifies subscriber data across devices and products. Personalisation is powered by this unified profile, not by siloed campaign tools.
Q19. Consent Rate & First-Party Data Coverage
We know what percentage of our audience has consented to personalised data use. Our AI systems are built to operate on consented data, not assumed permissions.
Q20. Acquisition Funnel Intelligence
We use propensity modelling to identify which anonymous or registered readers are most likely to subscribe — and vary our paywall or offer logic based on that prediction.
Section Total: 0 / 10
Section 5 — Media Governance & Explainability
Q21. Editorial Trust in Algorithmic Systems
Senior editorial and product leadership trusts and actively uses AI-driven recommendations. There is a defined process for editorial override of algorithmic decisions. AI and editorial work together — the model does not operate in isolation.
Q22. GDPR & ePrivacy Compliance for AI
Our AI personalisation systems have been reviewed for GDPR and ePrivacy compliance. We have a documented legal basis for each data processing activity that feeds our models.
Q23. Measurement & Incrementality
We measure the true incremental lift of our AI systems using holdout groups — not just comparing before/after performance. We can isolate what the AI is actually contributing vs. background trends.
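The holdout arithmetic behind this question is simple: retention in the treated group minus retention in a randomly assigned holdout that received no AI-driven intervention. The counts below are illustrative.

```python
# Holdout-based incrementality: the holdout group captures background trends,
# so the difference in rates is the lift attributable to the AI intervention.

def incremental_lift(treated_retained, treated_total,
                     holdout_retained, holdout_total):
    treated_rate = treated_retained / treated_total
    holdout_rate = holdout_retained / holdout_total
    return treated_rate - holdout_rate   # percentage-point lift

lift = incremental_lift(treated_retained=870, treated_total=1000,
                        holdout_retained=820, holdout_total=1000)
print(f"incremental lift: {lift:.1%}")
```

A simple before/after comparison would have credited the AI with all 87% retention; the holdout shows 82% would have stayed anyway, so the AI's true contribution is the 5-point difference.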
Q24. Content Licensing & AI Risk
We have a defined policy on our content's use by external AI systems (crawlers, LLMs) and on our own use of AI-generated content. Our intellectual property strategy accounts for the AI licensing landscape.
Q25. Retention Workflow Automation
Our AI churn model connects to automated retention workflows in our CRM or marketing platform — triggering personalised interventions without manual campaign management for each at-risk subscriber.
Section Total: 0 / 10
TRACK B — HEALTHCARE & LIFE SCIENCES
Section 4 — Healthcare AI Readiness
Q16. Clinical Data Quality & Harmonisation
Our clinical data is standardised across systems (consistent coding, terminology, and annotation standards). We have addressed the heterogeneity in data formats across EHR systems, imaging equipment, and care settings.
Q17. Federated & Privacy-Preserving Capability
We can build or operate AI models on data that cannot be centralised — either through federated learning, differential privacy, or secure multi-party computation. We are not blocked by data sharing constraints from building effective models.
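The core mechanic of federated learning can be sketched in a few lines: each site trains locally and shares only model weights, never patient records, and the coordinator averages those weights by cohort size. This is a toy of federated averaging, not a production framework, and the weight vectors are made up.

```python
# Toy federated averaging: combine per-site model weights, weighted by the
# number of samples each site trained on. No raw data leaves any site.

def federated_average(site_updates):
    """site_updates: list of (weights: list[float], n_samples: int)."""
    total = sum(n for _, n in site_updates)
    dim = len(site_updates[0][0])
    return [sum(w[i] * n for w, n in site_updates) / total for i in range(dim)]

# Three hospitals with different cohort sizes contribute local weights.
global_weights = federated_average([
    ([0.2, 1.0], 100),
    ([0.4, 0.8], 300),
    ([0.3, 0.9], 100),
])
```

Real deployments layer secure aggregation or differential privacy on top, since even shared weights can leak information about training data.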
Q18. Regulatory & Ethics Approval Pathway
We have an established pathway for obtaining ethics board approval, clinical governance sign-off, and relevant regulatory clearance (MDR, UKCA, FDA 510(k) as applicable) for AI-based clinical tools.
Q19. Clinical Workflow Integration
AI outputs from our models are integrated into clinical workflows at the point of care — not delivered as separate reports that clinicians must manually consult.
Q20. Real-World Evidence & Post-Market Surveillance
We have processes for monitoring the performance of deployed clinical AI in real-world conditions — including detecting model drift, demographic performance disparities, and updating models as patient populations change.
Section Total: 0 / 10
Section 5 — Healthcare Governance & Explainability
Q21. Clinician Trust & Explainability
Clinical staff who use AI outputs can understand why a prediction was made. We use explainable AI methods (SHAP, LIME, or equivalent) in production clinical models — not just as a research exercise.
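For a linear model with independent features, the exact per-feature attribution reduces to weight × (value − population mean), which lets the idea behind SHAP-style explanations be shown without the library itself. The feature names, weights, and population means below are invented for illustration.

```python
# Additive attribution for a linear model: each feature's contribution to this
# prediction, relative to the average patient. Hypothetical clinical features.
WEIGHTS = {"age": 0.02, "hba1c": 0.6, "bmi": 0.1}
MEANS = {"age": 55.0, "hba1c": 6.0, "bmi": 27.0}

def explain(patient: dict) -> dict:
    """Per-feature contribution to the risk score vs. the population baseline."""
    return {f: WEIGHTS[f] * (patient[f] - MEANS[f]) for f in WEIGHTS}

contributions = explain({"age": 70, "hba1c": 9.0, "bmi": 26.0})
top_driver = max(contributions, key=lambda f: abs(contributions[f]))
```

The clinical value is the ranked drivers: a clinician seeing "elevated HbA1c is the dominant factor" can sanity-check the model against their own judgment, which is what this question means by trust.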
Q22. Patient Consent Architecture
We have documented consent frameworks for each AI use case involving patient data. Consent is granular (not blanket), auditable, and revocable — and our systems enforce it technically, not just procedurally.
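"Enforced technically, not just procedurally" means the data access layer itself filters on consent. A minimal sketch, with illustrative field names (`consented_uses`, `revoked_on`), of a gate that lets a model read only records whose granular consent covers that specific use case and has not been revoked:

```python
from datetime import date

def permitted_records(records, use_case):
    """Return only records with an active, use-case-specific consent grant."""
    return [r for r in records
            if use_case in r["consented_uses"] and r.get("revoked_on") is None]

records = [
    {"patient_id": "p1", "consented_uses": {"risk_scoring", "research"}, "revoked_on": None},
    {"patient_id": "p2", "consented_uses": {"research"}, "revoked_on": None},
    {"patient_id": "p3", "consented_uses": {"risk_scoring"}, "revoked_on": date(2024, 5, 1)},
]
usable = permitted_records(records, "risk_scoring")
```

Here only p1 qualifies: p2 never consented to risk scoring, and p3's consent was revoked. Auditable means each such filter decision would also be logged.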
Q23. Model Bias & Fairness Assessment
We test our clinical AI models for performance disparities across demographic groups (age, sex, ethnicity, socioeconomic status). Bias assessment is part of our model validation process, not a post-hoc check.
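One concrete form this testing takes is a per-group sensitivity check: compute the true-positive rate for each demographic group and flag gaps above a tolerance. A production validation suite would cover more metrics and groups; the data and the tolerance here are illustrative.

```python
# Per-group sensitivity (true-positive rate) as a minimal fairness check.

def tpr_by_group(rows):
    """rows: iterable of (group, y_true, y_pred). Returns {group: sensitivity}."""
    counts = {}   # group -> [true positives, false negatives]
    for group, y_true, y_pred in rows:
        if y_true != 1:
            continue  # sensitivity only considers actual positives
        tp_fn = counts.setdefault(group, [0, 0])
        tp_fn[0 if y_pred == 1 else 1] += 1
    return {g: tp / (tp + fn) for g, (tp, fn) in counts.items()}

rows = [("A", 1, 1), ("A", 1, 1), ("A", 1, 0),
        ("B", 1, 1), ("B", 1, 0), ("B", 1, 0)]
rates = tpr_by_group(rows)
gap = max(rates.values()) - min(rates.values())
flagged = gap > 0.1   # illustrative tolerance; the right threshold is a clinical decision
```

Making this run as a gate in model validation, rather than as an audit after deployment, is the distinction the question draws.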
Q24. Information Governance Maturity
We have ISO 27001 certification or equivalent. Our information governance framework covers AI-specific risks including model inversion, membership inference, and data re-identification.
Q25. Multi-Site Collaboration Capability
We can execute AI projects that span multiple clinical institutions without requiring centralised data sharing. We have the legal agreements, governance frameworks, and technical infrastructure to support collaborative AI development at scale.
Section Total: 0 / 10
TRACK C — BANKING & FINANCIAL SERVICES
Section 4 — Financial Services AI Readiness
Q16. Transaction & Behavioural Data Accessibility
Our transaction data is accessible to fraud and risk models in real time (not batch delays). Event streaming infrastructure is in place so models can act on signals as they occur — not hours later.
Q17. Fraud Detection Model Currency
Our fraud detection models are retrained regularly on recent fraud typologies. We do not rely on static rule-based systems or models trained on data more than 12 months old as our primary defence layer.
Q18. Customer Lifetime Value & Propensity Modelling
We use ML-based CLV and propensity models to inform product offers, credit decisions, and retention interventions — not just rule-based segmentation or manual analyst processes.
Q19. Regulatory Model Validation
Our AI and ML models go through a formal model risk management (MRM) process before deployment, including independent validation, documentation of assumptions and limitations, and ongoing monitoring against approved performance thresholds.
Q20. Real-Time Decision Infrastructure
Our AI models can deliver decisioning outputs within the latency requirements of the business use case — including sub-second fraud scoring at transaction authorisation and near-real-time credit risk assessment.
Section Total: 0 / 10
Section 5 — Financial Services Governance & Compliance
Q21. Model Explainability for Regulatory Purposes
We can explain individual AI decisions (credit decline, fraud flag, pricing offer) to regulators, auditors, and affected customers in plain language. Our explainability documentation satisfies existing regulatory requirements (FCA, PRA, ECB guidelines as applicable).
Q22. SR 11-7 / Model Risk Management Compliance
Our model risk management framework addresses AI-specific risks including concept drift, training data bias, feature leakage, and adversarial inputs. We have documented evidence of independent model validation.
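A standard concrete check for the concept-drift risk named here is the Population Stability Index (PSI), which compares the live score distribution against the training baseline. The bucket proportions below are made up; the 0.1 (monitor) and 0.25 (act) thresholds are conventional rules of thumb, not regulatory requirements.

```python
import math

def psi(expected, actual):
    """Population Stability Index over score buckets.

    expected/actual: bucket proportions that each sum to 1 (and contain no zeros).
    """
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]   # score-bucket shares at validation time
live = [0.40, 0.25, 0.20, 0.15]       # shifted live distribution
drift = psi(baseline, live)
needs_review = drift > 0.1
```

Evidence that checks like this run on a schedule, with documented thresholds and escalation paths, is the kind of artefact independent validation looks for.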
Q23. AI Ethics & Fairness in Credit & Insurance
We test AI models used in lending, insurance underwriting, or pricing for discriminatory outcomes across protected characteristics. Fair lending compliance is part of our model development lifecycle — not a retrospective audit.
Q24. Cyber & Adversarial AI Risk
We have assessed our AI systems for adversarial attack vectors — including model poisoning, adversarial examples in fraud detection, and prompt injection in customer-facing AI tools. Countermeasures are in place and tested.
Q25. AI Vendor & Third-Party Model Risk
We apply model risk management standards to third-party AI models and APIs embedded in our products — not just internally built models. We have a documented process for assessing, approving, and monitoring external AI.
Section Total: 0 / 10
Section 6 — Strategy
Q26. Business Outcome Definition
Our AI programme is driven by specific, measurable business outcomes — with agreed KPIs, baselines, and measurement methodology defined before development begins. We do not describe success in terms of model accuracy alone.
Q27. Executive Sponsorship & Investment Horizon
AI has genuine executive sponsorship (not just endorsement). Budget is committed for a 12+ month programme — not a single project. Leadership understands that AI is an ongoing capability, not a one-time implementation.
Q28. AI Roadmap & Prioritisation
We have a documented AI roadmap with prioritised use cases, sequenced by business value and technical feasibility. The roadmap is reviewed quarterly and connected to our overall business strategy.
Q29. Build vs. Buy vs. Partner Decision Framework
We have a clear and consistently applied framework for deciding which AI capabilities to build internally, which to license as tools, and which to develop with an external partner. We do not make these decisions on a project-by-project basis without a consistent rationale.
Q30. Board & Governance Reporting
AI performance, risk, and return are reported to board or senior leadership on a regular cadence — with metrics that connect AI activity to business outcomes, not just technical performance indicators.
Section Total: 0 / 10
