DEMO MODE - Synthetic Data
⚠️
Floor Rule Applied

Critical pillar A2: IP/Rights/Legal Readiness scored 1.83/5.0 across departments. The primary HAI score is capped at 68 (from a potential 76) to prevent masking of material risk. This is intentional: high operational readiness cannot justify weak legal/IP governance.

Overall HAI Index
68
Out of 100
Floor Rule: Capped at 68
Human Readiness Sub-Score
72
Avg. of H1-H6 pillars
AI Readiness Sub-Score
64
Avg. of A1-A6 pillars
Critical Pillars Below Threshold
3
A2, A5, and 1 near-miss

Department Leaderboard

| Rank | Department | Human Score | AI Score | HAI Average | Status | Key Gaps |
|------|------------|-------------|----------|-------------|--------|----------|
| 1 | Marketing | 78 | 71 | 74.5 | Balanced Leader | IP/Legal (1.9), Security (2.6) |
| 2 | Operations | 75 | 68 | 71.5 | Balanced Leader | IP/Legal (2.0) |
| 3 | Distribution | 72 | 65 | 68.5 | Balanced Leader | IP/Legal (1.8), Data Asset (3.4) |
| 4 | Finance | 70 | 62 | 66 | Balanced Leader | IP/Legal (2.2), Integration (3.0) |
| 5 | PR | 68 | 59 | 63.5 | Human-Strong | IP/Legal (1.7), Data Asset (2.8), Security (2.2) |
| 6 | Production | 65 | 58 | 61.5 | Human-Strong | IP/Legal (1.6), Workflow Integration (2.7) |
| 7 | Sales | 62 | 52 | 57 | At-Risk | IP/Legal (1.5), Change Readiness (2.5), Tools (2.4) |
| 8 | Talent | 58 | 48 | 53 | At-Risk | IP/Legal (2.1), Skills (3.0), Measurement (2.6) |

Why Additive Index + Floor Rule?

The HAI Index uses an additive (averaging) methodology to ensure transparency: executives can see exactly which pillars moved the score. However, an additive model alone can mask critical risk: a department could score 85 overall while carrying catastrophic IP/Legal or security vulnerabilities.

The Floor Rule enforces that if any of three critical pillars (A2: IP/Legal, A5: Security, H5: Ethical Use) scores below 2.0, the overall score is capped (configurable; default: 59) or penalized. This ensures governance risks are never hidden by operational strength.

Current Status: A2 (IP/Legal) is critically weak at 1.83/5.0. This triggered the cap. Until A2 improves to 2.5+, the organization's readiness score remains constrained.
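As a minimal sketch, the additive index with the floor rule might be computed as follows, assuming 0-5 pillar scores are averaged and rescaled to 0-100; the cap here uses the stated default of 59 (this dashboard is configured with a cap of 68):

```python
def hai_index(pillars: dict[str, float],
              critical: tuple[str, ...] = ("A2", "A5", "H5"),
              threshold: float = 2.0,
              cap: float = 59.0) -> float:
    """Additive HAI: mean of 0-5 pillar scores rescaled to 0-100,
    capped when any critical pillar falls below the threshold."""
    raw = sum(pillars.values()) / len(pillars) * 20  # 0-5 -> 0-100
    if any(pillars[p] < threshold for p in critical):
        return min(raw, cap)  # floor rule: governance risk caps the score
    return raw
```

With A2 at 1.83 and the other eleven pillars healthy, the raw average can sit in the 70s while the returned score is pinned at the cap, which is exactly the masking-prevention behavior described above.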

Synergy Indicator (Secondary)

Beyond the primary HAI Index, we compute a Synergy Score to highlight imbalance between Human and AI readiness.

Synergy Formula: (Human × AI) / 100, rescaled to 0–100
= (72 × 64) / 100 = 46.08

A Synergy of 46 indicates moderate imbalance: while Human readiness (72) is reasonably strong, AI readiness (64) lags, creating friction. Strong departments like Marketing show better balance (synergy ~56), while Talent shows significant lag (~28).
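The synergy computation reduces to one line; inputs are the 0-100 sub-scores:

```python
def synergy(human: float, ai: float) -> float:
    """Synergy = (Human x AI) / 100. Equal sub-scores maximize it;
    imbalance between the two drags it down."""
    return round(human * ai / 100, 2)
```

`synergy(72, 64)` gives 46.08, matching the org-wide figure; Talent's 58 × 48 yields 27.84, the ~28 cited above.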

Top 3 Strategic Priorities (Next 90 Days)

  1. Establish IP/Rights/Legal Governance Framework

     Audit and document all AI training data rights, content licensing, and compliance requirements. The current score (1.83) is blocking overall readiness. Target: 3.0+ within 90 days.

  2. Deploy Security & Privacy Controls

     Current avg 2.4/5.0. Implement redaction/masking, audit trails, and secure environments for AI tool usage. Engage InfoSec early. Target: 3.5+.

  3. Accelerate Talent & Sales Readiness

     Both departments score 48-58. Design targeted 30-60-90 day upskilling (see People & Capability page). Include change leadership and AI workflow training.

Interactive Heatmap

[Pillar × department heatmap (Marketing, PR, Talent, Distribution, Production, Operations, Sales, Finance); cell details available on hover in the live dashboard.]

Critical Observations

  • A2: IP/Rights/Legal (Avg 1.83/5.0): Universal weakness across all 8 departments. This is the primary floor rule trigger. No department exceeds 2.2. Critical action: CXO-led IP governance audit.
  • A5: Security/Privacy (Avg 2.4/5.0): Dangerous gap. Sales (1.9) is at critical risk. Immediate InfoSec deployment recommended.
  • H1 & H3: Leadership & Skills: Strongest areas (avg 3.5-3.8). Leverage these as change enablers. Operations (4.2 leadership, 3.8 skills) can serve as a model.
  • Talent & Sales Lag: Both departments score <3.0 across most pillars. Recommend dedicated change leadership and mentoring from Marketing/Operations.
ℹ️
GFE 360 Profile System

Each role cluster is mapped to hard skills, soft skills, virtues, values, and a personalized learning development plan. Below are sample profiles for key roles across departments.

Role Clusters & AI Readiness (Sample)

| Role Cluster | Department(s) | Current AI Fluency | Critical Skills Gap | 30-Day Focus | 60-90-Day Focus |
|---|---|---|---|---|---|
| Marketing Operations | Marketing, Operations | Intermediate | AI prompt ops, automation testing | ChatGPT/Claude workflows; campaign automation | Multi-model orchestration; GenAI ROI tracking |
| Content Creator | Marketing, PR | Beginner | AI ideation, image/video generation, editing | Midjourney, Runway ML, D-ID basics | Creative workflow integration; brand consistency at scale |
| Talent Manager | Talent, HR | Minimal | AI interviewing, skills assessment, consent frameworks | Talent screening AI; bias audit; consent protocols | Talent marketplace integration; ethical AI governance |
| Sales Operations | Sales | Minimal | Workflow automation, lead scoring, funnel ops | CRM automation 101; lead intelligence tools | Revenue intelligence; predictive analytics |
| Data/Compliance Officer | Operations, Finance | Beginner | AI audit trails, data governance, risk frameworks | AI risk assessment; compliance checklist | Third-party vendor AI audits; policy framework |
| Production Lead | Production | Limited | AI-assisted editing, effects, sound design | AI tools overview; workflow testing | Integrated production stack; IP safeguarding in AI workflows |

Sample: Marketing Operations Manager (30-60-90 Plan)

Current State: Runs Excel-based campaign workflows, some CRM automation, no AI tooling. GFE Score: 2.1.2 (Marketing 2, Sales 1, Finance 1). Change readiness: Moderate (adaptable, some anxiety about AI displacement).
Days 1–10: Foundation
AI Literacy & Tool Landscape
• Take a 2-hour online course (Coursera/LinkedIn Learning): "AI for Marketing Ops"
• Hands-on: Create ChatGPT account, run 5 prompt experiments (campaign copywriting, FAQ generation, workflow documentation)
• Install 3 browser extensions: Magical (automation), Zapier (integration), NotebookLM (doc analysis)
• Weekly coaching: Align on which tool to pilot first
Days 11–30: Pilot & Integration
First GenAI Workflow
• Design 1 repeatable campaign workflow: Campaign Brief → ChatGPT multi-variant copy → A/B test framework
• Document: Create SOP (1 page, 30 min)
• Measure: Track time saved (target: 3 hrs/week)
• Share: Present learnings to team (lunch-and-learn format)
• Mindset shift: Reframe as "AI-augmented ops," not replacement
Days 31–60: Scaling & Optimization
Multi-Tool Orchestration
• Expand to 2–3 workflows: Lead scoring (Apollo + ChatGPT), email sequencing (HubSpot + Zapier + Claude), reporting (Data Studio + GPT-4)
• Quality gates: Design review checklists (accuracy, brand compliance, bias checks)
• Train 2 peers: Become an internal resource (boosts confidence, demonstrates mastery)
• Measure: Target 7+ hours/week time savings, 20% faster campaign launch
Days 61–90: Leadership & Innovation
Own GenAI Strategy Workstream
• Propose: "GenAI Ops Playbook" (consolidated best practices, templates, ROI model)
• Lead: Bi-weekly GenAI ops guild (cross-dept: Marketing, Sales, Finance)
• Hire/mentor: Help onboard 1 new GenAI-focused ops hire or intern
• Measure: 30+ hours/week total time savings; 40%+ faster time-to-insight on campaign performance
• Readiness target: Move from GFE 2.1.2 to 3.0.1

Training Delivery Model (Recommended)

Self-Paced Online (30% of time)

Coursera, LinkedIn Learning, Udemy courses on AI fundamentals, prompt engineering, specific tools. Allows flexibility; employees learn at own pace.

Peer Learning Circles (40% of time)

Weekly 60-min cohorts by role cluster. Hands-on experimentation, shared templates, vulnerability-safe space. Led by internal champions.

1:1 Coaching (20% of time)

Weekly 30-min coaching sprints with a GFE-trained guide. Personalized troubleshooting, mindset support, accountability.

Live Projects (10% of time)

Apply learning in real campaigns/workflows. Real ROI, real feedback loops, rapid iteration. On-the-job training accelerates mastery.
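Assuming the 4 hrs/week LD allocation proposed under blocker mitigation (an assumption used here for illustration), the delivery mix converts to weekly hours like this:

```python
# Weekly learning hours per delivery mode, given a 4 hrs/week LD budget.
WEEKLY_LD_HOURS = 4.0
mix = {
    "self_paced_online": 0.30,
    "peer_learning_circles": 0.40,
    "one_on_one_coaching": 0.20,
    "live_projects": 0.10,
}
hours = {mode: round(WEEKLY_LD_HOURS * share, 1) for mode, share in mix.items()}
# e.g. peer learning circles get 1.6 hrs/week; live projects get 0.4 hrs/week
```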

Blocker Mitigation & Change Leadership

  • Fear of job displacement: Reframe as "upskilling." Communicate that roles evolve, not disappear. Tie LD to career growth + role expansion (not retrenchment).
    Owner: HR + Department Head | Timeline: Ongoing
  • Time bandwidth: Allocate 10% of work time (4 hrs/week) to LD. Make it a KPI, not "nice-to-have."
    Owner: Department Head | Timeline: Day 1
  • Quality anxiety (AI outputs): Implement quality gates + review templates. Empower people with confidence-building checklists.
    Owner: GFE Trainer + Ops Leads | Timeline: Days 1–30
  • Tool overwhelm: Start with 1 tool per role cluster (not 10). Sequence: ChatGPT → domain-specific tools. Build mastery before expanding.
    Owner: GFE Coach | Timeline: 30 days

Workflow Risk Summary

3
Critical Risk
4
High Risk
3
Medium Risk

Critical Risk Workflows (Require Immediate Mitigation)

Marketing → AI Content Generation → Publication

Multi-dept: Marketing, Legal, Talent (for creator disclosures)

🔴 CRITICAL
  • IP Leakage: Proprietary content used in AI training?
  • Content Leak: Confidential project details in AI prompts
  • Creator Disclosure: Talent/influencer consent for AI usage
  • Rights Ambiguity: Who owns AI-generated derivative works?
Mitigation Checklist:
  • Establish "clean data" policy: Only use content with explicit AI training rights
  • Implement prompt auditing: Scan all prompts for confidential keywords before submission
  • Creator consent: Legal template for disclosing AI usage; link to LD on disclosures
  • Rights framework: Licensing agreement defining ownership of AI derivatives (default: Company)
Owner: Head of Legal + Marketing Ops | Timeline: 15 days
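Before a dedicated tool is in place, the prompt-auditing item could start as simple keyword screening. A hypothetical sketch; the keyword list and function are illustrative, not the actual policy:

```python
# Illustrative screening terms; a real policy would include project
# codenames, client names, and regex patterns, plus DLP integration.
CONFIDENTIAL_TERMS = {"confidential", "unreleased", "internal only", "contract value"}

def audit_prompt(prompt: str) -> list[str]:
    """Return any confidential terms found in a prompt (case-insensitive).
    An empty list means the prompt passes this basic screen."""
    text = prompt.lower()
    return sorted(term for term in CONFIDENTIAL_TERMS if term in text)
```

A non-empty result would block submission and route the prompt for human review.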
Talent Sourcing → AI Screening → Offer

Multi-dept: Talent, Legal, Operations

🔴 CRITICAL
  • Bias Risk: AI screening may discriminate on protected attributes
  • Consent Gap: Candidates unaware of AI use in hiring
  • Appeal Process: No transparent way to challenge AI decisions
  • Data Retention: Biometric/interview data stored without consent
Mitigation Checklist:
  • Bias audit: Run AI tool on historical hiring data; measure disparate impact
  • Transparency disclosures: Update job postings to state "AI-assisted screening used"
  • Appeal rights: Document process for candidates to challenge AI decisions
  • Human-in-the-loop: Mandate human review before rejection (not AI-only decisions)
Owner: Head of Talent + Legal | Timeline: 20 days
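The "measure disparate impact" action is commonly operationalized with the four-fifths (80%) rule; the sketch below is illustrative only, not legal or HR guidance:

```python
def adverse_impact_ratio(selected_a: int, pool_a: int,
                         selected_b: int, pool_b: int) -> float:
    """Ratio of the lower group's selection rate to the higher group's.
    Under the four-fifths rule, a ratio below 0.8 flags potential
    disparate impact and warrants human review of the AI screen."""
    rate_a = selected_a / pool_a
    rate_b = selected_b / pool_b
    lo, hi = sorted((rate_a, rate_b))
    return lo / hi
```

For example, 10 of 100 candidates selected in one group versus 20 of 100 in another gives a ratio of 0.5, well under the 0.8 line.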
Distribution → Rights Clearance → Platform Upload

Multi-dept: Distribution, Legal, Production

🔴 CRITICAL
  • Liability: Using AI-assisted content without verifying rights clearance
  • Single-Person Dependency: Only one person knows the licensing matrix (turnover risk)
  • Audit Trail Gap: No tracking of which content used AI; attribution missing
  • Sublicense Risk: Streaming platforms may not accept AI-generated derivatives without explicit consent
Mitigation Checklist:
  • Rights database: Digitize all licensing contracts; tag which allow AI derivatives
  • Mandatory fields: Every asset upload requires "AI Used: Yes/No" + tool name
  • Backup person: Cross-train 2nd person on licensing logic (eliminate single-point dependency)
  • Platform compliance: Verify each platform's AI content policy; tag non-compliant content
Owner: Head of Distribution + Legal | Timeline: 25 days
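The mandatory-fields item can be modeled as a small record type that refuses upload until AI usage is declared and rights are cleared; the class and field names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AssetUpload:
    title: str
    ai_used: bool                                       # the mandatory "AI Used: Yes/No" flag
    ai_tools: list[str] = field(default_factory=list)   # tool names, required if AI was used
    rights_cleared: bool = False                        # set after licensing-matrix check

    def upload_ready(self) -> bool:
        """Block upload unless AI usage is fully declared and rights are cleared."""
        if self.ai_used and not self.ai_tools:
            return False  # "AI Used: Yes" requires naming the tool
        return self.rights_cleared
```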

High Risk Workflows (Active Monitoring)

4 workflows pose high risk but can be managed with controls and monitoring. See full Risk Matrix in downloadable CSV export.

📊
Sales → Prospect Research → AI Lead Scoring

Risk: Data privacy (prospect PII in AI systems), Score transparency, Bias (favors certain customer profiles)

🎬
Production → AI Editing Tools → Final Export

Risk: Quality gates (AI outputs may not meet broadcast standards), Tool reliability (crashes, data loss), Attribution (credit to AI tools)

💰
Finance → AI-Assisted Forecasting → Reporting

Risk: Model transparency (execs unsure how AI arrived at forecast), Data leakage (financial confidentiality), Audit trail (regulators require explainability)

📝
PR → Social Monitoring & AI Response → Publishing

Risk: Brand voice misalignment (AI tone doesn't match brand), Crisis response (AI auto-replies to sensitive posts), Misinformation (AI amplifies false narratives)

Current Tool Inventory by Category

| Category | Current Tools | Department(s) | Usage Level | Integration Status | Known Gaps |
|---|---|---|---|---|---|
| Ideation & Brainstorming | ChatGPT (free/premium mix), Perplexity, Claude | Marketing, Production, Content | Active | Manual copy-paste | No enterprise licensing; audit trail missing |
| Content Writing & Copywriting | ChatGPT, Jasper (some), Grammarly Business | Marketing, PR | Active | Partial (Grammarly → Google Docs) | Jasper underused; brand voice inconsistency |
| Image & Visual Generation | Midjourney, DALL-E, Adobe Firefly, Canva | Marketing, Production, Design | Active | Manual exports | No version control; rights tracking unclear |
| Video & Editing | Adobe Premiere, DaVinci Resolve, RunwayML (pilot), D-ID (pilot) | Production, Marketing | Limited | None | AI editing tools not integrated into production pipeline |
| Automation & Workflow | Zapier, Make, IFTTT, HubSpot workflows | Marketing, Sales, Operations | Moderate | Partial | Redundant tools (Zapier + Make); no orchestration layer |
| CRM & Sales Ops | HubSpot, Salesforce (limited), Pipedrive (pilot) | Sales, Operations | Inconsistent | Poor data sync | Multiple CRMs; no single source of truth |
| Analytics & Measurement | Google Analytics 4, Tableau (underused), Looker Studio | Marketing, Operations, Finance | Basic | Manual data pulls | No real-time dashboards; Excel still primary tool |
| Data Privacy & Security | None (no AI-specific compliance tools) | Operations, Legal | Missing | N/A | No audit trails for AI tool usage; no DLP |

Critical Gaps (Must Address)

  • 🔴
    AI Audit & Compliance

    Tool to track AI tool usage, inputs, outputs, and compliance flags. Recommendation: Lakehouse AI or Humane Intelligence.

  • 🔴
    Data Loss Prevention (DLP)

    Monitor clipboard, file uploads, and prompt inputs. Recommendation: Nightfall, Forcepoint, or integrated InfoSec solution.

  • 🔴
    Prompt Governance & Orchestration

    Centralized prompt library, version control, and A/B testing. Recommendation: LangChain + PromptFlow or Anthropic/OpenAI Prompt API.

  • 🟑
    Enterprise LLM Access

Move from consumer ChatGPT to API-based access or a managed enterprise platform (Vertex AI, Azure OpenAI). Ensures audit trails and data retention control.

Redundancies & Consolidation

  • ✓
    ChatGPT + Perplexity + Claude

    Consolidate: Standardize on Claude (via API) + reserve ChatGPT for consumer testing. Cost savings: $1,200/year.

  • ✓
    Zapier + Make

    Consolidate: Standardize on Zapier (broader integrations). Migrate Make workflows. Cost savings: $600/year.

  • ✓
    Salesforce + HubSpot + Pipedrive

    Consolidate: Choose HubSpot (best for mid-market, AI-native). Migrate Salesforce/Pipedrive. Cost savings: $3,000/year.

  • ✓
    Tableau + Looker Studio

    Consolidate: Move to Looker Studio (free) + light Tableau for complex modeling. Cost savings: $8,000/year.

Minimum Viable Secure Stack (Recommended)

A lean, integrated set of tools that covers ideation, content creation, automation, and security for departments. Prioritize enterprise features (audit trails, data residency, SSO).

1. Ideation & Analytics
  • 📌 Claude API (via Anthropic) – primary LLM
  • 📌 Perplexity for research (free tier OK)
  • 📌 Tavily (for real-time web search in AI flows)
2. Content Creation
  • 📌 Midjourney Pro (image + brand consistency)
  • 📌 RunwayML (video editing + effects)
  • 📌 ElevenLabs (voice synthesis for video)
3. Automation & Orchestration
  • 📌 Zapier (workflow automation)
  • 📌 HubSpot (CRM + automation center)
  • 📌 n8n (self-hosted workflow orchestration)
4. Compliance & Security
  • 📌 Humanize (AI audit trails + compliance)
  • 📌 Nightfall (DLP for sensitive data)
  • 📌 Google Workspace w/ DLP policies
Estimated Annual Cost: ~$180K (for 50-person team)

Breakdown: Claude API ($2K), Midjourney ($600), Runway ($2.4K), Zapier ($12K), HubSpot ($24K), n8n (self-hosted), Humanize ($10K), Nightfall ($15K), Google Workspace ($20K), training & setup ($93.6K).

Savings vs. ad-hoc consolidation: ~$40K/year.
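A quick reconciliation of the line items against the headline estimate (figures in $K/year, copied from the breakdown above):

```python
# Minimum viable secure stack, line items in $K/year.
stack_k = {
    "Claude API": 2, "Midjourney": 0.6, "RunwayML": 2.4, "Zapier": 12,
    "HubSpot": 24, "n8n (self-hosted)": 0, "Humanize": 10,
    "Nightfall": 15, "Google Workspace": 20, "Training & setup": 93.6,
}
total_k = sum(stack_k.values())  # ~179.6, i.e. the ~$180K estimate
```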

Implementation Roadmap: Tech Stack Consolidation

Week 1–2: Assessment & Planning
Audit current contracts, usage, budget
• Gather all tool subscriptions, costs, user counts from Finance
• Interview department heads on true usage (many tools are zombie subscriptions)
• Design new stack with Ops + IT
• Secure budget approval for new tools + migration
Week 3–4: Pilot & Governance
Onboard 2 pilot departments (Marketing + Operations)
• Set up Claude API + Humanize in sandbox environment
• Test integrations (Zapier → HubSpot → analytics)
• Create policy: How to use Claude (approved use cases), security guardrails
• Identify security gaps; configure DLP
Week 5–8: Full Rollout
Deploy to all departments; train & support
• Migrate workflows from old stack to new
• Run daily office hours for troubleshooting
• Retire old tools (Perplexity free tier, redundant Zapier accounts, etc.)
• Monitor compliance: Audit logs, prompt flagging
Week 9–12: Optimization & Knowledge Lock
Measure savings, share playbooks
• Report: Cost savings achieved, time-to-value
• Create internal knowledge base: playbooks, templates, FAQs
• Identify 2–3 "GenAI ops champions" per dept; formalize their role
• Plan next wave: Advanced automation, custom LLMs
ℹ️
Roadmap Strategy

This roadmap is organized in three phases: Now (Days 1–30), Next (Days 31–60), Later (Days 61–90). Each item includes owner, effort estimate (S/M/L), impact, and which pillar it improves. Success metric: HAI score improves to 73+ and critical pillars (A2, A5, H5) reach 2.8+.

Phase 1: NOW (Days 1–30) – Stabilize Critical Risks

Days 1–5
🎯 Kickoff: Executive Alignment & Governance Structure
Description: Host executive sync to align on AI readiness challenges and secure buy-in for 90-day sprint.
Actions:
  • Brief C-suite on HAI results + floor rule impact
  • Establish AI Steering Committee (CXO + dept heads + Legal)
  • Define decision rights: Who approves AI tool usage?
  • Set clear success metrics for 90 days

Owner: Chief Digital Officer / COO | Effort: S | Impact: High | Improves: H1 (Leadership)
Days 5–15
🔒 IP/Legal Lockdown (Critical Priority)
Description: Audit and document all AI training data rights; establish IP governance framework.
Actions:
  • Conduct rights audit: Which content can be used in AI training?
  • Create "clean data" policy: Only AI-approved content in prompts
  • Draft licensing framework: Who owns AI-generated derivatives?
  • Implement prompt audit tool: Flag confidential keywords before submission

Owner: Head of Legal + VP Marketing | Effort: L | Impact: Critical | Improves: A2 (IP/Legal)
Target: Move A2 from 1.83 → 2.5
Days 15–25
🛡️ Security & DLP Setup
Description: Deploy data loss prevention (DLP) and audit trail tools.
Actions:
  • Evaluate & select DLP tool (Nightfall, Forcepoint, or built-in)
  • Configure DLP policies: Prevent PII, financial data, IP in AI tools
  • Set up Humanize or equivalent for AI audit trails
  • Run pilot: Test DLP on 10 users; measure false positives

Owner: Head of IT Security | Effort: M | Impact: High | Improves: A5 (Security)
Target: Move A5 from 2.4 → 3.0
Days 20–30
👥 Launch: Pilot Cohort Upskilling (Talent & Sales)
Description: Begin 30-60-90 LD for two at-risk departments (Talent, Sales).
Actions:
  • Recruit 5 Talent + 5 Sales volunteers as cohort
  • Enroll in foundational AI literacy course
  • Assign 1:1 coaches (from Marketing or Operations)
  • Set weekly 1-hr peer learning circles

Owner: Head of People + Department Heads | Effort: M | Impact: Medium | Improves: H3 (Skills), H2 (Culture)
Target: Move Talent/Sales from 48/52 → 55+ by day 90
End-of-Phase-1 Check: A2 ≥ 2.5, A5 ≥ 3.0, LD cohorts enrolled and engaged, Steering Committee meeting cadence established.

Phase 2: NEXT (Days 31–60) – Build Capability & Systems

Days 31–40
🔧 Consolidate Tech Stack
Description: Migrate from ad-hoc tools to minimum viable secure stack.
Actions:
  • Set up Claude API + Humanize in production
  • Migrate 3–5 workflows from consumer ChatGPT → Claude API
  • Decommission redundant tools (Perplexity free, old Zapier accounts)
  • Create internal ChatOps: Slack bot for guardrailed Claude access

Owner: Head of Operations + IT | Effort: M | Impact: Medium | Improves: A3 (Tooling), A4 (Integration)
Days 35–50
📚 Expand LD: All-Hands Cohort Launch
Description: Scale pilot learnings to full organization.
Actions:
  • Onboard 4 new department cohorts (Marketing, PR, Production, Distribution)
  • Establish weekly 1-hr peer learning circles (all depts)
  • Launch internal knowledge base: 10+ AI playbooks per dept
  • Recognize early adopters: "GenAI champion" badges + small incentives

Owner: Chief People Officer + Coaches | Effort: M | Impact: High | Improves: H3 (Skills), H2 (Culture)
Days 45–60
βš™οΈ Workflow Risk Mitigation (Critical Workflows)
Description: Implement controls for 3 critical workflows (Content Gen, Talent Screening, Rights Clearance).
Actions:
  • Content Gen: Mandatory "clean data" approval before every prompt
  • Talent: Add human-in-loop review; audit AI bias; create appeal process
  • Rights: Digitize licensing matrix; tag "AI-approved" content
  • Monitor: Run audit reports every 5 days; escalate breaches

Owner: Department Heads + Steering Committee | Effort: M | Impact: High | Improves: A2, A5, H4, H5
Days 55–60
📊 Measurement & First ROI Report
Description: Publish first ROI report: time saved, quality improvements, risks mitigated.
Actions:
  • Survey: Teams estimate hours saved; quality improvements
  • Compile: 30 audit logs, 0 IP breaches, 5 workflows automated
  • Report: "AI Readiness: 60-Day Sprint Report" to exec sponsors
  • Celebrate: Share wins; recognize top performers

Owner: Head of Operations | Effort: S | Impact: High | Improves: H1 (Leadership alignment), A6 (Measurement)
End-of-Phase-2 Check: All 8 depts onboarded to LD; 60+ workflows audited; critical risks have controls in place; first ROI published; estimated time savings 8+ hrs/week.

Phase 3: LATER (Days 61–90) – Optimize & Sustain

Days 61–70
🎓 LD Graduation & Advanced Tracks
Description: First cohort graduates from foundational training; launch advanced tracks.
Actions:
  • Cohort 1 (Pilot) → Advanced: Multi-model orchestration, custom fine-tuning
  • Cohorts 2–4 → Foundation graduation
  • Hire / appoint "GenAI Ops Leads" (1 per dept) to own future roadmap
  • Create internal certification: "GFE AI Readiness Certified"

Owner: Chief People Officer | Effort: M | Impact: Medium | Improves: H3 (Skills), H4 (Collaboration)
Days 70–80
🚀 Innovation Sprints & New Workflows
Description: Identify and pilot 2–3 high-impact AI workflows in each department.
Actions:
  • Marketing: AI-powered audience segmentation in HubSpot
  • Production: AI-assisted color grading + DaVinci integration
  • Sales: Predictive lead scoring + call coaching AI
  • Talent: AI pre-interview assessments + skills gap analysis
  • Measure: Pilot success, scale winners

Owner: Dept Heads + GenAI Ops Leads | Effort: M | Impact: High | Improves: A4 (Integration), A6 (Measurement)
Days 80–90
📈 Readiness Audit & 90-Day Report
Description: Conduct final readiness assessment; publish comprehensive 90-day report.
Actions:
  • Re-audit: Pillar scores across all 8 departments (target: HAI 73+)
  • Measure: Total time saved, quality improvements, risk incidents
  • Document: Lessons learned, playbooks, templates, governance policies
  • Present: Final report to board + announce next 90-day roadmap

Owner: Chief Digital Officer + GFE Coach | Effort: S | Impact: High | Improves: All pillars (measurement)
Success Metrics:
• HAI: 68 → 73+ (target)
• A2 (IP/Legal): 1.83 → 2.8+
• A5 (Security): 2.4 → 3.2+
• H3 (Skills): 72 → 76+
• Zero IP breaches, 2+ critical workflows secured
• 40+ hrs/week total time savings org-wide
End-of-90-Day Target: HAI 73+, critical pillars ≥ 2.8, all depts trained, governance policies locked, 3–5 new workflows in production, full audit trail coverage, zero unmitigated risks.

Resource Allocation & Investment Summary

Budget Estimate (90 Days)
  • External GFE coaching/advisory: $35K
  • LD platform + courses: $12K
  • Security tools (DLP, audit): $18K
  • Tech stack (API, tools): $15K
  • Internal FTE (proj mgmt, ops): ~$40K (salaries)
  • Contingency (10%): $12K
Total: ~$132K
Expected ROI (First Year)
  • Time saved: 40 hrs/week × 50 weeks × $150/hr = $300K
  • Risk avoidance: IP breach prevention, talent litigation risk ≈ $500K (conservative)
  • Quality improvements: 5–10% faster time-to-market ≈ $200K (revenue impact)
  • Tech consolidation savings: $40K/year
  • Talent retention (reduced burnout): 5% lower turnover = $150K saved
Total ROI: ~$1.19M (9x return on investment)
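Making the ROI arithmetic explicit (benefits and cost in $K, from the figures above):

```python
benefits_k = {
    "time_saved": 300,          # 40 hrs/week x 50 weeks x $150/hr
    "risk_avoidance": 500,      # IP breach / litigation prevention (conservative)
    "quality_speed": 200,       # faster time-to-market revenue impact
    "tool_consolidation": 40,
    "talent_retention": 150,
}
cost_k = 132                                 # 90-day budget estimate
total_benefit_k = sum(benefits_k.values())   # 1190 -> ~$1.19M
roi_multiple = total_benefit_k / cost_k      # ~9x
```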

Scoring Configuration

Floor cap (default: 59): if any critical pillar scores below the threshold, the HAI is capped at this value regardless of the average.

Floor threshold (default: 2.0): a critical pillar score below this value triggers the floor rule.

Critical pillars:

  • A2: IP/Rights/Legal Readiness
  • A5: Security/Privacy/Leakage Controls
  • H5: Ethical Use & Talent Safeguards

Pillar Weights (for custom HAI calculation)

Default: All pillars weighted equally (1/12 each). Adjust to prioritize certain pillars.

Data Import / Export

Upload a CSV with columns: Department, Respondent, Pillar, Score (0-5), Timestamp. A template CSV is available for download.
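A minimal validator for that upload format, using only the standard library; it assumes the header column is literally `Score` (the "(0-5)" reads as the allowed range, not part of the column name):

```python
import csv
import io

REQUIRED_COLUMNS = ["Department", "Respondent", "Pillar", "Score", "Timestamp"]

def validate_upload(csv_text: str) -> list[dict]:
    """Parse an uploaded CSV, enforcing the expected columns and 0-5 scores."""
    reader = csv.DictReader(io.StringIO(csv_text))
    if reader.fieldnames != REQUIRED_COLUMNS:
        raise ValueError(f"CSV must have columns: {', '.join(REQUIRED_COLUMNS)}")
    rows = list(reader)
    for row in rows:
        score = float(row["Score"])
        if not 0 <= score <= 5:
            raise ValueError(f"Score out of range for {row['Department']}: {score}")
        row["Score"] = score  # store as a number for downstream averaging
    return rows
```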

Demo / Test Data

This dashboard is currently loaded with realistic synthetic data representing a typical Film/Media organization. All scores, workflows, and recommendations are contextual and actionable.

About This Dashboard

The Human Γ— AI Readiness Index (HAI) is a production-ready diagnostic tool designed for Film/Media organizations assessing AI adoption maturity.

Framework: 12 pillars (6 Human, 6 AI) scored 0-5, computed via an additive index with an optional floor rule to prevent masking critical risks. The Floor Rule is a key differentiator: it ensures that governance gaps (IP/Legal, Security, Talent Safeguards) cannot be hidden by operational strength.

Design Principles: Transparency (every score has a formula), Stability (incomplete data doesn't break the model), and Actionability (every finding links to a concrete 90-day action).

Use Case: Executives and department heads use this to identify readiness gaps, prioritize upskilling, manage workflow risks, and allocate AI investment strategically over quarters. Recur quarterly to track progress and adapt roadmaps.