Interview Question Generator: Technical Whitepaper

Version: 1.2.0
Status: In Production (with known deployment issue)
Last Updated: January 16, 2026


Executive Summary

The Interview Question Generator is a web-based tool that generates behavioral interview questions mapped to GrowthFlowEngineering's proprietary GFE Skill System. It enables hiring managers and recruiters to build structured interview guides based on specific competency tasks across Sales, Marketing, and Finance domains.

Key Metrics

  • 188 tasks covered across 3 domains
  • 346 interview questions with scoring rubrics
  • 5 skill levels (L0-L4: Apprentice to Partner)
  • API response time: <200ms average

1. Problem Statement

The Interview Quality Gap

Most organizations face three critical problems in technical/behavioral interviewing:

  1. Generic Questions: Interviewers ask the same vague questions ("Tell me about yourself") regardless of role
  2. No Calibration: Different interviewers assess candidates using different criteria
  3. Missing Rubrics: No standardized "what good looks like" guidance for scoring responses

Our Solution

The Interview Question Generator addresses these problems by:

  • Mapping questions to specific competency tasks (e.g., "S-230: CRM Hygiene & Deduplication")
  • Providing "what good looks like" criteria for each question
  • Including follow-up probes to dig deeper into candidate responses
  • Organizing by skill level so questions match the seniority being assessed

2. System Architecture

2.1 Technology Stack

| Component | Technology | Purpose |
| --- | --- | --- |
| Frontend | VitePress + Vanilla JS | Static site with client-side interactivity |
| Backend | Netlify Functions | Serverless API endpoints |
| Data Store | JavaScript object (embedded) | Question bank compiled into function |
| Source of Truth | TSV file | Master question bank for editing |

2.2 Data Flow

┌─────────────────────────────────────────────────────────────┐
│                    GFE-SkillSystem Repo                      │
│  specs/interview-questions/interview-questions.tsv          │
│  (Master question bank - 346 questions)                      │
└─────────────────────────────────────────────────────────────┘

                              │ generate-function.cjs

┌─────────────────────────────────────────────────────────────┐
│                growth-flow-engineering Repo                  │
│  netlify/functions/interview-questions.js                   │
│  (Compiled function with TASK_QUESTIONS object)             │
└─────────────────────────────────────────────────────────────┘

                              │ Netlify Deploy

┌─────────────────────────────────────────────────────────────┐
│                    Production API                            │
│  /.netlify/functions/interview-questions                    │
│  Actions: list, generate, all                               │
└─────────────────────────────────────────────────────────────┘

                              │ fetch()

┌─────────────────────────────────────────────────────────────┐
│                    Frontend Page                             │
│  /tools/interview-questions                                 │
│  VitePress page with DOM manipulation                       │
└─────────────────────────────────────────────────────────────┘
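
For orientation, the sketch below shows how the serverless handler might route the three supported actions. It is illustrative only: the deployed interview-questions.js is roughly 3,300 lines because it embeds the full TASK_QUESTIONS object, and the json() helper and the inline generate logic here are assumptions, not the shipped code.

javascript
// Illustrative routing sketch; not the shipped interview-questions.js.
// Assumes TASK_QUESTIONS and getAllTasks() are defined earlier in the file,
// as they are in the deployed function.
exports.handler = async (event) => {
  // GET requests carry the action as a query parameter, POST in the JSON body.
  const action = event.httpMethod === 'POST'
    ? JSON.parse(event.body || '{}').action
    : (event.queryStringParameters || {}).action;

  switch (action) {
    case 'list':
      return json({ success: true, action, tasks: getAllTasks() });
    case 'generate': {
      const { taskIds = [], questionsPerTask = 2 } = JSON.parse(event.body || '{}');
      const results = taskIds
        .filter((id) => TASK_QUESTIONS[id])
        .map((id) => ({
          taskId: id,
          taskTitle: TASK_QUESTIONS[id].title,
          questions: TASK_QUESTIONS[id].questions.slice(0, questionsPerTask)
        }));
      return json({ success: true, action, totalTasks: results.length, results });
    }
    case 'all':
      return json({ success: true, action, data: TASK_QUESTIONS });
    default:
      return json({ success: false, error: `Unknown action: ${action}` }, 400);
  }
};

// Hypothetical helper: wrap a payload in a Netlify/Lambda-style response.
function json(body, statusCode = 200) {
  return {
    statusCode,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body)
  };
}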

2.3 File Structure

GFE-SkillSystem/
└── specs/
    └── interview-questions/
        ├── interview-questions.tsv    # Master question bank (source of truth)
        ├── README.md                   # Documentation
        ├── QUESTION_IMPROVEMENT_GUIDE.md  # Quality framework
        ├── generate-function.cjs      # TSV → Netlify function generator
        └── improve-questions.cjs      # Batch quality improvement script

growth-flow-engineering/
├── netlify/
│   └── functions/
│       └── interview-questions.js     # Serverless API (3274 lines)
└── docs/
    └── en/
        └── tools/
            ├── index.md               # Tools landing page
            └── interview-questions.md # Frontend UI (667 lines)

3. API Reference

Base URL

https://growthflowengineering.xyz/.netlify/functions/interview-questions

3.1 List All Tasks

Returns all available tasks for building the selection UI.

Request:

bash
GET /.netlify/functions/interview-questions?action=list

Response:

json
{
  "success": true,
  "action": "list",
  "totalTasks": 188,
  "levels": {
    "0": { "name": "Apprentice", "focus": "Learning fundamentals..." },
    "1": { "name": "Practitioner", "focus": "Executing playbooks..." },
    "2": { "name": "Architect", "focus": "Designing systems..." },
    "3": { "name": "Strategist", "focus": "Setting direction..." },
    "4": { "name": "Partner", "focus": "Shaping organization..." }
  },
  "tasks": [
    {
      "id": "S-001",
      "taskId": "S-001",
      "title": "Qualify Inbound Leads",
      "taskTitle": "Qualify Inbound Leads",
      "domain": "sales",
      "level": 1,
      "level_name": "Practitioner",
      "levelName": "Practitioner",
      "question_count": 2,
      "questionCount": 2
    }
    // ... 187 more tasks
  ]
}
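
For illustration, a browser client might consume this endpoint as sketched below. The loadTasks() helper is ours, not the shipped frontend code; it normalizes the dual-format property names (see Section 5) before rendering.

javascript
// Illustrative client sketch (not the shipped frontend code).
const API = '/.netlify/functions/interview-questions';

async function loadTasks() {
  const res = await fetch(`${API}?action=list`);
  if (!res.ok) throw new Error(`List request failed: ${res.status}`);
  const data = await res.json();
  // Accept either snake_case or camelCase property names (see Section 5).
  return data.tasks.map((t) => ({
    id: t.id || t.taskId,
    title: t.title || t.taskTitle,
    domain: t.domain,
    level: t.level,
    levelName: t.level_name || t.levelName,
    questionCount: t.question_count || t.questionCount
  }));
}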

3.2 Generate Questions

Returns interview questions for selected tasks.

Request:

bash
POST /.netlify/functions/interview-questions
Content-Type: application/json

{
  "action": "generate",
  "taskIds": ["S-001", "S-230", "M-100"],
  "questionsPerTask": 2
}

Response:

json
{
  "success": true,
  "action": "generate",
  "totalTasks": 3,
  "totalQuestions": 6,
  "results": [
    {
      "taskId": "S-001",
      "taskTitle": "Qualify Inbound Leads",
      "domain": "sales",
      "level": 1,
      "levelName": "Practitioner",
      "questions": [
        {
          "question": "Describe your lead qualification framework...",
          "whatGoodLooksLike": "References BANT/MEDDIC/similar framework...",
          "probes": ["How do you handle borderline leads?", "What's your MQL to SQL conversion rate?"]
        }
      ]
    }
  ]
}
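
The corresponding client call might look like the sketch below; the generateGuide() helper is illustrative, not part of the shipped page.

javascript
// Illustrative client sketch (not the shipped frontend code).
async function generateGuide(taskIds, questionsPerTask = 2) {
  const res = await fetch('/.netlify/functions/interview-questions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ action: 'generate', taskIds, questionsPerTask })
  });
  if (!res.ok) throw new Error(`Generate request failed: ${res.status}`);
  return res.json();
}

// Example: build a guide for the three tasks from the request above.
// generateGuide(['S-001', 'S-230', 'M-100']).then(console.log);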

3.3 Get All Data

Returns the complete question bank (useful for exports and debugging).

Request:

bash
GET /.netlify/functions/interview-questions?action=all
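
One way to use this for exports is sketched below: a small Node script (assuming Node 18+ for the built-in fetch) that dumps the full question bank to a local JSON file. The exportQuestionBank() helper is illustrative.

javascript
// Illustrative export sketch; assumes Node 18+ (built-in fetch).
const fs = require('node:fs');

const API = 'https://growthflowengineering.xyz/.netlify/functions/interview-questions';

async function exportQuestionBank(outPath = 'question-bank.json') {
  const res = await fetch(`${API}?action=all`);
  if (!res.ok) throw new Error(`Export failed: ${res.status}`);
  fs.writeFileSync(outPath, JSON.stringify(await res.json(), null, 2));
  console.log(`Wrote ${outPath}`);
}

exportQuestionBank();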

4. Question Quality Framework

4.1 The Problem with Template Questions

Initially, we auto-generated questions for tasks without hand-crafted content. The template approach produced generic, useless questions:

BAD (Template):

"How do you set strategy for CRM hygiene & deduplication?" What good looks like: "Clear strategic thinking, balances multiple priorities, aligns with org goals"

This question is worthless because:

  • No specific scenario
  • No numbers or constraints
  • Generic rubric that could apply to anything

4.2 Quality Improvement Framework

We developed a 5-point framework for high-quality questions:

| Principle | Description | Example |
| --- | --- | --- |
| SPECIFICITY | Names real tools, metrics, scenarios | "Your CRM has 50,000 contacts but a 15% bounce rate..." |
| BEHAVIORAL ANCHOR | Uses STAR format | "Tell me about a time you found duplicate records causing..." |
| TENSION/TRADE-OFF | Forces judgment | "Sales wants Gong, marketing wants Chorus, finance says pick cheapest..." |
| MEASURABLE OUTPUT | Concrete assessment criteria | "Prioritizes by impact, uses waterfall enrichment, sets auto-suppression..." |
| LEVEL-APPROPRIATE | L0 ≠ L3 question complexity | Apprentice: "How do you log activities?" vs Strategist: "Design the hygiene strategy" |

4.3 Before/After Examples

S-230: CRM Hygiene & Deduplication

| Before | After |
| --- | --- |
| "How do you set strategy for CRM hygiene?" | "Your CRM has 50,000 contacts but a 15% email bounce rate. Walk me through how you would clean this database in the next 30 days." |
| Generic rubric | "Prioritizes by impact (high-value accounts first), uses waterfall enrichment (ZoomInfo → Clearbit → manual), sets bounce threshold for auto-suppression, creates ongoing hygiene cadence not one-time fix" |

F-132: Deal Profitability & Margin Analysis

| Before | After |
| --- | --- |
| "How do you approach deal profitability?" | "Sales is pushing to close a $500K deal at 15% discount plus free implementation. Walk me through your margin analysis framework." |
| Generic rubric | "Calculates net margin after all costs (COGS, implementation, CS, payment terms), models LTV vs CAC, assesses precedent risk, proposes alternative structures" |

5. Current Issue: Frontend Not Loading Tasks

5.1 Symptom

The frontend shows "Loading tasks..." indefinitely, even though the API returns 188 tasks correctly.

5.2 Root Cause Analysis

Timeline of changes:

  1. The original API returned task.id, task.title, task.level_name (snake_case)
  2. The regenerated API returned task.taskId, task.taskTitle, task.levelName (camelCase)
  3. The frontend still expected the old format, could not find the renamed properties, and failed silently

Debugging steps taken:

  1. ✅ Verified API returns 188 tasks with correct data
  2. ✅ Fixed API to return BOTH formats (dual properties: id AND taskId)
  3. ✅ Fixed frontend to accept either format: task.id || task.taskId
  4. ⏳ Waiting for Netlify to rebuild and deploy the VitePress site

5.3 The Fix (Applied)

API Fix (interview-questions.js):

javascript
function getAllTasks() {
  return Object.entries(TASK_QUESTIONS).map(([id, task]) => ({
    // Dual properties for backward/forward compatibility
    id: id,
    taskId: id,
    title: task.title,
    taskTitle: task.title,
    level_name: LEVELS[task.level]?.name || 'Unknown',
    levelName: LEVELS[task.level]?.name || 'Unknown',
    question_count: task.questions.length,
    questionCount: task.questions.length,
    domain: task.domain,
    level: task.level
  }));
}

Frontend Fix (interview-questions.md):

javascript
taskGrid.innerHTML = filtered.map(task => {
  // Accept either naming convention: snake_case from the original API shape,
  // camelCase from the regenerated one.
  const taskId = task.id || task.taskId;
  const taskTitle = task.title || task.taskTitle;
  const levelName = task.level_name || task.levelName;
  const questionCount = task.question_count || task.questionCount;
  // ... rest of render
}).join('');

5.4 Verification Steps

  1. API is working (confirmed via curl):

    bash
    curl "https://growthflowengineering.xyz/.netlify/functions/interview-questions?action=list" | grep totalTasks
    # Returns: "totalTasks":188
  2. Dual properties are present (confirmed):

    bash
    curl ... | grep '"id":'
    # Returns both "id" and "taskId" for each task
  3. Frontend build pending: Netlify must rebuild the VitePress site for the frontend fix to deploy.


6. TSV Data Format

6.1 Schema

| Column | Type | Description |
| --- | --- | --- |
| task_id | string | Unique identifier (e.g., "S-230") |
| task_title | string | Human-readable task name |
| domain | enum | "sales", "marketing", or "finance" |
| level | int | 0-4 (Apprentice to Partner) |
| question_number | int | 1 or 2 (multiple questions per task) |
| question | string | The interview question text |
| what_good_looks_like | string | Assessment criteria |
| probe_1 | string | First follow-up question |
| probe_2 | string | Second follow-up question |

6.2 Example Row

tsv
S-230	CRM Hygiene & Deduplication	sales	3	1	Your CRM has 50,000 contacts but a 15% email bounce rate...	Prioritizes by impact (high-value accounts first)...	How do you balance speed vs accuracy?	What downstream systems would be affected?
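
For illustration, the sketch below shows how rows with this schema could be folded into the TASK_QUESTIONS shape that the Netlify function embeds. The actual generate-function.cjs is not reproduced in this whitepaper; the column handling here is an assumption based on the schema above.

javascript
// Illustrative TSV parsing sketch; the real generate-function.cjs may differ.
const fs = require('node:fs');

const TASK_QUESTIONS = {};
const rows = fs
  .readFileSync('interview-questions.tsv', 'utf8')
  .trim()
  .split('\n')
  .slice(1); // skip the header row

for (const row of rows) {
  // Columns: task_id, task_title, domain, level, question_number,
  // question, what_good_looks_like, probe_1, probe_2
  const [taskId, title, domain, level, , question, wgll, probe1, probe2] = row.split('\t');
  TASK_QUESTIONS[taskId] = TASK_QUESTIONS[taskId] || { title, domain, level: Number(level), questions: [] };
  TASK_QUESTIONS[taskId].questions.push({
    question,
    whatGoodLooksLike: wgll,
    probes: [probe1, probe2].filter(Boolean)
  });
}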

7. Development Workflow

7.1 Adding/Editing Questions

  1. Edit the TSV (source of truth):

    bash
    code GFE-SkillSystem/specs/interview-questions/interview-questions.tsv
  2. Regenerate the function:

    bash
    cd GFE-SkillSystem/specs/interview-questions
    node generate-function.cjs
  3. Commit both repos:

    bash
    # GFE-SkillSystem (TSV changes)
    git add specs/interview-questions/
    git commit -m "feat: update interview questions for [task-ids]"
    git push
    
    # growth-flow-engineering (function changes)
    git add netlify/functions/interview-questions.js
    git commit -m "feat: regenerate interview questions from TSV"
    git push

7.2 Quality Improvement Batch

To improve multiple template questions at once:

  1. Add improved questions to improve-questions.cjs (a hypothetical entry shape is sketched after this list)
  2. Run the script:
    bash
    node improve-questions.cjs
  3. Regenerate and commit as above
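
The exact structure of improve-questions.cjs is not shown in this whitepaper; as a purely hypothetical illustration, an improvement entry keyed by task and question number might look like this (content taken from the S-230 example in Section 4.3):

javascript
// Hypothetical illustration only; the actual improve-questions.cjs format may differ.
// One improved entry per (task_id, question_number) pair, merged back into the TSV.
module.exports = {
  'S-230': {
    1: {
      question:
        'Your CRM has 50,000 contacts but a 15% email bounce rate. Walk me through how you would clean this database in the next 30 days.',
      whatGoodLooksLike:
        'Prioritizes by impact (high-value accounts first), uses waterfall enrichment, sets bounce threshold for auto-suppression, creates ongoing hygiene cadence not one-time fix',
      probes: [
        'How do you balance speed vs accuracy?',
        'What downstream systems would be affected?'
      ]
    }
  }
};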

8. Future Enhancements

8.1 Planned Features

| Feature | Priority | Status |
| --- | --- | --- |
| Export to PDF | High | Not started |
| Save interview guides | Medium | Not started |
| AI-assisted question generation | Medium | Not started |
| Candidate self-prep mode | Low | Not started |
| Integration with ATS | Low | Not started |

8.2 Question Bank Expansion

  • Current coverage: 188/188 tasks (100%)
  • Hand-crafted questions: ~50 tasks
  • Template questions: ~138 tasks (need improvement)
  • Target: 100% hand-crafted by Q2 2026

9. Conclusion

The Interview Question Generator transforms how GrowthFlowEngineering clients conduct competency-based interviews. By mapping questions to the GFE Skill System's 188 tasks, providing concrete assessment criteria, and organizing by skill level, we enable consistent, high-quality candidate evaluation.

Current Status

  • ✅ API fully functional (188 tasks, 346 questions)
  • ✅ Backend fixes deployed
  • ⏳ Frontend fix awaiting Netlify build completion
  • 📋 Ongoing: Improving the remaining template questions to strategy-consulting quality

Contact

For questions or contributions, contact: Support@growthflowengineering.xyz


This whitepaper is auto-updated as the Interview Question Generator evolves.