
AI Integration

HubHelper's AI layer translates raw security findings into actionable intelligence. Rather than handing you a list of issues, it explains patterns, scores overall risk, and generates prioritised recommendations in plain language.

Copilot SDK Status

The GitHub Copilot SDK (@github/copilot-sdk) is in technical preview. HubHelper ships a fully featured structured-analysis fallback that provides equivalent output without requiring an active Copilot session. When the SDK reaches general availability, HubHelper will automatically use it for richer natural-language responses.
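The decision between the two paths can be sketched as a guarded call with graceful degradation. The signatures below are illustrative, not HubHelper's actual code; in the real service the availability check comes from probing @github/copilot-sdk.

```typescript
// Sketch of the fallback decision (illustrative; the real check lives in CopilotService).
type Insights = { insights: string; risk_level: string; action_items: string[] };

async function analyzeWithAI(
  result: object,
  sdkAvailable: boolean, // real service: probed from @github/copilot-sdk
  copilotAnalysis: (r: object) => Promise<Insights>,
  fallbackAnalysis: (r: object) => Insights
): Promise<Insights> {
  if (sdkAvailable) {
    try {
      return await copilotAnalysis(result); // natural-language path
    } catch {
      // Any SDK failure degrades gracefully to the structured path.
      return fallbackAnalysis(result);
    }
  }
  return fallbackAnalysis(result); // structured fallback, same output shape
}
```

Because both paths return the same shape, callers never need to know which branch ran.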


Architecture

```
AnalysisResult (from SecurityAnalyzer)
│
├── CopilotService.analyzeWithAI()
│   ├── SDK available   → Copilot API (natural language)
│   └── SDK unavailable → fallbackAnalysis() (structured)
│
└── AIAnalyzer
    ├── generateInsights()        → summary string
    ├── analyzePatterns()         → patterns[], trends[], risk_assessment
    └── generateRecommendations() → string[]
```

Both CopilotService and AIAnalyzer operate on the same AnalysisResult type, so their output is consistent regardless of whether the Copilot SDK is available.
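The shared contract can be pictured as a pair of interfaces. The field names below are inferred from examples on this page and may differ from the real definitions in src/:

```typescript
// Inferred shape of the shared analysis types (illustrative; check src/ for the real definitions).
type IssueType =
  | "self-merge"
  | "security-pr"
  | "unreviewed-security-pr"
  | "disabled-actions"
  | "paused-workflow"
  | "disabled-workflow";

type Severity = "low" | "medium" | "high" | "critical";

interface SecurityIssue {
  type: IssueType;
  severity: Severity;
  repo: string;
  description: string;
}

interface AnalysisResult {
  organization: string;
  issues: SecurityIssue[];
  statistics: Record<string, number>;
}
```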


What the AI Detects

The AI layer analyzes patterns across all six issue types produced by SecurityAnalyzer:

| Issue Type | What the AI Looks For |
| --- | --- |
| self-merge | Frequency, affected repos, correlation with security PRs |
| security-pr | Severity distribution, merge review status |
| unreviewed-security-pr | Count and urgency; always flagged as critical |
| disabled-actions | Percentage of org with Actions off |
| paused-workflow | Workflows inactive >60 days (GitHub auto-pause) |
| disabled-workflow | Manually disabled workflows still in use |

Risk Level Scoring

The AI assigns one of four risk levels based on issue composition:

| Level | Trigger Condition | Colour |
| --- | --- | --- |
| critical | Any issue with severity === 'critical' (e.g. an unreviewed security PR) | 🔴 Red |
| high | More than 3 high-severity issues | 🟠 Orange |
| medium | More than 10 total issues | 🔵 Blue |
| low | 10 or fewer issues, no critical, at most 3 high-severity | 🟢 Green |
```typescript
// From CopilotService (src/services/copilot-service.ts)
if (criticalCount > 0) risk_level = 'critical';
else if (highCount > 3) risk_level = 'high';
else if (issues.length > 10) risk_level = 'medium';
else risk_level = 'low';
```
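For quick experimentation, the same thresholds can be lifted into a standalone helper. This is a sketch with a hypothetical name (scoreRisk); the real logic lives inside CopilotService:

```typescript
type Severity = "low" | "medium" | "high" | "critical";

// Standalone sketch of the scoring thresholds (scoreRisk is a hypothetical name).
function scoreRisk(severities: Severity[]): Severity {
  const criticalCount = severities.filter((s) => s === "critical").length;
  const highCount = severities.filter((s) => s === "high").length;
  if (criticalCount > 0) return "critical";
  if (highCount > 3) return "high";
  if (severities.length > 10) return "medium";
  return "low";
}

scoreRisk(["high", "high"]); // two high-severity issues: below both thresholds → "low"
```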

AI-Generated Insights

Each analysis run produces a structured insight summary. Here's an annotated example:

```
=== Security Analysis Insights ===

📊 Issue Detection Rate: 9.8% of PRs flagged ← overall health metric
⚠️ Self-Merge Rate: 6.5% (8/123 PRs) ← rate + raw counts
🚨 3 security PRs merged without external review ← critical pattern
⚙️ Actions disabled: 11.1% of repos (5/45) ← coverage gap
📊 Issues concentrated in 7 repositories (15%) ← hotspot identification
```

The insight text is generated by AIAnalyzer.generateStructuredInsights(), which calculates rates and identifies concentrations automatically.
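The arithmetic behind those lines is plain rate computation. A minimal sketch with hypothetical helper names (pct, selfMergeLine, hotspotLine); the real generator may round differently:

```typescript
// Sketch of the rate maths behind the insight lines (hypothetical helper names).
function pct(part: number, whole: number): string {
  return whole === 0 ? "0%" : ((part / whole) * 100).toFixed(1) + "%";
}

function selfMergeLine(selfMerges: number, totalPRs: number): string {
  return `Self-Merge Rate: ${pct(selfMerges, totalPRs)} (${selfMerges}/${totalPRs} PRs)`;
}

function hotspotLine(reposWithIssues: number, totalRepos: number): string {
  return `Issues concentrated in ${reposWithIssues} repositories (${pct(reposWithIssues, totalRepos)})`;
}
```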


Pattern Analysis

AIAnalyzer.analyzePatterns() returns an object with three fields: two string arrays (patterns, trends) and a risk_assessment string:

patterns

Factual observations about the current state:

  • "Self-merges detected across 4 repositories"
  • "12 security-related PRs identified"
  • "3 repositories have GitHub Actions disabled"

trends

Inferred behaviours and their likely causes:

  • "High frequency of self-merges indicates potential lack of code review culture"
  • "2 critical security issues require immediate attention"
  • "Consider enabling Actions for automated security scanning and CI/CD"

risk_assessment

A single human-readable sentence summarising overall posture:

  • "Critical risk - immediate action required"
  • "High risk - attention needed"
  • "Medium risk - improvements recommended"
  • "Low risk"
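The mapping from risk level to sentence is a simple lookup. A sketch (riskAssessment is a hypothetical name; the strings are the ones listed above):

```typescript
type RiskLevel = "low" | "medium" | "high" | "critical";

// Sketch of the risk_level → sentence lookup shown above.
const riskSentences: Record<RiskLevel, string> = {
  critical: "Critical risk - immediate action required",
  high: "High risk - attention needed",
  medium: "Medium risk - improvements recommended",
  low: "Low risk",
};

function riskAssessment(level: RiskLevel): string {
  return riskSentences[level];
}
```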

Recommendations

AIAnalyzer.generateRecommendations() produces a prioritised list of concrete steps, scoped to the issue types detected:

When self-merges are present:

  • 🔒 Enable branch protection rules requiring at least one approving review
  • 👥 Consider implementing a CODEOWNERS file for automatic reviewer assignment
  • 📋 Establish team guidelines prohibiting self-merges for production code

When unreviewed security PRs are detected:

  • 🛡️ Require mandatory security team review for security-related changes
  • 🔐 Implement CODEOWNERS for security-sensitive directories
  • 📊 Set up automated security scanning with CodeQL or similar tools

When GitHub Actions are disabled:

  • ⚙️ Enable GitHub Actions for automated CI/CD and security scanning
  • 🤖 Configure Dependabot for automated dependency updates
  • 🔍 Set up automated code quality and security checks

When critical issues exist (always added):

  • ⚠️ [URGENT] Address N critical security issues immediately
  • 📞 Consider engaging security team for incident response review
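The scoping can be sketched as membership checks over the detected issue types. This is illustrative: recommendationsFor is a hypothetical name, only one action per category is shown, and placing urgent items first is an assumption.

```typescript
// Sketch of how recommendations are scoped to detected issue types (hypothetical helper).
function recommendationsFor(types: Set<string>, criticalCount: number): string[] {
  const recs: string[] = [];
  if (types.has("self-merge")) {
    recs.push("🔒 Enable branch protection rules requiring at least one approving review");
  }
  if (types.has("unreviewed-security-pr")) {
    recs.push("🛡️ Require mandatory security team review for security-related changes");
  }
  if (types.has("disabled-actions")) {
    recs.push("⚙️ Enable GitHub Actions for automated CI/CD and security scanning");
  }
  if (criticalCount > 0) {
    // Urgent items placed first here for emphasis (ordering is illustrative).
    recs.unshift(`⚠️ [URGENT] Address ${criticalCount} critical security issues immediately`);
  }
  return recs;
}
```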

Issue Explanations

CopilotService.explainIssue() generates a natural-language explanation for any individual SecurityIssue. Examples:

Self-merge:

"This PR was merged by alice who was also the author. Self-merges bypass the code review process and can introduce security vulnerabilities."

Unreviewed security PR:

"Critical: This security-related PR ('Update authentication library') was merged by its author without external review. Security changes should always be reviewed by security-knowledgeable team members."

Paused workflow:

"The workflow 'nightly-scan' has been automatically paused due to repository inactivity. GitHub disables scheduled workflows after 60 days of no repository activity."
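Internally these explanations read as per-type templates. A sketch assuming hypothetical field names (author, workflow) on the issue object:

```typescript
// Sketch of per-type explanation templates (field names are assumptions).
interface IssueLike {
  type: string;
  author?: string;
  workflow?: string;
}

function explainIssue(issue: IssueLike): string {
  switch (issue.type) {
    case "self-merge":
      return (
        `This PR was merged by ${issue.author} who was also the author. ` +
        `Self-merges bypass the code review process and can introduce security vulnerabilities.`
      );
    case "paused-workflow":
      return (
        `The workflow '${issue.workflow}' has been automatically paused due to repository inactivity. ` +
        `GitHub disables scheduled workflows after 60 days of no repository activity.`
      );
    default:
      return `Issue of type '${issue.type}' detected.`;
  }
}
```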


Disabling AI Insights

If you want raw detection output without the AI layer, use the --no-ai flag:

```shell
npx @sdh100shaun/hubhelper analyze \
  --org acme-corp \
  --no-ai
```

This skips both CopilotService and AIAnalyzer, producing only the base SecurityAnalyzer output. Useful for:

  • Fast CI checks where insight generation latency matters
  • Piping raw JSON to your own analysis tooling
  • Environments without GitHub Copilot access
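For the piping use case, jq works well over the JSON export. A sketch; analysis.json is a placeholder path for wherever you saved the export:

```shell
# Sketch: count issues per type in an exported HubHelper report.
# 'analysis.json' is a placeholder path for your saved export.
jq -c '[.issues[].type] | group_by(.) | map({type: .[0], count: length})' analysis.json
```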

AI in JSON Output

When AI is enabled, the JSON export includes an aiInsights field:

```json
{
  "organization": "acme-corp",
  "issues": [...],
  "statistics": {...},
  "aiInsights": {
    "insights": "=== Security Analysis Insights ===\n...",
    "risk_level": "critical",
    "action_items": [
      "[URGENT] Address 3 critical security issues immediately",
      "Enable branch protection rules requiring at least one approving review",
      "..."
    ]
  }
}
```
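Downstream consumers can type the field from the example above. A sketch; the names are inferred from this page and the real exported types may differ:

```typescript
// Types inferred from the example export above (illustrative).
interface AIInsights {
  insights: string;
  risk_level: "low" | "medium" | "high" | "critical";
  action_items: string[];
}

interface ExportedReport {
  organization: string;
  issues: unknown[];
  statistics: Record<string, unknown>;
  aiInsights?: AIInsights; // absent when --no-ai is used
}

// Example consumer: pull out only the urgent action items.
function urgentItems(report: ExportedReport): string[] {
  return report.aiInsights?.action_items.filter((a) => a.includes("[URGENT]")) ?? [];
}
```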

Copilot SDK — Full Integration Roadmap

When the Copilot SDK reaches general availability, the full integration will:

  1. Create a Copilot session via CopilotClient
  2. Send a structured prompt containing the complete AnalysisResult as JSON
  3. Request key risks, risk level, actionable recommendations, and patterns/trends
  4. Return the Copilot response as the insights field
  5. Close the session cleanly

The fallback logic (fallbackAnalysis()) will remain as a graceful degradation path.

```typescript
// Future full implementation (src/services/copilot-service.ts)
import { CopilotClient } from '@github/copilot-sdk';

const client = new CopilotClient();
const session = await client.createSession();

const response = await session.send(`
Analyze these GitHub security findings and provide:
1. Key security risks
2. Risk level (low/medium/high/critical)
3. Actionable recommendations
4. Patterns and trends

${JSON.stringify(analysisResult, null, 2)}
`);

await session.close();
return parseAIResponse(response.message);
```

The structured fallback already mirrors this output format exactly, so existing consumers of aiInsights will see no breaking changes when the SDK goes GA.