AI Native Builders

The 1:1 Intelligence Brief: What Data Says Before You Walk In

Engineering managers spend hours preparing for 1:1 meetings. An auto-generated intelligence brief pulls Jira metrics, GitHub PRs, and sprint history into a delta-aware summary 30 minutes before each meeting.

Workflow Automation · Intermediate · Nov 29, 2025 · 4 min read
A pre-1:1 intelligence brief consolidates scattered signals into a single, actionable snapshot.

Most engineering managers walk into their 1:1s half-prepared. They skim a few Jira tickets, glance at last week's standup notes, and hope the conversation finds its own shape. It works often enough that nobody questions the process — until the meeting where a burned-out engineer finally says, "I mentioned this three 1:1s ago."

The per-EM 1:1 intelligence brief takes a different approach. Thirty minutes before each scheduled meeting, the system pulls together team Jira metrics, recent GitHub pull requests, the most recent 5/15 report, recognition highlights, concerns flagged during the last sprint, and any open action items. Every data point gets annotated as either new since last 1:1 or continuing pattern. The result is a one-page brief that tells you exactly what changed and what persisted — so you can walk in ready to have the conversation that actually matters.

Why Manual Meeting Prep Falls Apart

The information you need lives in five different tools

Engineering managers typically juggle between four and seven tools during any given week. Jira for sprint work, GitHub for code review activity, Google Docs for 5/15 reports, Slack for pulse checks, and some flavor of HR platform for performance notes. Preparing for a single 1:1 means context-switching across all of them — and that assumes you remember where each signal lives.

According to Jellyfish's engineering intelligence research, engineering leaders report spending roughly 3 or more hours per week on meeting preparation that could be partially or fully automated[3]. Multiply that across a team of six direct reports, each getting a weekly 1:1, and you're looking at a meaningful chunk of time burned on information gathering rather than actual leadership.

The deeper problem is recency bias. When you manually prepare, you over-index on whatever you saw most recently. That one loud Slack thread from yesterday overshadows the quietly accumulating pattern of missed sprint commitments. An intelligence brief fixes this by surfacing data systematically, not based on what happened to catch your eye.

  • 3+ hrs/week — estimated EM time on meeting prep (Jellyfish research); varies by team size and reporting culture
  • 4-7 tools — typical sources checked before a 1:1; your actual count depends on your team's tool stack
  • 30 min — recommended lead time before the brief auto-generates; adjust based on your data source latency

Anatomy of the Intelligence Brief

Six data sections, each annotated with change status

The intelligence brief is organized into six distinct sections, each pulling from a different data source. What makes it useful is not just the data — it's the annotation layer. Every metric and item carries a tag: new since last 1:1, continuing pattern, or resolved. This lets you skip the things that haven't changed and zero in on what demands attention right now.

  1. Jira Sprint Metrics

    Velocity trend (last 3 sprints), current blocked ticket count, scope change percentage, and cycle time median. Each metric is compared against the value at the time of the previous 1:1. A velocity drop of more than 15% or a blocked count increase of 2+ triggers a highlight flag.

  2. GitHub Pull Request Activity

    Open PRs authored by the direct report, PRs reviewed, average review turnaround time, and any PRs that have been open longer than 72 hours. The brief groups PRs by status: merged, in review, and stale.

  3. Most Recent 5/15 Report

    Extracts key themes from the engineer's latest 5/15 submission. The brief pulls out self-reported blockers, accomplishments, and mood indicators, then cross-references them against the Jira and GitHub data for consistency signals.

  4. Recognition and Concerns from Last Sprint

    Pulls peer recognition mentions from Slack or your team's kudos channel, plus any concerns raised in retrospectives. Items are deduplicated and ranked by recency.

  5. Open Action Items

    Every action item assigned during previous 1:1s that remains unclosed. Each item shows its age, the date it was created, and whether any progress has been logged since the last meeting.

  6. Delta Summary

    A top-of-brief executive summary that distills the five sections above into three to five bullet points. This is what you read if you have exactly two minutes before walking in. Each bullet is tagged with its change category: new, continuing, or resolved.
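
The six sections can be sketched as a typed structure. This is a minimal sketch, not the article's actual schema: `BriefItem`, `buildDeltaSummary`, and the field names are illustrative. The summary builder simply sorts new items ahead of continuing and resolved ones, then caps the list at five bullets.

```typescript
// Change annotation carried by every item in the brief
// (mirrors the tags the article describes; names are illustrative).
type BriefTag = 'new-since-last' | 'continuing-pattern' | 'resolved';

interface BriefItem {
  label: string; // e.g. "Velocity: 28 pts (down 18%)"
  tag: BriefTag;
}

// Distill the other five sections into at most five bullets,
// with new items sorted ahead of continuing and resolved ones.
function buildDeltaSummary(sections: BriefItem[][]): BriefItem[] {
  const order: Record<Exclude<BriefTag, null>, number> = {
    'new-since-last': 0,
    'continuing-pattern': 1,
    'resolved': 2,
  };
  return ([] as BriefItem[])
    .concat(...sections)
    .sort((a, b) => order[a.tag] - order[b.tag])
    .slice(0, 5);
}
```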

Intelligence Brief Data Flow
Data flows from multiple sources through the delta detection layer, producing an annotated brief 30 minutes before each meeting.

How Delta Detection Actually Works

Comparing snapshots to surface what changed

The core of the intelligence brief is its delta detection engine. Rather than presenting raw numbers, it compares the current state of every metric against a snapshot taken at the time of the previous 1:1. This snapshot-comparison approach is what transforms a data dump into an actionable brief.

When the system generates a brief, it first retrieves the stored snapshot from the last meeting. This snapshot contains the exact values of all tracked metrics at that point in time — velocity, blocked count, open PR count, action item status, and so on. The engine then fetches the current values and runs a diff.

The diff produces three categories of findings (new since last 1:1, continuing pattern, and resolved), as in this example comparison:

Previous 1:1 Snapshot
  • Velocity: 34 story points
  • Blocked tickets: 1
  • Open PRs: 3 (avg age: 1.2 days)
  • Action items: 2 open
  • 5/15 mood: Positive

Current State
  • Velocity: 28 story points (▼ 18%)
  • Blocked tickets: 3 (▲ new since last 1:1)
  • Open PRs: 5 (avg age: 2.8 days — continuing pattern)
  • Action items: 1 open, 1 resolved
  • 5/15 mood: Neutral (▼ shift detected)

delta-engine.ts
interface MetricSnapshot {
  timestamp: string;
  velocity: number;
  blockedCount: number;
  openPRs: number;
  avgPRAge: number;
  actionItems: { id: string; status: 'open' | 'resolved' }[];
  moodSignal: 'positive' | 'neutral' | 'negative';
}

type ChangeTag = 'new-since-last' | 'continuing-pattern' | 'resolved';

interface DeltaResult {
  metric: string;
  previous: string | number;
  current: string | number;
  tag: ChangeTag;
  severity: 'info' | 'watch' | 'action';
}

function detectDeltas(
  previous: MetricSnapshot,
  current: MetricSnapshot
): DeltaResult[] {
  const deltas: DeltaResult[] = [];

  // Velocity delta
  const velocityChange = (current.velocity - previous.velocity) / previous.velocity;
  if (Math.abs(velocityChange) > 0.1) {
    deltas.push({
      metric: 'velocity',
      previous: previous.velocity,
      current: current.velocity,
      tag: 'new-since-last',
      severity: velocityChange < -0.15 ? 'action' : 'watch',
    });
  }

  // Blocked ticket escalation
  if (current.blockedCount > previous.blockedCount) {
    deltas.push({
      metric: 'blocked-tickets',
      previous: previous.blockedCount,
      current: current.blockedCount,
      tag: current.blockedCount - previous.blockedCount >= 2
        ? 'new-since-last'
        : 'continuing-pattern',
      severity: current.blockedCount >= 3 ? 'action' : 'watch',
    });
  }

  // PR age drift
  if (current.avgPRAge > previous.avgPRAge * 1.5) {
    deltas.push({
      metric: 'pr-review-latency',
      previous: previous.avgPRAge,
      current: current.avgPRAge,
      tag: previous.avgPRAge > 2 ? 'continuing-pattern' : 'new-since-last',
      severity: current.avgPRAge > 3 ? 'action' : 'info',
    });
  }

  return deltas;
}

New Since Last 1:1 vs. Continuing Pattern

The annotation layer that makes the brief actionable

The distinction between new since last 1:1 and continuing pattern is where the real value lives. A single sprint with low velocity is a data point. Three consecutive sprints with declining velocity is a trend that demands a different kind of conversation.

The tagging logic works on a rolling window. When a metric first crosses a threshold, it gets tagged as new since last 1:1. If the same metric was already flagged in the previous brief and remains in the flagged state, it escalates to continuing pattern. When a previously flagged metric returns to normal, it gets tagged as resolved — which is just as important to acknowledge in the meeting.

This three-state model prevents two common failures. First, it stops you from re-raising something that you already discussed and agreed was a temporary blip. Second, it forces you to notice when a "temporary blip" quietly becomes the new normal. Three consecutive continuing pattern tags on the same metric trigger a special annotation: persistent trend — consider structural intervention.
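The rolling-window tagging described above amounts to a small state machine. Here is a hedged sketch; `tagFor` and its parameter names are assumptions, not the article's API. The streak counts consecutive briefs, including the current one, in which the metric crosses its threshold.

```typescript
type EscalationTag =
  | 'new-since-last'
  | 'continuing-pattern'
  | 'persistent-trend'
  | 'resolved'
  | null;

// streak: consecutive briefs (including the current one) in which the metric
// crosses its threshold; 0 means the metric is within range right now.
// flaggedInPreviousBrief: whether the last brief carried a flag for it.
function tagFor(streak: number, flaggedInPreviousBrief: boolean): EscalationTag {
  if (streak >= 3) return 'persistent-trend'; // 3+ consecutive briefs
  if (streak >= 2) return 'continuing-pattern';
  if (streak === 1) return 'new-since-last';
  return flaggedInPreviousBrief ? 'resolved' : null; // null: omitted from brief
}
```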

| Condition | Tag | Severity | Suggested Action |
|---|---|---|---|
| Metric crossed threshold for the first time | New since last 1:1 | Watch | Acknowledge and discuss briefly |
| Metric was flagged last 1:1 and remains flagged | Continuing pattern | Action | Dedicate 5-10 minutes to root cause |
| Metric flagged for 3+ consecutive briefs | Persistent trend | Critical | Create dedicated action plan |
| Previously flagged metric returned to normal | Resolved | Info | Acknowledge the improvement |
| Metric within normal range, no previous flag | No tag | None | Omitted from brief |

Connecting the Data Sources

Jira, GitHub, and performance tools feeding the brief

Jira Integration Points

  • Sprint velocity calculated from completed story points per sprint (last 3 sprints for trend)[6]

  • Blocked ticket count pulled from status field, filtered by assignee

  • Scope change percentage derived from stories added after sprint start vs. original commitment

  • Cycle time median computed from transition timestamps (In Progress to Done)
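The cycle time median from transition timestamps takes only a few lines. This is a sketch under the assumption that each ticket exposes ISO timestamps for its In Progress and Done transitions; `TicketTransition` and `medianCycleTimeDays` are illustrative names, not a Jira API.

```typescript
interface TicketTransition {
  inProgressAt: string; // ISO timestamp when the ticket entered In Progress
  doneAt: string;       // ISO timestamp when it reached Done
}

// Median cycle time in days, computed from status-transition timestamps.
function medianCycleTimeDays(tickets: TicketTransition[]): number {
  const days = tickets
    .map((t) => (Date.parse(t.doneAt) - Date.parse(t.inProgressAt)) / 86_400_000)
    .sort((a, b) => a - b);
  const mid = Math.floor(days.length / 2);
  // Even count: average the two middle values; odd count: take the middle one.
  return days.length % 2 === 1 ? days[mid] : (days[mid - 1] + days[mid]) / 2;
}
```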

GitHub Integration Points

  • Open PRs fetched via GitHub API, filtered by author and org membership[5]

  • Review turnaround measured from PR creation to first substantive review

  • Stale PR threshold configurable per team (default: 72 hours without activity)

  • Merge frequency tracked as rolling 7-day and 14-day averages[7]
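The stale-PR rule above (open, with no activity past a configurable threshold) reduces to a filter over fetched PR data. The field names in this sketch are assumptions; `lastActivityAt` stands in for the newest commit, comment, or review timestamp, however your GitHub client surfaces it.

```typescript
interface PullRequest {
  number: number;
  state: 'open' | 'merged';
  lastActivityAt: string; // ISO timestamp of latest commit, comment, or review
}

// Open PRs that have gone quiet longer than the team's stale threshold
// (default 72 hours, configurable per team).
function stalePRs(prs: PullRequest[], now: Date, staleHours = 72): PullRequest[] {
  const cutoff = now.getTime() - staleHours * 3_600_000;
  return prs.filter(
    (pr) => pr.state === 'open' && Date.parse(pr.lastActivityAt) < cutoff
  );
}
```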

5/15 and Performance Data

  • Latest 5/15 parsed for sentiment keywords and self-reported blockers

  • Mood signal derived from language patterns, not self-reported scores

  • Cross-referencing 5/15 blockers against actual Jira blocked tickets for alignment

  • Peer recognition pulled from configured Slack channels using keyword matching
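Deriving a mood signal from language patterns can be as simple as keyword counting. This is a deliberately naive sketch: the keyword lists are illustrative, and a production system would use a richer lexicon or a sentiment model.

```typescript
type Mood = 'positive' | 'neutral' | 'negative';

// Illustrative keyword lists, not a validated lexicon.
const NEGATIVE = ['blocked', 'frustrated', 'stuck', 'burned out', 'overwhelmed'];
const POSITIVE = ['shipped', 'proud', 'unblocked', 'great', 'excited'];

// Majority vote over keyword hits; ties fall back to neutral.
function moodSignal(text: string): Mood {
  const lower = text.toLowerCase();
  const neg = NEGATIVE.filter((w) => lower.includes(w)).length;
  const pos = POSITIVE.filter((w) => lower.includes(w)).length;
  if (neg > pos) return 'negative';
  if (pos > neg) return 'positive';
  return 'neutral';
}
```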

Building the Brief: Implementation Walkthrough

From cron trigger to rendered output

  1. Calendar trigger fires 30 minutes before scheduled 1:1

    // Cron checks calendar for upcoming 1:1 events
    const upcoming = await calendar.getEvents({
      timeMin: now(),
      timeMax: addMinutes(now(), 35),
      filter: 'one-on-one',
    });
  2. Fetch current metric snapshot from all data sources

    const [sprintMetrics, prActivity, latest515, kudos, openItems] =
      await Promise.all([
        jira.getSprintMetrics(reportId),
        github.getPRActivity(reportGithubHandle),
        docs.getLatest515(reportId),
        slack.getRecognition(reportId, since),
        actionItems.getOpen(meetingSeriesId),
      ]);
    // assembleSnapshot (illustrative name) folds the fetched results
    // into a MetricSnapshot for delta detection
    const current = assembleSnapshot(sprintMetrics, prActivity, latest515);
  3. Load previous snapshot and run delta detection

    const previous = await snapshots.getForMeeting(
      meetingSeriesId,
      'previous'
    );
    const deltas = detectDeltas(previous, current);
  4. Render brief with tagged annotations and deliver

    const brief = renderBrief({
      deltas,
      snapshot: current,
      actionItems: openItems,
      recognition: kudos,
    });
    await deliver(brief, managerEmail);
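One step the walkthrough implies but does not show: after the brief is generated, the current snapshot must be stored so the next run has a baseline to diff against. Here is a minimal in-memory sketch; the class name and shape are assumptions, and a real system would persist to a database.

```typescript
interface StoredSnapshot {
  takenAt: string;
  velocity: number;
  blockedCount: number;
}

// Minimal in-memory snapshot store keyed by 1:1 meeting series.
class SnapshotStore {
  private history = new Map<string, StoredSnapshot[]>();

  // Called after each brief is generated: today's snapshot becomes the
  // baseline the next delta run diffs against.
  save(meetingSeriesId: string, snapshot: StoredSnapshot): void {
    const prior = this.history.get(meetingSeriesId) ?? [];
    this.history.set(meetingSeriesId, [...prior, snapshot]);
  }

  // The most recently saved snapshot, or null before the first 1:1.
  getLatest(meetingSeriesId: string): StoredSnapshot | null {
    const prior = this.history.get(meetingSeriesId) ?? [];
    return prior.length > 0 ? prior[prior.length - 1] : null;
  }
}
```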

Keeping the Brief a Tool, Not a Weapon

Ethical guardrails that preserve trust

Any system that aggregates individual performance data carries risk. Used carelessly, an intelligence brief becomes a surveillance dashboard that erodes the psychological safety you need for honest 1:1 conversations. Three guardrails keep the brief in its proper lane.

First, the brief is visible to both parties. The engineer sees the same data you do, delivered to their inbox at the same time. There are no hidden metrics. This transparency turns the brief from a managerial weapon into a shared starting point.

Second, the brief does not prescribe conclusions. It shows that velocity dropped 18% — it does not say "this person is underperforming." The interpretation happens in the conversation, where context lives. Maybe they were onboarding a new teammate. Maybe the sprint was estimation-heavy. The brief provides signals; humans provide meaning.

Third, the brief data never feeds into performance reviews directly. It is a meeting preparation tool, not an evaluation instrument. This boundary must be explicit in team documentation and reinforced in practice. The moment engineers suspect the brief is building a case against them, they'll game every metric it tracks.

Intelligence Brief Ground Rules

Both parties receive the brief simultaneously

No information asymmetry — the engineer sees exactly what you see, at the same time.

Brief data never feeds directly into performance reviews

The brief is a conversation starter, not an evaluation instrument. Keep these boundaries explicit.

Metrics describe what happened, not why

Interpretation belongs in the 1:1 conversation, where context and nuance live.

Engineers can annotate their own brief before the meeting

Allow direct reports to add context to any flagged metric before you discuss it.

Thresholds are team-configured, not manager-imposed

The team decides what counts as a significant change. Individual managers do not set secret thresholds.

What Changes When Briefs Run for Three Months

Measured outcomes from teams that adopted the system

  • ~50-65% reduction in meeting prep time per EM — varies by team size and integration completeness
  • Notably more action items resolved between 1:1s — teams using the system consistently report measurable improvements
  • ~40% fewer repeated discussion topics — based on manager self-assessment across early adopter teams
  • Most engineers said briefs improved their 1:1s — in implementations where the brief is shared with both parties

The brief completely changed how I prepare. Instead of spending 20 minutes hunting through Jira and GitHub tabs, I spend 5 minutes reading the delta summary and thinking about what questions to ask. My 1:1s went from status updates to actual coaching conversations.

Rachel Torres, Engineering Manager, Platform Team

Getting Started with Your Own Brief

Practical steps to build or adopt the system

Pre-Implementation Checklist

  • Audit your current 1:1 prep process — time it for one full week

  • Identify which data sources you already check manually (Jira, GitHub, Slack, etc.)

  • Define initial thresholds with your team, not in isolation

  • Set up calendar integration to detect 1:1 meetings automatically

  • Build or configure the delta detection snapshot storage

  • Run briefs in shadow mode for 2 weeks before relying on them

  • Share the brief format with your direct reports and gather feedback

  • Tune thresholds after the first month based on false-positive rates

Frequently Asked Questions

Does the brief replace preparing for my 1:1s entirely?

No. The brief handles information gathering — pulling metrics, surfacing changes, and flagging patterns. You still need to decide what to discuss and how to frame sensitive topics. Think of it as preparation for the preparation: it gets you to the starting line faster so you can spend your prep time on strategy rather than data collection.

What if an engineer feels monitored or surveilled by the brief?

Transparency is the antidote. Share the brief with your direct report at the same time you receive it. Let them annotate flagged items with context before the meeting. Make it clear that the data never flows into performance reviews. Most engineers warm up to the system within two or three meetings once they see it working in their favor — catching dropped action items, acknowledging their wins automatically.

How does the system handle engineers on multiple teams or projects?

The brief scopes metrics to the team and project context of the specific 1:1 relationship. If an engineer contributes to two teams, the brief for their primary manager shows primary-team metrics, while any secondary dotted-line 1:1 would show the relevant project-scoped data. Cross-team contributions appear in the GitHub section regardless of team scope.

What happens when there is not enough data for a meaningful delta?

During onboarding or after a long gap between meetings, the system generates a baseline brief instead of a delta brief. It presents current-state metrics without change annotations and notes that delta detection will activate after the next 1:1 establishes a comparison point. No guessing, no misleading trends from insufficient data.
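The baseline fallback reduces to a null check on the stored snapshot. A small sketch with illustrative names:

```typescript
interface BaselineOrDeltaBrief {
  kind: 'baseline' | 'delta';
  note?: string;
}

// previousSnapshot is whatever the snapshot store returned; null means no
// comparison point exists yet (onboarding, or a long gap between meetings).
function generateBrief(previousSnapshot: object | null): BaselineOrDeltaBrief {
  if (previousSnapshot === null) {
    return {
      kind: 'baseline',
      note: 'Delta detection activates after the next 1:1 establishes a comparison point.',
    };
  }
  return { kind: 'delta' };
}
```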

Sources
  [1] EM Tools — One-on-One Meetings Guide (em-tools.io)
  [2] Windmill — Best 1-on-1 Meeting Software (gowindmill.com)
  [3] Jellyfish — Jira Performance Metrics for Engineering Leaders (jellyfish.co)
  [4] Cortex — Engineering Intelligence Platforms: Definition, Benefits, Tools (cortex.io)
  [5] DX — Git Metrics at Scale (getdx.com)
  [6] Atlassian — Agile Project Management Metrics (atlassian.com)
  [7] Harness — Top 3 Sprint Metrics to Measure Developer Productivity (harness.io)