How Dashboard Automator Works
System implementation map for ingestion, synthesis, persistence, and user-facing operations surfaces.
Metadata
- Type: Note
- Entity Type: Standard
- Status: Active
Links
Notes
Source Summary
Document Metadata
- title: How Dashboard Automator Works
- description: System overview of the ingestion pipeline, AI synthesis, data model, and UI surfaces
- status: evolving
- lastUpdated: "2026-02-13 07:22 ET (America/New_York)"
- owner: Engineering
Imported Context
How Dashboard Automator Works
This document explains how Dashboard Automator solves the portfolio-status problem at an implementation level: data flow, modules, persistence model, and user-facing surfaces.
If you want the conceptual strategy layer (problem, goals, and interpretation), see `DOCS/strategist-logic.md`.
Architecture (One Screen)
- Client: React + Vite single-page app (`client/`)
- Server: Express API + static frontend serving (`server/`)
- Database: PostgreSQL (Neon) accessed via Drizzle (`server/db.ts`, `shared/schema.ts`)
- Scheduler: in-process cron (every 6h) or external cron hitting `/api/sync/cron-trigger`
- AI: OpenAI `gpt-5-mini` for goal synthesis and portfolio analysis (`server/ai-synthesis.ts`)
Data Model (What Gets Stored)
Tables are defined in `shared/schema.ts`:
- `repos`: tracked GitHub repositories (with `included` flag and `lastSyncedAt`)
- `activities`: ingested GitHub activity events
- `goals`: AI-synthesized goals per repo (overwritten each synthesis run for that repo)
- `manual_logs`: user-entered notes/tasks/conversations (optionally linked to a repo)
- `snapshots`: JSON snapshots used by the dashboard and export
- `sync_runs`: sync execution history (status, counts, errors)
- `agent_runs`: AI call records (run type, inputs, outputs, tokens, latency, status)
- `agent_evaluations`: human quality evaluations for an `agent_run`
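To make the "goals are overwritten each synthesis run" behavior concrete, here is a small TypeScript sketch. The field names below are assumptions for illustration, not the actual `shared/schema.ts` columns, and the in-memory array stands in for the database table.

```typescript
// Hypothetical shapes loosely mirroring the repos and goals tables.
interface Repo {
  id: number;
  fullName: string;
  included: boolean;
  lastSyncedAt?: Date;
}

interface Goal {
  repoId: number;
  title: string;
  status: "todo" | "in_progress" | "done";
}

// "Overwritten each synthesis run": drop the repo's old goals, insert the new list.
// Goals belonging to other repos are untouched.
function replaceGoalsForRepo(all: Goal[], repoId: number, next: Goal[]): Goal[] {
  return [...all.filter((g) => g.repoId !== repoId), ...next];
}
```

In the real system this is a delete-then-insert against the `goals` table; the sketch only shows the invariant (per-repo replacement, cross-repo preservation).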
Pipeline Execution (End-to-End Flow)
The pipeline is orchestrated in server/pipeline.ts and typically runs on demand or on schedule.
1) Ingest GitHub activity
For each included repo:
- `server/github.ts:ingestRepo()` pulls commits, issues, PRs, and releases since `repos.lastSyncedAt` (or 30 days back on first run).
- Each event is upserted into `activities` using a stable `externalId`.
- `repos.lastSyncedAt` is updated to "now" at the end of ingestion.
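The windowing and idempotent upsert described above can be sketched as follows. This is a minimal in-memory model; `computeSince`, `upsertActivities`, and the event shape are illustrative names, not the real `server/github.ts` API.

```typescript
interface ActivityEvent {
  externalId: string; // stable ID from GitHub (e.g. a commit SHA or "issue-123")
  type: "commit" | "issue" | "pr" | "release";
  occurredAt: Date;
}

const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

// First run: look back 30 days; later runs: resume from repos.lastSyncedAt.
function computeSince(lastSyncedAt: Date | undefined, now: Date): Date {
  return lastSyncedAt ?? new Date(now.getTime() - THIRTY_DAYS_MS);
}

// Upsert keyed by externalId, so re-ingesting an overlapping window
// never duplicates rows. Returns the number of newly inserted events.
function upsertActivities(store: Map<string, ActivityEvent>, events: ActivityEvent[]): number {
  let inserted = 0;
  for (const ev of events) {
    if (!store.has(ev.externalId)) inserted++;
    store.set(ev.externalId, ev); // replacing keeps the latest version of the event
  }
  return inserted;
}
```

The key idea is that the stable `externalId` makes ingestion safe to retry: overlapping windows converge to the same `activities` rows.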
2) Synthesize per-repo goals (AI)
For each included repo (skipping repos that errored during ingestion):
- `server/ai-synthesis.ts:synthesizeGoalsForRepo()` builds an input context from:
  - up to 30 most recent `activities` for the repo
  - any `manual_logs` attached to the repo
  - repo metadata (description/language/stars/open issues)
- It calls OpenAI chat completions with `response_format: json_object`.
- It persists an `agent_runs` record (`runType=goal_synthesis`) with input summary, output JSON, tokens, and latency.
- It clears prior goals for the repo and inserts the new goal list into `goals`.
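The context-building step above might look roughly like this. All types and names here are assumptions for illustration, not the actual `ai-synthesis.ts` code; the only behavior taken from the source is the cap at the 30 most recent activities.

```typescript
interface Activity { title: string; occurredAt: Date }
interface ManualLog { note: string }
interface RepoMeta { description: string; language: string; stars: number; openIssues: number }

interface SynthesisContext {
  recentActivities: Activity[];
  manualLogs: ManualLog[];
  repo: RepoMeta;
}

// Assemble the prompt context: newest-first activities, capped at 30,
// plus manual logs and repo metadata.
function buildSynthesisContext(
  activities: Activity[],
  logs: ManualLog[],
  meta: RepoMeta,
): SynthesisContext {
  const recentActivities = [...activities]
    .sort((a, b) => b.occurredAt.getTime() - a.occurredAt.getTime())
    .slice(0, 30); // cap the model's context at the 30 most recent events
  return { recentActivities, manualLogs: logs, repo: meta };
}
```

Capping the activity list bounds prompt size (and token cost) regardless of how busy a repo has been.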
3) Analyze the whole portfolio (AI)
Once goals exist:
- `server/ai-synthesis.ts:analyzePortfolio()` composes a summary across included repos using:
  - 30-day activity counts per repo
  - goal counts per repo
  - repo metadata (open issues, stars, language)
- It calls OpenAI and persists an `agent_runs` record (`runType=portfolio_analysis`).
- It returns `{ focusScore, risks, recommendations, summary }`.
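Even with `response_format: json_object`, model output is worth validating before it is persisted or rendered. A minimal hand-rolled guard for the `{ focusScore, risks, recommendations, summary }` payload shape is sketched below; this is illustrative only, not the actual validation in `ai-synthesis.ts`.

```typescript
interface PortfolioAnalysis {
  focusScore: number;
  risks: string[];
  recommendations: string[];
  summary: string;
}

// Parse the raw model output and verify every field has the expected type.
// Returns null on malformed JSON or a wrong shape, so callers can record
// the failure on the agent_runs row instead of persisting garbage.
function parsePortfolioAnalysis(raw: string): PortfolioAnalysis | null {
  let data: unknown;
  try { data = JSON.parse(raw); } catch { return null; }
  const o = data as Record<string, unknown> | null;
  const isStrArray = (v: unknown): v is string[] =>
    Array.isArray(v) && v.every((s) => typeof s === "string");
  if (
    typeof o?.focusScore !== "number" ||
    !isStrArray(o?.risks) ||
    !isStrArray(o?.recommendations) ||
    typeof o?.summary !== "string"
  ) return null;
  return o as unknown as PortfolioAnalysis;
}
```

A schema library such as Zod would express the same check more declaratively; the hand-rolled version just keeps the sketch dependency-free.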
4) Generate and store a snapshot
- `server/ai-synthesis.ts:generateSnapshot()`:
  - computes KPIs
  - computes per-project health score/label heuristics
  - generates a 30-day activity timeline
  - includes a limited recent-activity feed
  - embeds the portfolio analysis output
- The resulting JSON is inserted into `snapshots`.
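One plausible way to build the 30-day activity timeline mentioned above is to bucket event timestamps by UTC day; the real `generateSnapshot()` may bucket differently, so treat this as a sketch.

```typescript
// Build a fixed-length, oldest-to-newest series of 30 daily buckets ending
// at `now`, counting how many events fall on each UTC calendar day.
function buildTimeline(eventDates: Date[], now: Date): { day: string; count: number }[] {
  const counts = new Map<string, number>();
  for (const d of eventDates) {
    const key = d.toISOString().slice(0, 10); // "YYYY-MM-DD" in UTC
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  const days: { day: string; count: number }[] = [];
  for (let i = 29; i >= 0; i--) {
    const key = new Date(now.getTime() - i * 86_400_000).toISOString().slice(0, 10);
    days.push({ day: key, count: counts.get(key) ?? 0 });
  }
  return days;
}
```

Emitting all 30 buckets (including zero-count days) keeps the dashboard chart dense and gap-free without client-side padding.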
5) Track progress and history
Pipeline visibility is split between:
- In-memory progress for live UI polling: `server/pipeline-progress.ts` (`GET /api/sync/progress`)
- Persistent run history: `sync_runs` (`GET /api/sync/history`, `GET /api/sync/last`)
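A minimal sketch of such an in-memory progress store follows: the pipeline mutates it step by step, and the progress endpoint reads a snapshot of it on each poll. Step names and the exact shape are assumptions; see `server/pipeline-progress.ts` for the real implementation.

```typescript
type StepStatus = "pending" | "running" | "done" | "error";

// In-memory only: progress resets on process restart, which is why the
// durable record of each run lives separately in sync_runs.
class PipelineProgress {
  private steps = new Map<string, StepStatus>();

  // Called at the start of a pipeline run with the ordered step list.
  start(steps: string[]): void {
    this.steps = new Map(steps.map((s) => [s, "pending" as StepStatus]));
  }

  // Called by the pipeline as each step changes state.
  set(step: string, status: StepStatus): void {
    this.steps.set(step, status);
  }

  // Read by the polling endpoint; Map preserves insertion order,
  // so steps come back in pipeline order.
  snapshot(): { step: string; status: StepStatus }[] {
    return [...this.steps].map(([step, status]) => ({ step, status }));
  }
}
```

Keeping live progress in memory avoids a database write per status change while the pipeline is running; only the final outcome needs to be durable.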
API Surfaces (User-Facing Contracts)
Routes are registered in `server/routes.ts` and are authenticated unless noted:
- Repos
  - `GET /api/repos`
  - `GET /api/repos/search?q=...`
  - `POST /api/repos/add` (body: `{ fullName }`)
  - `PATCH /api/repos/:id` (body: `{ included: boolean }`)
  - `DELETE /api/repos/:id`
  - `POST /api/repos/reset-all` (destructive: clears all DB data)
- Repo data
  - `GET /api/repos/:id/activities`
  - `GET /api/repos/:id/goals`
- Manual logs
  - `GET /api/manual-logs`
  - `POST /api/manual-logs`
- Snapshot
  - `GET /api/snapshot` (returns latest stored snapshot JSON)
  - `POST /api/snapshot/export` (generates a fresh snapshot and returns JSON)
- Sync
  - `POST /api/sync/trigger` (starts pipeline)
  - `POST /api/sync/cron-trigger` (unauthenticated, token-protected via header)
  - `GET /api/sync/progress`
  - `GET /api/sync/last`
  - `GET /api/sync/history`
- Agent observability
  - `GET /api/agent-runs`
  - `GET /api/agent-runs/stats`
  - `GET /api/agent-runs/:id`
  - `GET /api/agent-runs/:id/evaluations`
  - `POST /api/agent-evaluations`
UI Surfaces (Where The Solution Shows Up)
The primary user-facing pages are:
- Dashboard (`client/src/pages/dashboard.tsx`): KPIs, activity timeline, portfolio analysis, recent activity, project health cards.
- Goals Board (`client/src/pages/goals.tsx`): cross-repo goal list grouped by status.
- Sync (`client/src/pages/sync.tsx`): step-level pipeline progress and sync history.
- Project Detail (`client/src/pages/project-detail.tsx`): repo-focused goals with evidence and recent activities.
- Agent Evaluation (`client/src/pages/agent-eval.tsx`): inspect AI runs and record human evaluations.
- Settings (`client/src/pages/settings.tsx`): add/search repos and toggle inclusion.
Auth and Scheduling (Operational Behavior)
- Auth is app-managed session auth (Passport local strategy). Most routes require `isAuthenticated`.
- Scheduled sync can run:
  - In-process every 6 hours (default) via `node-cron`
  - Externally by POSTing `/api/sync/cron-trigger` with `CRON_TRIGGER_TOKEN` and setting `DISABLE_IN_PROCESS_CRON=true`
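The external trigger path is only as safe as its token check. A sketch of that gate is below; the header name `x-cron-token` and the exact semantics are assumptions, not the actual `routes.ts` code.

```typescript
// Illustrative token gate for an unauthenticated cron endpoint: the request
// must present the shared secret configured in CRON_TRIGGER_TOKEN.
function isCronRequestAuthorized(
  headers: Record<string, string | undefined>,
  expectedToken: string | undefined,
): boolean {
  if (!expectedToken) return false;          // no token configured: reject everything
  const presented = headers["x-cron-token"]; // assumed header name
  return presented === expectedToken;
}
```

In production, a constant-time comparison (e.g. Node's `crypto.timingSafeEqual`) is preferable to `===` to avoid leaking the token length or prefix through timing.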
Deployment checklists and environment variables are documented in:
- `DOCS/getting-started/`
- `DOCS/deployment/render-runbook.md`
Provenance
- Source file: `Dashboard-Automator/DOCS/how-it-works.md`
- Source URL: https://github.com/maggielerman/Dashboard-Automator/blob/main/DOCS/how-it-works.md