
Aurora Galactica Project Guide

A readable, current map of the project: what we are building, how the code and tooling fit together, and what still matters before the polished four-stage 1.0 launch.

Current goal: Ship a polished four-stage 1.0 slice with native replays, a player-ready in-game guide, and a corrected production publish path that matches beta.
Current Build: 1.0.2+build.290.sha.831a2c6
Release Line: 1.0.2
Updated: April 2, 2026
Latest Note: Movement reset hotfix

Project Snapshot

The high-level state of the game and release effort right now.

Current Product Target

The current target is a consciously scoped 1.0: four playable stages, one reliable capture/rescue loop, clean high-score/results flow, a polished cabinet-like shell, and public hosting that is trustworthy enough to share.

Where The Game Is Strong

Manual play suggests the core slice is healthy enough to launch once the final trust items land cleanly: capture/rescue is guarded end-to-end, the shell/control rail is much stronger, native replay is in the game, and the remaining release work is mostly production correctness plus final presentation confidence.

Main Remaining Gameplay Blocker

The center of gravity has shifted from broad gameplay tuning to visible shell, feedback, and trust polish. Stage 2 and Stage 4 tuning are still important, but they are now intentionally treated as likely 1.1 work unless new regressions appear.

How Decisions Are Made

Reference material, harness evidence, and real play all matter. The current rule is to prefer original, manual-backed evidence when a fidelity question exists, then use harness runs and player captures to tune how the game feels in practice without losing architectural clarity.

Release Scope And Status

What 1.0 means now that the project is prioritizing shell polish, public trust, and player-facing finish work.

  • 1.0 is the four-stage slice, not a full recreation of all Galaga content.
  • Stage 1 through Stage 4 manual play is healthy enough that visual finish and trust work now outrank experimental balance tuning.
  • Stage 2 and Stage 4 fairness work is now intentionally treated as likely 1.1 tuning unless fresh regressions push it back into the launch path.
  • The active 1.0 work is the cabinet shell, icon/control surface, score/guide surfaces, and a short player-focused guide.
  • Capture/rescue now has meaningful automation coverage, but the remaining launch question is mostly whether the live presentation reads clearly enough.
  • The special boss-with-two-escorts bonus attack still needs one more readability pass before 1.0 is called done.
  • The production-vs-beta operational split, the public production deploy path, and the final release-readiness pass remain the real launch-significant items.

Launch Blocking View

Compact release-triage view for the scoped four-stage 1.0 launch after the latest reprioritization toward shell polish, trust, and player-facing finish work.

| Bucket | Issue | Owner | Status | Evidence | Next | Phase |
|---|---|---|---|---|---|---|
| | #61 | Shared | Watching | Hosted Stage 3->4 can still hit the empty-playfield edge case | Keep telemetry live and treat the next reproducible bad run as launch-blocking | Phase 1 |
| Must | #76 | Shared | Implemented on dev | Non-production lanes now use a production-score read-only mirror with local-only score saves by default | Refresh beta, confirm the read-only lane messaging and blocked submit path, then close if it holds | Phase 4 |
| Must | #85 | Shared | Open | Final release-readiness still needs security review, code/docs consistency, and a short player guide | Schedule the final release-readiness cycle once the shell/polish list is short | Phase 4 |
| Must | #86 / #103 / #105 | Codex | Active | The shell/control surface now owns guide, controls, scores, account, bug report, mute, pause, and settings | Finish and verify the compact icon rail, in-game help surfaces, and settings pause/resume behavior | Phase 3 |
| Must | #91 / #48 / #44 | Codex | Active | Top HUD, frame/bezel, and stage indicator now form one display-shell system | Tighten the cabinet-like display language and verify it across common desktop sizes | Phase 2 |
| Must | #82 / #49 / #60 | Codex | Queued | Wait-mode and score-view surfaces still need clearer, less fragile presentation | Rework score-panel layout and make scoreboard state changes more explicit | Phase 2 |
| Must | #74 / #47 / #38 / #45 | Codex | Open | Three-ship squadron readability and combat-feedback finish work still visibly affect perceived quality | Tighten squadron geometry and verify ship-loss / boss-hit feedback in beta | Phase 2 |
| Must | #87 / #106 | Codex | Open | Wait-mode/demo visual stability and between-level message presentation still read as unfinished | Stabilize attract presentation and standardize message display | Phase 2 |
| Must | #40 / #80 / #88 | Codex | Beta review | Capture/carry presentation is much healthier, but the remaining visible confidence pass still matters | Keep one more live confirmation while closing the capture presentation cluster | Phase 3 |
| Should | #71 / #78 / #77 / #73 | Codex | Low risk | Mute landed cleanly and the challenge/capture mechanic checks are mostly in confirmation territory now | Keep the mechanic checks green and close them as live confidence comes in | Phase 3 |
| Post-1.0 | #18 / #32 / #9 | Shared | Deferred | Stage 2 and Stage 4 tuning plus broader challenge-stage fidelity are now intentionally treated as 1.1 or later work | Revisit as an experimental balance/reference pass after 1.0 ships | Post-1.0 |

Architecture Map

The main files and what they own.

Game Shell

`src/index.template.html` and `src/styles.css` define the page shell, settings UI, icon rail, in-game guide/controls overlays, layout, and in-game presentation. The served local development build is generated into `dist/dev/index.html` during build.

Boot / State / Metadata

`src/js/00-boot.js` owns constants, build metadata, score/high-score state, audio, input handling, logging, and the UI state that surrounds the game loop.

Gameplay Systems

`src/js/10-gameplay.js` owns stage spawning, enemy movement, scoring, dive logic, capture/rescue, carried-fighter interactions, ship loss, and the main simulation loop.

Rendering

`src/js/20-render.js` owns sprite drawing, overlays, stage banners, HUD rendering, and the visual language that explains what is happening in play.

Harness Hooks

`src/js/90-harness.js` exposes deterministic setup helpers used by local Chrome harness scenarios so we can reproduce challenge, rescue, boss, and Stage 4 cases without depending only on random live play.

Build / Public Sync

`tools/build/build-index.js` assembles the local dev build into `dist/dev/`, including the release dashboard, project guide, and player guide. `tools/build/promote-beta.js` and `tools/build/promote-production.js` snapshot that output into `dist/beta/` and `dist/production/`. `tools/build/check-publish-ready.js` verifies a lane is publishable from the current `HEAD`. `tools/build/publish-lane.js` publishes generated surfaces and the Pages workflow contract into `Aurora-Galactica`, where Pages should package committed artifacts rather than rebuild stale source. `tools/dev/local-resume.js` brings the local game and viewer back up together after a machine handoff.
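Taken together, the lane flow above amounts to roughly this command sequence (the script names are the ones documented in this guide; treat it as the intended order, not a blind one-shot script):

```shell
# Sketch of the build -> beta -> production lane flow (run interactively, step by step)
npm run build                 # assemble dist/dev/ from src/
npm run promote:beta          # snapshot dist/dev/ into dist/beta/
npm run publish:check:beta    # verify the beta lane is publishable from the current HEAD
npm run publish:beta          # publish the beta lane into Aurora-Galactica
npm run approve:beta          # mark the hosted beta as the production candidate
npm run publish:production    # promote the approved beta and publish the root lane
```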

Log Viewer

`tools/log-viewer/` serves the local artifact review app so repaired session videos, synced event streams, Codex prompts, and issue drafting stay in one place while debugging gameplay. It reads the recursive `harness-artifacts/` tree and expects each reviewable run folder to keep a `summary.json` beside its session log and repaired review video.

Harness Coverage

What is currently measured automatically and why it matters.

  • Opening and pacing scenarios: `stage1-opening`, `stage2-opening`, `stage1-descent`.
  • Challenge fidelity scenario: `stage3-challenge`, including hit rate and upper-band dwell time.
  • Stage 4 guardrails: `stage4-five-ships` and `stage4-survival`.
  • Capture/rescue coverage: `rescue-dual`, `capture-rescue-dual`, `second-capture-current`, `repeat-capture-stage`, `carried-fighter-standby`, and `carried-fighter-attacking`.
  • Boss clarity coverage: `boss-first-hit`.
  • Special arcade-moment coverage: `stage4-squadron-bonus` and `stage12-variety`.
  • Run analysis now includes challenge, rescue, carried-fighter, later-stage, audio, and measured squadron-spacing metrics so regressions are easier to spot.
  • Artifact review coverage now includes repaired `.review.webm` playback, quality checks, clip extraction, synced event browsing, and direct issue drafting through the local log viewer.

Reference Baseline

The evidence sources the project treats as authoritative or useful.

Primary Rule Sources

The 1981 Namco manual and curated original gameplay footage are the first stop for rule questions, scoring behavior, challenge structure, capture/rescue behavior, and visible arcade moments.

Secondary Sources

Walkthroughs and written references are used carefully for later-stage breadth, progression feel, and confirming that the game experience expands beyond the first few stages, but they do not override primary sources on rules.

Durable Analyses

The project already keeps focused notes for the first challenge stage, Stage 4 fairness, manual observations, and capture/debug cases so we stop rediscovering the same lessons from scratch.

Working Workflow

How changes are expected to move from idea to confidence.

  • Edit source files in `src/` and rebuild with `npm run build`.
  • After switching machines or refreshing a workstation, use `npm run local:resume` so the local game and the log viewer come back up together before resuming debugging.
  • Use harness scenarios or imported real-play sessions to validate behavior before treating a tuning pass as complete.
  • Use `npm run log-viewer` when a video-backed review loop is faster than reading raw JSON; the local viewer keeps repaired video, events, clips, and issue drafting aligned, but it only sees runs that live inside the expected `harness-artifacts/` folder structure.
  • Push `main` only when the maintained source, docs, and release metadata are current and the repo is clean.
  • Use the release dashboard for milestone-level status, release notes for player-visible changes, and the public status export for the separate public project page.
  • When a behavior question is unclear, preserve the lesson in a durable note under `reference-artifacts/` or `reference-analysis/` instead of relying on chat history.
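A typical local iteration through that workflow, using only commands that appear in this guide, looks roughly like:

```shell
# Typical dev loop sketch; swap the scenario for whichever system the change touches
npm run build                                   # rebuild dist/dev/ from src/
npm run local:resume                            # bring the game and log viewer back up
npm run harness -- --scenario stage3-challenge  # deterministic check for the touched system
npm run harness:check:video-artifact            # confirm the recorded artifacts are trustworthy
npm run log-viewer                              # review the run before calling the pass complete
```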

Documentation Index

The best entry points for different kinds of work.

Log Viewer

Local artifact review app. Start it with `npm run log-viewer` to inspect repaired videos, synced events, and draft issues. Runs must live under `harness-artifacts/` with a `summary.json` and neighboring session/video files.

Player Guide

Player-facing manual used by the in-game guide icon. This is the right place for controls, HUD reading, survival tips, and capture/rescue explanations.

README

Quick start, local run, harness commands, and repo-level overview.

Plan

Current working plan, known problems, and active priorities.

Release Readiness Review

Explicit final-release review for 1.0 covering security posture, docs/public-surface consistency, lane discipline, and accepted launch risks.

Architecture

Short system layout and how build, deploy, harness, and reference flows fit together.

Source Map

Where specific gameplay systems live in code.

External Services

Current inventory of runtime, hosting, feedback, and release services, plus what stays browser-local.

Reference Baseline

How fidelity questions are grounded in durable source material and issues.

Contributing

Collaborator onboarding and workflow expectations.

Home Machine Setup

How to clone the dev repo on a second machine, stay in sync cleanly, and publish from either workstation.

Home Codex Prompt

Maintained prompt text for starting a home-machine Codex session with the correct repo roles, sync flow, and local service startup commands.

Release Dashboard

Visual timeline of completed, in-progress, and upcoming release steps.

Update Cadence

What updates automatically and what should be intentionally maintained.

  • Build identity updates on every build and appears in the game, settings, public pages, and generated status surfaces.
  • This project guide is regenerated on every build from maintained source content so the hosted page stays aligned with the current repository state.
  • The player guide is also regenerated on every build as a separate shipped page so the in-game guide can stay player-focused without mixing in developer workflow material.
  • The release dashboard page is regenerated on every build, but its milestone data only changes when `release-dashboard.json` is updated after a meaningful step.
  • Release notes should change only when there is a player-visible or milestone-relevant update worth calling out.
  • Public project status should update on pushes to `main` through the sync workflow, using the public status interface rather than direct homepage edits.

Likely Next Polish Steps

The most likely next moves based on current project state.

  • Finish the shell/control-surface cluster: score-panel clarity, wait-mode layout, and remaining message polish.
  • Do one more live-play polish pass on the special boss-and-two-escorts bonus attack now that spacing is tighter.
  • Land the short player-focused guide and final release-readiness pass so the game can ship with cleaner public-facing support surfaces.
  • Protect the current launch slice and move broader Stage 2/Stage 4 balancing and challenge variation work into the post-1.0 lane.

README Source

Quick-start, run/build commands, harness commands, and the repo-level overview are pulled directly from the maintained README.

Generated from README.md during build.

Classic fixed-screen browser shooter with keyboard controls, capture-and-rescue mechanics, multi-stage progression, and arcade-style tuning.

Current shipping target:

  • a polished four-stage slice:
    • Stage 1
    • Stage 2
    • Stage 3 (challenge stage)
    • Stage 4

Expansion beyond Stage 4 is currently treated as post-1.0 work unless it directly supports polishing this slice.

Release dashboard:

  • live page:
    • https://sgwoods.github.io/Aurora-Galactica/release-dashboard.html
  • source data:
    • /Users/stevenwoods/Documents/Codex-Test1/release-dashboard.json

Project guide:

  • live page:
    • https://sgwoods.github.io/Aurora-Galactica/project-guide.html
  • source data:
    • /Users/stevenwoods/Documents/Codex-Test1/project-guide.json
    • plus the maintained source docs it renders from:
      • /Users/stevenwoods/Documents/Codex-Test1/README.md
      • /Users/stevenwoods/Documents/Codex-Test1/PLAN.md
      • /Users/stevenwoods/Documents/Codex-Test1/PRODUCT_ROADMAP.md
      • /Users/stevenwoods/Documents/Codex-Test1/ARCHITECTURE.md
      • /Users/stevenwoods/Documents/Codex-Test1/SOURCE_MAP.md
      • /Users/stevenwoods/Documents/Codex-Test1/EXTERNAL_SERVICES.md
      • /Users/stevenwoods/Documents/Codex-Test1/REFERENCE_BASELINE.md
      • /Users/stevenwoods/Documents/Codex-Test1/CONTRIBUTING.md
      • /Users/stevenwoods/Documents/Codex-Test1/RELEASE_POLICY.md

Player guide:

  • live page:
    • https://sgwoods.github.io/Aurora-Galactica/player-guide.html
  • source data:
    • /Users/stevenwoods/Documents/Codex-Test1/player-guide.json
  • generated local dev page:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/dev/player-guide.html

Repository roles:

  • development repo:
    • https://github.com/sgwoods/Codex-Test1
  • public release repo:
    • https://github.com/sgwoods/Aurora-Galactica
  • external runtime and tooling services:
    • /Users/stevenwoods/Documents/Codex-Test1/EXTERNAL_SERVICES.md

Live

After GitHub Pages deploys, play at:

  • production:
    • https://sgwoods.github.io/Aurora-Galactica/
  • beta:
    • https://sgwoods.github.io/Aurora-Galactica/beta/

The root Aurora build is the official public production lane, even while the product is still prerelease in SemVer terms. The /beta/ lane is a manually promoted public checkpoint used for less-frequent milestone playtesting. Day-to-day engineering work continues in Codex-Test1 as the pre-production development line.

Current score/data policy:

  • production:
    • live shared leaderboard reads and writes
    • pilot account actions enabled
  • beta and local pre-production:
    • production leaderboard reads only
    • score submission disabled by default
    • pilot account/profile writes disabled unless a configured non-production test pilot is signed in
    • local device scores still save normally

Optional non-production test pilot configuration:

  • set TEST_ACCOUNT_EMAILS to enable one or more specific beta/dev pilot accounts for auth and write-flow testing
  • set TEST_ACCOUNT_USER_IDS to exclude those pilots' scores from shared leaderboard views
  • the local-only example file is:
    • /Users/stevenwoods/Documents/Codex-Test1/.env.local.example
  • when one of those configured test pilots is signed in on beta/dev:
    • My Scores becomes available
    • score submission is enabled for that pilot only
    • the Developer Tools panel can reset that pilot's remote scores
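As a hedged sketch, a local-only env file enabling one test pilot might look like this (the variable names come from this guide; the email and ID values are placeholders, so copy real ones from the maintained example file):

```shell
# Hypothetical .env.local values; keep this file local-only and untracked
TEST_ACCOUNT_EMAILS=beta-pilot@example.com                   # enables this pilot for auth/write testing
TEST_ACCOUNT_USER_IDS=00000000-0000-0000-0000-000000000000   # hides that pilot's scores from shared views
```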

Screenshot

Gameplay Screenshot

Run Locally (macOS / Chrome)

  1. Open Terminal in this folder:
   cd /Users/stevenwoods/Documents/Codex-Test1
  2. Build the current local dev output:
   npm run build
  3. Start the local game and viewer services:
   npm run local:resume
  4. Open:
  • http://localhost:8000
  • http://127.0.0.1:4311/

If you only want the game server, the lower-level command is:

python3 -m http.server 8000 --directory dist/dev

To stop the locally tracked game and viewer services cleanly:

npm run local:stop

Controls

  • Left/Right or A/D: Move
  • Ctrl: Left-handed cabinet-style move left on the web
  • Command: Left-handed cabinet-style move right on the web
  • Space: Fire (arcade-style shot cap)
  • P or pause icon: Pause
  • F: Fullscreen
  • U: Ultra scale toggle
  • Enter: Start / Restart
  • F1 or ?: Open in-game feedback form
  • featured pilot icon: Open the pilot identity/profile card
  • icon: Open the player guide inside the game
  • 🕹 icon: Open the controls reference inside the game
  • 🎞 icon: Watch recent local replays saved on this device
  • Export Log button: Download the current gameplay session as JSON

What Is Implemented

  • Fixed arcade playfield with integer scaling and fullscreen letterboxing
  • Stage progression with challenge stages
  • Stage 1 scripted opening timing for consistency
  • Boss capture beam, ship capture, rescue, and dual-fighter fire mode
  • In-game pilot profile card with sign-in, sign-out, create-account, and reset-password flows
  • Browser-native pilot replay viewer and in-frame popup surfaces for pilot/help/scores/feedback/settings
  • Enemy dive behavior and tuned missile pacing/spread
  • Pixel-art sprite rendering and starfield
  • Synthesized arcade-style SFX
  • Local high score persistence via browser storage

Development

  • Editable source files live in:
    • src/index.template.html
    • src/styles.css
    • src/js/00-boot.js
    • src/js/10-gameplay.js
    • src/js/20-render.js
    • src/js/90-harness.js
  • Source orientation guide:
    • /Users/stevenwoods/Documents/Codex-Test1/SOURCE_MAP.md
  • Contributor guide:
    • /Users/stevenwoods/Documents/Codex-Test1/CONTRIBUTING.md
  • Home machine setup:
    • /Users/stevenwoods/Documents/Codex-Test1/HOME_MACHINE_SETUP.md
  • Home Codex prompt:
    • /Users/stevenwoods/Documents/Codex-Test1/HOME_CODEX_PROMPT.md
  • Repo-managed Codex skill source:
    • /Users/stevenwoods/Documents/Codex-Test1/codex-skills/aurora-dev-refresh/SKILL.md
  • Architecture overview:
    • /Users/stevenwoods/Documents/Codex-Test1/ARCHITECTURE.md
  • Reference baseline:
    • /Users/stevenwoods/Documents/Codex-Test1/REFERENCE_BASELINE.md
  • Local dev artifact:
    • dist/dev/index.html
  • Build the served file from source with:
  npm run build
  • Build outputs are generated into:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/dev/
    • /Users/stevenwoods/Documents/Codex-Test1/dist/beta/
    • /Users/stevenwoods/Documents/Codex-Test1/dist/production/
  • Do not hand-edit generated files under dist/; treat them as disposable build output.
  • Start the local artifact review viewer with:
  npm run log-viewer

Then open:

  • http://127.0.0.1:4311/

The viewer loads repaired run videos, keeps the event stream aligned beside playback, supports paused zoom/pan and region clipping, and can draft Codex context or GitHub issues from the selected moment. It expects run artifacts to live under:

  • /Users/stevenwoods/Documents/Codex-Test1/harness-artifacts/

with one summary.json per run folder and neighboring files such as:

  • neo-galaga-session-*.json
  • neo-galaga-video-*.review.webm

The viewer discovers runs recursively, so batch folders may contain nested run folders as long as each run keeps that local structure.
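The review-readiness rule above can be checked mechanically. The following is a self-contained sketch against a throwaway tree (the folder and file names follow the conventions in this guide; the scan logic is an illustration, not the viewer's actual implementation):

```shell
# Build a throwaway harness-artifacts tree, then list run folders that hold a
# summary.json beside a repaired review video (the review-ready rule above).
tmp=$(mktemp -d)
mkdir -p "$tmp/harness-artifacts/batch-1/run-a" "$tmp/harness-artifacts/run-b"
touch "$tmp/harness-artifacts/batch-1/run-a/summary.json" \
      "$tmp/harness-artifacts/batch-1/run-a/neo-galaga-video-7.review.webm"
touch "$tmp/harness-artifacts/run-b/summary.json"   # no review video: not ready

# Recurse like the viewer does, so nested batch folders are still discovered.
ready=$(find "$tmp/harness-artifacts" -name summary.json -exec dirname {} \; |
  while read -r d; do
    ls "$d"/neo-galaga-video-*.review.webm >/dev/null 2>&1 && basename "$d"
  done)
echo "review-ready: $ready"
rm -rf "$tmp"
```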

  • Promote the current built artifacts to the hosted beta lane with:
  npm run promote:beta

This creates or refreshes:

  • /Users/stevenwoods/Documents/Codex-Test1/dist/beta/

from the current dev build in:

  • /Users/stevenwoods/Documents/Codex-Test1/dist/dev/

Publish the generated dist/beta/ snapshot into https://github.com/sgwoods/Aurora-Galactica when you want the hosted https://sgwoods.github.io/Aurora-Galactica/beta/ lane to move.

  • Promote the current dev build into the stable production artifact with:
  npm run promote:production

This creates or refreshes:

  • /Users/stevenwoods/Documents/Codex-Test1/dist/production/
  • Sync the separate public repo from the latest build metadata with:
  npm run sync:public

This updates the canonical Aurora public/status files:

  • /Users/stevenwoods/GitPages/public/aurora-galactica.html
  • /Users/stevenwoods/GitPages/public/data/projects/aurora-galactica.json

And it keeps the legacy compatibility aliases current:

  • /Users/stevenwoods/GitPages/public/codex-test1.html
  • /Users/stevenwoods/GitPages/public/data/projects/codex-test1.json

It does not update /Users/stevenwoods/GitPages/public/index.html directly.

  • Verify that the public repo content reflects the current build metadata with:
  npm run verify:public
  • Build script:
    • tools/build/build-index.js
  • Public-pages sync script:
    • tools/build/sync-public-pages.js
  • Public-pages verification script:
    • tools/build/verify-public-sync.js
  • Local dev build metadata output:
    • dist/dev/build-info.json
  • Release notes source:
    • release-notes.json
  • Release/versioning policy:
    • /Users/stevenwoods/Documents/Codex-Test1/RELEASE_POLICY.md
  • Release-readiness review:
    • /Users/stevenwoods/Documents/Codex-Test1/RELEASE_READINESS_REVIEW.md
  • Product roadmap:
    • /Users/stevenwoods/Documents/Codex-Test1/PRODUCT_ROADMAP.md
  • Shared authenticated pilot video publishing is currently planned as a post-1.0, pre-2.0 stretch goal:
    • canonical run metadata in Supabase
    • optional Aurora-owned YouTube mirroring for selected validated runs
    • pilot-safe public identity using pilot ID / initials instead of email
  • Project guide source:
    • /Users/stevenwoods/Documents/Codex-Test1/project-guide.json
  • Generated local dev project guide:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/dev/project-guide.html
  • Generated local dev player guide:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/dev/player-guide.html
  • This page is regenerated during npm run build from the guide config and the maintained docs above.

End-To-End Workflow

1. Develop In Source

  • Make gameplay, UI, harness, or documentation changes in:
    • /Users/stevenwoods/Documents/Codex-Test1/src/
    • maintained docs such as:
      • /Users/stevenwoods/Documents/Codex-Test1/README.md
      • /Users/stevenwoods/Documents/Codex-Test1/PLAN.md
      • /Users/stevenwoods/Documents/Codex-Test1/ARCHITECTURE.md
      • /Users/stevenwoods/Documents/Codex-Test1/SOURCE_MAP.md
    • structured release/project data such as:
      • /Users/stevenwoods/Documents/Codex-Test1/project-guide.json
      • /Users/stevenwoods/Documents/Codex-Test1/release-dashboard.json
      • /Users/stevenwoods/Documents/Codex-Test1/release-notes.json

2. Build Local Outputs

  • Run:
  npm run build
  • This regenerates:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/dev/index.html
    • /Users/stevenwoods/Documents/Codex-Test1/dist/dev/project-guide.html
    • /Users/stevenwoods/Documents/Codex-Test1/dist/dev/release-dashboard.html
    • /Users/stevenwoods/Documents/Codex-Test1/dist/dev/build-info.json

3. Test Against The Built App

  • Manual local play should use the generated dev build:
  npm run local:resume
  • This brings up:
    • http://localhost:8000
    • http://127.0.0.1:4311/
  • Harness/browser checks also run against dist/dev/, not raw source.
  • The log viewer is separate and reads run artifacts from:
    • /Users/stevenwoods/Documents/Codex-Test1/harness-artifacts/

4. Promote A Beta Snapshot

  • When the current build is worth external testing, run:
  npm run promote:beta
  • This creates a beta-ready snapshot in:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/beta/

5. Publish Hosted Beta

  • Preferred:
  npm run publish:beta
  • This command now refreshes the beta lane from the current HEAD automatically:
    • rebuilds dist/dev
    • promotes to dist/beta
    • runs beta preflight
    • publishes the lane
  • Optional manual inspection steps still exist:
  npm run promote:beta
  npm run publish:check:beta
  npm run publish:beta:raw
  • This clones sgwoods/Aurora-Galactica, copies:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/beta/

into the /beta/ folder, commits, and pushes.

  • Then GitHub Pages updates:
    • https://sgwoods.github.io/Aurora-Galactica/beta/

5a. Approve The Beta Candidate

  • After reviewing the hosted beta candidate and deciding it is the one production should come from, run:
  npm run approve:beta
  • This writes:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/beta/approved-build-info.json
  • Production promotion is now gated on that approval marker.

6. Publish Hosted Production

  • Preferred:
  npm run publish:production
  • This command now refreshes the production lane from the approved beta candidate:
    • rebuilds dist/dev
    • expects an already promoted and approved dist/beta
    • promotes approved beta artifacts to dist/production
    • runs production preflight
    • publishes the lane
  • Production publish now fails unless the current beta snapshot has been explicitly approved with:
    • npm run approve:beta
  • Aurora-Galactica Pages should package the committed production and beta artifacts directly; it should not rebuild the live production root from stale public-repo source files.
  • Optional manual inspection steps still exist:
  npm run promote:production
  npm run publish:check:production
  npm run publish:production:raw
  • This clones sgwoods/Aurora-Galactica, copies:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/production/

into the root published surface, commits, and pushes.

  • Then GitHub Pages updates:
    • https://sgwoods.github.io/Aurora-Galactica/

7. Sync Public Project Status

  • The separate public project-summary repo is not the game host.
  • It is updated from the current production build metadata with:
  npm run sync:public
  • That updates Aurora’s public project/status surfaces in sgwoods/public, not the playable game itself.
  • Release history:
    • /Users/stevenwoods/Documents/Codex-Test1/release-history/
  • Auto deploy workflow: .github/workflows/pages.yml
  • Cross-repo public-pages sync workflow:
    • .github/workflows/sync-public-pages.yml
  • Durable reference material:
    • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/
    • challenge-stage baseline:
      • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/analyses/first-challenge-stage/README.md

8. Reset The Production Leaderboard For True 1.0

  • #130 is an explicit pre-1.0 release operation.
  • The goal is to start the official 1.0 production leaderboard from a clean baseline rather than carrying forward pre-1.0 scores from moving tuning and rule sets.
  • This operation deletes rows from the shared production scores table. It does not delete pilot accounts.

  • Dry-run inspection first:
  SUPABASE_SERVICE_ROLE_KEY=... npm run leaderboard:inspect:production
  • Execute the reset only when we are ready to launch from the approved 1.0 candidate:
  SUPABASE_SERVICE_ROLE_KEY=... npm run leaderboard:reset:production
  • Expected order for the true 1.0 launch:
  1. confirm the reviewed production candidate
  2. inspect the current production leaderboard with the dry-run command
  3. reset the production leaderboard
  4. publish the final 1.0 production build
  5. verify the live site and shared leaderboard start from zero
  • The reset command requires an operator-only Supabase service-role key and is intentionally not runnable from the browser auth path.

Versioning

  • Current versioning uses three release surfaces with build metadata:
    • pre-production:
      • prerelease SemVer from package.json
    • production:
      • stable public label without the prerelease suffix
    • production beta:
      • promoted public beta label
  • Local and deployed builds carry:
    • version
    • build number
    • short commit
    • branch / dirty state
    • Eastern release timestamp
  • The settings drawer also shows the latest human-written release note from:
    • /Users/stevenwoods/Documents/Codex-Test1/release-notes.json
  • Each release can also keep a structured session summary and optional raw transcript under:
    • /Users/stevenwoods/Documents/Codex-Test1/release-history/
  • The separate public project pages repo is synced from dist/production/build-info.json and release-notes.json
    • CI uses the PUBLIC_REPO_SYNC_TOKEN secret when available
    • The token should have contents:write access to sgwoods/public
  • Example production build label:
    • 0.5.0+build.9.sha.457df28
  • Example production beta build label:
    • 0.5.0-beta.1+build.9.sha.457df28.beta
  • See:
    • /Users/stevenwoods/Documents/Codex-Test1/RELEASE_POLICY.md
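The example labels above decompose mechanically. This hedged sketch pulls the pieces out of the sample production label with plain shell string operations (the label text is the example from this guide):

```shell
# Split the example build label from this guide into version / build number / short commit.
label="0.5.0+build.9.sha.457df28"
version=${label%%+*}                 # strip build metadata -> 0.5.0
meta=${label#*+}                     # keep metadata -> build.9.sha.457df28
build=$(echo "$meta" | cut -d. -f2)  # second dot-separated field -> 9
sha=$(echo "$meta" | cut -d. -f4)    # fourth dot-separated field -> 457df28
echo "$version build=$build sha=$sha"
```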

Session Logging

  • The game records keyboard events, major lifecycle events, and periodic gameplay snapshots
  • Exported logs are downloaded as JSON from the in-game Export Log button
  • Each export includes build metadata, browser/user agent, viewport info, input events, and game-state snapshots
  • Player-triggered exports are browser downloads, not repo-local harness artifacts
    • they usually land in the user’s downloads directory
  • The in-game 🎞 replay surface is separate and keeps recent local replay state in browser storage
  • The canonical developer review archive is still:
    • harness-artifacts/
  • See the formal distinction in:
    • /Users/stevenwoods/Documents/Codex-Test1/ARTIFACT_POLICY.md

Gameplay Harness

  • A local replay harness can run the game in Chrome, replay a saved session JSON, and write fresh .webm and .json artifacts into harness-artifacts/
  • It uses your installed /Applications/Google Chrome.app
  • Harness execution and artifact generation are local-only on your Mac; it does not use cloud compute
  • Run it with a previously exported session:
  npm run harness -- --session /absolute/path/to/neo-galaga-session.json
  • Or run one of the built-in scenarios:
  npm run harness -- --scenario stage3-challenge
  npm run harness -- --scenario stage3-challenge-persona --persona expert
  npm run harness -- --scenario stage3-transition
  npm run harness -- --scenario stage3-perfect-transition
  npm run harness -- --scenario stage4-five-ships
  npm run harness -- --scenario stage4-survival
  npm run harness -- --scenario stage1-descent
  npm run harness -- --scenario rescue-dual
  npm run harness -- --scenario capture-rescue-dual
  npm run harness -- --scenario carried-boss-diving-release
  npm run harness -- --scenario carried-boss-formation-hostile
  npm run harness -- --scenario natural-capture-cycle
  npm run harness -- --scenario stage4-capture-pressure
  npm run harness -- --scenario boss-first-hit
  npm run harness -- --scenario second-capture-current
  npm run harness -- --scenario stage12-variety
  npm run harness -- --scenario stage4-squadron-bonus
  npm run harness -- --scenario carried-fighter-standby
  npm run harness -- --scenario carried-fighter-attacking
  • Or run a seeded batch:
  npm run harness:batch -- --profile personas
  npm run harness:batch -- --profile distribution
  npm run harness:batch -- --profile quick
  npm run harness:batch -- --profile fidelity
  npm run harness:batch -- --profile default
  npm run harness:batch -- --profile deep
  • Output is written to a timestamped folder under:
    • /Users/stevenwoods/Documents/Codex-Test1/harness-artifacts/
  • The log viewer reads that same tree recursively. A run folder is considered review-ready when it contains:
    • summary.json
    • a neighboring session log:
      • neo-galaga-session-*.json
    • a browser-friendly repaired review video when available:
      • neo-galaga-video-*.review.webm
    • fallback raw videos such as .webm or .mkv can still be discovered, but the repaired .review.webm is the preferred review artifact.
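
Those review-ready rules can be checked mechanically. This sketch uses a synthetic folder with the expected file names, since real run folders are timestamped:

```shell
# Decide whether a run folder is review-ready per the rules above,
# demonstrated on a synthetic folder with the expected file names.
run=$(mktemp -d)
touch "$run/summary.json" \
      "$run/neo-galaga-session-001.json" \
      "$run/neo-galaga-video-001.review.webm"
ready=no
if [ -f "$run/summary.json" ] \
   && ls "$run"/neo-galaga-session-*.json >/dev/null 2>&1 \
   && ls "$run"/neo-galaga-video-*.review.webm >/dev/null 2>&1; then
  ready=yes
fi
echo "review-ready: $ready"
```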
  • Every new harness run should be treated as incomplete until the artifact-quality check passes:
  npm run harness:check:video-artifact
  • If the quality check fails while reviewing logs or videos:
    • file an immediate bug because the recorder/export path is no longer trustworthy
    • suggest the repair path:
    npm run harness:repair:videos
    • and avoid using the affected video for synchronized analysis until it has a repaired browser-friendly review artifact
  • The harness writes a summary.json beside the generated artifacts, including:
    • seed used for the run
    • selected self-play persona when used:
      • novice
      • advanced
      • expert
      • professional
    • stage clears / challenge clears / ship losses
    • per-loss context such as recent attack starts, recent enemy bullets, nearby snapshot counts, and explicit death causes
    • capture/rescue markers such as capture start, fighter captured, and fighter rescued
    • capture-branch markers such as captured-fighter release / auto-recovery versus captured-fighter-turned-hostile
    • transition markers such as challenge-clear-to-next-stage spawn timing, first visible next-stage snapshot, and whether the game ever showed the next stage before enemies were visible
    • rescue-pipeline metrics such as whether a rescue actually turned into active dual-fighter fire
  • Stage 4 now has a dedicated capture-pressure scenario as well:
    • stage4-capture-pressure
    • purpose: stress a natural capture into carrying-boss return under Stage 4 timing
    • note: this scenario is currently most useful as a focused capture-pressure probe, not a full Stage 4 replacement for stage4-five-ships or stage4-survival
  • Persona self-play is useful for launch-readiness tuning without changing game rules:
  npm run harness -- --scenario stage1-opening --persona novice
  npm run harness -- --scenario stage2-opening --persona advanced
  npm run harness -- --scenario stage4-survival --persona expert
  • full-run persona distributions are useful for overall game-shape tuning:
    npm run harness:batch -- --profile distribution
    npm run harness:batch -- --profile distribution --repeats 4
  • the distribution batch runs Stage 1 -> game over for:
    • novice
    • advanced
    • expert
    • professional
  • and records:
    • ending stage distribution
    • score distribution
    • game-length distribution
    • lives-left distribution
    • stage-clear counts
    • stage-reach rates by persona
    • aggregated loss causes
  • per-run summary.json metrics also include:
    • natural capture-cycle metrics such as capture-start-to-capture timing and captured-to-rescue timing
    • carried-fighter scoring metrics such as standby vs attacking destroy counts and total awarded points
    • special attack squadron bonus metrics such as triggered count, total awarded bonus, max escorts present, and measured escort offset
    • dual-fire metrics such as average spread in the rescue scenario
    • descent-speed metrics such as time from attack start to lower-field crossing
    • whether the generated .webm contains audio
  • Batch mode also writes:
    • batch-report.json with aggregate challenge hits, ship losses, total duration, and audio failures
    • tuning-report.json with prioritized findings to guide the next gameplay pass
    • later-stage diagnostics such as first-loss timing, loss clustering, attacker pressure at death, and bullet-vs-collision loss mix
    • the tuning report now considers both ship losses and how much of the stage-pressure scenario survived, so it can distinguish "died early" from "survived the full window but spent too many ships"
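
One aggregate field can be pulled out of a report with plain shell. The file here is synthetic, and the field name `audioFailures` is an assumption about the batch-report.json schema:

```shell
# Extract one aggregate from a batch report with sed. The file here is
# synthetic and the field name "audioFailures" is an assumed schema detail.
report=$(mktemp)
printf '{ "shipLosses": 7, "audioFailures": 0 }\n' > "$report"
failures=$(sed -n 's/.*"audioFailures": *\([0-9][0-9]*\).*/\1/p' "$report")
echo "audio failures: $failures"
```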
  • Historical harness videos can be repaired in place into .review.webm artifacts and have their neighboring summary.json updated with artifact-quality metadata:
  npm run harness:repair:videos
  • Typical batch timings on this machine:
    • quick: about 1.5-2 minutes
    • default: about 3-4 minutes
    • deep: about 5-7 minutes
  • You can re-run the analyzer on an existing run folder:
  npm run harness:analyze -- --run /absolute/path/to/harness-artifacts/run-folder
  • You can also regenerate the tuning summary for an existing batch:
  npm run harness:tune -- --batch /absolute/path/to/harness-artifacts/batch-folder
  • You can import the latest self-play capture pair from your Downloads folder into harness-artifacts/ and analyze it in one step:
  npm run harness:import-latest
  • This import step is the intended bridge from:
    • browser download artifacts in the user’s downloads location
    • to the normalized developer review archive in harness-artifacts/
  • You can also check for a new self-play run without duplicating already imported files:
  npm run harness:check-latest
  • Optional import flags:
  npm run harness:import-latest -- --session-id ngt-1773602145011-2
  npm run harness:import-latest -- --source /absolute/path/to/folder
  • harness:check-latest keeps a small local state file in harness-artifacts/ so scheduled scans can safely skip runs that were already imported
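
The skip-already-imported bookkeeping can be pictured as a one-line-per-run state file. The real harness state format is not documented here, so this is only an illustrative sketch:

```shell
# Skip runs that were already imported, using a plain-text state file of
# session ids. The real harness state format may differ.
state=$(mktemp)
printf 'ngt-001\nngt-002\n' > "$state"
candidate="ngt-002"
if grep -qx "$candidate" "$state"; then
  echo "skip: $candidate already imported"
else
  echo "$candidate" >> "$state"
  echo "import: $candidate"
fi
```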
  • Current tuning targets from the latest quick batch:
    • challenge-stage scoring is materially improved, but later-stage visual fidelity still needs work
    • Stage 4 pressure is now much healthier in the five-ship scenario
    • Stage 4 survival is still collision-driven and remains the main gameplay tuning target
    • deeper multi-stage progression is still needed for richer late-stage comparison

Modem Feedback Integration

The game includes a floating Feedback button (top-right).

  • Both Feature Request and Bug Report submissions post to Web3Forms using a local build-time access key
  • If direct send cannot complete, the game falls back to opening a prefilled mailto: draft
  • The submission body includes the report plus game metadata (build, timestamp, stage, score, lives, and user agent)

One-time setup:

  • Create a Web3Forms access key for the destination inbox
  • Store that key locally in .env.local as WEB3FORMS_ACCESS_KEY
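
Reading the key back out of .env.local can be done without sourcing the whole file. The file path and key value below are placeholders:

```shell
# Read WEB3FORMS_ACCESS_KEY from an env file without sourcing it.
# The file and key value here are placeholders for .env.local.
envfile=$(mktemp)
echo 'WEB3FORMS_ACCESS_KEY=demo-key-123' > "$envfile"
key=$(sed -n 's/^WEB3FORMS_ACCESS_KEY=//p' "$envfile")
[ -n "$key" ] && echo "key present (${#key} chars)"
```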

Home Machine Setup Source

Second-machine setup, sync strategy, and publish workflow are documented in the dedicated home-machine guide.

Generated from HOME_MACHINE_SETUP.md during build.

This guide is the recommended way to work on Aurora Galactica from a second machine while staying in sync cleanly with the main workstation.

Goal

Use Codex-Test1 as the only development repo on both machines.

  • Codex-Test1
    • source of truth for gameplay, docs, harnesses, and release tooling
  • Aurora-Galactica
    • public release host only
    • do not use it as the normal day-to-day development repo

For a maintained first-session prompt you can paste into the home Codex instance, use:

  • /Users/stevenwoods/Documents/Codex-Test1/HOME_CODEX_PROMPT.md

Prerequisites

Install on the home machine:

  • git
  • node and npm
  • Google Chrome
  • GitHub CLI gh

Optional but useful:

  • Python 3 for a simple local static server

First-Time Setup

  1. Clone the development repo:
   git clone https://github.com/sgwoods/Codex-Test1.git
   cd Codex-Test1
  2. Install dependencies:
   npm install
  3. Authenticate GitHub CLI if you plan to publish beta or production from this machine:
   gh auth login

Local Development Loop

  1. Start each session by syncing:
   git pull --rebase origin main
   npm install
  2. Build the current local dev output:
   npm run build
  3. Bring the local game and viewer back up:
   npm run local:resume
  4. Open:
  • http://localhost:8000
  • http://127.0.0.1:4311/

If you only want the game without the viewer, the lower-level command is:

python3 -m http.server 8000 --directory dist/dev

When you want to stop the locally tracked services cleanly:

npm run local:stop

Local Tools

Harness

Run a scenario:

npm run harness -- --scenario stage1-opening

Run a batch:

npm run harness:batch -- --profile quick

Log Viewer

Start by itself:

npm run log-viewer

Open:

  • http://127.0.0.1:4311/

The viewer expects artifacts under:

  • /Users/stevenwoods/Documents/Codex-Test1/harness-artifacts/

Player-triggered exported logs and videos are different:

  • they download through the browser first
  • on macOS that is usually the user’s Downloads folder
  • import them into harness-artifacts/ when you want them in the developer review archive:
  npm run harness:import-latest

See:

  • ~/Documents/Codex-Test1/ARTIFACT_POLICY.md

On the home machine that means the same repo-relative folder inside your local clone.

Staying In Sync Across Machines

The simplest rule is:

  • do not leave meaningful unpushed work on both machines at the same time

Recommended pattern:

  1. Pull before starting work:
   git pull --rebase origin main
  2. Make a small unit of change
  3. Build and test
  4. Commit and push:
   git add ...
   git commit -m "..."
   git push origin main
  5. On the other machine, pull again before continuing
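
The "nothing unpushed" rule can be checked with `git rev-list`. This sketch builds a throwaway repo so it is self-contained, and uses a local branch named origin-main as a stand-in for origin/main:

```shell
# Count commits not yet on the upstream branch. Demonstrated in a throwaway
# repo; in practice compare against origin/main inside Codex-Test1.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "base"
git branch -q origin-main   # stand-in for origin/main in this sketch
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "local work"
ahead=$(git rev-list --count origin-main..HEAD)
echo "unpushed commits: $ahead"
```

A nonzero count means push before switching machines.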

Branching

For larger changes:

git checkout -b codex/my-feature
git push -u origin codex/my-feature

Recommended use:

  • small safe changes:
    • main
  • bigger or riskier changes:
    • codex/...

Publishing Beta

When a build is ready for the hosted beta lane:

npm run build
npm run promote:beta
npm run publish:check:beta
npm run publish:beta

This publishes:

  • /Users/stevenwoods/Documents/Codex-Test1/dist/beta/

into the public Aurora beta surface.
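
The value of running the preflight before publish is the short-circuit: if any step fails, nothing after it runs. This sketch shows the gating shape with `true`/`false` stand-ins rather than the real npm scripts:

```shell
# Gate pattern behind the publish lane: each step runs only if the previous
# one succeeded. "false" stands in for a failing publish:check:beta.
published=no
if true && true && false; then
  published=yes
fi
echo "published: $published"
```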

Publishing Production

When a build is ready for the hosted production lane:

npm run build
npm run promote:production
npm run publish:check:production
npm run publish:production

This publishes:

  • /Users/stevenwoods/Documents/Codex-Test1/dist/production/

into the public Aurora production surface.

Important Defaults

  • edit source under src/
  • do not hand-edit dist/
  • use Codex-Test1 for development
  • use Aurora-Galactica only as the release target

Practical Recommendation

If you want the least-friction two-machine workflow:

  1. keep both machines cloned to Codex-Test1
  2. pull before every session
  3. rebuild and run:
   npm run build
   npm run local:resume
  4. push before switching machines
  5. publish from either machine only after the preflight passes

Plan Source

Current state, known problems, active workstreams, and release-critical priorities come directly from the working plan.

Generated from PLAN.md during build.

Current State

  • Browser-based fixed-screen arcade shooter built from readable source modules and served from dist/production/index.html
  • Build identity is now part of the product:
    • prerelease SemVer in package.json for the pre-production engineering line
    • production and production-beta labels derived from that source version during Aurora release builds
    • generated dist/production/build-info.json
    • in-game settings drawer shows the current build label and Eastern release timestamp
  • Hosted on GitHub Pages: https://sgwoods.github.io/Aurora-Galactica/
  • Core gameplay implemented:
    • stage progression
    • capture / rescue / dual-fighter flow
    • challenge stages
    • fullscreen and ultra scale
    • in-game feedback form
  • Local automation harness implemented:
    • session replay in Chrome
    • seeded scenario runs
    • batch execution
    • automatic artifact analysis and tuning reports
  • Feedback delivery currently uses Web3Forms as the direct-send transport
  • If direct send fails, the game falls back to a prefilled mailto: draft
  • High-score entry now supports Galaga-style three-letter initials with arcade-flavored cursor/input behavior
  • Public status export now follows the shared public repo contract:
    • this repo updates its own public project page
    • this repo updates its canonical public status manifest in data/projects/aurora-galactica.json
    • and keeps the legacy compatibility alias in data/projects/codex-test1.json
    • this repo no longer rewrites the shared public homepage directly

Working Assumptions

  • The game should continue to run as a lightweight localhost-friendly Chrome experience
  • Fidelity to original Galaga remains a primary product goal
  • We now have a reproducible local tuning loop and should use it as the default evaluation path
  • GitHub issues should mirror the active roadmap closely enough that we can pick up work later without re-deriving context
  • Release recommendations should follow an explicit policy and roadmap rather than ad hoc judgment
  • Official reference material such as manuals and curated clips should live under /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/ so rules and visual comparisons are not stranded in Downloads
  • Secondary sources such as walkthroughs should inform later-stage breadth and player-facing behavior, but should not override manuals or original gameplay footage on arcade-rule questions

Known Problems

  • We now depend on a valid local WEB3FORMS_ACCESS_KEY for direct feedback delivery
  • Stage 4 remains the main gameplay balance problem for the four-stage 1.0 slice
  • Stage 4 failures are now better localized:
    • early formation-shot fairness
    • later escort / diagonal collision fairness
  • Imported self-play summaries still need capture-driven life-loss accounting and better post-hit pause reporting in the day-to-day workflow
  • Modem feedback is surfacing two current product concerns that should stay visible:
    • hit/explosion/post-hit pause feel
    • whether a stage should allow more than one successful fighter capture
  • The Stage 4+ special three-ship boss squadron behavior is only partially closed:
    • bonus logic exists
    • but we still need it to show up naturally in gameplay and read clearly as a visible arcade event before the four-stage release is considered complete

  • Stage progression in the five-ship scenario is still too shallow for richer late-stage comparison
  • Later-stage enemy variety is still far below Galaga's broader stage-band content
  • Original reference videos remain helpful because the current automated harness measures outcomes, not visual fidelity on its own

Latest Harness Signals

  • Latest quick batch: /Users/stevenwoods/Documents/Codex-Test1/harness-artifacts/batch-quick-2026-03-18T20-22-44-404Z
  • Latest fidelity batch: /Users/stevenwoods/Documents/Codex-Test1/harness-artifacts/batch-fidelity-2026-03-18T20-26-14-680Z
  • Audio capture is stable in the harness (0 audio failures in the latest quick batch)
  • Challenge scenario is currently strong at about 26/40 hits (65% hit rate in the latest quick batch)
  • The Stage 4 five-ship scenario still survives the full scenario window, but the average progression is still shallow and losses skew toward collisions
  • The Stage 4 survival scenario still reaches only Stage 4, confirming later-stage progression remains limited even when survivability improves
  • New harness diagnostics now expose first-loss timing, loss clustering, attacker pressure at death, and explicit death causes
  • Fidelity harness now confirms:
    • Stage 1 descent baseline is about 1.20s
    • rescue dual-shot spread is 28px
    • the dedicated second-capture scenario is now blocked as intended
  • Current gameplay work should now focus primarily on collision-driven later-stage survivability, later-stage enemy/content breadth, and visual fidelity against original Galaga footage
  • Latest four-stage 1.0 refresh on 2026-03-21 shows:
    • Stage 1 opening: acceptable
    • Stage 2 opening: acceptable
    • Stage 3 challenge: stable around 23/40
    • Stage 4 still needs work, especially around mixed early-shot / later-escort pressure

Workstreams

0. Release Management

  • Maintain prerelease SemVer in the pre-production source line while deriving cleaner public production and production-beta labels for Aurora release builds
  • Stamp every build with:
    • version
    • build number
    • commit
    • branch / dirty state
    • Eastern build timestamp
  • Use /Users/stevenwoods/Documents/Codex-Test1/RELEASE_POLICY.md for bump guidance
  • Use /Users/stevenwoods/Documents/Codex-Test1/PRODUCT_ROADMAP.md to decide when a minor-version milestone has actually been reached

1. Reference Baseline

  • Use /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/ as the durable home for manuals, curated clips, and analysis notes
  • Pull concrete gameplay rules from the 1981 Namco manual before inventing behavior from memory
  • Cross-check manual rules against original gameplay footage whenever the manual is ambiguous or incomplete
  • Use walkthroughs only as secondary references for later-stage variety, progression patterns, and player-visible behavior in ports
  • Favor measurable baseline facts such as:
    • challenge-stage cadence and bonus structure
    • capture / rescue constraints
    • stage-transition and results-screen flow
    • special attack squadron behavior from Stage 4 onward
    • later-stage enemy family / transform cadence

2. Feedback Delivery

  • Verify whether FormSubmit can be activated against the Modem inbox address
  • If activation is impossible, replace FormSubmit with a more reliable free flow
  • Keep in-game feedback UX intact even if transport changes

3. Gameplay Capture And Comparison

  • Add in-game session logging for keyboard actions and game-state snapshots
  • Add export/download of recorded sessions
  • Add replay or watch mode for recorded sessions
  • Use harness-generated .webm + .json artifacts as the default tuning input
  • Pair local harness results with original Galaga reference clips for direct fidelity comparison

4. Fidelity Tuning

  • Use captured traces plus original reference clips to tune:
    • formation spacing and screen composition
    • missile timing and density
    • dive timing and attack cadence
    • capture beam timing and geometry
    • player ship vertical placement
    • sprite sharpness, collision fairness, and stage/challenge presentation cards

5. Automated Play / Testing

  • Replay harness is now working with seeded scenarios and session replays
  • Keep extending the harness toward stronger coverage and more useful scoring behavior
  • Reuse the same log format for replay and automated regression testing

6. Run Artifact Submission

  • Add a Submit Run flow that packages a gameplay video and matching JSON log together
  • Use external storage for large artifacts instead of trying to send them through email
  • Prefer GitHub issues as the durable tracking surface for submitted gameplay samples
  • Evaluate Google Drive as low-cost public artifact storage, with explicit handling for permissions and link generation
  • Keep manual fallback paths available when upload fails

7. Commentary-Ready Replay Telemetry

  • Expand gameplay logging so a replay can be described beat-by-beat rather than just reconstructed mechanically
  • Preserve stable timestamps and semantic event context so later tools can align commentary to replay or video
  • Capture enough run-state detail to support future narrated replays, notable-play summaries, and richer post-game analysis

GitHub Issue Map

  • #3 Synthetic user agent for headless gameplay with session replay
  • #4 Tune Stage 1 fidelity against original Galaga reference footage
  • #5 Add replay / watch mode for recorded sessions
  • #6 Add gameplay session logging and export
  • #7 Verify FormSubmit activation against Modem inbox
  • #8 Design Submit Run flow using GitHub issues plus Google Drive artifact storage
  • #20 Model manual-accurate captured-fighter destruction scoring
  • #21 Add special three-ship attack squadron bonuses from the manual
  • #22 Implement manual-accurate challenge-stage bonus scoring
  • #23 Add Galaga-style results screen before initials entry
  • #81 Add commentary-ready gameplay telemetry for narrated replays

Revised Plan

Go-Forward Plan

This is now the operating plan for the project.

Immediate Product Goal: 1.0 Four-Stage Slice

The project is now targeting a smaller 1.0 sub-goal first:

  • a polished Stage 1 through Stage 4 experience
  • one cleaned-up challenge stage
  • one solid capture / rescue loop
  • stable local and hosted play
  • persistent high scores and clean end-of-run flow

Expansion beyond Stage 4, new theme systems, and broader content breadth are still valuable, but they are now explicitly post-1.0 work unless they are needed to support this smaller shipped slice.

Post-Launch View

1.0.0 is now live.

Current release checkpoint:

  • beta:
    • 1.0.0-beta.1+build.276.sha.a59c5ad.beta
  • production:
    • 1.0.0+build.276.sha.a59c5ad
  • production promotion path was used successfully:
    • publish:beta -> approve:beta -> publish:production
  • final release-readiness signoff was completed
  • public build metadata no longer exposes non-production test-pilot identity fields
  • the production leaderboard baseline reset for #130 is complete

Current coding priority:

  1. keep 1.0.0 stable in production
  2. move #44 and the broader refinement/admin/identity work into 1.x
  3. start the structured post-launch quality and platform track deliberately

What changed since the last full review:

  • closed:
    • #48
    • #85
    • #125
    • #61
    • #76
    • #79
    • #82
    • #106
    • #107
    • #108
    • #109
    • #112
    • #113
  • deferred from the 1.0 path:
    • #44
  • release-path hardening is now proven in a real rehearsal, not just implemented
  • remaining pilot/admin/email/platform work is now clearly tracked as post-1.0
  • the production leaderboard reset is complete:
    • #130
Launch triage (bucket / issues / owner / status / evidence / next action / plan stage):

  • Can Slip: #96 / #98
    • owner: Shared
    • status: Watch / close candidate
    • evidence: Recorder trust and release-pipeline cleanup both improved significantly. These no longer look like primary launch blockers.
    • next action: Keep the checks green and close if no new regression appears during final signoff.
    • plan stage: Phase 4
  • Can Slip: #103 / #105 / #110 / #114 / #115 / #116 / #117 / #119 / #120
    • owner: Shared
    • status: Post-blocker polish
    • evidence: The shell, pilot, popup, and replay surfaces are all much stronger now. Remaining work is polish/expansion unless a new trust bug appears.
    • next action: Keep improving in 1.x unless a concrete launch issue reappears.
    • plan stage: Phase 3
  • Post-1.0: #44 / #121 / #124 / #126 / #127 / #128 / #129
    • owner: Shared
    • status: Planned
    • evidence: Bottom-right stage indicator, shared pilot media, control-centre/admin tooling, cleaner non-production backend split, permanent pilot identity/account deletion, branded email polish, and version-aware leaderboard tracking all belong to the 1.x refinement track.
    • next action: Keep them in the structured 1.x program, not the 1.0 blocker path.
    • plan stage: Post-1.0

Items currently treated as post-1.0 unless they become necessary for external playtesting or operational stability:

  • #69 remote gameplay logs and optional video artifacts
  • #70 homepage recent plays / watch links
  • #121 shared authenticated pilot media and Aurora-owned YouTube publishing
  • #17 broader reference baseline work
  • #18 / #32 experimental Stage 2 / 4 tuning for 1.1
  • #9 broader challenge-stage fidelity / variation work
  • #19 later-run collision-chain regression outside the four-stage slice
  • replay, submission, theme-system, and broader public-workflow enhancements
  • treat shared authenticated video publishing as a post-1.0, pre-2.0 stretch goal rather than a launch requirement

Track A. Autonomous Original-Galaga Baseline

  1. Build and maintain a durable reference baseline using:
  • manuals
  • curated gameplay footage
  • emulator captures when available
  • secondary written references only where they help fill later-stage context
  2. Map fidelity questions back to:
  • durable source artifacts
  • open GitHub issues
  • harness scenarios and measurable targets
  • the first challenge-stage baseline note now lives at:
    • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/analyses/first-challenge-stage/README.md
  3. Prefer reference-backed iteration over blind tuning whenever a rule or behavior question exists

Track B. Collaborator Readiness

  1. Keep onboarding docs, architecture docs, and reference maps current
  2. Use issue labels and PR structure so work can be divided cleanly across collaborators
  3. Keep harness evidence and source maps good enough that a new collaborator can move without reconstructing project history

Working Rule

After each material step:

  1. restate the high-level plan
  2. note how the last change moved the roadmap
  3. recommend the next best step in that context

Phase 1. Stabilize The 1.0 Play Slice

  1. Treat Stage 4 as the end of the current 1.0 game loop
  2. Keep using the harness as the default loop for:
  • Stage 1 opening fidelity
  • Stage 2 opening pressure
  • Stage 3 challenge-stage fidelity
  • Stage 4 survival / fairness
  3. Continue reducing collision-driven Stage 4 failures without breaking the stronger Stage 1-3 experience
  4. Preserve the improved challenge-stage behavior and avoid broad difficulty sweeps that undo Stage 4 progress
  5. Use Modem/manual-play feedback to decide where the harness needs one more scenario instead of guessing

Phase 2. Raise Visual Fidelity

  1. Compare our current board composition directly against the original reference sheets
  2. Improve enemy silhouette/readability and tighten formation presentation
  3. Refine explosions, starfield, and stage/challenge/game-over presentation until they feel more cabinet-authentic
  4. Use the remaining prelaunch time to make the shell feel intentionally shipped:
  • top HUD alignment
  • playfield frame/bezel clarity
  • right-panel / settings / players-guide presentation
  • wait-mode visual stability
  5. Pull forward only the narrow structural work that supports those polish passes:
  • centralize HUD/frame/chrome values
  • keep renderer-owned style tokens separate from gameplay logic where the seam is real
  • avoid a full theme/brand-package abstraction before launch

Phase 3. Finish 1.0 Arcade Systems

  1. Revisit capture/rescue and dual-fighter behavior against original footage
  2. Keep high-score entry, results flow, and release/build identity polished
  3. Ensure the hosted build, public pages, and release metadata feel reliable
  4. Add only the scenario coverage needed to validate the four-stage slice

Phase 4. Ship The 1.0 Slice

  1. Run a final 4-stage polish pass with both harness and real play
  2. Confirm deployment / public page sync / release notes are stable
  3. Cut a deliberate 1.0 candidate for the four-stage slice
  4. Move expansion work into the post-1.0 roadmap

Updated Priority Order

  1. Finish Stage 4 fairness for the four-stage 1.0 slice
  2. Fix capture-driven life-loss accounting in imported/self-play summaries
  3. Evaluate ship-hit feel and post-hit pause from short manual runs, then add or adjust metrics if needed
  4. Add repeat-capture-per-stage validation so capture-rule issues are measurable
  5. Move into 1.0 finishing polish once Stage 4 is good enough
  6. Fix Stage 4 survivability and fairness so the four-stage loop feels winnable
  7. Finish challenge-stage fidelity for the Stage 3 experience without destabilizing hit rate or readability
  8. Polish capture/rescue usefulness and clarity within the four-stage slice
  9. Make sure special three-ship boss squadron attacks and bonus presentation are present in the live four-stage slice, not just harness coverage
  10. Tighten game-over, results, initials, high-score persistence, and release identity so the product feels intentionally shippable
  11. Keep improving harness coverage only where it reduces guesswork for the four-stage 1.0 slice

Phase 5. Productize The Workflow

  1. Keep improving harness summaries and tuning reports where they meaningfully reduce guesswork
  2. Return to artifact submission / Modem transport questions after the four-stage gameplay slice is stable
  3. Treat deeper stage expansion, theme/template work, and broader content breadth as post-1.0 roadmap items

Roadmap Source

Release milestones and what belongs to 1.0 versus later are pulled from the product roadmap.

Generated from PRODUCT_ROADMAP.md during build.

Current Release Train

  • Current line:
    • 1.0.x
  • Current phase:
    • post-launch stabilization
  • Goal of this phase:
    • keep the shipped four-stage arcade slice stable while moving the broader identity, admin, and media work into the planned 1.x track

Immediate Target: 1.0 Four-Stage Slice

Definition:

  • Stage 1
  • Stage 2
  • Stage 3 challenge stage
  • Stage 4

Quality bar:

  • feels coherent and fair as one complete loop
  • challenge stage is readable and rewarding
  • capture / rescue is useful and understandable
  • high scores / initials / game-over flow feel finished
  • hosted build and public project pages feel reliable

This is the current product target. Expansion beyond Stage 4 is intentionally secondary until this slice is polished.

Milestone A: Four-Stage 1.0 Alpha

Target outcome:

  • Strong Stage 1 through Stage 4 feel
  • Stable Stage 3 challenge stage
  • Capture/rescue behavior feels useful and understandable
  • High-score/results/release flow feels shippable

Key issue groups:

  • Gameplay tuning
    • #4 Stage 1 fidelity
    • #9 challenge-stage fidelity
    • #18 Stage 4 survivability
    • #32 Stage 2 opening pressure / spacing feel
  • Current Modem-driven follow-up work inside the same milestone
    • #35 capture-driven life-loss accounting in self-play summaries
    • #38 ship-hit explosion / sound / post-hit pause feel
    • #39 repeated fighter capture behavior within a stage
  • Manual-backed arcade rules
    • #14 second captured-fighter behavior research
  • Capture/rescue rules
    • remaining rescue usefulness / clarity polish
    • make the released-fighter / rescue target read more prominently in live play
  • Manual-backed visible arcade moments
    • ensure the Stage 4+ three-ship boss squadron appears naturally in regular gameplay and shows its bonus clearly during the four-stage slice

  • Product polish
    • #31 release date display refinement
    • high-score/results/initials polish as needed
    • move the temporary settings/account panel toward a more centered overlay or otherwise reposition it so long forms are not obscured at the bottom of the playfield
    • add a clearer visible playfield frame and tighten the integrated status / score-view HUD treatment

Suggested versioning:

  • stay on 0.5.x-alpha

Execution model:

  • Use the autonomous reference-baseline track to decide what "closer to Galaga" means
  • Use the collaborator-readiness track to make sure new contributors can help without re-learning the whole project

Milestone B: Deployment And Playtest Readiness

Target outcome:

  • Hosted build and companion public pages are trustworthy
  • Feedback, replay, and artifact workflows are good enough for broader testing
  • The four-stage slice is easy to share and evaluate externally

Key issue groups:

  • Feedback and submission workflow
    • #7 FormSubmit / Modem viability
    • #8 structured run submission
  • Replay and testing infrastructure
    • #5 replay / watch mode
    • #3 synthetic user agent / session replay work
  • Operational polish
    • #25 daily status automation
    • #31 release display refinements if still open

Suggested versioning:

  • 0.6.x-alpha

Milestone C: Post-1.0 Expansion Alpha

Target outcome:

  • Later stages are not just “same enemies, more pressure”
  • Better stage-to-stage variety and progression
  • Theme/template work can happen without destabilizing the core shipped slice

Key issue groups:

  • Later-stage survivability beyond Stage 4
    • #19 Stage 2/late-run collision chain regressions
  • Visual/gameplay comparison baseline
    • #17 stronger baseline against original Galaga
  • Replay, telemetry, and commentary-ready artifacts
    • #69 remote gameplay logs and optional video artifacts
    • #70 homepage recent plays and linked run viewing
    • #81 commentary-ready gameplay telemetry for narrated replays
  • Original-scoring / special-pattern research
    • deeper validation of Galaga's bonus-yielding three-fighter attack clusters, including exact timing, composition, and scoring triggers

  • Theme/template work
    • #26 through #30

Suggested versioning:

  • 0.7.x-alpha and beyond

Milestone D: Broader External Playtest / Beta

Target outcome:

  • The four-stage slice is polished and stable enough that broader outside testing is worth the overhead
  • Operational tooling is good enough to collect useful feedback at scale

Suggested versioning:

  • 0.8.0-beta.1
  • 0.9.x-rc

Milestone E: 1.0 Release

Target outcome:

  • A polished, consciously scoped four-stage Galaga tribute
  • Core rules and presentation feel stable
  • The hosted build is trustworthy as the canonical public version

Suggested versioning:

  • 1.0.0

Milestone F: Post-1.0 Environment Separation

Target outcome:

  • production and non-production are operationally distinct
  • public score integrity is not blurred by day-to-day development traffic
  • release labeling makes environment intent obvious

Key issue groups:

  • production vs pre-production score/data separation
  • environment-aware build labeling and account/status presentation
  • release workflow hardening between dev, production, and beta lanes

Suggested versioning:

  • 1.0.x
  • 1.1.x

Milestone G: Early Post-1.0 Arcade Platform Extraction

Target outcome:

  • Aurora keeps shipping as the first game pack on a more stable shared runtime
  • replay, shell, harness, input, build, and logging systems become less one-off
  • similar cabinet shooters can reuse mature systems with less churn

Key issue groups:

  • #111 extract a shared arcade platform for Galaga-family cabinet shooters
  • early gameDef / game-pack extraction
  • shared shell / replay / harness / build systems
  • shared left-right cabinet input model for sibling fixed-screen shooters

Suggested versioning:

  • 1.1.x
  • 1.2.x

Milestone H: Shared Pilot Media And Publishing

Target outcome:

  • authenticated pilot runs can become canonical shared run records
  • selected validated runs can be published through the Aurora-owned YouTube channel
  • pilot history can show official runs plus replay/watch links where available
  • local replay remains the immediate playback path and fallback

Key issue groups:

  • #121 shared authenticated pilot media and YouTube publishing
  • canonical Supabase run/video metadata for authenticated runs
  • pilot-safe public identity rules using pilot ID / initials / callsign instead of email
  • selected-run publish workflow and publish-state tracking
  • Aurora-owned YouTube mirroring for approved runs
  • fuller pilot scorebook/history with replay/watch links where available

Suggested versioning:

  • 1.3.x
  • 1.4.x
  • potentially 1.5.x
  • explicitly before any future 2.0.0

Post-1.0 Architecture Themes

These are the architectural themes we should keep capturing incrementally during the v1 push so they can become the basis of a focused post-v1 platform plan.

  • move game rules and tuning further into data-driven gameDef structures
  • use #111 as the early post-1.0 umbrella for turning Aurora into a shared arcade platform that can support Galaga variants, Galaxian, and other similar cabinet shooters
  • isolate optional systems so capture, challenge stages, special squadrons, and dual-fighter behavior are less entangled with the core update loop
  • keep mechanic-level harnesses and telemetry stable enough to survive larger runtime refactors
  • separate engine-like concerns from game-specific rules only when the seam is real and already helping Aurora
  • decouple visual/stylistic assets toward a swappable brand package only after the four-stage slice is stable enough that theming work does not destabilize launch
  • evolve the log viewer from a local artifact tree toward an optional remote catalog after 1.0, while keeping the current local-first debugging flow as the default
  • add shared authenticated pilot media as a pre-2.0 stretch goal only after pilot identity, local replay, and the early post-1.0 platform seams are stable enough to support it cleanly
  • use Aurora's ongoing work to reduce the eventual cost of supporting Galaxian and other future variants without pausing product progress now

How We Should Use This Roadmap

  • Use it to decide when a PATCH bump is enough versus when a MINOR bump is justified
  • Use it to group open issues into meaningful release targets instead of treating the backlog as a flat list
  • Use it as the default basis for Codex release recommendations after each meaningful pass

Architecture Source

System layout, build flow, deploy flow, and harness boundaries come directly from the architecture notes.

Generated from ARCHITECTURE.md during build.

This document is the short technical map for how the project is organized and how work should flow through it.

Runtime Layout

Source Files

  • HTML shell:
    • /Users/stevenwoods/Documents/Codex-Test1/src/index.template.html
  • Styles:
    • /Users/stevenwoods/Documents/Codex-Test1/src/styles.css
  • Boot / metadata / audio / UI / logging:
    • /Users/stevenwoods/Documents/Codex-Test1/src/js/00-boot.js
  • Gameplay / stage flow / scoring / capture / enemy logic:
    • /Users/stevenwoods/Documents/Codex-Test1/src/js/10-gameplay.js
  • Rendering / HUD / overlays / sprites:
    • /Users/stevenwoods/Documents/Codex-Test1/src/js/20-render.js
  • Harness hooks:
    • /Users/stevenwoods/Documents/Codex-Test1/src/js/90-harness.js

Generated Output

  • Local dev build:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/dev/index.html
  • Local dev build metadata:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/dev/build-info.json
  • Stable production artifact:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/production/index.html
  • Stable production build metadata:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/production/build-info.json
  • Promoted beta snapshot:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/beta/

Build / Deploy Flow

Local Build

  • Build script:
    • /Users/stevenwoods/Documents/Codex-Test1/tools/build/build-index.js
  • Generates:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/dev/index.html
    • /Users/stevenwoods/Documents/Codex-Test1/dist/dev/build-info.json
  • Production promotion:
    • /Users/stevenwoods/Documents/Codex-Test1/tools/build/promote-production.js
  • Produces:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/production/index.html
    • /Users/stevenwoods/Documents/Codex-Test1/dist/production/build-info.json
  • Local ready-state helper:
    • /Users/stevenwoods/Documents/Codex-Test1/tools/dev/local-resume.js
    • starts the local dist/dev game server and the log viewer together for machine handoff/debugging

Pages Deploy

  • Workflow:
    • /Users/stevenwoods/Documents/Codex-Test1/.github/workflows/pages.yml
  • CI rebuilds the dev repo’s generated outputs, but the publicly shared Aurora production and beta lanes are published from the separate Aurora-Galactica repo.
  • In practice:
    • Codex-Test1 produces generated artifacts in dist/
    • Aurora-Galactica is the public artifact host for:
      • /
      • /beta/
    • lane publishing is now scripted through:
      • /Users/stevenwoods/Documents/Codex-Test1/tools/build/publish-lane.js
    • publish readiness is checked through:
      • /Users/stevenwoods/Documents/Codex-Test1/tools/build/check-publish-ready.js
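As a rough illustration of the readiness rule (clean working tree, lane built from the current HEAD), a check of this kind can be sketched as follows. The function name and the build-info field are assumptions for the sketch, not the actual API of check-publish-ready.js:

```javascript
// Hypothetical readiness predicate for a publish lane.
// buildInfo is the lane's parsed build-info.json; its `commit` field
// is an assumed name, not necessarily the project's real schema.
function isLaneReady(buildInfo, headSha, repoDirty) {
  if (repoDirty) return false;                    // refuse dirty working trees
  if (!buildInfo || !buildInfo.commit) return false; // missing/stale metadata
  return buildInfo.commit === headSha;            // lane must match current HEAD
}
```

The point of failing early here is that a stale or dirty lane would otherwise publish an artifact that does not match the repository state it claims to represent.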

Public Project Pages Sync

  • Sync script:
    • /Users/stevenwoods/Documents/Codex-Test1/tools/build/sync-public-pages.js
  • Workflow:
    • /Users/stevenwoods/Documents/Codex-Test1/.github/workflows/sync-public-pages.yml
  • This syncs project/status surfaces to sgwoods/public.
  • It does not publish the playable game.

Testing / Evidence Flow

Harness

  • Run gameplay:
    • /Users/stevenwoods/Documents/Codex-Test1/tools/harness/run-gameplay.js
  • Analyze a run:
    • /Users/stevenwoods/Documents/Codex-Test1/tools/harness/analyze-run.js
  • Batch runner:
    • /Users/stevenwoods/Documents/Codex-Test1/tools/harness/run-batch.js
  • Batch prioritization:
    • /Users/stevenwoods/Documents/Codex-Test1/tools/harness/tuning-report.js
  • Scenarios:
    • /Users/stevenwoods/Documents/Codex-Test1/tools/harness/scenarios/

Log Viewer

  • Local review server:
    • /Users/stevenwoods/Documents/Codex-Test1/tools/log-viewer/server.js
  • Viewer UI:
    • /Users/stevenwoods/Documents/Codex-Test1/tools/log-viewer/index.html
    • /Users/stevenwoods/Documents/Codex-Test1/tools/log-viewer/app.js
    • /Users/stevenwoods/Documents/Codex-Test1/tools/log-viewer/styles.css
  • The viewer reads the same recursive artifact tree under:
    • /Users/stevenwoods/Documents/Codex-Test1/harness-artifacts/
  • Review-ready runs should include:
    • summary.json
    • neo-galaga-session-*.json
    • neo-galaga-video-*.review.webm when available
  • The viewer uses summary.json as the run index and then resolves the neighboring session/video files from the summary metadata.

Real Play

  • Player-generated .json and .webm are browser download artifacts first
    • they typically land in the user’s downloads location
  • Player-native in-game replay state is browser-local storage, not the harness archive
  • Player-generated .json and .webm can then be imported and analyzed
  • The same analysis pipeline should be used whenever possible so live play and harness results stay comparable
  • The log viewer can inspect those imported runs as long as they are copied into the expected harness-artifacts/ folder structure and have a summary.json beside them.
  • Formal policy:
    • /Users/stevenwoods/Documents/Codex-Test1/ARTIFACT_POLICY.md

Reference Flow

Durable Sources

  • Reference root:
    • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/
  • Manual:
    • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/manuals/galaga-1981-namco/
  • Walkthrough notes:
    • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/walkthroughs/trueachievements-galaga/

Decision Priority

  1. Original/manual-backed rule evidence
  2. Original gameplay footage
  3. Secondary walkthrough/progression references
  4. Tuning inference from our own harness and live play

System Boundaries

Stable Rule Areas

These should change carefully and usually only with reference evidence:

  • challenge stage structure
  • capture / rescue rules
  • carried fighter scoring
  • special attack squadron bonuses
  • results / high-score flow

Tunable Areas

These are expected to change often:

  • dive timing
  • collision fairness
  • challenge readability
  • later-stage band variety
  • visuals and presentation

Collaboration Model

The project is moving toward two parallel tracks:

  1. Reference-baseline / fidelity work
  2. Harness / gameplay tuning / shipping quality

That split should let collaborators work with less conflict and clearer ownership.

Early Post-1.0 Platform Direction

Shortly after 1.0, this codebase should start moving toward a shared arcade platform rather than remaining a one-off Aurora-only runtime.

Tracked umbrella:

  • #111 shared arcade platform extraction for Galaga-family cabinet shooters

The intended stable/shared layer is:

  • cabinet shell / HUD / control rail surfaces
  • replay, logging, and artifact model
  • harness runner and event vocabulary
  • build / publish / local handoff flow
  • left-right cabinet input primitives

The intended configurable/game-pack layer is:

  • formations
  • enemy families
  • scoring tables
  • stage cadence
  • attack scripts
  • optional mechanics such as capture/rescue or challenge stages

The goal is to reduce churn in mature infrastructure while letting future games such as Galaxian, Aurora variants, or similar fixed-screen cabinet shooters reuse the stable platform with smaller game-specific packs.

Source Map

Gameplay ownership and where to edit specific systems are pulled from the source map.

Generated from SOURCE_MAP.md during build.

This file is the quick orientation guide for the current codebase. It is meant to answer "where does this behavior live?" before someone starts changing gameplay or tuning values.

Main Editable Sources

  • /Users/stevenwoods/Documents/Codex-Test1/src/index.template.html
    • base HTML shell for the served game
    • settings drawer, overlays, feedback modal, and HUD containers
  • /Users/stevenwoods/Documents/Codex-Test1/src/styles.css
    • all in-game presentation and overlay styling
  • /Users/stevenwoods/Documents/Codex-Test1/src/js/00-boot.js
    • bootstrapping, constants, build metadata, audio, input, logging, scoreboard
    • release/build identity surfaced to the UI
  • /Users/stevenwoods/Documents/Codex-Test1/src/js/10-gameplay.js
    • stage spawning, enemy movement, challenge flow, scoring, capture/rescue, bullet logic, ship loss, and the main update loop
  • /Users/stevenwoods/Documents/Codex-Test1/src/js/20-render.js
    • sprite rendering, hitbox rendering assumptions, overlays, HUD, banners
  • /Users/stevenwoods/Documents/Codex-Test1/src/js/90-harness.js
    • harness-only hooks exposed on window.__galagaHarness__
    • deterministic setup helpers for scenarios and regression checks
  • /Users/stevenwoods/Documents/Codex-Test1/tools/log-viewer/
    • local artifact review app
    • server.js indexes run folders under harness-artifacts/
    • app.js synchronizes repaired videos, event streams, clips, and issue drafting
    • expects a run folder with summary.json plus neighboring session/video artifacts

Build / Deploy

  • /Users/stevenwoods/Documents/Codex-Test1/tools/build/build-index.js
    • assembles source files into /Users/stevenwoods/Documents/Codex-Test1/dist/dev/index.html
    • writes /Users/stevenwoods/Documents/Codex-Test1/dist/dev/build-info.json
  • /Users/stevenwoods/Documents/Codex-Test1/tools/build/promote-beta.js
    • copies the current dev build into /Users/stevenwoods/Documents/Codex-Test1/dist/beta/
    • rewrites build identity there for the public beta lane
  • /Users/stevenwoods/Documents/Codex-Test1/tools/build/promote-production.js
    • copies the approved dev build into /Users/stevenwoods/Documents/Codex-Test1/dist/production/
    • rewrites build identity there for the stable production lane
  • /Users/stevenwoods/Documents/Codex-Test1/tools/build/publish-lane.js
    • publishes either dist/beta/ or dist/production/ into sgwoods/Aurora-Galactica
    • automates the clone/copy/commit/push release step
  • /Users/stevenwoods/Documents/Codex-Test1/tools/build/check-publish-ready.js
    • verifies the repo is clean and the chosen lane was built from the current HEAD
    • fails early if required generated files are missing or stale
  • /Users/stevenwoods/Documents/Codex-Test1/tools/dev/local-resume.js
    • starts the local dist/dev game server and the log viewer together
    • preferred command when resuming work on a machine
  • /Users/stevenwoods/Documents/Codex-Test1/tools/dev/local-stop.js
    • stops the locally tracked game server and log viewer processes
  • /Users/stevenwoods/Documents/Codex-Test1/tools/build/sync-public-pages.js
    • exports project-status content from the current production build into the separate sgwoods/public repo
    • syncs that repo from build metadata
    • does not publish the playable Aurora build
  • /Users/stevenwoods/Documents/Codex-Test1/.github/workflows/pages.yml
    • builds and deploys the actual playable GitHub Pages site
  • /Users/stevenwoods/Documents/Codex-Test1/.github/workflows/sync-public-pages.yml
    • updates the public project-summary pages in sgwoods/public

Gameplay Areas

Stage Flow

  • /Users/stevenwoods/Documents/Codex-Test1/src/js/10-gameplay.js
    • spawnFormation()
    • spawnChallenge()
    • spawnStage()
    • runStage1Script()

These functions define the main board composition and stage transitions.

Challenge Stages

  • /Users/stevenwoods/Documents/Codex-Test1/src/js/10-gameplay.js
    • spawnChallenge()
    • updateChallengeEnemy()
    • challengeGroupBonus()

Challenge stages are one of the main fidelity-sensitive systems. Manual-backed rules currently modeled:

  • 40 enemies
  • 5 groups of 8
  • per-group bonus scoring
  • perfect bonus handling
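Under those rules, the per-group and perfect-bonus accounting can be sketched roughly like this. The bonus values are placeholders for illustration only, not verified arcade scoring tables:

```javascript
// Illustrative bonus calculation for a 40-enemy challenge stage
// structured as 5 groups of 8. GROUP_BONUS and PERFECT_BONUS are
// assumed values for the sketch, not the project's actual tuning.
const GROUP_SIZE = 8;
const GROUP_BONUS = 1000;    // assumed award for clearing a full group
const PERFECT_BONUS = 10000; // assumed award for clearing all 40

function challengeBonus(groupKills) {
  // groupKills: array of 5 per-group kill counts, each 0..8
  const perGroup =
    groupKills.filter((kills) => kills === GROUP_SIZE).length * GROUP_BONUS;
  const total = groupKills.reduce((sum, kills) => sum + kills, 0);
  const perfect = total === 5 * GROUP_SIZE ? PERFECT_BONUS : 0;
  return perGroup + perfect;
}
```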

Capture / Rescue / Dual Fighter

  • /Users/stevenwoods/Documents/Codex-Test1/src/js/10-gameplay.js
    • canCapture()
    • capturePlayer()
    • finishCapture()
    • destroyCarriedFighter()
    • rescue handling inside awardKill(...)
    • rescue completion inside update(...)

Recent manual-backed rule work includes:

  • blocking a second capture when one fighter is already carried
  • scoring for destroying a carried fighter:
    • 500 standby
    • 1000 attacking
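A minimal sketch of that scoring rule; the state names are illustrative, and the real branch lives inside awardKill(...) in 10-gameplay.js:

```javascript
// Manual-backed values from the notes above: destroying a carried fighter
// awards 500 while the capturing boss is in formation (standby) and 1000
// while it is attacking. 'attacking'/'standby' are assumed state labels.
function carriedFighterScore(bossState) {
  return bossState === 'attacking' ? 1000 : 500;
}
```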

Special Attack Squadrons

  • /Users/stevenwoods/Documents/Codex-Test1/src/js/10-gameplay.js
    • assignEscorts(...)
    • activeEscortCount(...)
    • boss dive scoring inside awardKill(...)

This is where the Stage 4+ special squadron bonus behavior now lives.

Later-Stage Pressure

  • /Users/stevenwoods/Documents/Codex-Test1/src/js/10-gameplay.js
    • stageTune(...) in /Users/stevenwoods/Documents/Codex-Test1/src/js/00-boot.js
    • stageBandProfile(...) in /Users/stevenwoods/Documents/Codex-Test1/src/js/00-boot.js
    • attack-gap / recovery timing in spawnStage() and loseShip()
    • dive decisions in updateEnemy()

If Stage 4/5 feels wrong, this is usually where to look first.

Later-Stage Variety

  • /Users/stevenwoods/Documents/Codex-Test1/src/js/00-boot.js
    • STAGE_BAND_PROFILES
    • stageBandProfile(...)
    • enemyFamilyForType(...)
  • /Users/stevenwoods/Documents/Codex-Test1/src/js/10-gameplay.js
    • makeEnemy(...)
    • familyMotion(...)
    • spawnStage() stage-profile logging
  • /Users/stevenwoods/Documents/Codex-Test1/src/js/20-render.js
    • enemyPalette(...)
    • FAMILY_PIXELS

This is where stage-banded family progression now lives for issues like later-stage enemy variety and future transform-style fidelity work.

Harness / Measurement

  • /Users/stevenwoods/Documents/Codex-Test1/tools/harness/run-gameplay.js
    • launches Chrome and runs deterministic scenarios
  • /Users/stevenwoods/Documents/Codex-Test1/tools/harness/analyze-run.js
    • derives metrics from recorded .json + .webm
  • /Users/stevenwoods/Documents/Codex-Test1/tools/harness/tuning-report.js
    • rolls per-run analysis into prioritized findings
  • /Users/stevenwoods/Documents/Codex-Test1/tools/harness/scenarios/
    • scenario definitions for challenge, rescue, carried-fighter scoring, descent timing, Stage 4 pressure, and squadron bonuses

  • /Users/stevenwoods/Documents/Codex-Test1/harness-artifacts/
    • canonical local evidence tree for harness runs and imported manual sessions
    • the log viewer walks this tree recursively and treats each folder containing summary.json as a reviewable run

Reference Material

Primary rule references:

  • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/manuals/galaga-1981-namco/README.md
  • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/manuals/galaga-1981-namco/Galaga_-_1981_-_Namco.pdf

Secondary progression/fidelity notes:

  • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/walkthroughs/trueachievements-galaga/README.md

Use the manual first when a rule question exists. Use walkthrough/reference clips as secondary help for later-stage variety and visual comparison.

External Services Source

Runtime dependencies, hosting, feedback transport, and local-only systems are pulled from the external services inventory.

Generated from EXTERNAL_SERVICES.md during build.

This document is the current source of truth for external services used by Aurora Galactica, what they are used for, and what is local-only instead.

Runtime Services

Supabase

  • Service:
    • https://iddyodcknmxupavnuuwg.supabase.co
  • Used for:
    • pilot account sign-up and sign-in
    • authenticated pilot profile data
    • shared leaderboard data
    • validated score views
    • player-specific score views such as mine
  • Main runtime code:
    • /Users/stevenwoods/Documents/Codex-Test1/src/js/05-supabase.js
  • Build token source:
    • /Users/stevenwoods/Documents/Codex-Test1/tools/build/build-index.js

Web3Forms

  • Service:
    • https://api.web3forms.com/submit
  • Used for:
    • in-game bug reports
    • in-game feature requests
  • Main runtime code:
    • /Users/stevenwoods/Documents/Codex-Test1/src/js/00-boot.js

Hosting And Delivery

GitHub Pages

  • Production:
    • https://sgwoods.github.io/Aurora-Galactica/
  • Beta:
    • https://sgwoods.github.io/Aurora-Galactica/beta/
  • Also hosts:
    • player guide
    • project guide
    • release dashboard
  • Used for:
    • public delivery of the playable game and public documentation surfaces
  • Main policy docs:
    • /Users/stevenwoods/Documents/Codex-Test1/README.md
    • /Users/stevenwoods/Documents/Codex-Test1/RELEASE_POLICY.md

Development And Release Tooling

GitHub

  • Repositories:
    • https://github.com/sgwoods/Codex-Test1
    • https://github.com/sgwoods/Aurora-Galactica
  • Used for:
    • source control
    • issue tracking
    • release publishing
    • public artifact hosting workflow
  • Also used by:
    • the local log viewer issue-creation flow through gh

GitHub API

  • Used by:
    • public sync verification
    • public project-page sync tooling
  • Main scripts:
    • /Users/stevenwoods/Documents/Codex-Test1/tools/build/sync-public-pages.js
    • /Users/stevenwoods/Documents/Codex-Test1/tools/build/verify-public-sync.js

Local-Only Or Browser-Native Systems

These are not external services.

Browser Storage

  • Used for:
    • local high scores
    • local settings
    • local replay catalog
  • Backed by:
    • localStorage
    • IndexedDB

Browser Recording

  • Used for:
    • native local replay capture
  • Backed by:
    • MediaRecorder

Local Dev Services

  • Local game server:
    • http://localhost:8000
  • Local log viewer:
    • http://127.0.0.1:4311/
  • Used for:
    • development
    • artifact review
    • harness/video debugging
  • These are not production dependencies.

Practical Summary

Player-Facing External Runtime Dependencies

  • Supabase
  • Web3Forms

Hosting Dependencies

  • GitHub Pages

Developer And Release Dependencies

  • GitHub
  • GitHub API

Not External

  • replay storage
  • replay capture
  • local dev server
  • local log viewer

Artifact Policy Source

The formal distinction between browser-local replay state, browser download exports, and the canonical harness-artifacts review archive is documented in the artifact policy.

Generated from ARTIFACT_POLICY.md during build.

This project has three distinct artifact locations. Keeping them deliberately separate avoids recurring confusion between player exports, browser-local replay state, and the developer review archive.

Policy

1. In-Game Player Replay State

This is the browser-native replay feature used by a player who launches the game in-browser with no extra setup.

  • Storage:
    • browser-local IndexedDB
  • Contents:
    • recent replay metadata
    • recent replay video blobs used by the in-game 🎞 replay surface
  • Scope:
    • local to that browser/profile on that device
    • not intended as the canonical developer artifact archive

This is the right default for dev, beta, and production player use because it requires no filesystem access and works inside normal browser constraints.

2. Exported Player Capture Files

This is the explicit user-facing export path for logs and downloaded recordings.

  • Current filenames:
    • neo-galaga-session-*.json
    • neo-galaga-video-*.webm
  • Destination:
    • the browser download location
    • typically the user’s downloads directory
    • on macOS that is usually:
      • ~/Downloads/

This is the correct export destination for dev, beta, and production because the browser controls download placement. The game should not promise a repo-local path for player-triggered downloads.

3. Canonical Developer Review Archive

This is the normalized artifact tree used by the harness, analyzer, and log viewer.

  • Root:
    • <workspace>/harness-artifacts/
  • Review-ready run folders should include:
    • summary.json
    • neo-galaga-session-*.json
    • neo-galaga-video-*.review.webm when available

This is the source of truth for:

  • log viewer inspection
  • synchronized event/video review
  • harness analysis
  • tuning reports
  • durable review evidence inside the repo workspace

Formal Workflow

Dev

  1. Player-triggered exports download through the browser into the user’s downloads location.
  2. If the run should become a durable review artifact, import it into:
  • <workspace>/harness-artifacts/
  3. Use:
  • npm run harness:import-latest
  4. Review it in the viewer or analyzer from the imported run folder.

Beta / Production

  1. In-game replay uses browser-local replay storage.
  2. Explicit exports still go to the browser download location.
  3. If we want a run to enter developer review, we import that downloaded pair into <workspace>/harness-artifacts/ on a dev machine.

There is no separate filesystem export location for beta or production.

  • dev: replay lives in browser storage, exports go to browser downloads
  • beta: replay lives in browser storage, exports go to browser downloads
  • production: replay lives in browser storage, exports go to browser downloads

Non-Goals

  • dist/dev/, dist/beta/, and dist/production/ are not runtime capture archives.
  • export.mov.png is a build snapshot artifact, not a session/replay artifact.
  • The game should not imply that exported logs/videos automatically land in <workspace>/harness-artifacts/.

Source of Truth

Going forward, use this distinction:

  • IndexedDB = native local replay feature for players
  • browser download directory = exported log/video files
  • <workspace>/harness-artifacts/ = canonical developer review archive after import/normalization

If documentation or UI text blurs those boundaries, treat it as a documentation bug and correct it.

Reference Baseline Source

Reference priorities, baseline topics, and the evidence model are pulled directly from the fidelity baseline document.

Generated from REFERENCE_BASELINE.md during build.

This document defines how we turn original Galaga behavior into actionable work for this project.

Goal

Create a durable, repeatable baseline for:

  • how original Galaga behaves
  • how our game differs
  • what issue or milestone that difference should map to
  • how improvement will be measured

Source Priority

  1. Manual / cabinet-era rule documents
  2. Original gameplay footage
  3. Emulator captures when available
  4. Secondary written references

Working Process

For each fidelity topic:

  1. Identify the source artifact
  2. Capture the specific observed behavior
  3. Write the current project behavior
  4. Define the gap
  5. Link the gap to:
  • a GitHub issue
  • a harness scenario or metric
  • a target release / roadmap milestone if applicable

Current Baseline Topics

Player Control Principles

  • Source:
    • original cabinet feel as observed in gameplay footage
    • modern keyboard best practice for translating joystick-style intent into digital input
  • Current rule:
    • keyboard movement should emulate joystick intent, not instantaneous digital stepping

  • Current implementation target:
    • use a target-velocity model for manual keyboard control
    • apply acceleration on press and stronger deceleration on release
    • preserve neutral behavior when opposite directions are held together
    • tune for:
      • tap = fine correction
      • hold = lane travel
  • Validation questions:
    • can a player line up under descending enemies with small corrections?
    • does the ship still cross the playfield quickly enough for evasive play?
    • does manual control feel smoother than the old full-speed-per-frame step?
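A minimal sketch of the target-velocity model under the rules above. MAX_SPEED, ACCEL, and DECEL are illustrative constants, not the project's tuned values:

```javascript
// Target-velocity keyboard model: accelerate toward a held direction,
// decelerate harder toward neutral on release, and cancel to neutral
// when opposite directions are held together. All constants are assumed.
const MAX_SPEED = 240; // px/sec
const ACCEL = 1200;    // px/sec^2 while a direction is held
const DECEL = 2400;    // stronger px/sec^2 when returning to neutral

function stepShip(ship, input, dt) {
  // Opposite directions held together cancel to neutral intent.
  const dir = (input.right ? 1 : 0) - (input.left ? 1 : 0);
  const targetVx = dir * MAX_SPEED;
  const rate = dir !== 0 ? ACCEL : DECEL;
  const delta = targetVx - ship.vx;
  const maxStep = rate * dt;
  // Move velocity toward the target without overshooting it.
  ship.vx += Math.max(-maxStep, Math.min(maxStep, delta));
  ship.x += ship.vx * dt;
  return ship;
}
```

With these numbers a short tap nudges the ship a few pixels for fine alignment, while a sustained hold ramps to full lane-travel speed in about a fifth of a second, which is the tap-versus-hold split the tuning targets describe.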

Stage 1 Dive Timing

  • Source:
    • original gameplay video
  • Current metric:
    • stage1-descent harness scenario
  • Current use:
    • compare first dive timing and lower-field crossing against original footage

Challenge Stage Fidelity

  • Source:
    • manual-backed structure
    • original challenge-stage footage
    • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/analyses/first-challenge-stage/README.md
    • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/analyses/challenge-stage-reference/README.md
  • Current metrics:
    • stage3-challenge
    • stage6-regular
    • stage7-challenge
    • challenge hit rate
    • average upper-band dwell time per target
  • Current rule note:
    • challenge stages are currently treated as non-lethal bonus rounds, which matches our present reading of the reference material for #33

  • Active gap:
    • visual and timing fidelity still not close enough to original Galaga

Capture / Rescue / Dual Fighter

  • Sources:
    • manual
    • gameplay footage
    • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/analyses/release-reference-pack/README.md
  • Current metrics:
    • rescue-dual
    • capture-rescue-dual
    • carried-boss-diving-release
    • carried-boss-formation-hostile
    • second-capture-current
    • natural-capture-cycle
    • stage4-capture-pressure
    • carried-fighter scoring scenarios
  • Active gaps:
    • remaining rescue usefulness / edge-case fidelity
    • Stage 4-specific capture pressure still needs stronger measurement and tuning
    • manual-backed confidence is currently strongest for:
      • killing the boss while it is attacking recovers dual fighters
      • shooting the carried fighter itself destroys it for points
    • current implementation now follows that attacking-boss branch more directly:
      • diving boss kill releases the captured fighter into an automatic rejoin path
      • in-formation boss kill spawns an immediate hostile captured fighter branch
    • the exact hostile / in-formation branch still needs stronger primary-source confirmation against original gameplay footage or emulator capture

Post-1.0 Scoring / Special Pattern Research

  • Sources:
    • manual
    • original gameplay footage
    • future emulator captures
  • Current state:
    • special attack squadron bonus logic exists, but we do not yet have enough evidence to claim original-accurate timing/composition for the bonus-yielding three-fighter attack clusters seen in regular play
  • Planned follow-up:
    • capture stronger evidence after 1.0 and map exact appearance timing, composition, and scoring behavior before expanding that system further

Important near-term exception:

  • for the scoped four-stage 1.0, the game should still visibly demonstrate the classic boss-with-two-escorts special attack and show the resulting bonus in normal play, even if deeper late-game pattern research stays post-1.0

Later-Stage Variety

  • Sources:
    • walkthrough notes
    • later gameplay footage
  • Current metrics:
    • stage12-variety
    • stage profile / family logging
  • Current state:
    • first stage-band pass is in
    • deeper transform / family fidelity is still follow-up work

Later-Stage Survivability

  • Sources:
    • original gameplay footage
    • our harness diagnostics
    • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/analyses/stage4-fairness/README.md
    • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/analyses/release-reference-pack/README.md
  • Current metrics:
    • stage4-five-ships
    • stage4-survival
  • Active gap:
    • later-stage collision fairness and progression depth

External Implementation References

  • These are architecture/presentation references, not original-Galaga rules sources.
  • Current external reference:
    • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/analyses/external-galaga5/README.md
  • Use case:
    • borrow ideas about structure or presentation when useful
    • do not treat them as fidelity evidence

What To Add Next

High-value future baseline additions:

  1. Emulator-derived captures and timing notes
  2. Curated reference clips by system:
  • first challenge stage
  • first capture
  • first rescue
  • later-stage pressure
  3. Issue-by-issue reference mapping so open fidelity issues point directly at evidence

Current focused release pack:

  • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/analyses/release-reference-pack/README.md

Collaboration Rule

When a gameplay-fidelity issue is opened or worked, it should eventually point back to at least one entry in this document or to a durable source under:

  • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/

Contributing Source

Collaborator guidance, build expectations, and review defaults are pulled from the contributing guide.

Generated from CONTRIBUTING.md during build.

This project is still in prerelease. We are optimizing for fast iteration, reference-backed fidelity work, and safe collaboration.

Core Rules

  • Edit source files under:
    • /Users/stevenwoods/Documents/Codex-Test1/src/
  • Do not hand-edit generated build output under:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/
  • Rebuild after source changes:
  cd /Users/stevenwoods/Documents/Codex-Test1
  npm run build
  • Prefer evidence-backed gameplay changes:
    • manual
    • reference videos
    • harness scenarios
    • real playtest captures

Useful Orientation

  • Project overview:
    • /Users/stevenwoods/Documents/Codex-Test1/README.md
  • Source orientation:
    • /Users/stevenwoods/Documents/Codex-Test1/SOURCE_MAP.md
  • Architecture:
    • /Users/stevenwoods/Documents/Codex-Test1/ARCHITECTURE.md
  • Reference-to-behavior baseline:
    • /Users/stevenwoods/Documents/Codex-Test1/REFERENCE_BASELINE.md
  • Roadmap:
    • /Users/stevenwoods/Documents/Codex-Test1/PLAN.md
    • /Users/stevenwoods/Documents/Codex-Test1/PRODUCT_ROADMAP.md

Branching

  • Use the codex/ prefix for working branches
  • Suggested examples:
    • codex/challenge-fidelity-pass
    • codex/stage4-collision-tuning
    • codex/reference-baseline-docs

Building And Running

  • Build the game:
  npm run build
  • Promote the current dev build into the stable production artifact:
  npm run promote:production
  • Promote the current built output into a beta snapshot:
  npm run promote:beta
  • Publish a generated lane into the public Aurora repo:
  npm run publish:check:beta
  npm run publish:beta
  npm run publish:check:production
  npm run publish:production
  • The main local playable build is generated at:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/dev/index.html
  • Preferred local handoff/startup command:
  npm run local:resume
  • The beta-ready snapshot is generated at:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/beta/
  • The stable production artifact is generated at:
    • /Users/stevenwoods/Documents/Codex-Test1/dist/production/
  • Replay a saved gameplay session:
  npm run harness -- --session /absolute/path/to/neo-galaga-session.json
  • Run a built-in scenario:
  npm run harness -- --scenario stage3-challenge
  • Run a batch:
  npm run harness:batch -- --profile quick

Gameplay Change Expectations

If a change touches gameplay, include at least one of:

  • a matching harness run
  • a batch report
  • a real playtest note
  • a reference artifact comparison

If a change is fidelity-driven, tie it back to:

  • /Users/stevenwoods/Documents/Codex-Test1/reference-artifacts/
  • an open GitHub issue
  • a measurable harness target where possible

Release / Build Identity

  • Versioning policy:
    • /Users/stevenwoods/Documents/Codex-Test1/RELEASE_POLICY.md
  • Every build carries:
    • version
    • build number
    • commit
    • branch
    • dirty/clean state
    • Eastern timestamp
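As an illustration only, a build identity record carrying these fields might look like the fragment below. The field names and the branch/timestamp values are hypothetical, not the project's actual build-info.json schema; only the version, build number, and commit match the current build label shown at the top of this guide.

```json
{
  "version": "1.0.2",
  "buildNumber": 290,
  "commit": "831a2c6",
  "branch": "main",
  "dirty": false,
  "timestamp": "2026-04-02T09:00:00-04:00"
}
```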

Good Collaboration Defaults

  • Keep comments targeted and high-value
  • Prefer small, reviewable steps
  • Preserve reference-backed rules once they land
  • Call out uncertainty instead of guessing when original Galaga behavior is not yet settled

Current Working Priorities

  1. Build a stronger autonomous baseline against original Galaga
  2. Keep later-stage survivability and challenge-stage fidelity moving
  3. Make collaboration safer with clearer docs, issue hygiene, and repeatable harness evidence

Release Policy Source

Versioning and release recommendation rules are pulled directly from the release policy document.

Generated from RELEASE_POLICY.md during build.

Version Format

  • We use Semantic Versioning across three release surfaces while the game is still evolving toward a stable arcade-quality release.
  • Pre-production development format:
    • MAJOR.MINOR.PATCH-prerelease
  • Production format:
    • MAJOR.MINOR.PATCH
  • Production beta format:
    • MAJOR.MINOR.PATCH-beta.<number>
  • Current build label format adds build metadata to the surface version:
    • surface-version+build.<number>.sha.<shortcommit>
  • Dirty local builds append:
    • .dirty
  • After 1.0.0, production hotfixes should bump PATCH:
    • 1.0.1
    • 1.0.2
    • and so on

Examples:

  • pre-production:
    • 0.5.0-alpha.1+build.115.sha.b0d812c
  • production:
    • 0.5.0+build.9.sha.457df28
  • production beta:
    • 0.5.0-beta.1+build.9.sha.457df28
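The label assembly rules above can be sketched as a small helper. This is a sketch under the stated format, not the project's actual tooling; `makeBuildLabel` and its parameter names are illustrative.

```javascript
// Hypothetical sketch of the build-label format above:
// surface-version+build.<number>.sha.<shortcommit>, with ".dirty" appended
// for dirty local builds. makeBuildLabel is an illustrative name.
function makeBuildLabel({ surfaceVersion, buildNumber, shortCommit, dirty }) {
  const label = `${surfaceVersion}+build.${buildNumber}.sha.${shortCommit}`;
  return dirty ? `${label}.dirty` : label;
}

// e.g. makeBuildLabel({ surfaceVersion: "0.5.0-alpha.1", buildNumber: 115,
//                       shortCommit: "b0d812c", dirty: false })
// -> "0.5.0-alpha.1+build.115.sha.b0d812c"
```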

Meaning

  • MAJOR
    • reserve 1.x for a public-quality release where the scoped product goal is stable and shippable
    • current scoped goal:
      • a polished four-stage slice (Stages 1-4) rather than full long-form Galaga expansion
  • MINOR
    • use for meaningful product milestones
    • examples:
      • major gameplay fidelity improvement
      • large systems addition
      • new stage/content breadth milestone
      • public playtest readiness
  • PATCH
    • use for smaller compatible improvements inside the current milestone
    • examples:
      • bug fixes
      • tuning passes
      • UI polish
      • harness/reporting improvements
  • prerelease
    • alpha: active system building, rules changes, frequent balance changes
    • beta: feature-complete enough for broader testing, still tuning quality and regressions
    • rc: release candidate, only bug fixes and polish before a stable cut
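The prerelease phases above can be read mechanically off a version string. A minimal sketch, assuming SemVer-shaped labels as in the examples earlier; `releasePhase` is an illustrative name, not part of the project's tooling.

```javascript
// Hypothetical sketch mapping a SemVer string to the phases described above.
function releasePhase(version) {
  // strip build metadata ("+build....") before inspecting the prerelease tag
  const core = version.split("+")[0];
  const prerelease = core.split("-")[1]; // e.g. "alpha.1", "beta.2", "rc.1"
  if (!prerelease) return "stable";
  return prerelease.split(".")[0]; // "alpha" | "beta" | "rc"
}
```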

Hosted Release Lanes

  • production
    • published at:
      • https://sgwoods.github.io/Aurora-Galactica/
    • this is the official shared build, even while SemVer remains prerelease before 1.0
  • beta
    • manually promoted at:
      • https://sgwoods.github.io/Aurora-Galactica/beta/
    • this is a distinct public checkpoint lane for less-frequent milestone playtesting
  • pre-production
    • day-to-day development happens in:
      • https://github.com/sgwoods/Codex-Test1
    • this is the active engineering line and not the canonical shared play URL

Repository Roles

  • Codex-Test1
    • development repo
    • active day-to-day engineering, tuning, issues, and harness work
  • Aurora-Galactica
    • public release repo
    • promoted checkpoints intended for broader sharing and less frequent change

Beta Promotion Workflow

  1. Update source content in Codex-Test1:
  • gameplay/UI under src/
  • maintained docs and release metadata under the repo root
  2. Build the current dev artifacts:
  • npm run build
  3. Test the generated dev build in:
  • /Users/stevenwoods/Documents/Codex-Test1/dist/dev/
  • recommended local service startup:
    • npm run local:resume
  4. Promote the current dev build into the beta lane:
  • npm run promote:beta
  5. Review the generated output in:
  • /Users/stevenwoods/Documents/Codex-Test1/dist/dev/
  • /Users/stevenwoods/Documents/Codex-Test1/dist/beta/
  6. Run the beta preflight:
  • npm run publish:check:beta
  7. Publish the promoted beta snapshot with:
  • npm run publish:beta
  8. Let GitHub Pages deploy from Aurora-Galactica so https://sgwoods.github.io/Aurora-Galactica/beta/ serves the promoted checkpoint

The beta lane is intentionally a snapshot of selected generated artifacts under dist/, not a separate branch or a second build pipeline. Codex-Test1 remains the engineering source of truth; Aurora-Galactica is the public release surface for both production and beta.

Score/Data Lane Policy

  • production
    • full online Supabase path
    • shared leaderboard reads enabled
    • score submission enabled
    • pilot account/auth/profile actions enabled
  • beta
    • production-score read-only mirror by default
    • score submission disabled
    • pilot account/auth/profile writes disabled
    • local device scores still save normally
  • pre-production
    • production-score read-only mirror by default
    • score submission disabled
    • pilot account/auth/profile writes disabled
    • local device scores still save normally

This is the current launch-safe answer to #76: non-production lanes no longer use the same default write path as production, even though they can still mirror production leaderboard reads.

Optional test-pilot override for non-production:

  • TEST_ACCOUNT_EMAIL
    • enables one specific beta/dev pilot account for auth and write-flow testing
  • TEST_ACCOUNT_USER_ID
    • excludes that pilot's scores from shared leaderboard reads
  • when configured:
    • beta/dev may authenticate only that test pilot
    • score submission is enabled only while that pilot is signed in
    • production continues to reject that pilot account for normal use
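Taken together, the lane policy and the test-pilot override amount to a write gate along these lines. This is a sketch of the stated policy, not the project's actual code; `canSubmitScore` and its parameters are illustrative names.

```javascript
// Hypothetical sketch of the score-submission gate described above.
// production: full online write path. beta / pre-production: writes are
// disabled by default, with one exception: the configured test pilot
// may submit scores while signed in.
function canSubmitScore({ lane, signedInUserId, testAccountUserId }) {
  if (lane === "production") return true;
  return Boolean(testAccountUserId && signedInUserId === testAccountUserId);
}
```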

Production Publish Workflow

  1. Treat production as a promotion from an approved beta candidate, not as a direct publish from arbitrary dev output.
  2. Build and validate the current dev output:
  • npm run build
  • plus whatever harness/manual checks are appropriate for the release
  3. Promote and review the beta candidate first:
  • npm run publish:beta
  4. Once that beta candidate is explicitly approved, mark that exact beta snapshot as approved:
  • npm run approve:beta
  5. Only then promote the matching release into the stable production artifact:
  • npm run publish:production
  6. Let GitHub Pages deploy from Aurora-Galactica so https://sgwoods.github.io/Aurora-Galactica/ serves the promoted production build

Hotfix Process

A hotfix is a small, controlled production repair. It is not a fast path around the release process.

Goals:

  • fix the live issue without widening scope
  • preserve evidence before mutating data
  • add a regression when the failure can recur
  • validate the exact fix in beta before promoting production

Required process:

  1. Freeze scope to the specific production failure:
  • do not combine unrelated cleanup, refactor, or opportunistic polish
  2. Preserve evidence first:
  • screenshots
  • user-visible symptoms
  • timestamps
  • score/account identifiers if relevant
  • production data inspection before mutation
  3. Assess blast radius:
  • gameplay
  • auth
  • leaderboard/data integrity
  • replay/media
  • release tooling
  4. Patch source in Codex-Test1 first:
  • never treat direct production editing as the real fix
  5. Add or update a focused regression:
  • especially for score submit, auth, replay, and game-over flows
  6. Verify locally:
  • npm run build
  • focused harness checks
  • adjacent sanity checks for nearby behavior
  • when the hotfix touches an external runtime dependency, run a live probe of that dependency before beta if the probe is available
  • when the hotfix touches obvious gameplay input or motion, run the hotfix smoke suite:
    • npm run harness:check:hotfix-smoke
  • when the hotfix affects controls, overlays, or input lifecycles, run a hosted-lane input probe:
    • npm run harness:check:live-input:beta
  7. Publish to beta first:
  • npm run publish:beta
  8. Manually verify the exact failure in hosted beta
  9. If the hotfix changes behavior that can remain stale in an already-open tab, provide an in-app refresh reminder:
  • tell the player that a new fix is available
  • tell them how to refresh
  • prefer a direct refresh action when the UI allows it
  10. Approve beta and only then publish production:
  • npm run approve:beta
  • npm run publish:production
  11. Verify the original user flow in production after release
  12. Record the fix, regression coverage, and any production data correction

Production data rule:

  • If a stopgap production data correction is required, inspect first and mutate minimally.
  • Record exactly what changed.
  • Still ship the real source fix through the normal hotfix path.

Aurora hotfix checklist:

npm run build
# run focused harness checks
npm run harness:check:hotfix-smoke
# if controls/input/overlay behavior changed
npm run harness:check:live-input:beta
# run external dependency probes when the hotfix touches them
npm run publish:beta
# manual hosted beta verification
npm run approve:beta
npm run publish:production

Hotfix smoke suite contents:

  • node tools/harness/check-input-mapping.js
    • verifies playable left/right movement distance, expected key mapping, and no repeated input resets during active movement
  • node tools/harness/check-popup-surfaces.js
    • verifies settings/help/pilot/score/feedback surfaces still open and fit correctly after UI-adjacent hotfixes
  • node tools/harness/check-feedback-submit-path.js
    • verifies direct feedback success and fallback diagnostics are still intact
  • node tools/harness/check-remote-score-submit.js
    • verifies the main game-over remote score path still behaves correctly on success and failure

True 1.0 Launch Baseline Reset

  • #130 is a required pre-1.0 release operation.
  • Before the true 1.0 production launch, reset the shared production leaderboard so the official public scoreboard starts from zero.
  • Use a dry run first:
    • SUPABASE_SERVICE_ROLE_KEY=... npm run leaderboard:inspect:production
  • Then execute the reset when the launch candidate is approved:
    • SUPABASE_SERVICE_ROLE_KEY=... npm run leaderboard:reset:production
  • This operation deletes rows from the production scores table.
  • It does not delete pilot accounts.
  • It requires operator-only Supabase service-role access and should be treated as an explicit release step, not a routine browser-side admin action.

Tracked hardening item:

  • #113
    • enforce dev -> beta -> approve -> production as the only supported publish chain
    • production preflight fails if the current production publish is not sourced from the approved beta candidate
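The #113 preflight rule could be enforced with a check along these lines. This is a sketch only: the function name and the idea of comparing snapshot identities (e.g. commit SHAs) are assumptions about how the real preflight might identify the approved beta candidate.

```javascript
// Hypothetical sketch of the #113 production preflight: fail unless the
// production publish is sourced from the approved beta snapshot.
// All names here are illustrative, not the project's actual tooling.
function assertApprovedChain({ productionSourceSha, approvedBetaSha }) {
  if (!approvedBetaSha) {
    throw new Error("publish:production blocked: no approved beta candidate");
  }
  if (productionSourceSha !== approvedBetaSha) {
    throw new Error(
      "publish:production blocked: source does not match the approved beta snapshot"
    );
  }
}
```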

Public Project Status Sync Workflow

  1. Build the current dev output and promote the stable production artifact:
  • npm run build
  • npm run promote:production
  2. Run:
  • npm run sync:public
  3. This updates the separate sgwoods/public project-summary pages and manifests from:
  • /Users/stevenwoods/Documents/Codex-Test1/dist/production/build-info.json
  • /Users/stevenwoods/Documents/Codex-Test1/release-notes.json
  4. It does not publish the playable game itself.

Build Number

  • Every build gets a build number, even when there is no major/minor release bump.
  • Local default:
    • git commit count
  • CI/Pages default:
    • GitHub Actions run number if available

This gives every build a unique identity without forcing a SemVer bump for every commit.
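The default rule above can be sketched as a small helper: prefer the CI/Pages run number when present, otherwise fall back to the local git commit count. `resolveBuildNumber` is an illustrative name; the real build scripts may differ.

```javascript
// Hypothetical sketch of the build-number defaults described above.
// env is the process environment; localCommitCount is the value a local
// build would derive from `git rev-list --count HEAD`.
function resolveBuildNumber(env, localCommitCount) {
  const run = Number.parseInt(env.GITHUB_RUN_NUMBER ?? "", 10);
  return Number.isNaN(run) ? localCommitCount : run;
}
```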

Bump Guidance

  • Bump PATCH when:
    • we make a contained improvement that should be visible or testable
    • the user-facing build is worth distinguishing from the last one
  • Bump MINOR when:
    • a milestone is completed or clearly crossed
    • the overall player experience has materially advanced
    • the roadmap target shifts from one product phase to the next
  • Bump prerelease indicator when:
    • we move from alpha to beta
    • or from beta to rc
  • Bump MAJOR only when:
    • we intentionally declare a stable release baseline

Recommendation Policy For Codex

  • Recommend a PATCH bump after:
    • a player-visible gameplay/presentation improvement lands cleanly
    • or a release-worthy bug fix changes behavior
  • Recommend a MINOR bump after:
    • a roadmap milestone closes
    • or a meaningful cluster of issues is completed together
  • Do not recommend a version bump for:
    • purely exploratory work
    • failed tuning experiments
    • internal-only diagnostic changes unless they materially affect shipped behavior

Release History Policy

  • every meaningful release should add an entry under:
    • /Users/stevenwoods/Documents/Codex-Test1/release-history/
  • required:
    • structured session summary
  • optional but preferred:
    • verbatim raw chat transcript if exported from Codex
  • subsequent release entries should capture the incremental work since the previous release entry

Release Targets

  • pre-1.0
    • current phase
    • production remains prerelease in SemVer terms while core gameplay and fidelity for the four-stage slice are still moving
  • beta
    • target when:
      • the four-stage slice is stable enough for broader external playtesting
      • capture/rescue rules are settled for that slice
      • stage challenge/results/high-score flow is in place
      • visuals/audio are consistent enough for broader playtesting
    • hosted expectation:
      • use the manually promoted /beta/ lane for these checkpoint builds rather than updating it on every production change
  • 1.0
    • target when:
      • Stage 1 through Stage 4 feel stable as one coherent game loop
      • the Stage 3 challenge stage is rewarding and readable
      • the Stage 4 endpoint is fair and beatable
      • core rule fidelity gaps inside the four-stage slice are resolved or consciously documented
      • the hosted build, high-score flow, and public project pages are suitable for general external use

Post-1.0 Environment Goal

  • keep production as the canonical public Aurora environment
  • keep the current non-production read-only/default-local policy unless there is a strong reason to loosen it
  • preferred order after 1.0:
  1. decide whether beta should remain a production-read mirror or move to its own backend
  2. isolate non-production fully behind a separate Supabase project or clearly separate tables if shared reads stop being useful
  3. keep the active environment obvious in the build label and account/status surfaces