# OpenAdapt Bootstrap

Self-hosting infrastructure for OpenAdapt recursive development.

OpenAdapt Bootstrap enables OpenAdapt to build OpenAdapt: a recursive self-improvement system in which recorded development workflows can be replayed autonomously via Claude Code integration.
This creates infrastructure for:
- Automated demo generation
- Self-documenting workflows
- Reproducible development tasks
- Mobile-first development (user directs from mobile, desktop executes autonomously)
## Repository Structure

```
openadapt-bootstrap/
├── recordings/                  # OpenAdapt recordings of dev workflows
│   ├── demo_generation/
│   ├── screenshot_capture/
│   ├── test_execution/
│   └── pr_creation/
├── workflows/                   # High-level workflow definitions
│   ├── __init__.py
│   ├── base.py                  # WorkflowRecorder, WorkflowExecutor base classes
│   ├── demo_generation.py
│   ├── screenshot_workflow.py
│   └── test_workflow.py
├── playback/                    # Replay automation scripts
│   ├── __init__.py
│   └── executor.py              # Claude Code integration
├── examples/                    # Example workflow usage
│   └── generate_benchmark_screenshots.py
├── docs/
│   └── ARCHITECTURE.md
└── README.md
```
## Components

### WorkflowRecorder

Captures development tasks as OpenAdapt recordings:

```python
from openadapt_bootstrap import WorkflowRecorder

# Record a workflow once (manual demonstration)
with WorkflowRecorder(
    name="generate_pr_screenshots",
    description="Generate screenshots for benchmark viewer PR",
) as recorder:
    # Perform the task manually:
    # - Open browser to benchmark viewer
    # - Navigate to different states
    # - Capture screenshots
    # - Save to screenshots/ directory
    pass

print(f"Recorded workflow: {recorder.recording_path}")
```
### WorkflowExecutor

Replays workflows autonomously via Claude Code:

```python
from openadapt_bootstrap import WorkflowExecutor

# Replay workflow without user intervention
executor = WorkflowExecutor(
    recording_path="recordings/demo_generation/pr_screenshots.db",
    claude_code_enabled=True,
)

result = executor.execute()
print(f"Screenshots saved to: {result.output_dir}")
```
### ClaudeCodeWorkflow

Executes workflows in the background while the user is on mobile:

```python
from openadapt_bootstrap import ClaudeCodeWorkflow

# Trigger from GitHub issue comment (user on mobile)
workflow = ClaudeCodeWorkflow.from_github_issue(
    repo="OpenAdaptAI/openadapt-evals",
    issue_number=42,
)

# Execute autonomously
result = workflow.execute()

# Post results back to GitHub
workflow.post_result_to_issue(result)
```
## Use Cases

### Screenshot Generation

Problem: Creating screenshots for PRs is manual and time-consuming.
Solution: Record the screenshot workflow once, replay for each PR.

```python
from openadapt_bootstrap.workflows import ScreenshotWorkflow

workflow = ScreenshotWorkflow(
    target_url="http://localhost:8080/viewer.html",
    viewports=["desktop", "tablet", "mobile"],
    states=["overview", "task_detail", "log_expanded"],
)

screenshots = workflow.execute()
workflow.commit_and_push(screenshots, branch="pr-screenshots")
```
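`commit_and_push` presumably shells out to git; here is a minimal sketch of that step, assuming the helper takes file paths and a branch name (none of this is confirmed by the actual implementation):

```python
import subprocess
from pathlib import Path


def commit_and_push(paths: list[Path], branch: str, message: str = "Add PR screenshots") -> None:
    """Commit the given files on `branch` and push it (hypothetical helper)."""
    subprocess.run(["git", "checkout", "-B", branch], check=True)
    subprocess.run(["git", "add", "--", *map(str, paths)], check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)
    subprocess.run(["git", "push", "-u", "origin", branch], check=True)
```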
### Demo Generation

Problem: Demos need to be regenerated when the UI changes.
Solution: Record the demo workflow, replay on demand.

```python
from openadapt_bootstrap.workflows import DemoGenerationWorkflow

workflow = DemoGenerationWorkflow(
    demo_script="scripts/notepad_demo.py",
    output_format="gif",
    duration_seconds=15,
)

demo_path = workflow.execute()
```
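Producing `output_format="gif"` implies assembling captured frames into an animation. A sketch of that step using Pillow, assuming the workflow leaves numbered PNG frames in a directory (a hypothetical layout):

```python
from pathlib import Path

from PIL import Image


def frames_to_gif(frame_dir: Path, output_path: Path, duration_seconds: float = 15.0) -> None:
    """Assemble captured PNG frames into a looping GIF (hypothetical helper)."""
    frames = [Image.open(p) for p in sorted(frame_dir.glob("*.png"))]
    if not frames:
        raise ValueError(f"no PNG frames found in {frame_dir}")
    # Spread the total duration evenly across frames; Pillow wants ms per frame.
    ms_per_frame = int(duration_seconds * 1000 / len(frames))
    frames[0].save(
        output_path,          # e.g. Path("demo.gif"); format inferred from extension
        save_all=True,
        append_images=frames[1:],
        duration=ms_per_frame,
        loop=0,               # loop forever
    )
```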
### Test Execution

Problem: Tests pass, but the UI might be broken.
Solution: Record test execution with visual checkpoints.

```python
from openadapt_bootstrap.workflows import TestWorkflow

workflow = TestWorkflow(
    test_command="pytest tests/",
    capture_screenshots=True,
    visual_regression_check=True,
)

result = workflow.execute()
```
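A `visual_regression_check` amounts to comparing fresh screenshots against stored baselines. A minimal pixel-diff sketch using Pillow; the tolerance value and the flat baseline/current file pairing are assumptions, not the workflow's actual algorithm:

```python
from pathlib import Path

from PIL import Image, ImageChops


def images_differ(baseline: Path, current: Path, tolerance: float = 0.01) -> bool:
    """Return True if more than `tolerance` of the pixels changed (hypothetical check)."""
    a = Image.open(baseline).convert("RGB")
    b = Image.open(current).convert("RGB")
    if a.size != b.size:
        return True
    diff = ImageChops.difference(a, b)
    changed = sum(1 for px in diff.getdata() if px != (0, 0, 0))
    return changed / (a.width * a.height) > tolerance
```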
### PR Creation

Problem: Creating PRs with screenshots/demos requires multiple manual steps.
Solution: A single workflow from code change to PR with all artifacts.

```python
from openadapt_bootstrap.workflows import PRCreationWorkflow

workflow = PRCreationWorkflow(
    branch="feature/new-viewer",
    generate_screenshots=True,
    run_tests=True,
    create_demo=True,
)

pr_url = workflow.execute()
print(f"PR created: {pr_url}")
```
## Mobile Triggers

A user on mobile can trigger workflows via:

1. GitHub Issues: comment with a workflow trigger (parsing sketched below)

   ```
   /bootstrap run screenshot_workflow
   ```

2. GitHub Actions: manually triggered workflows

   ```yaml
   name: Generate Screenshots
   on: workflow_dispatch
   ```

3. Tailscale SSH: direct command execution

   ```bash
   ssh desktop "cd ~/oa/src/openadapt-bootstrap && python -m workflows.screenshot_workflow"
   ```

Results are posted back to GitHub for mobile review.
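The GitHub Issues trigger implies a small parsing step that turns comment bodies into workflow invocations. A sketch, assuming the `/bootstrap run <workflow>` form shown above is the whole command grammar:

```python
import re


def parse_bootstrap_command(comment_body: str) -> str | None:
    """Extract the workflow name from a `/bootstrap run <workflow>` comment.

    Returns None when the comment is not a trigger (hypothetical grammar).
    """
    match = re.match(r"/bootstrap\s+run\s+([\w-]+)\s*$", comment_body.strip())
    return match.group(1) if match else None


assert parse_bootstrap_command("/bootstrap run screenshot_workflow") == "screenshot_workflow"
assert parse_bootstrap_command("Looks good to me!") is None
```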
## Integration with openadapt-capture

```python
from openadapt_capture import Recorder
from openadapt_bootstrap import WorkflowRecorder

# WorkflowRecorder wraps the openadapt-capture Recorder
# with development-specific metadata and structure
```
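A sketch of what that wrapping might look like; only the `Recorder` import above comes from this repo, and every constructor argument and method name on it below is an assumption:

```python
from openadapt_capture import Recorder


class WorkflowRecorder:
    """Thin wrapper adding workflow metadata around openadapt-capture (sketch)."""

    def __init__(self, name: str, description: str = "") -> None:
        self.name = name
        self.description = description
        self.recording_path = None
        self._recorder = Recorder()  # hypothetical: real constructor args unknown

    def __enter__(self) -> "WorkflowRecorder":
        self._recorder.start()  # hypothetical method name
        return self

    def __exit__(self, *exc) -> None:
        self._recorder.stop()   # hypothetical method name
        # Persist the recording together with name/description metadata here,
        # and set self.recording_path to the saved location.
```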
## Integration with openadapt-evals

```python
# Generate benchmark screenshots automatically
from openadapt_bootstrap.workflows import BenchmarkScreenshotWorkflow

workflow = BenchmarkScreenshotWorkflow(
    benchmark_dir="benchmark_results/my_eval_run",
    output_dir="screenshots/",
)
```
## Integration with openadapt-viewer

```python
# Auto-generate viewer artifacts
from openadapt_bootstrap.workflows import ViewerWorkflow

workflow = ViewerWorkflow(
    capture_path="my_capture/",
    generate_html=True,
    generate_gif=True,
)
```

## Getting Started

### Install

```bash
cd /Users/abrichr/oa/src/openadapt-bootstrap
uv sync
```

### Record a Workflow

```bash
# Start recording
uv run python -m workflows.record --name my_workflow

# Perform the task manually...
# Stop recording (Ctrl+C)
```

### Replay a Workflow

```bash
# Replay autonomously
uv run python -m workflows.replay --name my_workflow

# With Claude Code for decision-making
uv run python -m workflows.replay --name my_workflow --claude-code
```

### List Workflows

```bash
uv run python -m workflows.list
```

## Example

See examples/generate_benchmark_screenshots.py for a working example that:
- Launches benchmark viewer HTML
- Uses OpenAdapt to navigate and capture screenshots
- Saves to screenshots/ directory
- Commits and pushes to PR branch
- All without user intervention
```bash
uv run python examples/generate_benchmark_screenshots.py \
    --html-path benchmark_results/viewer.html \
    --output-dir screenshots/ \
    --commit-to-pr
```

## Roadmap

- Repository structure
- Architecture documentation
- Basic WorkflowRecorder implementation
- Basic WorkflowExecutor implementation
- Proof-of-concept screenshot workflow
- ClaudeCodeWorkflow base class
- GitHub issue trigger system
- Result posting to GitHub
- Error handling and retry logic
- Screenshot workflow (PR automation)
- Demo generation workflow
- Test execution workflow
- PR creation workflow
- Documentation update workflow
- Record workflow for "record workflow"
- Record workflow for "execute workflow"
- Fully recursive self-improvement loop
- Workflow optimization via evaluation
## Related Projects

- openadapt-capture - Recording infrastructure
- openadapt-evals - Benchmark evaluation
- openadapt-viewer - Visualization
## License

MIT
## Contributing

See CONTRIBUTING.md