Using spec kit and copilot

Version 15.11.2025

  • I am using GPT-5 mini inside VS Code, because it is free under the GitHub Education program. NOTE 23.12.2025: Switched to the free Raptor Mini (Preview).
  • As external support AI, I am using the free (& limited) Claude Sonnet 4.5.

As per usual: Speak to the AI in English. Why? All the prompt sub-information you don't see is in English. If you mix in German, you make it harder for the AI.

Why spec kit? Can't I just tell Copilot what I want?

No, because the model has a very limited memory / context window. Spec kit lets you organize everything into files and also structures how the information for those files is gathered.
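To give a rough idea of where that information ends up, this is approximately the layout spec kit works with (directory and file names vary between spec kit versions, so treat this as a sketch):

.specify/                 # templates, helper scripts, project memory (constitution)
.github/prompts/          # the /speckit.* prompts that Copilot picks up
specs/001-my-feature/     # one folder per feature
    spec.md               # written by /speckit.specify
    plan.md               # written by /speckit.plan
    tasks.md              # written by /speckit.tasks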

Install

Install the uv package and environment manager

curl -LsSf https://astral.sh/uv/install.sh | sh
/root/.local/bin/uv tool update-shell

Don't forget to close and reopen the terminal so the new settings are loaded. Install spec kit

uv tool install specify-cli --from git+https://github.com/github/spec-kit.git
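If you don't want to install the tool permanently, the spec kit README also documents a one-shot variant via uvx (quoting from memory, so double-check the README before relying on it):

uvx --from git+https://github.com/github/spec-kit.git specify init <PROJECT_NAME>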

Prepare the project

I made a new project

mkdir -p /data_1/dev_latex_editor
cd /data_1/dev_latex_editor
specify init --here
specify check

I selected Copilot (copilot mode) and, since I am under Linux, the sh variant (sh mode). This makes the following commands available:

  • /speckit.constitution - Establish project principles
  • /speckit.specify - Create baseline specification
  • /speckit.plan - Create implementation plan
  • /speckit.tasks - Generate actionable tasks
  • /speckit.implement - Execute implementation

Optional commands that you can use for your specs (improve quality & confidence)

  • /speckit.clarify (optional) - Ask structured questions to de-risk ambiguous areas before planning (run before /speckit.plan if used)
  • /speckit.analyze (optional) - Cross-artifact consistency & alignment report (after /speckit.tasks, before /speckit.implement)
  • /speckit.checklist (optional) - Generate quality checklists to validate requirements completeness, clarity, and consistency (after /speckit.plan)
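Put together, a typical run uses the commands in roughly this order (optional steps can be skipped):

/speckit.constitution <project principles>
/speckit.specify <what and why>
/speckit.clarify                  (optional, before /speckit.plan)
/speckit.plan <tech stack and architecture>
/speckit.checklist                (optional, after /speckit.plan)
/speckit.tasks
/speckit.analyze                  (optional, after /speckit.tasks, before /speckit.implement)
/speckit.implement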

In VS Code these show up as slash commands in the Copilot chat.


More information about the spec kit is here: https://github.com/github/spec-kit

Using spec kit

Step 1: Establish project principles

/speckit.constitution Create principles focused on code quality, testing standards, user experience consistency, and performance requirements

Sounds to me like this should be the default for every project.

NOTE 23.12.2025: This is not true, because it gives the agent a team mindset. I had to add a modifier to it (see below). I added the files .specify/ai-context.md, .specify/autonomous-mode.md, and .specify/constitution-solo-mode.md.
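For a new project I would fold the solo-developer hint directly into the constitution prompt, roughly along these lines (illustrative wording, not the exact prompt I used):

/speckit.constitution Create principles focused on code quality, testing standards, user experience consistency, and performance requirements. This is a solo developer project with AI assistance: no GitHub PRs, issues, or Actions workflows; tests run locally; code review happens directly in chat.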

Step 2: Create the spec

"Use the /speckit.specify command to describe what you want to build. Focus on the what and why, not the tech stack."

This is the moment where you do not write the text alone. Open the web page of the AI chat of your choosing (not Akinator!), let's call it the "outside AI", and tell it:

I am working on the text for /speckit.specify of spec kit. 

This is my text so far. Can you help me to improve the text? Thanks!

[your idea]

Removing all the tangents I had, I mainly used:

  • What additional details would you suggest?
  • Thanks a lot! Do you have any suggestions?
  • Please show me the finished text. Thanks!
/speckit.specify [BLA BLA lot of text]

Step 3: Define the tools available

You want to close the test loop so that Copilot can see what it is doing and knows which tools / commands are needed to do so.

I have a rather complicated workflow because I need to build the code and restart a Docker container before testing. The outside AI helped me to define the workflow:

Before we start implementation, here's the complete build/test/verification workflow:

**Build Process:**
Location: /data_1/dev_latex_editor/overleaf-source/server-ce

Full build (slow, builds 3 containers):
```bash
cd /data_1/dev_latex_editor/overleaf-source/server-ce && make all
```

Quick build (recommended - only builds sharelatex:modifications):
```bash
cd /data_1/dev_latex_editor/overleaf-source/server-ce && make build-community
```

**CRITICAL: Build takes 5-15 minutes. You MUST wait for the build to complete before restarting the container. Do not proceed until you see "Successfully tagged" messages in the output.**

**Restart Container:**
```bash
cd /data_1/docker/compose_cep && docker compose down overleafserver && docker compose up -d overleafserver
```

**Verify Container Health:**
**CRITICAL: Wait for container to be healthy before any testing. This can take 30-60 seconds.**
```bash
sleep 2s && docker ps | grep overleafserver
```
Look for "healthy" status. If not healthy, wait and check again. DO NOT proceed with tests until status shows "healthy".

**Check Logs:**
Container logs:
```bash
cd /data_1/docker/compose_cep && docker compose logs overleafserver
```

Web log (snapshot):
```bash
docker exec overleafserver bash -c "cat /var/log/overleaf/web.log"
```

Web log (live tail):
```bash
docker exec overleafserver bash -c "tail -n 200 -f /var/log/overleaf/web.log"
```

**Testing Workflow:**
1. Make code changes in overleaf-source/
2. Run `make build-community` - **WAIT 5-15 minutes for completion**
3. Restart container
4. **WAIT for healthy status (30-60 seconds)** - check with docker ps command
5. Check logs for errors
6. Manually verify in browser (Playwright tests coming later)

**DO NOT skip wait steps or you will test against old code.**

Now please proceed with: Start implementing the migration script first to generate symbolDB/index.json, so the frontend work can import the new symbol files as we build the UI.

Discuss with the outside AI how to explain this stuff, as well as the test loop, to Copilot.
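Later I bundle this workflow into scripts/full-build-test-cycle.sh so Copilot only has to call one command. A minimal sketch of such a wrapper, assembled from the commands above (my real script does a bit more), could look like this:

#!/bin/bash
# Sketch of scripts/full-build-test-cycle.sh: quick build, restart the container,
# wait until docker reports it healthy, then show the recent logs.
set -e

cd /data_1/dev_latex_editor/overleaf-source/server-ce
make build-community        # takes 5-15 minutes, do not continue before it finishes

cd /data_1/docker/compose_cep
docker compose down overleafserver
docker compose up -d overleafserver

# Typically 30-60 seconds until the container reports "healthy"
until docker ps | grep overleafserver | grep -q healthy; do
    echo "Waiting for overleafserver to become healthy..."
    sleep 5
done

docker compose logs --tail 100 overleafserver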

Step 4: /speckit.clarify

Ask Copilot if it has clarification questions and answer them

Step 5: Create a technical implementation plan

"Use the /speckit.plan command to provide your tech stack and architecture choices."

In my case I had an example module which it analysed and used as a basis. For comparison, this is the example from the spec kit page:

/speckit.plan The application uses Vite with minimal number of libraries. Use vanilla HTML, CSS, and JavaScript as much as possible. Images are not uploaded anywhere and metadata is stored in a local SQLite database.
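My own prompt was roughly along these lines (illustrative, not the literal text; the symbol-palette module is the existing Overleaf module I pointed it at):

/speckit.plan The feature is implemented as a module inside the Overleaf Community Edition sources under overleaf-source/services/web/modules/. Use the existing symbol-palette module as the architectural reference for frontend structure, build integration, and tests.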

Step 6: Break down into tasks

"Use /speckit.tasks to create an actionable task list from your implementation plan."

/speckit.tasks

Step 7: Execute implementation

"Use /speckit.implement to execute all tasks and build your feature according to the plan."

/speckit.implement

Motivational speeches to the copilot AI

NOTE 23.12.2025: The original approach failed. Added .specify/ai-context.md (text below).

Constitution Amendment: Solo Developer Mode

.specify/constitution-solo-mode.md

# Constitution Amendment: Solo Developer Mode (v1.1.0)

**Amendment Type**: MINOR (policy-expanding)  
**Status**: DRAFT → Apply immediately to override team-oriented requirements  
**Ratified**: 2025-12-22

## Problem Statement

The base constitution (v1.0.0) assumes team collaboration with GitHub PRs, CI/CD infrastructure, and multiple reviewers. For solo development with AI assistants (Copilot, Claude, etc.), these requirements create overhead without benefit:

- Creating PRs when there's no human reviewer wastes time
- GitHub Actions workflows consume credits unnecessarily
- Escalating to GitHub Support for infrastructure issues is inappropriate for solo projects

## Solo Developer Override Rules

### Code Quality (ADAPTED)

- **PR requirement WAIVED** for solo developers
- Instead: Maintain clean commit messages and logical commit boundaries
- AI assistants MUST NOT create GitHub PRs, issues, or workflows unless explicitly requested
- Code review happens through direct review with AI assistant in chat/editor

### Testing Standards (ADAPTED)

- Tests still REQUIRED but CI/CD is OPTIONAL
- Run tests locally: `npm test`, `pytest`, etc.
- AI assistants MUST provide commands to run tests locally, not GitHub Actions workflows
- For solo projects: "CI MUST run tests" → "Tests MUST be runnable locally and passing"

### Development Workflow (ADAPTED)

- Constitution checks remain but are conversational, not GitHub artifacts
- Specs and plans can be markdown files in `.specify/` or chat-based
- No GitHub issues required; track work in local TODO.md or chat context

### Infrastructure Automation (NEW RULE)

AI assistants working with solo developers MUST:

- ❌ NOT create GitHub Actions workflows without explicit permission
- ❌ NOT open GitHub issues or PRs automatically
- ❌ NOT escalate to external support (GitHub Support, etc.)
- ✅ Provide local scripts and commands instead
- ✅ Ask before creating any GitHub-hosted automation

### When Team Mode Applies

Use full team requirements when:

- Multiple human contributors are active
- The project has dedicated CI/CD infrastructure (free tier or paid)
- External collaborators need formal review process

## Migration for Existing Projects

1. Add this amendment file to `.specify/constitution-solo-mode.md`
2. Update `.specify/ai-context.md` with: "This is a solo developer project. Follow solo developer mode from the constitution."
3. Instruct AI assistants: "Do not create GitHub PRs, issues, or workflows. Provide local commands instead."

## Explicit Instruction for AI Assistants

If you are Copilot, Claude, or another AI working on this project:

- You are working with a SOLO DEVELOPER
- Do NOT create PRs on GitHub
- Do NOT create or trigger GitHub Actions workflows
- Do NOT open issues or escalate to external support
- INSTEAD: Help run tests locally, suggest improvements in chat, maintain code quality through direct collaboration

**Version**: 1.1.0 (MINOR update to 1.0.0)  
**Supersedes**: Team-oriented requirements in v1.0.0 for solo contexts

Autonomous Work Protocol for AI Assistants

.specify/autonomous-mode.md

# Autonomous Work Protocol for AI Assistants

**Version**: 1.0.0  
**Context**: Solo Developer Mode  
**Last Updated**: 2025-12-22

## Autonomy Levels

### FULL AUTONOMY (Default for Solo Mode)

When user says **"Autonomy: full"** or this file exists, AI assistants MUST:

✅ **PROCEED WITHOUT ASKING** for:

- Writing code files
- Creating/modifying tests
- Running local commands (`npm test`, `go test`, etc.)
- Creating local documentation files
- Refactoring code
- Fixing bugs
- Implementing specified features
- Creating/updating markdown files in `.specify/`
- Git commits with clear messages
- Installing dependencies
- Running build tools locally

❌ **NEVER DO** (even in full autonomy):

- Create GitHub PRs, issues, or workflows
- Push to remote without explicit instruction
- Delete files without mentioning it
- Escalate to external services (GitHub Support, etc.)
- Make breaking API changes without warning
- Modify CI/CD configurations
- Change project dependencies in breaking ways

⚠️ **INFORM BUT PROCEED** (do it, then tell user):

- "I've created X file with Y functionality"
- "Tests added and passing locally"
- "Fixed issue Z in file A"
- NOT "Should I create X?" or "Would you like me to...?"

### Communication Style in Full Autonomy

**BAD** (too many questions):

```
Should I create the test file?
Would you like me to run the tests?
Do you want me to add error handling?
Can I proceed with implementation?
```

**GOOD** (action-first):

```
Created user_service_test.go with 3 test cases.
Running tests... ✓ All passing.
Added error handling for nil pointer case.
Implementation complete - ready for your review.
```

### When to Ask (Even in Full Autonomy)

ONLY ask when:

1. **Ambiguous requirements**: "Should the API return JSON or XML?" (technical decision needed)
2. **Multiple valid approaches**: "Two options for caching: Redis or in-memory. Which fits your setup?"
3. **Discovered issues**: "Found security vulnerability in dependency X. Update to Y (breaking) or keep current?"
4. **User preference needed**: "Error messages in English only, or add i18n now?"

### How to Handle Blockers

If blocked, MAKE A DECISION and proceed:

**BAD**:

```
I need to know X before I can continue.
What should I do about Y?
[waits for response]
```

**GOOD**:

```
Assuming X based on common practice for [reason].
Implementing Y with default configuration - you can adjust in config.yaml.
[proceeds with reasonable defaults]
```

## Constitution Compliance in Autonomous Mode

### Code Quality

- ✅ Auto-format code
- ✅ Fix linting issues
- ✅ Add comments for complex logic
- No asking needed

### Testing

- ✅ Write tests for new code
- ✅ Run tests locally
- ✅ Report results
- No asking needed unless tests fail unexpectedly

### User Experience

- ✅ Use consistent patterns from existing code
- ✅ Add error messages
- ⚠️ ASK for user-facing text if branding/tone unclear

### Performance

- ✅ Follow existing patterns
- ⚠️ WARN if change might impact performance
- Still implement unless clearly problematic

### Observability

- ✅ Add basic logging
- ✅ Use existing logging patterns
- No asking needed

## Activation

This protocol is **ACTIVE** when any of these are true:

- User says "Autonomy: full"
- File `.specify/autonomous-mode.md` exists
- File `.specify/ai-context.md` contains "AUTONOMOUS MODE"
- Solo developer mode is active (default)

## Example Interaction

**User**: "Add user authentication to the API. Autonomy: full — continue"

**AI Response**:

```
Implementing user authentication with JWT tokens.

Created:
- auth/jwt.go (token generation & validation)
- auth/middleware.go (authentication middleware)
- auth/jwt_test.go (test coverage: 85%)

Modified:
- main.go (added auth middleware to routes)
- go.mod (added github.com/golang-jwt/jwt/v5)

Tests: ✓ 12 passing
Build: ✓ successful

Authentication routes:
- POST /api/auth/login
- POST /api/auth/refresh
- Protected routes now require Authorization header

Note: Using HS256 signing. For production, store secret in environment variable AUTH_JWT_SECRET.
```

**NOT**:

```
I can add authentication. Should I use JWT or sessions?
Would you like me to create the files?
Do you want tests?
Should I run the tests?
[etc.]
```

## Deactivation

To disable autonomous mode:

- User says "Autonomy: minimal" or "Ask before changes"
- Then AI reverts to asking permission for each action

---

**For AI Assistants Reading This:**

If you see "Autonomy: full" or this file exists:

1. Stop asking permission for routine tasks
2. Make reasonable technical decisions
3. Implement, test, and report results
4. Only ask when genuinely ambiguous or risky
5. Trust your judgment - that's why you're being given autonomy

**You are AUTHORIZED to proceed with standard development tasks without asking.**

AI Assistant Context

.specify/ai-context.md

# AI Assistant Context

**MODE: AUTONOMOUS** - Proceed without asking permission

This is a SOLO DEVELOPER project with FULL AUTONOMY enabled.

Rules:

- ✅ Write code, tests, docs without asking
- ✅ Run local commands and report results
- ✅ Make reasonable technical decisions
- ❌ NEVER create GitHub PRs, issues, or workflows
- ❌ NEVER escalate to external services
- ⚠️ Only ask for genuinely ambiguous decisions

See .specify/autonomous-mode.md for complete protocol.
See .specify/constitution-solo-mode.md for solo developer rules.

Buttons to control them all

.github/agents/speckit.shortcuts.agent.md

(Alternatively add it to .github/agents/speckit.implement.agent.md without the "---" lines and the description line)

---
description: Allows the user to simplify the commands
handoffs:
  - label: Mind 127.0.0.1
    agent: speckit.implement
    prompt: Be aware that 127.0.0.1 and localhost are not allowed in the vscode dev environment. read docs/dev-setup.md
    send: true
  - label: Autonomy
    agent: speckit.implement
    prompt: You have full autonomy. Stop asking "Should I...?" or "Would you like me to...?" Just **do** it and tell me what you did, preferably after a **large number** of tasks have been completed.
    send: true
  - label: Timeout
    agent: speckit.implement
    prompt: The test(s) hang. Please use linux timeout to unblock the test flow.
    send: true
  - label: (A)
    agent: speckit.implement
    prompt: Option A
    send: true
  - label: (B)
    agent: speckit.implement
    prompt: Option B
    send: true
  - label: (C)
    agent: speckit.implement
    prompt: Option C
    send: true
  - label: (D)
    agent: speckit.implement
    prompt: Option D
    send: true
  - label: (1)
    agent: speckit.implement
    prompt: Option 1
    send: true
  - label: (2)
    agent: speckit.implement
    prompt: Option 2
    send: true
  - label: (3)
    agent: speckit.implement
    prompt: Option 3
    send: true
  - label: (4)
    agent: speckit.implement
    prompt: Option 4
    send: true
  - label: Yes
    agent: speckit.implement
    prompt: Yes
    send: true
  - label: Okay
    agent: speckit.implement
    prompt: Okay
    send: true
  - label: Continue
    agent: speckit.implement
    prompt: Continue
    send: true
  - label: "8-hour batch: Make"
    agent: speckit.implement
    prompt: I need to occupy you for 8 hours with autonomous work. Create an 8-hour batch based on the current state of the project and its tasks that you can perform fully autonomously.
    send: true
  - label: "8-hour batch: Run"
    agent: speckit.implement
    prompt: You have full autonomy. Execute: The current 8-hour batch. Do it now. 
    send: true        
---

Notes

  1. The AI is stubborn as hell: it keeps stopping, again and again. Under Linux I made a motivational trainer to keep it going. Normally you would just send "Proceed", so as not to trigger a fresh thought process, but because it keeps forgetting the core aspects of its procedures, the reminder string spells them out.
    #!/bin/bash
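    # Motivational trainer: every few minutes it types a reminder prompt into the
    # VS Code chat window via xdotool so the agent keeps going instead of stopping.
    # Assumes VS Code runs on DISPLAY :100 and that xdotool is installed.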
    
    MotivationString="Proceed (Remember to use the scripts/full-build-test-cycle.sh for the build, restart and test processes; We don't git push anything to the outside; Fix the bugs iteratively until **all** the tests pass;)"
    
    countdown() {
        local seconds=$1
        while [ $seconds -gt 0 ]; do
            local mins=$((seconds / 60))
            local secs=$((seconds % 60))
            printf "Next attempt in: %02d:%02d\r" $mins $secs
            sleep 1
            : $((seconds--))
        done
        echo -e "\nResuming...        "
    }
    
    while true; do
        date
        echo "Ping..."
        
        # Find the VS Code window on display :100 (first match only)
        WINDOW_ID=$(DISPLAY=:100 xdotool search --name "Visual Studio Code" | head -1)
        
        if [ -z "$WINDOW_ID" ]; then
            echo "Could not find VS Code window"
            sleep 10
            continue
        fi
        
        echo "Found window: $WINDOW_ID"
        
        # Activate and focus the VS Code window, type the reminder, then press Return to send it
        DISPLAY=:100 xdotool windowactivate --sync $WINDOW_ID
        sleep 1
        DISPLAY=:100 xdotool windowfocus $WINDOW_ID
        sleep 0.5
        DISPLAY=:100 xdotool type --delay 100 "${MotivationString}"
        sleep 0.5
        DISPLAY=:100 xdotool key Return
        
        echo "Waiting..."
        countdown 250
    done
    
  2. If it fails to deliver what you want, put that into tests and tell it to make the tests pass. Don't remind it of the design goal; put the goal into the tests (see the sketch after this list).
  3. Force it to give you screenshots.
  4. Manage your list of allowed commands in VS Code's user settings.json
    {
        "workbench.startupEditor": "none",
        "diffEditor.ignoreTrimWhitespace": true,
        "chat.tools.terminal.autoApprove": {
            "eslint": true,
            "/^cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/overleaf_modifiers && sh distribute_files\\.sh && sh build\\.sh && cd /data_1/docker/compose_cep && sh cycle_overleafserver\\.sh$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^npm run sso-open-latex --silent$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node playwright-tests/sso-open-latex-features\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "npx": true,
            "true": true,
            "/^cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/overleaf_modifiers && sh distribute_files\\.sh && sh build\\.sh$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^cd /data_1/docker/compose_cep && sh cycle_overleafserver\\.sh$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^sh scripts/build_and_mark\\.sh$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^sh /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/scripts/build_and_mark\\.sh$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/scripts/build_and_mark\\.sh$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/playwright-tests/sso-open-latex-features\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/playwright-tests/editor-smoke-test\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node playwright-tests/editor-smoke-test\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "npm run playwright:install": true,
            "/^node playwright-tests/editor-feature-test\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "docker compose": true,
            "/^node playwright-tests/live-open-latex\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/playwright-tests/live-open-latex\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "sed": true,
            "identify": true,
            "/^bash \\./scripts/build_and_mark\\.sh$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash -lc 'cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor && node playwright-tests/sso-open-latex-features\\.js'$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash -lc 'cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor && node playwright-tests/sso-open-latex-features\\.js && cat playwright-tests/sso-open-latex-features-report\\.json'$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash -lc 'cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor && git --no-pager diff -- playwright-tests/sso-open-latex-features\\.js \\| sed -n \"1,200p\"'$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash -lc 'cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor && node playwright-tests/sso-open-latex-features\\.js && ls -la playwright-tests/features-steps && sed -n \"1,200p\" playwright-tests/sso-open-latex-features-report\\.json'$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash -lc 'git add -A && git commit -m \"docs\\(skills\\): add Playwright feature-test skill and document test/run workflow; cleanup screenshots before test\" \\|\\| true && git --no-pager show --name-only --pretty=\"format:\" HEAD'$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash -lc 'cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor && node playwright-tests/latex-editor-regression-tests\\.js && sed -n \"1,240p\" latex-editor-regression-report\\.json'$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash -lc 'cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor && sed -n \"1,240p\" playwright-tests/latex-editor-regression-report\\.json && echo \"\\\\n--- regression screenshots ---\" && ls -la playwright-tests/features-steps/regression \\|\\| true'$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash -lc 'cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor && node playwright-tests/latex-editor-regression-tests\\.js && sed -n \"1,240p\" playwright-tests/latex-editor-regression-report\\.json'$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node playwright-tests/latex-editor-regression-tests\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/playwright-tests/latex-editor-regression-tests\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/playwright-tests/latex-editor-regression-tests\\.js \\|\\| true$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node \\./playwright-tests/latex-editor-regression-tests\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/playwright-tests/sso-open-latex-features\\.js --output=/data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/playwright-tests/features-steps/regression$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "docker logs": true,
            "docker exec": true,
            "git fetch": true,
            "git ls-remote": true,
            "/^node tools/migrate-symbols-to-symbolDB\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node -e \"require\\('\\./overleaf-source/services/web/modules/symbol-palette/frontend/utils/categories\\.js'\\); console\\.log\\('ok'\\)\"$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^TIMESTAMP=\\$\\(date \\+%Y%m%d_%H%M%S\\) && cp -r /data_1/dev_latex_editor /data_1/dev_latex_editor_backup_\\$TIMESTAMP && echo \"Backup created: /data_1/dev_latex_editor_backup_\\$TIMESTAMP\"$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "make": true,
            "docker ps": true,
            "break": true,
            "/^cd /data_1/dev_latex_editor/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^rm -rf /data_1/dev_latex_editor/playwright-tests/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node /data_1/dev_latex_editor/playwright-tests/sso-open-latex\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "npm install": true,
            "node": true,
            "cd": true,
            "cp": true,
            "bash": true,
            "chmod": true,
            "docker system": true,
            "/bin/bash": true,
            "/data_1/dev_latex_editor/.specify/scripts/bash/check-prerequisites.sh": true,
            "/data_1/dev_latex_editor/build-and-restart.sh": true,
            "/data_1/dev_latex_editor/scripts/build-and-restart.sh": true,
            "/data_1/dev_latex_editor/scripts/full-build-test-cycle.sh": true,
            "/data_1/dev_latex_editor/scripts/run-playwright-test.sh": true,
            "git rev-parse": true,
            "npm run webpack:profile": true,
            "docker cp": true,
            "mkdir": true,
            "git add": true,
            "git commit": true,
            "git push": true,
            "nohup": true,
            "docker images": true,
            "ps": true,
            "docker inspect": true,
            "git format-patch": true
        },
        "git.enableSmartCommit": true,
        "chat.agent.maxRequests": 250,
        // Vertical ruler lines at columns 80 and 120 (Python line-length guides)
        "editor.rulers": [
            80,
            120
        ],
        // I don't like the automatic interference 
        "editor.autoClosingBrackets": "never",
        "editor.autoClosingDelete": "never",
        "editor.autoClosingOvertype": "never",
        "editor.autoClosingQuotes": "never",
        "editor.autoSurround": "never",
        // Stopping drag and drop from interfering 
        "editor.dragAndDrop": false,
        // I don't like if I hover over a word and 
        // VS Code blocks my view with information
        "editor.hover.delay": 5000,
        // Bigger fonts (bad eyes)
        "editor.fontSize": 16,
        // Black does its work on saving
        "editor.formatOnSave": true,
        // I decided to violate these rules:
        // E402: Module level import not at top of file https://www.flake8rules.com/rules/E402.html
        // E501: Line too long (> 79 characters) https://www.flake8rules.com/rules/E501.html
        // W503: Line break occurred before a binary operator https://www.flake8rules.com/rules/W503.html
        //       PEP8 was changed; replaced by W504 "Line break occurred after a binary operator https://www.flake8rules.com/rules/W504.html"
        // E203: Whitespace before ':' https://www.flake8rules.com/rules/E203.html Black wants this really badly!
        "flake8.args": [
            "--ignore=E402,E501,W503,E203"
        ],
        "python.analysis.typeCheckingMode": "basic",
        "window.zoomLevel": 1.5
    }
    
    I have the feeling that the regular expressions don't really work.
  5. Dealing with switches between the home office (i.e. mobile office) and the office at the university: try to avoid closing VS Code. I am using xpra to connect to VS Code (-insiders) from both places. NOTE: 23.12.2025 I switched to (tiger) vncserver plus ssh tunnels, using https://mobaxterm.mobatek.net/ under Windows for the ssh tunnels.
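To illustrate note 2 above: the design goal itself goes into a test file that the agent is then told to make pass. Below is a minimal sketch of such a script, in the style of the node playwright-tests/*.js scripts mentioned earlier. The file name, the URL, the CSS selectors and the screenshot path are hypothetical placeholders, not the project's real names; only the Playwright calls themselves are standard API. It also saves a screenshot as evidence, per note 3.<syntaxhighlight lang="javascript">
// playwright-tests/equation-editor-goal.js (hypothetical file name)
// Design goal encoded as a test: the toolbar has an equation button
// and clicking it opens the floating equation editor.
const { chromium } = require('playwright');

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  try {
    // Placeholder URL for the dev instance (remember: no 127.0.0.1/localhost here).
    await page.goto('http://overleafserver/project/test-project', { timeout: 30000 });

    // Goal 1: the toolbar button exists (selector is a placeholder).
    await page.waitForSelector('.toolbar-equation-button', { timeout: 10000 });

    // Goal 2: clicking it opens the floating editor panel (placeholder selector).
    await page.click('.toolbar-equation-button');
    await page.waitForSelector('.equation-editor-panel', { timeout: 10000 });

    // Evidence for the human reviewer (note 3): a screenshot.
    await page.screenshot({ path: 'playwright-tests/features-steps/equation-editor-open.png' });

    console.log('PASS: equation editor opens from the toolbar');
  } catch (err) {
    console.error('FAIL:', err.message);
    process.exitCode = 1;
  } finally {
    await browser.close();
  }
})();
</syntaxhighlight>A standing instruction such as "Fix the bugs iteratively until **all** the tests pass" (the MotivationString above) then keeps the agent pointed at the goal without restating the design.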