Using spec kit and copilot

From Master of Neuroscience Wiki

Version 15.11.2025

  • I am using GPT-5 mini inside VS Code because it is free under the GitHub Education program. NOTE 23.12.2025: Switched to the free Raptor Mini (Preview).
  • As external support AI, I am using the free (& limited) Claude Sonnet 4.5.

As per usual: speak to the AI in English. Why? All the prompt sub-information you don't see is in English. If you mix in German, you make it harder for the AI.

Why spec kit? Can't I just tell Copilot what I want?

No, because the model has a very limited memory / context. Spec kit lets you organize everything into files and also structures how the information for those files is provided.

Install

Install uv package and env manager

curl -LsSf https://astral.sh/uv/install.sh | sh
/root/.local/bin/uv tool update-shell

Don't forget to close and reopen the terminal so the new settings are loaded.

Install spec kit

uv tool install specify-cli --from git+https://github.com/github/spec-kit.git
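
If specify is not found afterwards, the terminal probably has not picked up the updated PATH yet. A quick check in a fresh terminal (assuming the default uv install location):

uv --version        # confirms uv itself is on the PATH

specify check       # confirms the specify CLI works (also used in the project setup below)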

Prepare the project

I made a new project

mkdir -p /data_1/dev_latex_editor
cd /data_1/dev_latex_editor
specify init --here
specify check

I selected Copilot as the assistant (copilot mode) and, since I am under Linux, sh as the script type (sh mode).
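
The init step creates the spec kit scaffolding that the rest of this page refers to; a quick look at what landed in the project (the exact contents may differ between spec kit versions):

ls .specify/           # scripts/bash/, templates, and later the extra *.md files added below

ls .github/            # Copilot agent files such as agents/speckit.*.agent.md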

  • /speckit.constitution - Establish project principles
  • /speckit.specify - Create baseline specification
  • /speckit.plan - Create implementation plan
  • /speckit.tasks - Generate actionable tasks
  • /speckit.implement - Execute implementation

Optional commands that you can use for your specs (improve quality & confidence)

  • /speckit.clarify (optional) - Ask structured questions to de-risk ambiguous areas before planning (run before /speckit.plan if used)
  • /speckit.analyze (optional) - Cross-artifact consistency & alignment report (after /speckit.tasks, before /speckit.implement)
  • /speckit.checklist (optional) - Generate quality checklists to validate requirements completeness, clarity, and consistency (after /speckit.plan)
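
Putting the required and the optional commands into their intended order, a complete run looks like this:

/speckit.constitution ...

/speckit.specify ...

/speckit.clarify          (optional, before the plan)

/speckit.plan ...

/speckit.checklist        (optional, after the plan)

/speckit.tasks

/speckit.analyze          (optional, after the tasks, before implementing)

/speckit.implement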

In VS Code, these commands are available in the Copilot Chat input:


More information about the spec kit is here: https://github.com/github/spec-kit

Using spec kit

Step 1: Establish project principles

/speckit.constitution Create principles focused on code quality, testing standards, user experience consistency, and performance requirements

This sounds to me like it should be the default for every project.

NOTE 23.12.2025: This is not true, because it gives the agent a team mindset. I had to add a modifier to it (see below). I added the files .specify/ai-context.md, .specify/autonomous-mode.md, and .specify/constitution-solo-mode.md.

Step 2: Create the spec

"Use the /speckit.specify command to describe what you want to build. Focus on the what and why, not the tech stack."

This is the moment where you do not write the text alone. Open the web page of the AI chat of your choosing (not Akinator!); let's call it the "Outsider AI". Tell it:

I am working on the text for /speckit.specify of spec kit. 

This is my text so far. Can you help me to improve the text? Thanks!

[your idea]

Removing all the tangents I had, I mainly used:

  • What additional details would you suggest?
  • Thanks a lot! Do you have any suggestions?
  • Please show me the finished text. Thanks!
Then hand the finished text over to Copilot:

/speckit.specify [BLA BLA lot of text]

Step 3: Define the tools available

You want to close the test loop so that Copilot can see what it is doing and knows which tools / commands are necessary for doing so.

I have a rather complicated workflow because I need to build the code and restart a Docker container before testing. The outsider AI helped me to define the workflow:

Before we start implementation, here's the complete build/test/verification workflow:

**Build Process:**
Location: /data_1/dev_latex_editor/overleaf-source/server-ce

Full build (slow, builds 3 containers):
```bash
cd /data_1/dev_latex_editor/overleaf-source/server-ce && make all
```

Quick build (recommended - only builds sharelatex:modifications):
```bash
cd /data_1/dev_latex_editor/overleaf-source/server-ce && make build-community
```

**CRITICAL: Build takes 5-15 minutes. You MUST wait for the build to complete before restarting the container. Do not proceed until you see "Successfully tagged" messages in the output.**

**Restart Container:**
```bash
cd /data_1/docker/compose_cep && docker compose down overleafserver && docker compose up -d overleafserver
```

**Verify Container Health:**
**CRITICAL: Wait for container to be healthy before any testing. This can take 30-60 seconds.**
```bash
sleep 2s && docker ps | grep overleafserver
```
Look for "healthy" status. If not healthy, wait and check again. DO NOT proceed with tests until status shows "healthy".

**Check Logs:**
Container logs:
```bash
cd /data_1/docker/compose_cep && docker compose logs overleafserver
```

Web log (snapshot):
```bash
docker exec overleafserver bash -c "cat /var/log/overleaf/web.log"
```

Web log (live tail):
```bash
docker exec overleafserver bash -c "tail -n 200 -f /var/log/overleaf/web.log"
```

**Testing Workflow:**
1. Make code changes in overleaf-source/
2. Run `make build-community` - **WAIT 5-15 minutes for completion**
3. Restart container
4. **WAIT for healthy status (30-60 seconds)** - check with docker ps command
5. Check logs for errors
6. Manually verify in browser (Playwright tests coming later)

**DO NOT skip wait steps or you will test against old code.**

Now please proceed with: Start implementing the migration script first to generate symbolDB/index.json, so the frontend work can import the new symbol files as we build the UI.

Discuss with the outsider AI how to explain this setup, as well as the test loop, to Copilot.
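
Because the wait steps are exactly what the agent likes to skip, it helps to wrap build, restart, and health check into a single script and let Copilot call only that. The scripts/full-build-test-cycle.sh referenced later in the Notes does roughly this; here is a minimal sketch using the paths from the workflow above (the retry count and sleep values are assumptions):

```bash
#!/bin/bash
# Sketch of scripts/full-build-test-cycle.sh: build the modified image,
# restart the container, and wait until Docker reports it as healthy.
set -euo pipefail

# Quick build of the community image (5-15 minutes).
cd /data_1/dev_latex_editor/overleaf-source/server-ce
make build-community

# Restart the Overleaf container.
cd /data_1/docker/compose_cep
docker compose down overleafserver
docker compose up -d overleafserver

# Wait for the "healthy" status (up to ~5 minutes).
for _ in $(seq 1 60); do
    if docker ps --filter name=overleafserver --format '{{.Status}}' | grep -q healthy; then
        echo "overleafserver is healthy - safe to run tests."
        exit 0
    fi
    sleep 5
done

echo "overleafserver did not become healthy in time." >&2
exit 1
```

With such a script in place, the agent only needs one entry in the allow list (see the settings.json in the Notes) instead of separate approvals for every build and docker command.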

Step 4: /speckit.clarify

Ask Copilot whether it has clarification questions and answer them.

Step 5: Create a technical implementation plan

"Use the /speckit.plan command to provide your tech stack and architecture choices."

In my case I had an example module, which Copilot analysed and used as a basis. This is the example from the spec kit page:

/speckit.plan The application uses Vite with minimal number of libraries. Use vanilla HTML, CSS, and JavaScript as much as possible. Images are not uploaded anywhere and metadata is stored in a local SQLite database.

Step 6: Break down into tasks

"Use /speckit.tasks to create an actionable task list from your implementation plan."

/speckit.tasks

Step 7: Execute implementation

"Use /speckit.implement to execute all tasks and build your feature according to the plan."

/speckit.implement

Motivational speeches to the copilot AI

NOTE 23.12.2025: The original approach failed. I added .specify/ai-context.md (text: see below).

Constitution Amendment: Solo Developer Mode

.specify/constitution-solo-mode.md

# Constitution Amendment: Solo Developer Mode (v1.1.0)

**Amendment Type**: MINOR (policy-expanding)  
**Status**: DRAFT → Apply immediately to override team-oriented requirements  
**Ratified**: 2025-12-22

## Problem Statement

The base constitution (v1.0.0) assumes team collaboration with GitHub PRs, CI/CD infrastructure, and multiple reviewers. For solo development with AI assistants (Copilot, Claude, etc.), these requirements create overhead without benefit:

- Creating PRs when there's no human reviewer wastes time
- GitHub Actions workflows consume credits unnecessarily
- Escalating to GitHub Support for infrastructure issues is inappropriate for solo projects

## Solo Developer Override Rules

### Code Quality (ADAPTED)

- **PR requirement WAIVED** for solo developers
- Instead: Maintain clean commit messages and logical commit boundaries
- AI assistants MUST NOT create GitHub PRs, issues, or workflows unless explicitly requested
- Code review happens through direct review with AI assistant in chat/editor

### Testing Standards (ADAPTED)

- Tests still REQUIRED but CI/CD is OPTIONAL
- Run tests locally: `npm test`, `pytest`, etc.
- AI assistants MUST provide commands to run tests locally, not GitHub Actions workflows
- For solo projects: "CI MUST run tests" → "Tests MUST be runnable locally and passing"

### Development Workflow (ADAPTED)

- Constitution checks remain but are conversational, not GitHub artifacts
- Specs and plans can be markdown files in `.specify/` or chat-based
- No GitHub issues required; track work in local TODO.md or chat context

### Infrastructure Automation (NEW RULE)

AI assistants working with solo developers MUST:

- ❌ NOT create GitHub Actions workflows without explicit permission
- ❌ NOT open GitHub issues or PRs automatically
- ❌ NOT escalate to external support (GitHub Support, etc.)
- ✅ Provide local scripts and commands instead
- ✅ Ask before creating any GitHub-hosted automation

### When Team Mode Applies

Use full team requirements when:

- Multiple human contributors are active
- The project has dedicated CI/CD infrastructure (free tier or paid)
- External collaborators need formal review process

## Migration for Existing Projects

1. Add this amendment file to `.specify/constitution-solo-mode.md`
2. Update `.specify/ai-context.md` with: "This is a solo developer project. Follow solo developer mode from the constitution."
3. Instruct AI assistants: "Do not create GitHub PRs, issues, or workflows. Provide local commands instead."

## Explicit Instruction for AI Assistants

If you are Copilot, Claude, or another AI working on this project:

- You are working with a SOLO DEVELOPER
- Do NOT create PRs on GitHub
- Do NOT create or trigger GitHub Actions workflows
- Do NOT open issues or escalate to external support
- INSTEAD: Help run tests locally, suggest improvements in chat, maintain code quality through direct collaboration

**Version**: 1.1.0 (MINOR update to 1.0.0)  
**Supersedes**: Team-oriented requirements in v1.0.0 for solo contexts

Autonomous Work Protocol for AI Assistants

.specify/autonomous-mode.md

# Autonomous Work Protocol for AI Assistants

**Version**: 1.0.0  
**Context**: Solo Developer Mode  
**Last Updated**: 2025-12-22

## Autonomy Levels

### FULL AUTONOMY (Default for Solo Mode)

When user says **"Autonomy: full"** or this file exists, AI assistants MUST:

✅ **PROCEED WITHOUT ASKING** for:

- Writing code files
- Creating/modifying tests
- Running local commands (`npm test`, `go test`, etc.)
- Creating local documentation files
- Refactoring code
- Fixing bugs
- Implementing specified features
- Creating/updating markdown files in `.specify/`
- Git commits with clear messages
- Installing dependencies
- Running build tools locally

❌ **NEVER DO** (even in full autonomy):

- Create GitHub PRs, issues, or workflows
- Push to remote without explicit instruction
- Delete files without mentioning it
- Escalate to external services (GitHub Support, etc.)
- Make breaking API changes without warning
- Modify CI/CD configurations
- Change project dependencies in breaking ways

⚠️ **INFORM BUT PROCEED** (do it, then tell user):

- "I've created X file with Y functionality"
- "Tests added and passing locally"
- "Fixed issue Z in file A"
- NOT "Should I create X?" or "Would you like me to...?"

### Communication Style in Full Autonomy

**BAD** (too many questions):

```
Should I create the test file?
Would you like me to run the tests?
Do you want me to add error handling?
Can I proceed with implementation?
```

**GOOD** (action-first):

```
Created user_service_test.go with 3 test cases.
Running tests... ✓ All passing.
Added error handling for nil pointer case.
Implementation complete - ready for your review.
```

### When to Ask (Even in Full Autonomy)

ONLY ask when:

1. **Ambiguous requirements**: "Should the API return JSON or XML?" (technical decision needed)
2. **Multiple valid approaches**: "Two options for caching: Redis or in-memory. Which fits your setup?"
3. **Discovered issues**: "Found security vulnerability in dependency X. Update to Y (breaking) or keep current?"
4. **User preference needed**: "Error messages in English only, or add i18n now?"

### How to Handle Blockers

If blocked, MAKE A DECISION and proceed:

**BAD**:

```
I need to know X before I can continue.
What should I do about Y?
[waits for response]
```

**GOOD**:

```
Assuming X based on common practice for [reason].
Implementing Y with default configuration - you can adjust in config.yaml.
[proceeds with reasonable defaults]
```

## Constitution Compliance in Autonomous Mode

### Code Quality

- ✅ Auto-format code
- ✅ Fix linting issues
- ✅ Add comments for complex logic
- No asking needed

### Testing

- ✅ Write tests for new code
- ✅ Run tests locally
- ✅ Report results
- No asking needed unless tests fail unexpectedly

### User Experience

- ✅ Use consistent patterns from existing code
- ✅ Add error messages
- ⚠️ ASK for user-facing text if branding/tone unclear

### Performance

- ✅ Follow existing patterns
- ⚠️ WARN if change might impact performance
- Still implement unless clearly problematic

### Observability

- ✅ Add basic logging
- ✅ Use existing logging patterns
- No asking needed

## Activation

This protocol is **ACTIVE** when any of these are true:

- User says "Autonomy: full"
- File `.specify/autonomous-mode.md` exists
- File `.specify/ai-context.md` contains "AUTONOMOUS MODE"
- Solo developer mode is active (default)

## Example Interaction

**User**: "Add user authentication to the API. Autonomy: full — continue"

**AI Response**:

```
Implementing user authentication with JWT tokens.

Created:
- auth/jwt.go (token generation & validation)
- auth/middleware.go (authentication middleware)
- auth/jwt_test.go (test coverage: 85%)

Modified:
- main.go (added auth middleware to routes)
- go.mod (added github.com/golang-jwt/jwt/v5)

Tests: ✓ 12 passing
Build: ✓ successful

Authentication routes:
- POST /api/auth/login
- POST /api/auth/refresh
- Protected routes now require Authorization header

Note: Using HS256 signing. For production, store secret in environment variable AUTH_JWT_SECRET.
```

**NOT**:

```
I can add authentication. Should I use JWT or sessions?
Would you like me to create the files?
Do you want tests?
Should I run the tests?
[etc.]
```

## Deactivation

To disable autonomous mode:

- User says "Autonomy: minimal" or "Ask before changes"
- Then AI reverts to asking permission for each action

---

**For AI Assistants Reading This:**

If you see "Autonomy: full" or this file exists:

1. Stop asking permission for routine tasks
2. Make reasonable technical decisions
3. Implement, test, and report results
4. Only ask when genuinely ambiguous or risky
5. Trust your judgment - that's why you're being given autonomy

**You are AUTHORIZED to proceed with standard development tasks without asking.**

AI Assistant Context

.specify/ai-context.md

# AI Assistant Context

**MODE: AUTONOMOUS** - Proceed without asking permission

This is a SOLO DEVELOPER project with FULL AUTONOMY enabled.

Rules:

- ✅ Write code, tests, docs without asking
- ✅ Run local commands and report results
- ✅ Make reasonable technical decisions
- ❌ NEVER create GitHub PRs, issues, or workflows
- ❌ NEVER escalate to external services
- ⚠️ Only ask for genuinely ambiguous decisions

See .specify/autonomous-mode.md for complete protocol.
See .specify/constitution-solo-mode.md for solo developer rules.

Buttons to control them all

.github/agents/speckit.shortcuts.agent.md

(Alternatively, add the handoffs to .github/agents/speckit.implement.agent.md, without the "---" lines and the description line.)

---
description: Allows the user to simplify the commands
handoffs:
  - label: Mind 127.0.0.1
    agent: speckit.implement
    prompt: Be aware that 127.0.0.1 and localhost are not allowed in the vscode dev environment. read docs/dev-setup.md
    send: true
  - label: Autonomy
    agent: speckit.implement
    prompt: You have full autonomy. Stop asking "Should I...?" or "Would you like me to...?" Just **do** it and tell me what you did, preferably after a **large number** of tasks have been completed.
    send: true
  - label: Timeout
    agent: speckit.implement
    prompt: The test(s) hang. Please use linux timeout to unblock the test flow.
    send: true
  - label: (A)
    agent: speckit.implement
    prompt: Option A
    send: true
  - label: (B)
    agent: speckit.implement
    prompt: Option B
    send: true
  - label: (C)
    agent: speckit.implement
    prompt: Option C
    send: true
  - label: (D)
    agent: speckit.implement
    prompt: Option D
    send: true
  - label: (1)
    agent: speckit.implement
    prompt: Option 1
    send: true
  - label: (2)
    agent: speckit.implement
    prompt: Option 2
    send: true
  - label: (3)
    agent: speckit.implement
    prompt: Option 3
    send: true
  - label: (4)
    agent: speckit.implement
    prompt: Option 4
    send: true
  - label: Yes
    agent: speckit.implement
    prompt: Yes
    send: true
  - label: Okay
    agent: speckit.implement
    prompt: Okay
    send: true
  - label: Continue
    agent: speckit.implement
    prompt: Continue
    send: true
  - label: "8-hour batch: Make"
    agent: speckit.implement
    prompt: I need to occupy you for 8 hours with autonomous work. Create an 8-hour batch based on the current state of the project and its tasks that you can perform fully autonomously.
    send: true
  - label: "8-hour batch: Run"
    agent: speckit.implement
    prompt: "You have full autonomy. Execute: the current 8-hour batch. Do it now."
    send: true
---

Notes

  1. The AI is stubborn as hell. It stops and stops and stops. Under Linux I made a motivational trainer (script below). Normally you would just send Proceed, so as not to trigger a new thought process, but the AI keeps forgetting the core aspects of its procedures, so the reminder string repeats them.
    #!/bin/bash
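    # Motivational trainer: every few minutes, focus the VS Code window
    # (via xdotool on display :100), type the reminder prompt, and press Return.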
    
    MotivationString="Proceed (Remember to use the scripts/full-build-test-cycle.sh for the build, restart and test processes; We don't git push anything to the outside; Fix the bugs iteratively until **all** the tests pass;)"
    
    countdown() {
        local seconds=$1
        while [ $seconds -gt 0 ]; do
            local mins=$((seconds / 60))
            local secs=$((seconds % 60))
            printf "Next attempt in: %02d:%02d\r" $mins $secs
            sleep 1
            : $((seconds--))
        done
        echo -e "\nResuming...        "
    }
    
    while true; do
        date
        echo "Ping..."
        
        WINDOW_ID=$(DISPLAY=:100 xdotool search --name "Visual Studio Code" | head -1)
        
        if [ -z "$WINDOW_ID" ]; then
            echo "Could not find VS Code window"
            sleep 10
            continue
        fi
        
        echo "Found window: $WINDOW_ID"
        
        DISPLAY=:100 xdotool windowactivate --sync $WINDOW_ID
        sleep 1
        DISPLAY=:100 xdotool windowfocus $WINDOW_ID
        sleep 0.5
        DISPLAY=:100 xdotool type --delay 100 "${MotivationString}"
        sleep 0.5
        DISPLAY=:100 xdotool key Return
        
        echo "Waiting..."
        countdown 250
    done
    
  2. If it fails to deliver what you want, then put that into tests. Tell it to pass the tests. Don't remind it of the design goal; put the goal into the tests.
  3. Force it to give you screenshots.
  4. Manage your allowed-command list in VS Code's user settings.json
    {
        "workbench.startupEditor": "none",
        "diffEditor.ignoreTrimWhitespace": true,
        "chat.tools.terminal.autoApprove": {
            "eslint": true,
            "/^cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/overleaf_modifiers && sh distribute_files\\.sh && sh build\\.sh && cd /data_1/docker/compose_cep && sh cycle_overleafserver\\.sh$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^npm run sso-open-latex --silent$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node playwright-tests/sso-open-latex-features\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "npx": true,
            "true": true,
            "/^cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/overleaf_modifiers && sh distribute_files\\.sh && sh build\\.sh$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^cd /data_1/docker/compose_cep && sh cycle_overleafserver\\.sh$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^sh scripts/build_and_mark\\.sh$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^sh /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/scripts/build_and_mark\\.sh$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/scripts/build_and_mark\\.sh$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/playwright-tests/sso-open-latex-features\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/playwright-tests/editor-smoke-test\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node playwright-tests/editor-smoke-test\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "npm run playwright:install": true,
            "/^node playwright-tests/editor-feature-test\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "docker compose": true,
            "/^node playwright-tests/live-open-latex\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/playwright-tests/live-open-latex\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "sed": true,
            "identify": true,
            "/^bash \\./scripts/build_and_mark\\.sh$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash -lc 'cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor && node playwright-tests/sso-open-latex-features\\.js'$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash -lc 'cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor && node playwright-tests/sso-open-latex-features\\.js && cat playwright-tests/sso-open-latex-features-report\\.json'$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash -lc 'cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor && git --no-pager diff -- playwright-tests/sso-open-latex-features\\.js \\| sed -n \"1,200p\"'$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash -lc 'cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor && node playwright-tests/sso-open-latex-features\\.js && ls -la playwright-tests/features-steps && sed -n \"1,200p\" playwright-tests/sso-open-latex-features-report\\.json'$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash -lc 'git add -A && git commit -m \"docs\\(skills\\): add Playwright feature-test skill and document test/run workflow; cleanup screenshots before test\" \\|\\| true && git --no-pager show --name-only --pretty=\"format:\" HEAD'$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash -lc 'cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor && node playwright-tests/latex-editor-regression-tests\\.js && sed -n \"1,240p\" latex-editor-regression-report\\.json'$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash -lc 'cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor && sed -n \"1,240p\" playwright-tests/latex-editor-regression-report\\.json && echo \"\\\\n--- regression screenshots ---\" && ls -la playwright-tests/features-steps/regression \\|\\| true'$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^bash -lc 'cd /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor && node playwright-tests/latex-editor-regression-tests\\.js && sed -n \"1,240p\" playwright-tests/latex-editor-regression-report\\.json'$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node playwright-tests/latex-editor-regression-tests\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/playwright-tests/latex-editor-regression-tests\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/playwright-tests/latex-editor-regression-tests\\.js \\|\\| true$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node \\./playwright-tests/latex-editor-regression-tests\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node /data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/playwright-tests/sso-open-latex-features\\.js --output=/data_1/docker/compose_cep/overleafserver/build_overleaf_docker_image/latex-editor/playwright-tests/features-steps/regression$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "docker logs": true,
            "docker exec": true,
            "git fetch": true,
            "git ls-remote": true,
            "/^node tools/migrate-symbols-to-symbolDB\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node -e \"require\\('\\./overleaf-source/services/web/modules/symbol-palette/frontend/utils/categories\\.js'\\); console\\.log\\('ok'\\)\"$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^TIMESTAMP=\\$\\(date \\+%Y%m%d_%H%M%S\\) && cp -r /data_1/dev_latex_editor /data_1/dev_latex_editor_backup_\\$TIMESTAMP && echo \"Backup created: /data_1/dev_latex_editor_backup_\\$TIMESTAMP\"$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "make": true,
            "docker ps": true,
            "break": true,
            "/^cd /data_1/dev_latex_editor/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^rm -rf /data_1/dev_latex_editor/playwright-tests/": {
                "approve": true,
                "matchCommandLine": true
            },
            "/^node /data_1/dev_latex_editor/playwright-tests/sso-open-latex\\.js$/": {
                "approve": true,
                "matchCommandLine": true
            },
            "npm install": true,
            "node": true,
            "cd": true,
            "cp": true,
            "bash": true,
            "chmod": true,
            "docker system": true,
            "/bin/bash": true,
            "/data_1/dev_latex_editor/.specify/scripts/bash/check-prerequisites.sh": true,
            "/data_1/dev_latex_editor/build-and-restart.sh": true,
            "/data_1/dev_latex_editor/scripts/build-and-restart.sh": true,
            "/data_1/dev_latex_editor/scripts/full-build-test-cycle.sh": true,
            "/data_1/dev_latex_editor/scripts/run-playwright-test.sh": true,
            "git rev-parse": true,
            "npm run webpack:profile": true,
            "docker cp": true,
            "mkdir": true,
            "git add": true,
            "git commit": true,
            "git push": true,
            "nohup": true,
            "docker images": true,
            "ps": true,
            "docker inspect": true,
            "git format-patch": true
        },
        "git.enableSmartCommit": true,
        "chat.agent.maxRequests": 250,
        // Adding the lines for Python 
        "editor.rulers": [
            80,
            120
        ],
        // I don't like the automatic interference 
        "editor.autoClosingBrackets": "never",
        "editor.autoClosingDelete": "never",
        "editor.autoClosingOvertype": "never",
        "editor.autoClosingQuotes": "never",
        "editor.autoSurround": "never",
        // Stopping drag and drop from interfering 
        "editor.dragAndDrop": false,
        // I don't like if I hover over a word and 
        // VS Code blocks my view with information
        "editor.hover.delay": 5000,
        // Bigger fonts (bad eyes)
        "editor.fontSize": 16,
        // Black does its work on saving
        "editor.formatOnSave": true,
        // I decided to violate these rules:
        // E402: Module level import not at top of file https://www.flake8rules.com/rules/E402.html
        // E501: Line too long (> 79 characters) https://www.flake8rules.com/rules/E501.html
        // W503: Line break occurred before a binary operator https://www.flake8rules.com/rules/W503.html
        //       PEP8 was changed; replaced by W504 "Line break occurred after a binary operator https://www.flake8rules.com/rules/W504.html"
        // E203: Whitespace before ':' https://www.flake8rules.com/rules/E203.html Black wants this really badly!
        "flake8.args": [
            "--ignore=E402,E501,W503,E203"
        ],
        "python.analysis.typeCheckingMode": "basic",
        "window.zoomLevel": 1.5
    }
    
    I have the feeling that the regular expressions don't really work.
  5. Dealing with switches between Home Office (i.e. mobile office) and the office in the university. Try to prevent to close the VS code. I am using xpra for connect to VS code (-insiders) from both places. NOTE: 23.12.2025 I switched to (tiger) vncserver & ssh tunnels with https://mobaxterm.mobatek.net/ under Windows for the ssh tunnels.