aide.sh
Deploy AI agents, just like Docker.
aide.sh is a CLI tool for packaging, deploying, and managing AI agents. The same agent works with or without AI — add -p to think.
Why aide.sh?
- Docker mental model — build, run, exec, push, pull. If you know Docker, you know aide.sh.
- LLM optional — agents are structured skill runners. Add an LLM and they become autonomous reasoners.
- Local-first — agents run on your machine, secrets stay in your vault.
- MCP native — Claude, Codex, Gemini can control your agents as subagents.
- One binary — no Python, no Node.js, no Docker daemon.
Quick taste
# scaffold & deploy
aide.sh init my-agent
aide.sh build my-agent/
aide.sh run my-agent --name bot
# use it
aide.sh exec bot hello world # explicit — you drive
aide.sh exec -p bot "what's up?" # semantic — AI drives
# monitor
aide.sh dash # web dashboard at localhost:3939
Dashboard

Built-in observability. See every agent's skills, cron jobs, usage analytics, and logs — all in one place.
Next steps
- Installation — get the binary
- Quick Start — build your first agent in 5 minutes
- Concepts — images, instances, skills, vault
Installation
aide.sh ships as a single static binary. No runtime dependencies.
One-line install (recommended)
curl -fsSL https://aide.sh/install | bash
This detects your OS and architecture, downloads the latest release, and places the binary at ~/.local/bin/aide-sh.
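That detection step can be sketched in a few lines; the target names below and the logic are assumptions for illustration, not the published install script:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of OS/arch detection for an installer.
set -euo pipefail

detect_target() {
  local os arch
  case "$(uname -s)" in
    Linux)  os="linux" ;;
    Darwin) os="darwin" ;;
    *)      echo "unsupported OS: $(uname -s)" >&2; return 1 ;;
  esac
  case "$(uname -m)" in
    x86_64|amd64)  arch="x86_64" ;;
    arm64|aarch64) arch="aarch64" ;;
    *)             echo "unsupported arch: $(uname -m)" >&2; return 1 ;;
  esac
  echo "${arch}-${os}"   # e.g. x86_64-linux
}

detect_target
```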
Cargo install
If you have a Rust toolchain:
cargo install aide-sh
Build from source
git clone https://github.com/AIDEdotsh/aide.git
cd aide
cargo build --release
cp target/release/aide-sh ~/.local/bin/
Verify
$ aide-sh --version
aide-sh 0.1.0
Make sure ~/.local/bin is in your PATH:
export PATH="$HOME/.local/bin:$PATH"
Shell alias (optional)
For convenience, alias the binary to aide.sh:
echo 'alias aide.sh="aide-sh"' >> ~/.bashrc
source ~/.bashrc
Next
Once installed, proceed to the Quick Start.
Quick Start
Build and run your first agent in under 5 minutes.
1. Scaffold a new agent
$ aide.sh init school
Created school/
Agentfile.toml
persona.md
skills/
hello.sh
2. Edit the Agentfile
Open school/Agentfile.toml. The template looks like this:
[agent]
name = "school"
version = "0.1.0"
description = "My first agent"
author = "you"
[persona]
file = "persona.md"
[skills.hello]
script = "skills/hello.sh"
description = "Say hello"
usage = "hello [name]"
[seed]
dir = "seed/"
[env]
required = []
optional = []
Add a skill, change the description, or leave the defaults -- it works out of the box.
3. Build the image
$ aide.sh build school/
Building school v0.1.0 ...
Image: school:0.1.0 (sha256:a3f8...)
This packages the Agentfile, persona, skills, and seed data into a compressed image stored in ~/.aide/images/.
4. Run an instance
$ aide.sh run school --name school
Instance "school" started (id: school)
5. Execute a skill
$ aide.sh exec school hello
Hello, world!
$ aide.sh exec school hello Alice
Hello, Alice!
6. List available skills
$ aide.sh exec school
Available skills:
hello Say hello
7. Open the dashboard
$ aide.sh dash
Dashboard running at http://localhost:3939
The dashboard shows all running instances, recent logs, and skill status.
What's next?
- Concepts — understand images, instances, vault, and semantic injection
- Agentfile.toml reference — full configuration guide
- Skills — writing script and prompt skills
Concepts
Core ideas behind aide.sh, mapped to Docker equivalents where applicable.
Images vs Instances
| Docker | aide.sh | Description |
|---|---|---|
| Image | Image | Immutable snapshot built from Agentfile.toml |
| Container | Instance | Running copy of an image with its own state |
| Dockerfile | Agentfile.toml | Declarative manifest |
| docker build | aide.sh build | Package into image |
| docker run | aide.sh run | Create instance from image |
| docker exec | aide.sh exec | Run a command inside instance |
Images live in ~/.aide/images/. Instances live in ~/.aide/instances/.
Agentfile.toml
The manifest that defines an agent. Contains:
- [agent] — name, version, description, author
- [persona] — pointer to a markdown file describing the agent's identity
- [skills.NAME] — executable capabilities (scripts or prompts)
- [seed] — static data bundled into the image
- [env] — required and optional environment variables
- [soul] — LLM routing preferences
Skills
A skill is a named capability. Two types:
- Script-based — a shell script (skills/hello.sh) that receives args via $1, $2, etc.
- Prompt-based — a markdown file (skills/summarize.md) interpreted by an LLM at runtime.
Skills are invoked with aide.sh exec <instance> <skill> [args...].
Persona
A markdown file that describes who the agent is. Used by LLMs when the agent runs in semantic mode. Has no effect in explicit (non-LLM) mode.
Vault
Encrypted secret storage. Secrets are injected as environment variables at skill execution time. Three-tier scoping:
- Per-skill env — highest priority, set in [skills.NAME] env
- Per-agent env — set in [env]
- Vault — global secrets, lowest priority
See Vault & Secrets for details.
Semantic injection (the -p flag)
By default, aide.sh exec runs skills explicitly -- you name the skill and pass args.
Add -p and the input becomes a natural language prompt routed through an LLM:
# explicit: you pick the skill and args
aide.sh exec bot email check
# semantic: LLM picks the skill and args
aide.sh exec -p bot "do I have new mail?"
The agent itself is unchanged. The -p flag wraps it with an LLM reasoning layer. Without -p, no LLM is involved -- the human acts as the reasoning layer.
MCP (Model Context Protocol)
MCP lets LLM hosts (Claude Code, Cursor, etc.) call aide.sh agents as tools. Running aide.sh setup-mcp registers your agents so that an LLM can:
- List available agents and skills
- Execute skills and read output
- Access logs
See MCP Integration for setup instructions.
Agentfile.toml Reference
The Agentfile is the manifest that defines an agent. It is always named Agentfile.toml and lives at the root of the agent directory.
Complete example
[agent]
name = "jenny"
version = "0.1.0"
description = "NTU GIEE PhD student assistant — school work, email, course management"
author = "ydwu"
[persona]
file = "persona.md"
[skills.cool]
script = "skills/cool.sh"
description = "NTU COOL LMS scanning (courses, assignments, grades)"
usage = "cool [courses|assignments|grades|todos|summary|scan]"
schedule = "0 8 * * *"
env = ["NTU_COOL_TOKEN"]
[skills.email]
script = "skills/email.sh"
description = "Email triage (POP3/SMTP)"
usage = "email [check|unread|read N|search Q|send TO SUBJ BODY]"
schedule = "0 */4 * * *"
env = ["SMTP_USER", "SMTP_PASS", "POP3_USER", "POP3_PASS"]
[skills.chrome]
script = "skills/chrome.sh"
description = "Chrome browser automation"
usage = "chrome [open|screenshot|scrape URL]"
[seed]
dir = "seed/"
[env]
required = ["NTU_COOL_TOKEN"]
optional = ["SMTP_USER", "SMTP_PASS", "POP3_USER", "POP3_PASS"]
[soul]
prefer = "claude-sonnet"
min_params = "1b"
Sections
[agent]
| Field | Required | Description |
|---|---|---|
| name | yes | Agent name (lowercase, alphanumeric, hyphens) |
| version | yes | Semver string |
| description | yes | One-line summary |
| author | no | Author name or handle |
[persona]
| Field | Required | Description |
|---|---|---|
| file | yes | Path to a markdown file describing the agent's identity |
The persona file is used by LLMs in semantic mode (-p). It has no effect in explicit mode.
[skills.NAME]
Each skill is a TOML table under [skills].
| Field | Required | Description |
|---|---|---|
| script | yes* | Path to shell script (mutually exclusive with prompt) |
| prompt | yes* | Path to prompt markdown file (mutually exclusive with script) |
| description | no | Shown in the skill list (aide.sh exec <instance>) and in --help output |
| usage | no | Usage string for --help |
| schedule | no | Cron expression for periodic execution |
| env | no | List of env var names this skill needs |
[seed]
| Field | Required | Description |
|---|---|---|
| dir | yes | Directory of static files bundled into the image |
Seed data is copied into the instance at aide.sh run time. Useful for config files, templates, or reference data.
[env]
| Field | Required | Description |
|---|---|---|
| required | no | Env vars that must be present; aide.sh run will fail without them |
| optional | no | Env vars that are used if available |
[soul]
Controls LLM behavior when the agent runs in semantic mode.
| Field | Required | Description |
|---|---|---|
| prefer | no | Preferred LLM model identifier |
| min_params | no | Minimum model size (parameter count, e.g. "1b") the agent needs for semantic mode |
Validation
Run aide.sh lint <dir> to check an Agentfile for errors before building:
$ aide.sh lint school/
Agentfile.toml: OK
Skills: 1 found, all scripts exist
Env: no required vars
Skills
A skill is a named, executable capability of an agent. Skills are the primary unit of work in aide.sh.
Script-based skills
The most common type. A skill backed by a shell script:
[skills.hello]
script = "skills/hello.sh"
description = "Greet someone"
usage = "hello [name]"
The script receives positional arguments via $1, $2, etc:
#!/usr/bin/env bash
# skills/hello.sh
NAME="${1:-world}"
echo "Hello, $NAME!"
$ aide.sh exec bot hello Alice
Hello, Alice!
Prompt-based skills
A skill backed by a markdown prompt file, interpreted by an LLM at runtime:
[skills.summarize]
prompt = "skills/summarize.md"
description = "Summarize text using an LLM"
usage = "summarize <text>"
<!-- skills/summarize.md -->
Summarize the following text in 3 bullet points:
{{input}}
Prompt skills always require an LLM. They are skipped if no LLM is configured.
Execution model
When aide.sh exec <instance> <skill> [args] runs:
- The working directory is set to the instance root (~/.aide/instances/<name>/)
- Environment variables are injected in order: vault -> agent env -> skill env
- The script runs as a subprocess with the scoped environment
- stdout/stderr are captured and returned to the caller
- Exit code is preserved (0 = success)
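The steps above can be sketched as a small runner. The paths, the SKILL_ENV variable, and the demo skill are illustrative, not aide.sh internals:

```shell
# Illustrative runner, not aide.sh internals.
set -uo pipefail

run_skill() {
  local instance_dir="$1" script="$2"; shift 2
  # cd into the instance root, inject only the scoped env, run the
  # script as a subprocess; the subshell's exit code is preserved.
  ( cd "$instance_dir" && env -i PATH="$PATH" "${SKILL_ENV[@]}" bash "$script" "$@" )
}

demo="$(mktemp -d)"                    # throwaway "instance"
mkdir -p "$demo/skills"
cat > "$demo/skills/hello.sh" <<'EOF'
echo "Hello, ${1:-world}! (GREETING=${GREETING:-unset})"
EOF

SKILL_ENV=(GREETING=hi)                # the resolved, scoped environment
out="$(run_skill "$demo" skills/hello.sh Alice)"
echo "$out"   # Hello, Alice! (GREETING=hi)
```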
Adding description and usage
The description and usage fields appear when listing skills:
$ aide.sh exec bot
Available skills:
cool NTU COOL LMS scanning (courses, assignments, grades)
Usage: cool [courses|assignments|grades|todos|summary|scan]
email Email triage (POP3/SMTP)
Usage: email [check|unread|read N|search Q|send TO SUBJ BODY]
chrome Chrome browser automation
Usage: chrome [open|screenshot|scrape URL]
Example: skill with argument parsing
#!/usr/bin/env bash
# skills/cool.sh
set -euo pipefail
CMD="${1:-help}"
case "$CMD" in
courses)
curl -s -H "Authorization: Bearer $NTU_COOL_TOKEN" \
"https://cool.ntu.edu.tw/api/v1/courses" | jq '.[].name'
;;
assignments)
curl -s -H "Authorization: Bearer $NTU_COOL_TOKEN" \
"https://cool.ntu.edu.tw/api/v1/courses/${2}/assignments" | jq '.[]'
;;
*)
echo "Usage: cool [courses|assignments COURSE_ID|grades|todos|summary|scan]"
exit 1
;;
esac
Scheduled skills
Add a schedule field with a cron expression to run a skill periodically:
[skills.cool]
script = "skills/cool.sh"
schedule = "0 8 * * *" # daily at 8 AM
Scheduled skills require the daemon (aide.sh up). See Cron & Scheduling.
Vault & Secrets
The vault is aide.sh's encrypted secret store. Secrets are injected as environment variables at skill execution time.
Import from .env file
$ aide.sh vault import .env
Imported 5 secrets from .env
The .env file uses standard KEY=VALUE format:
NTU_COOL_TOKEN=abc123
[email protected]
SMTP_PASS=hunter2
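Parsing that format is straightforward. A sketch of what an importer might do, skipping blank lines and comments (real parsing may also handle quoting and "export KEY=VALUE" lines):

```shell
# Illustrative importer loop; real parsing may be stricter.
set -euo pipefail

parse_env_file() {
  local line
  while IFS= read -r line; do
    case "$line" in ''|'#'*) continue ;; esac   # skip blanks/comments
    printf '%s -> %s\n' "${line%%=*}" "${line#*=}"
  done < "$1"
}

envfile="$(mktemp)"
printf '%s\n' '# secrets' 'SMTP_PASS=hunter2' '' 'NTU_COOL_TOKEN=abc123' > "$envfile"
parse_env_file "$envfile"
```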
Set individual secrets
$ aide.sh vault set NTU_COOL_TOKEN=abc123
Set NTU_COOL_TOKEN
$ aide.sh vault set [email protected] SMTP_PASS=hunter2
Set SMTP_USER
Set SMTP_PASS
Check vault status
$ aide.sh vault status
Vault: ~/.aide/vault.db (encrypted, AES-256-GCM)
Secrets: 5 stored
NTU_COOL_TOKEN set 2025-06-01
SMTP_USER set 2025-06-01
SMTP_PASS set 2025-06-01
POP3_USER set 2025-06-01
POP3_PASS set 2025-06-01
Rotate encryption key
$ aide.sh vault rotate
Vault key rotated. All secrets re-encrypted.
Three-tier environment scoping
When a skill runs, environment variables are resolved in this order (highest priority first):
- Per-skill env — variables listed in [skills.NAME] env
- Per-agent env — variables listed in [env] required and optional
- Vault — global secrets available to all agents
If the same key exists at multiple levels, the highest-priority value wins.
# Agentfile.toml
[skills.email]
script = "skills/email.sh"
env = ["SMTP_USER", "SMTP_PASS"] # skill-level: checked first
[env]
required = ["NTU_COOL_TOKEN"] # agent-level: checked second
optional = ["SMTP_USER"] # vault: checked last
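The precedence can be modeled as successive overwrites, lowest tier applied first, so the last write wins. A sketch with made-up values (this models the resolution order only, not aide.sh internals):

```shell
# Values are illustrative; this models precedence only.
set -euo pipefail

declare -A resolved
apply() {                      # merge one tier of KEY=VALUE pairs
  local pair
  for pair in "$@"; do resolved["${pair%%=*}"]="${pair#*=}"; done
}

apply SMTP_USER=vault@example.com SMTP_PASS=vault-secret  # vault (lowest)
apply SMTP_USER=agent@example.com                         # agent [env]
apply SMTP_USER=skill@example.com                         # per-skill (highest)

echo "SMTP_USER=${resolved[SMTP_USER]}"   # skill-level value wins
echo "SMTP_PASS=${resolved[SMTP_PASS]}"   # falls through to the vault
```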
Credential leak scanning
aide.sh scans skill output for potential secret leaks:
$ aide.sh exec bot email check
[warn] Potential secret detected in output (SMTP_PASS pattern). Use --allow-leak to suppress.
This is a best-effort check. Always review scripts that handle sensitive data.
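A scan of this kind can be approximated with the token prefixes the lint section documents (sk-ant-, AKIA, ghp_, and so on). This sketch is illustrative, not the tool's actual detector:

```shell
# Illustrative detector using the lint section's token prefixes.
scan_output() {
  # returns 1 (leak) if any known secret pattern appears on stdin
  grep -E 'sk-ant-|sk-proj-|AKIA|ghp_|gho_|eyJhbG|-----BEGIN' && return 1 || return 0
}

if ! printf 'token: sk-ant-abc123\n' | scan_output > /dev/null; then
  echo "[warn] potential secret detected in output"
fi
```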
Security notes
- The vault database is stored at ~/.aide/vault.db
- Encryption uses AES-256-GCM with a key derived from your system keychain
- Secrets are never written to disk in plaintext
- aide.sh vault export is intentionally not supported
Cron & Scheduling
Run skills on a schedule using cron expressions.
Defining schedules in Agentfile.toml
Add a schedule field to any skill:
[skills.cool]
script = "skills/cool.sh"
description = "NTU COOL LMS scanning"
schedule = "0 8 * * *" # daily at 8:00 AM
[skills.email]
script = "skills/email.sh"
description = "Email triage"
schedule = "0 */4 * * *" # every 4 hours
The schedule uses standard 5-field cron syntax:
minute hour day-of-month month day-of-week
0 8 * * *
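A minimal field-count check, roughly the first thing a validator might do (real cron validation also range-checks each field):

```shell
# Field-count check only; illustrative, not the real validator.
is_cron5() {
  local -a fields
  read -r -a fields <<< "$1"
  [ "${#fields[@]}" -eq 5 ]
}

is_cron5 "0 8 * * *"    && echo "valid"
is_cron5 "0 8 * *"      || echo "invalid: expected 5 fields"
```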
Managing schedules at runtime
List scheduled jobs
$ aide.sh cron ls
INSTANCE SKILL SCHEDULE NEXT RUN
jenny cool 0 8 * * * 2025-06-15 08:00
jenny email 0 */4 * * * 2025-06-15 12:00
Add a schedule
$ aide.sh cron add jenny cool "30 9 * * 1-5"
Schedule set: jenny/cool at 30 9 * * 1-5 (weekdays at 9:30 AM)
Remove a schedule
$ aide.sh cron rm jenny cool
Schedule removed: jenny/cool
Daemon mode
Scheduled jobs only run when the daemon is active:
$ aide.sh up
Daemon started (PID 12345)
Dashboard: http://localhost:3939
Cron scheduler: active (2 jobs)
Without the daemon, schedules defined in Agentfile.toml are stored but not executed.
To stop the daemon:
$ aide.sh down
Daemon stopped.
Viewing cron status in the dashboard
The dashboard at http://localhost:3939 shows a cron panel with:
- All scheduled jobs across all instances
- Next scheduled run time
- Last execution result (exit code, duration, truncated output)
- Execution history (last 10 runs per job)
Cron output and logs
Cron job output is captured in the instance log:
$ aide.sh logs jenny --filter cron
[2025-06-14 08:00:01] cron/cool: 3 courses, 2 new assignments
[2025-06-14 12:00:01] cron/email: 5 unread messages
Failed jobs (non-zero exit code) are flagged in the dashboard and logs.
MCP Integration
MCP (Model Context Protocol) lets LLM hosts like Claude Code, Cursor, and Gemini call aide.sh agents as tools. Your agents become subagents that any LLM can orchestrate.
What is MCP?
MCP is a standard protocol for LLMs to discover and invoke external tools. aide.sh implements an MCP server that exposes your running agents as tools.
Auto-configure for Claude Code
$ aide.sh setup-mcp
Detected: Claude Code
Wrote MCP config to ~/.claude/settings.json
Registered tools: aide_list, aide_exec, aide_logs
This adds aide.sh as an MCP server in your Claude Code configuration.
Manual setup
Add the following to your Claude Code settings.json or equivalent MCP config:
{
"mcpServers": {
"aide": {
"command": "aide-sh",
"args": ["mcp-serve"]
}
}
}
For Cursor, add to .cursor/mcp.json:
{
"mcpServers": {
"aide": {
"command": "aide-sh",
"args": ["mcp-serve"]
}
}
}
Available MCP tools
Once configured, the LLM host sees these tools:
| Tool | Description |
|---|---|
| aide_list | List all running instances and their skills |
| aide_exec | Execute a skill on a running instance |
| aide_logs | Retrieve recent logs for an instance |
Example: Claude Code calling an agent
After setup, Claude Code can use your agents directly:
User: "Check if I have any new assignments on COOL"
Claude: I'll check your COOL LMS for new assignments.
[calling aide_exec: instance="jenny", skill="cool", args=["assignments"]]
You have 2 new assignments:
- VLSI Design: HW3 due 2025-06-15
- ML Lab: Final project proposal due 2025-06-20
The LLM discovers available skills via aide_list, picks the right one, and calls aide_exec.
Running the MCP server manually
$ aide-sh mcp-serve
MCP server listening on stdio
This is what setup-mcp configures to run automatically. You rarely need to invoke it directly.
Debugging
Check that instances are running:
$ aide.sh ps
NAME IMAGE STATUS
jenny jenny:0.1.0 running
Test that a skill works before expecting MCP to use it:
$ aide.sh exec jenny cool courses
If the skill works via exec but not via MCP, check the MCP server logs:
$ aide.sh logs --mcp
Dashboard
aide.sh includes a built-in web dashboard for monitoring agents.

Quick start
aide.sh dash # standalone dashboard
aide.sh up # daemon + cron + dashboard
aide.sh up --no-dash # daemon without dashboard
Dashboard serves at http://localhost:3939.
Panels
- Instances — sidebar listing all agents with status dots (green = active)
- Skills — table with name, description, cron schedule, env vars
- Cron Jobs — schedule, skill name, last run time
- Usage — per-skill execution bars with success/fail ratio, CLI vs MCP breakdown
- Logs — real-time log tail with auto-refresh (3s polling)
API
# List all instances
curl http://localhost:3939/api/instances
# Instance detail (skills, cron, metadata)
curl http://localhost:3939/api/instance/jenny.ydwu
# Logs (latest N lines)
curl http://localhost:3939/api/logs/jenny.ydwu?tail=50
# Usage analytics
curl http://localhost:3939/api/stats/jenny.ydwu
Stats response
{
"total_execs": 12,
"by_skill": {
"cool": { "count": 9, "success": 9, "fail": 0 },
"email": { "count": 1, "success": 1, "fail": 0 }
},
"by_source": { "cli": 12, "mcp": 0 }
}
Expose (Email / PWA)
Give your agents an address so anyone can talk to them — no terminal required.
Overview
| Channel | Address | Status |
|---|---|---|
| Email | [email protected] | Planned |
| PWA | app.aide.sh/instance | Planned |
Both channels are platform-controlled — aide.sh manages the infra, not third-party APIs.
Email Gateway (planned)
Every agent gets an email address:
[email protected]
How it works
- Anyone sends an email to [email protected]
- Cloudflare Email Worker receives the message
- Routes to agent: aide.sh exec -p jenny.ydwu "<email body>"
- Agent runs matching skills
- Reply sent back via Resend/SMTP
Why email
- Zero install — everyone already has an email client
- Mobile native — works in any phone's mail app
- We control it — aide.sh domain, our routing, no third-party bot limits
PWA (planned)
A chat interface at app.aide.sh:
app.aide.sh/jenny.ydwu → chat UI → WebSocket → aide.sh exec
Features
- Real-time chat with your agents
- Works on iOS, Android, Desktop (Add to Home Screen)
- Push notifications via Service Worker
- No App Store required
Self-hosted integrations
Power users can integrate their agents with any messaging platform using
the MCP server or direct aide.sh exec calls:
# Your own Telegram bot
your-telegram-bot → aide.sh exec -p jenny.ydwu "message"
# Your own Discord bot
your-discord-bot → aide.sh exec -p jenny.ydwu "message"
# Any webhook
curl -X POST your-server/agent -d "message" → aide.sh exec
The MCP server (aide.sh mcp) is the recommended integration point for
LLM-based callers. For simple message-in/message-out, shell out to aide.sh exec.
aide.sh run
Create and start an agent instance from an image.
Usage
aide.sh run <IMAGE> [--name NAME] [-d]
IMAGE can be:
- A local agent type defined in aide.toml (e.g. jenny)
- A registry reference in <user>/<type> format (e.g. ydwu/school-assistant)
Options
| Flag | Description |
|---|---|
| --name NAME | Set instance name (default: <type>.<user>) |
| -d, --detach | Run in background (default for agents) |
Examples
aide.sh run jenny # local type from aide.toml
aide.sh run ydwu/school-assistant # pull from registry, then run
aide.sh run ydwu/school-assistant --name school
What happens
- Resolves the image: local aide.toml definition or registry pull (<user>/<type> format).
- Derives the instance name. Default is <type>.<USER>, where $USER comes from the environment.
- Creates the instance directory under ~/.aide/instances/<name>/ with subdirectories memory/ and logs/.
- Copies persona.md from the agent type if one exists.
- Writes the instance.toml manifest (name, type, email, role, domains, cron entries, creation timestamp).
- Sets up cron schedules declared in the Agentfile.
Instance directory layout
~/.aide/instances/<name>/
instance.toml # manifest
persona.md # copied from agent type
Agentfile.toml # agent package spec
skills/ # executable skill scripts
seed/ # read-only knowledge files
memory/ # writable state
logs/ # daily log files
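The default name and layout described above can be sketched like this (paths and the fallback user are illustrative; aide.sh run manages ~/.aide/instances/ for you):

```shell
# Illustrative; the real tool manages ~/.aide/instances/ itself.
set -euo pipefail

type="school"
name="${type}.${USER:-nobody}"            # default: <type>.<USER>

root="$(mktemp -d)/$name"
mkdir -p "$root"/{skills,seed,memory,logs}
: > "$root/instance.toml"
: > "$root/persona.md"

echo "created $name"
ls -1 "$root"
```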
Errors
- If the instance name already exists, the command fails with a message suggesting aide.sh rm <name> first.
- If the image is a registry reference that has not been pulled, it is pulled automatically.
aide.sh exec
Execute a skill on a running agent instance.
Usage
aide.sh exec [FLAGS] <INSTANCE> [SKILL] [ARGS...]
When called without a skill, lists all available skills for the instance (equivalent to --help).
Options
| Flag | Description |
|---|---|
| -i, --interactive | Interactive mode (allocate pseudo-TTY) |
| -t, --tty | Allocate pseudo-TTY |
Examples
aide.sh exec jenny.ydwu # list available skills
aide.sh exec jenny.ydwu cool courses # run the "cool" skill with arg "courses"
aide.sh exec jenny.ydwu email check # check email
aide.sh exec -it jenny.ydwu cool scan # interactive mode
Skill resolution
- Looks up the instance under ~/.aide/instances/<instance>/.
- Loads Agentfile.toml from the instance directory.
- Finds the skill definition and locates the script at skills/<skill>.sh.
- Executes the script via bash, passing remaining arguments.
Environment scoping
Secrets from the vault (~/.aide/vault.db) are injected with a tiered scoping model:
- Per-skill env -- If the skill declares its own env list in the Agentfile, only those variables are injected.
- Per-agent env -- Otherwise, variables from the [env] section (required + optional) are injected.
- No Agentfile -- Legacy mode: all vault variables are injected.
Smart error messages
If you pass a registry-style image reference (e.g. ydwu/school-assistant) instead of an instance name, the command suggests running aide.sh pull and aide.sh run first.
Help output
Running aide.sh exec <instance> with no skill prints:
- Instance name, agent type, and version
- Each skill with its usage string and description
- Per-skill env var requirements
- A hint about semantic mode (aide.sh exec -p <instance> "<query>")
aide.sh build / push / pull
Build, publish, and fetch agent images.
aide.sh build
Build an agent image from an Agentfile.
aide.sh build [PATH] [-t TAG]
| Flag | Description |
|---|---|
| PATH | Directory containing Agentfile.toml (default: .) |
| -t, --tag TAG | Tag the image (name:version) |
Build process
- Parse -- Loads and parses Agentfile.toml.
- Validate -- Checks that all referenced files exist (persona, skill scripts, prompt files, seed directory).
- Lint -- Runs the full lint suite, including credential leak scanning.
- Collect -- Gathers all files: Agentfile.toml, persona, skill scripts/prompts, and seed directory contents.
- Archive -- Creates <name>-<version>.tar.gz.
- Checksum -- Computes the SHA-256 of the archive.
Example
aide.sh build agents/jenny/
aide.sh build . -t jenny:0.2.0
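The Archive and Checksum steps correspond to standard tooling. A sketch with illustrative file names (not the real build internals):

```shell
# Illustrative file names; not the real build internals.
set -euo pipefail

src="$(mktemp -d)"
echo 'name = "demo"' > "$src/Agentfile.toml"
archive="$(mktemp -d)/demo-0.1.0.tar.gz"

tar -czf "$archive" -C "$src" .                     # Archive
digest="$(sha256sum "$archive" | cut -d' ' -f1)"    # Checksum
echo "sha256:$digest"
```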
aide.sh push
Push a built agent image to the registry.
aide.sh push [IMAGE]
| Flag | Description |
|---|---|
| IMAGE | Directory or image name to push (default: .) |
Push process
- Builds the image (same as aide.sh build).
- Reads registry credentials from the vault (AIDE_REGISTRY_TOKEN).
- Uploads the .tar.gz archive to hub.aide.sh.
Requires prior authentication via aide.sh login.
aide.sh pull
Download an agent image from the registry.
aide.sh pull <USER>/<TYPE>[:VERSION]
Pull process
- Resolves the image reference. Version defaults to latest.
- Downloads the archive from hub.aide.sh.
- Extracts to ~/.aide/types/<user>/<type>/.
Example
aide.sh pull ydwu/school-assistant
aide.sh pull ydwu/school-assistant:0.1.0
Related commands
- aide.sh images -- List locally available agent images.
- aide.sh search <query> -- Search the registry.
- aide.sh login -- Authenticate with the registry.
aide.sh init / lint
Scaffold and validate agent projects.
aide.sh init
Generate a new agent project skeleton.
aide.sh init <NAME>
Creates a directory <NAME>/ with:
<NAME>/
Agentfile.toml # pre-filled manifest with TODO placeholders
persona.md # starter persona template
skills/hello.sh # sample executable skill script
seed/.gitkeep # empty seed directory
The generated Agentfile.toml includes a complete example with [agent], [persona], [skills.hello], [seed], and [env] sections. The hello skill is already executable (chmod 755).
Example
aide.sh init my-agent
cd my-agent
aide.sh lint # validate the scaffold
aide.sh exec . hello # run the sample skill
Fails if the directory already exists.
aide.sh lint
Validate an Agentfile.toml and its referenced files.
aide.sh lint [PATH]
PATH defaults to the current directory.
Checks performed
Errors (block build/push):
| # | Check |
|---|---|
| 1 | Agentfile.toml parses as valid TOML |
| 2 | agent.name is non-empty |
| 3 | agent.version is non-empty |
| 4 | agent.description is present and not a TODO placeholder |
| 5 | agent.author is present and not a TODO placeholder |
| 6 | Each skill has either script or prompt, not both, not neither |
| 7 | Referenced script files exist |
| 8 | Script files are executable (chmod +x) |
| 9 | Referenced prompt files exist |
| 10 | Cron schedule expressions are valid (5-field format) |
| 11 | No credential leaks detected (scans for sk-ant-, sk-proj-, AKIA, ghp_, gho_, eyJhbG, -----BEGIN) |
Warnings (informational):
| # | Check |
|---|---|
| 12 | Skill missing description |
| 13 | Skill missing usage |
| 14 | Seed directory declared but not found |
| 15 | Skills reference env vars but no [env] section exists |
| 16 | Per-skill env var not listed in [env].required or [env].optional |
Example output
$ aide.sh lint
[pass] Agentfile.toml parsed
[pass] agent.name = "jenny"
[pass] agent.version = "0.1.0"
[pass] agent.description present
[pass] agent.author present
[pass] skills/cool.sh exists (executable)
[warn] skills.chrome: missing usage
[fail] skills/draft.sh: not executable
1 warning(s), 1 error(s)
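Checks 7 and 8 (script exists, script is executable) can be sketched as follows; the real linter reads the paths from Agentfile.toml, while this hardcodes them for illustration:

```shell
# Illustrative: the real linter reads paths from Agentfile.toml.
lint_script() {
  local path="$1"
  if [ ! -f "$path" ]; then echo "[fail] $path: missing"; return 1; fi
  if [ ! -x "$path" ]; then echo "[fail] $path: not executable"; return 1; fi
  echo "[pass] $path exists (executable)"
}

dir="$(mktemp -d)"
printf '#!/usr/bin/env bash\necho hi\n' > "$dir/hello.sh"
lint_script "$dir/hello.sh"    # fails: not executable yet
chmod +x "$dir/hello.sh"
lint_script "$dir/hello.sh"    # passes
```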
aide.sh mcp / setup-mcp
MCP (Model Context Protocol) server for LLM tool integration.
aide.sh mcp
Start the MCP stdio server.
aide.sh mcp
Reads newline-delimited JSON-RPC 2.0 messages from stdin and writes responses to stdout. This is not meant to be called directly -- it is invoked by an MCP-capable LLM client (e.g. Claude Code).
Protocol
- Transport: stdio (stdin/stdout), newline-delimited JSON
- Protocol version: 2024-11-05
- Server info: aide.sh v0.1.0
Tool schemas
aide_list -- List all running agent instances and their available skills.
{
"name": "aide_list",
"inputSchema": { "type": "object", "properties": {} }
}
Returns instance names, types, status, email, and per-instance skill list with type (script/prompt) and description.
aide_exec -- Execute a skill on an agent instance.
{
"name": "aide_exec",
"inputSchema": {
"type": "object",
"properties": {
"instance": { "type": "string" },
"skill": { "type": "string" },
"args": { "type": "string" }
},
"required": ["instance", "skill"]
}
}
Runs the skill script, applies env scoping from the vault, logs the invocation, and returns stdout/stderr.
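For reference, a tools/call request for aide_exec over the stdio transport could look like this; the shape follows JSON-RPC 2.0 and the schema above, and the argument values are made up:

```shell
# Illustrative request; field values are made up.
request='{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"aide_exec","arguments":{"instance":"jenny.ydwu","skill":"cool","args":"courses"}}}'
printf '%s\n' "$request"
```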
aide_logs -- Read recent logs from an agent instance.
{
"name": "aide_logs",
"inputSchema": {
"type": "object",
"properties": {
"instance": { "type": "string" },
"lines": { "type": "number" }
},
"required": ["instance"]
}
}
Returns the last N log lines (default 50).
aide.sh setup-mcp
Auto-configure MCP integration for a target client.
aide.sh setup-mcp [TARGET]
TARGET defaults to claude. Writes the MCP server configuration so that the client can discover and call aide.sh mcp automatically.
For Claude Code, this updates ~/.claude/claude_desktop_config.json (or equivalent) with the aide.sh MCP server entry.
aide.sh dash
Open the agent observability dashboard.
Usage
aide.sh dash [-p PORT]
Options
| Flag | Description |
|---|---|
| -p, --port PORT | Port to serve on (default: 3939) |
Description
Starts a local HTTP server serving a web UI for monitoring agent instances. The dashboard provides a read-only view of instance status, skills, cron schedules, and logs.
Static assets are embedded in the binary via rust_embed.
API endpoints
| Method | Path | Description |
|---|---|---|
| GET | / | Dashboard web UI (index.html) |
| GET | /api/instances | List all instances with status, type, email, role, cron count, last activity |
| GET | /api/instance/{name} | Instance detail: metadata, version, description, author, skills, cron entries |
| GET | /api/logs/{name}?tail=N | Recent log lines for an instance (default tail: 100) |
Response examples
GET /api/instances
{
"instances": [
{
"name": "jenny.ydwu",
"agent_type": "jenny",
"status": "active",
"email": "[email protected]",
"role": "PhD assistant",
"cron_count": 2,
"last_activity": "[08:00:01] cool scan completed"
}
]
}
GET /api/logs/jenny.ydwu?tail=5
{
"logs": [
"[08:00:01] cron: cool scan",
"[08:00:03] cool scan completed",
"[12:00:01] cron: email check",
"[12:00:05] email check completed",
"[14:32:10] mcp-exec: cool courses"
]
}
Integration with aide.sh up
When running aide.sh up, the dashboard is spawned as a background task within the daemon unless --no-dash is passed.
aide.sh up # starts daemon + dashboard on port 3939
aide.sh up --no-dash # starts daemon without dashboard
aide.sh mount / unmount
Inject agent context into LLM coding tools.
Usage
aide.sh mount <INSTANCE> <TARGET>
aide.sh unmount <INSTANCE> <TARGET>
TARGET is one of: claude, codex, gemini, all.
What gets injected
The mount command gathers content from the instance directory and writes it as a single markdown document:
- Instance metadata -- agent type, email, role, cron schedules (from instance.toml).
- Persona -- contents of persona.md.
- Seed knowledge -- all .md files under seed/.
- Memory -- all .md files under memory/.
Each section is separated by a horizontal rule. The file is marked with <!-- aide-mount --> so it can be cleanly removed on unmount.
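The assembly described above can be sketched as simple concatenation. Paths and file contents are illustrative, and the real mount also prepends metadata from instance.toml:

```shell
# Illustrative assembly; the real mount also includes metadata.
set -euo pipefail

inst="$(mktemp -d)"
echo "You are Jenny, a PhD assistant." > "$inst/persona.md"
mkdir -p "$inst/seed" "$inst/memory"
echo "COOL course notes"              > "$inst/seed/cool.md"

{
  echo '<!-- aide-mount -->'
  cat "$inst/persona.md"
  for f in "$inst/seed"/*.md "$inst/memory"/*.md; do
    [ -f "$f" ] || continue        # skip unmatched globs
    echo '---'                     # horizontal rule between sections
    cat "$f"
  done
} > "$inst/mounted.md"

cat "$inst/mounted.md"
```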
Target: claude
Writes to ~/.claude/projects/<cwd-key>/memory/aide_<instance>.md.
The CWD is encoded as a path key (slashes replaced with dashes). If a MEMORY.md index file exists in that directory, an entry is appended under an ## Aide Agents section.
aide.sh mount jenny.ydwu claude
# -> ~/.claude/projects/-Users-ydwu-projects-myapp/memory/aide_jenny.ydwu.md
Target: codex
Writes to ./AGENTS.md in the current working directory.
If AGENTS.md already exists with non-aide content, the agent context is appended after a separator. On unmount, only the aide-marked section is removed.
Target: gemini
Writes to ./GEMINI.md in the current working directory.
Same append/remove behavior as the codex target.
Target: all
Mounts (or unmounts) to all three targets at once.
Examples
aide.sh mount jenny.ydwu claude
aide.sh mount jenny.ydwu all
aide.sh unmount jenny.ydwu codex
aide.sh unmount jenny.ydwu all
Unmount behavior
- claude: Deletes aide_<instance>.md and removes the index entry from MEMORY.md.
- codex: Removes the aide-marked section from AGENTS.md. Deletes the file if no other content remains.
- gemini: Same as codex, targeting GEMINI.md.
Architecture: The Soul Model
An aide.sh agent does not own its LLM. The caller brings the intelligence.
Core insight
In Docker, a container does not own the CPU. The host provides compute at runtime. aide.sh applies the same principle to AI agents: the agent defines skills, persona, and memory, but the LLM that powers reasoning is provided externally.
This means an agent can run in three modes without changing its definition:
Three caller modes
1. MCP mode (frontier)
An MCP-capable client (e.g. Claude Code) calls into the agent via aide.sh mcp. The LLM lives on the client side. The agent's skills run as tools the LLM can invoke.
This is the highest-capability mode. The caller's frontier model handles reasoning, planning, and natural language understanding. The agent provides domain-specific actions.
2. Terminal mode (no LLM)
A human runs `aide.sh exec <instance> <skill> [args]` directly. No LLM is involved. The skill script executes and returns output. The human is the intelligence layer.
This is the zero-dependency mode. Every agent works here, regardless of whether an LLM is available.
3. Daemon mode (local model)
The `aide.sh up` daemon runs scheduled tasks using a local model via Ollama. The `[soul]` section in `Agentfile.toml` declares preferences:
[soul]
prefer = "llama3.2:3b"
min_params = "1b"
- `prefer` -- the preferred local model identifier.
- `min_params` -- the minimum model size the agent needs.
The daemon selects the best available model that meets the requirement.
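The selection rule can be sketched as: use `prefer` when it is installed, otherwise fall back to the first installed model whose `:<N>b` parameter tag meets `min_params`. Parsing model names this way is an assumption for illustration, not aide.sh's internal logic:

```shell
prefer="llama3.2:3b"
min_params_b=1    # min_params = "1b", expressed in billions

pick_model() {
  installed="$1"  # newline-separated model names (e.g. from `ollama list`)
  # Exact match on the preferred model wins outright.
  if printf '%s\n' "$installed" | grep -qxF "$prefer"; then
    echo "$prefer"
    return
  fi
  # Otherwise take the first model whose ":<N>b" tag meets the minimum.
  printf '%s\n' "$installed" | while IFS= read -r m; do
    size=$(printf '%s' "$m" | sed -n 's/.*:\([0-9][0-9]*\)b$/\1/p')
    if [ -n "$size" ] && [ "$size" -ge "$min_params_b" ]; then
      echo "$m"
    fi
  done | head -n1
}

pick_model "qwen2.5:7b
phi3:3b"
# prints qwen2.5:7b -- the preferred model is absent, so the first >= 1b model wins
```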
Docker analogy
| Docker | aide.sh |
|---|---|
| Container does not own CPU | Agent does not own LLM |
| Host provides compute | Caller provides intelligence |
| Works on any host with Docker | Works with any LLM (or none) |
| CPU is a runtime resource | LLM is a runtime resource |
Design consequence
Because the LLM is external, agent packages are small and deterministic. A skill is a bash script. A persona is a markdown file. There are no model weights, no inference servers, no GPU requirements in the agent image.
The same agent image that a frontier model orchestrates via MCP can also be operated by a human typing commands in a terminal.
Architecture: Semantic Injection
Same agent, with or without AI. Add -p to think.
Two execution modes
Every aide.sh agent supports two ways of being called:
Explicit mode (default)
aide.sh exec school cool courses
aide.sh exec school email unread
aide.sh exec school cool assignments
The caller names the exact skill and passes structured arguments. The skill script runs directly. No LLM is involved.
Semantic mode (-p flag)
aide.sh exec -p school "what's due this week?"
aide.sh exec -p school "check if any professors replied"
aide.sh exec -p school "summarize my grades"
The -p flag activates semantic injection. The natural language query is routed through an LLM, which selects the appropriate skill(s) and arguments based on the agent's persona and skill descriptions.
How semantic mode works
- The query and the agent's skill catalog (names, descriptions, usage strings) are composed into a prompt.
- The LLM (caller-provided or local Ollama) interprets the query and maps it to one or more skill invocations.
- The skill scripts execute normally.
- Results are returned, optionally summarized by the LLM.
The skill scripts themselves are unchanged between modes. Semantic mode wraps the dispatch layer, not the execution layer.
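The first step above (composing the dispatch prompt from the skill catalog) might look like the following. The prompt wording and catalog format are assumptions; aide.sh's actual internal prompt is not published:

```shell
query="what's due this week?"

# Hypothetical skill catalog: name, then usage description.
catalog='cool courses    -- list enrolled courses
cool assignments -- list upcoming assignments
email unread     -- show unread email'

# Compose the dispatch prompt sent to the caller-provided or local LLM.
prompt=$(cat <<EOF
You operate an agent with these skills:
$catalog

Map the request below to one or more skill invocations (name + args only):
$query
EOF
)
printf '%s\n' "$prompt"
```

The LLM's reply names skills and arguments; the scripts then run exactly as they would in explicit mode.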
Why this matters
LLM is a runtime, not a dependency. An agent that works in explicit mode will always work -- on any machine, offline, without API keys. Semantic mode is an enhancement, not a requirement.
This is the opposite of frameworks where the LLM is baked into the agent definition. In aide.sh, the agent is a set of capabilities. The LLM is an optional accelerator that makes those capabilities accessible via natural language.
Comparison with other frameworks
| Property | aide.sh | LangChain / CrewAI / AutoGen |
|---|---|---|
| LLM required? | No (explicit mode works without) | Yes (core dependency) |
| Skill definition | Bash scripts + markdown prompts | Python functions + LLM chains |
| Offline capable? | Yes (explicit mode) | No |
| Human fallback? | Human is the LLM in terminal mode | No equivalent |
| Package size | KB (scripts + markdown) | MB+ (Python deps + model configs) |
The -p mental model
Think of -p as "pipe through intelligence." Without it, you talk to the agent in its native protocol (skill names + args). With it, you talk in natural language and the LLM translates.
Without -p: human -> skill -> output
With -p: human -> LLM -> skill -> output -> LLM -> human
Architecture: Docker Comparison
aide.sh follows Docker's conceptual model, applied to AI agents instead of application containers.
Command mapping
| Docker | aide.sh | Purpose |
|---|---|---|
| `Dockerfile` | `Agentfile.toml` | Package definition |
| `docker build` | `aide.sh build` | Create distributable image |
| `docker run` | `aide.sh run` | Create and start an instance |
| `docker exec` | `aide.sh exec` | Run a command inside an instance |
| `docker ps` | `aide.sh ps` | List running instances |
| `docker stop` | `aide.sh stop` | Stop an instance |
| `docker rm` | `aide.sh rm` | Remove an instance |
| `docker logs` | `aide.sh logs` | View instance logs |
| `docker inspect` | `aide.sh inspect` | Detailed instance metadata |
| `docker push` | `aide.sh push` | Upload image to registry |
| `docker pull` | `aide.sh pull` | Download image from registry |
| `docker images` | `aide.sh images` | List local images |
| `docker search` | `aide.sh search` | Search the registry |
| `docker login` | `aide.sh login` | Authenticate with registry |
| Docker Hub | hub.aide.sh | Public registry |
| Docker Desktop | `aide.sh dash` | GUI dashboard |
Concept mapping
| Docker concept | aide.sh equivalent | Notes |
|---|---|---|
| Image | Agent type | Immutable package: Agentfile + skills + persona + seed |
| Container | Instance | Running agent with its own memory and logs |
| Volumes | `memory/` + `seed/` | `seed/` is read-only; `memory/` is read-write |
| Secrets | Vault (`~/.aide/vault.age`) | Age-encrypted, scoped per-agent and per-skill |
| Entrypoint | Skills (script or prompt) | Each skill is an independent entry point |
| ENV | `[env]` section | Declares required/optional secrets |
| Registry | hub.aide.sh | Push/pull agent images |
| Compose | `aide.toml` | Multi-agent configuration |
| Daemon | `aide.sh up` | Background process for cron + dashboard |
| CPU/RAM | LLM | Runtime resource provided by caller, not owned by agent |
Key differences
Multiple entry points. A Docker container has one entrypoint. An aide.sh instance has many skills, each independently callable.
External compute. Docker containers own their process. aide.sh agents do not own their LLM -- it is provided by the caller (MCP client, terminal user, or local Ollama).
Cross-tool mounting. Docker volumes mount filesystems. aide.sh mount injects agent context into LLM tools (Claude Code, Codex, Gemini) as markdown files.
Deterministic packages. Agent images contain only scripts and markdown. No model weights, no runtime binaries, no language-specific dependencies. A typical image is kilobytes, not gigabytes.
Lifecycle comparison
Docker: build -> push -> pull -> run -> exec -> stop -> rm
aide.sh: build -> push -> pull -> run -> exec -> stop -> rm
The lifecycle is intentionally identical. If you know Docker, you know aide.sh.