Automatic "why did that just fail?" powered by your favorite LLM
⚠️ This plugin is a work in progress and may not yet work reliably. Feel free to test it out and contribute!
A Zsh plugin that automatically analyzes failed commands and provides AI-powered explanations and solutions. Never wonder why a command failed again!
- Automatic Analysis: Detects when commands fail (non-zero exit codes) and automatically explains why
- AI-Powered: Uses `aichat` to provide intelligent explanations and solutions
- Smart Caching: Remembers previous failures to avoid redundant API calls
- Context-Aware: Includes command, exit code, working directory, and git branch information
- Log Browsing: Browse previous failures with `fzf` integration
- Configurable: Customize behavior with environment variables
- `aichat` - Required for AI analysis

  ```sh
  # Install aichat (see https://github.com/sigoden/aichat for more information)
  brew install aichat # or your package manager of choice
  ```

  The `aichat` tool requires an API key for an LLM. Consult the aichat repository for more information.

- `fzf` - Optional, for browsing failure logs

  ```sh
  brew install fzf # or your package manager of choice
  ```
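How the API key is supplied depends on your aichat version and configured provider; as a rough example (an assumption, not the only option), an OpenAI-compatible setup can often be driven by an environment variable:

```sh
# Assumption: an OpenAI-compatible provider configured in aichat
export OPENAI_API_KEY="sk-your-key"

# Sanity check: aichat should answer on its own before the plugin can use it
aichat "say hello"
```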
### Antidote

```sh
antidote bundle stephenhowells/zsh-postmortem
```
### Oh-My-Zsh

Clone into `~/.oh-my-zsh/custom/plugins` and add `zsh-postmortem` to the `plugins=(...)` array in `.zshrc`.

```sh
git clone https://github.com/stephenhowells/zsh-postmortem ~/.oh-my-zsh/custom/plugins/zsh-postmortem
```
### Zinit

```sh
zinit light stephenhowells/zsh-postmortem
```
### Manual (no plugin manager)

Clone the repo anywhere (for example `~/.zsh/zsh-postmortem`) and source the plugin file from your `.zshrc`:

```sh
# Grab the plugin
git clone https://github.com/stephenhowells/zsh-postmortem ~/.zsh/zsh-postmortem

# Add to your .zshrc
source ~/.zsh/zsh-postmortem/zsh-postmortem.plugin.zsh
```
Configure the plugin by setting environment variables in your `.zshrc` before loading the plugin:
```sh
# Disable the plugin entirely
export AI_POSTMORTEM_DISABLE=1

# Cache directory (default: $XDG_STATE_HOME/ai-postmortem or ~/.local/state/ai-postmortem)
export AI_POSTMORTEM_CACHE_DIR="$HOME/.cache/ai-postmortem"

# Model arguments passed to aichat (default: --no-stream)
export AI_POSTMORTEM_MODEL_ARGS="--no-stream --model gpt-4"

# Format output as bullet points (default: true)
export AI_POSTMORTEM_BULLETS=false
```
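Cached analyses are stored locally in the cache directory (see `AI_POSTMORTEM_CACHE_DIR` above), so clearing the cache should be as simple as removing that directory:

```sh
# Remove all cached analyses (falls back to the documented default location)
rm -rf "${AI_POSTMORTEM_CACHE_DIR:-${XDG_STATE_HOME:-$HOME/.local/state}/ai-postmortem}"
```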
The plugin works automatically. When a command fails, it will:
- Capture the command, exit code, and context
- Check if this failure has been seen before (cached)
- If new, send the information to your configured LLM
- Display the explanation and cache it for future reference
```sh
$ cat missing_file.txt
cat: missing_file.txt: No such file or directory

✖ cat missing_file.txt
```
- **Reason for Failure:**
- The command `cat missing_file.txt` failed because `missing_file.txt` does not exist in the specified directory `/Users/<your-username>`.
- **Exit Code:**
- An exit code of `1` generally indicates that a file was not found or a general error occurred.
- **How to Fix:**
- Ensure `missing_file.txt` actually exists by checking the directory listing using `ls` or `find`.
- Verify you're in the correct directory. Use `pwd` to confirm your current directory and `cd /path/to/correct/directory` if necessary.
- If the file is in another directory, specify the correct path: `cat /path/to/missing_file.txt`.
- If the file does not exist, create it using `touch missing_file.txt` before attempting to read it.
Use the `ai-oops-log` command (or the `aplo` alias) to browse previous failures with `fzf`:

```sh
$ ai-oops-log
# or
$ aplo
```
This opens an interactive browser where you can search and view previous failure explanations.
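Conceptually, the browser works something like the sketch below; this is not the plugin's actual code, and the one-file-per-analysis cache layout is an assumption:

```zsh
# Conceptual sketch of the log browser -- not the plugin's real implementation.
# Assumes one cached analysis per file in the cache directory.
cache_dir="${AI_POSTMORTEM_CACHE_DIR:-${XDG_STATE_HOME:-$HOME/.local/state}/ai-postmortem}"
ls -t "$cache_dir" | fzf --preview "cat '$cache_dir'/{}"
```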
- Hook Registration: Uses Zsh's `precmd` hook to run after each command
- Failure Detection: Checks the exit code of the last command
- Context Gathering: Collects the command text, exit code, working directory, and git branch
- Deduplication: Creates a hash of the failure context to avoid duplicate API calls
- AI Analysis: Sends the context to `aichat` for analysis
- Caching: Stores the result for future identical failures
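To make the flow above concrete, here is a minimal sketch of what a `precmd`-based implementation can look like. It is illustrative only: the function name, cache path, prompt text, and hashing command are assumptions, not the plugin's actual code.

```zsh
# Illustrative sketch only -- not the plugin's real implementation.
autoload -Uz add-zsh-hook

_postmortem_demo_precmd() {
  local exit_code=$?                              # status of the last command
  (( exit_code == 0 )) && return                  # only failures are analyzed

  local last_cmd=$(fc -ln -1)                     # text of the last command
  local branch=$(git branch --show-current 2>/dev/null)
  local context="cmd=$last_cmd exit=$exit_code dir=$PWD branch=$branch"

  # Deduplicate: hash the context and reuse a cached answer if one exists
  local cache_dir="${XDG_STATE_HOME:-$HOME/.local/state}/ai-postmortem"
  local key=$(print -r -- "$context" | shasum | cut -d' ' -f1)
  mkdir -p "$cache_dir"

  if [[ -f "$cache_dir/$key" ]]; then
    cat "$cache_dir/$key"
  else
    aichat --no-stream "Explain why this shell command failed: $context" \
      | tee "$cache_dir/$key"
  fi
}

add-zsh-hook precmd _postmortem_demo_precmd
```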
- Local Processing: All analysis happens through your local `aichat` configuration
- No Data Collection: The plugin doesn't send data anywhere except through your configured LLM
- Cached Results: Previous analyses are stored locally in your cache directory
- Configurable: You control which LLM service is used via `aichat` configuration
Contributions are welcome! Please feel free to submit a Pull Request.
MIT License - see LICENSE file for details.