Hactar is an augmented coding assistant that lives in your editor or terminal of choice. No AI hype bullshit, no agents, no vibe coding, just a tool that helps you write, debug, and iterate on code faster. An AI coding tool for devs doing serious work in tools they’ve spent decades mastering.
Hactar's core feature is tight integration with a knowledge repository. It leverages well-written, meticulously crafted developer documentation that dramatically increases the quality of code generated by SOTA models. You can think of it as devdocs + starter kits + AI assistant: the starter-kit era brought into the agent era.
Hactar places an emphasis on hackability and integration. Hactar isn't a closed-source black box or a CLI designed to be used with proprietary protocols. It is a first-class citizen in your developer experience. You can modify it on the fly while you work (and it will help with those modifications). Hactar uses both LLMs and hardcoded knowledge to accomplish tasks. We don't wrap things up in a VS Code fork; we tightly integrate with Emacs, Neovim, and the CLI. If you are a Lisper, Rubyist, etc., you will feel right at home.
Some of Hactar's features:
Smart Context: Hactar is the only AI tool integrated with a content repository (paid or bring your own). Millions of notes, error messages and solutions, examples, snippets, etc., pulled directly from real-world projects give Hactar unparalleled skills.
Hackable and Extensible: Hactar is explicitly designed to be extended while you work. When you discover gaps in the LLM's knowledge, you can patch it for later.
Bring Your Own Key: No pricing changes or being tied to any specific model. Hactar supports all major providers, from Ollama to OpenRouter.
Tight Ecosystem Integration: Hactar is for developers who live and breathe their editors. Neovim and Emacs are integrated using community standards. No VS Code forks here.
Agents that Never Break: Hactar agents don't do vibe coding. They are specifically designed to only code what works 99% of the time. Hactar agents manage configs, update dependencies, run tests, etc. These small wins add up when they work flawlessly.
Unix Philosophy: Hactar exposes the context as a file that it fully understands. You can add and drop files by writing plain text and linking to them. This makes it trivial to use everything from Emacs tooling to grep and sed to manage your context.
Old School Hard-Coded Intelligence: Hactar uses "modes" to provide intelligent contextual knowledge to an LLM. With the magic of hard coding we can do things like look up docs for a React API in milliseconds instead of seconds.
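The context file mentioned under Unix Philosophy can be managed with ordinary tools. A minimal sketch, assuming a file named after the hactar.{pid}.context.org pattern used later in these docs and a made-up link layout of one file per line:

```shell
# Stand-in context file (the real one is written by Hactar itself).
cat > hactar.1234.context.org <<'EOF'
[[file:src/app.ts]]
[[file:src/legacy/old.ts]]
[[file:README.md]]
EOF

# List the files currently in context.
grep -o 'file:[^]]*' hactar.1234.context.org

# Drop everything under src/legacy/ from the context (GNU sed).
sed -i '/legacy/d' hactar.1234.context.org
```

Because the context is just text, anything that edits text (Emacs, grep, sed, your own scripts) can manage it.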
The quickest way:
```sh
curl -L https://hactar.space/install.sh | bash
```

Tip: Pass the --dev flag to the install script to build from source.
Nix:
```sh
nix profile install "github:hactar-project/hactar"
hactar hactar.init
```

GitHub Releases:
```sh
cd ~/.local/bin && wget -qO- https://github.com/hactar-project/hactar/releases/latest/download/hactar-0.1.tar.gz | tar -xz -C hactar
hactar hactar.init
```

Important: You'll need to run hactar hactar.init if you install using anything other than the script.
Now you can run hactar in any git repo:
```sh
hactar
```

You'll want to set API keys. Any of the following will do:
- OPENAI_API_KEY: The API key for OpenAI models
- ANTHROPIC_API_KEY: The API key for Anthropic models
- GEMINI_API_KEY: The API key for Vertex/Google AI Studio
- OPENROUTER_API_KEY: The OpenRouter API key
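For example, to set a key for the current shell session (the key value below is a placeholder):

```shell
# Placeholder value; substitute your real key.
export ANTHROPIC_API_KEY="sk-ant-example"
```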
You'll likely want to set a model. You can do this with the HACTAR_MODEL environment variable. In this case, a GitHub Copilot model:
```sh
env HACTAR_MODEL="github_copilot/gemini-2.5-pro" hactar
```

Hactar.el isn't on MELPA yet, so you can use the standard GitHub method:
```elisp
(use-package hactar
  :straight (:host github :repo "hactar-project/hactar.el" :files ("*.el"))
  :config
  (setq hactar-args '("--model" "gemini/gemini-2.5-pro-preview-06-05"))
  :bind
  ("C-c h" . hactar-transient-menu)
  ("C-x g h s" . hactar-sonnet)
  ("C-x g h d" . hactar-deepseek)
  ("C-x g h g" . hactar-gemini)
  ("C-x g h t" . hactar-gemini-thinking)
  ("C-x g h o" . hactar-o3-mini))
```

Hactar supports all the usual plugin managers:
lazy.nvim:

```lua
{
  'hactar-project/hactar.nvim',
  opts = {},
  -- Optional dependencies
  dependencies = { { "echasnovski/mini.icons", opts = {} } },
  -- dependencies = { "nvim-tree/nvim-web-devicons" }, -- use if you prefer nvim-web-devicons
  -- Lazy loading is not recommended because it is very tricky to make it work correctly in all situations.
  lazy = false,
}
```

packer.nvim:

```lua
use 'hactar-project/hactar.nvim'
```

paq-nvim:

```lua
require("paq")({
  { "hactar-project/hactar.nvim" },
})
```

vim-plug:

```vim
Plug 'hactar-project/hactar.nvim'
```

dein:

```vim
call dein#add('hactar-project/hactar.nvim')
```

Manual (Vim / Neovim):

```sh
git clone --depth=1 https://github.com/hactar-project/hactar.nvim.git ~/.vim/bundle/
git clone --depth=1 https://github.com/hactar-project/hactar.nvim.git "${XDG_DATA_HOME:-$HOME/.local/share}"/nvim/site/pack/hactar/start/hactar.nvim
```

Setup:

```lua
require('hactar').setup({
  auto_manage_context = false,
  default_bindings = false,
  debug = true,
  vim = true,
  ignore_buffers = {},
})
-- Only necessary if you want to change the default keybindings:
vim.api.nvim_set_keymap('n', '<leader>C', ':HactarOpen --no-auto-commits<CR>', {noremap = true, silent = true})
```

Hactar is a Common Lisp project. This makes it very hackable, and you might want to run it from source.
First get all the dependencies. Mainly you will just need a working Quicklisp environment. This is easiest on Nix; you can use the provided shell.nix and flake.nix.
Arch:
```sh
sudo pacman -Sy git sbcl readline libuv rlwrap pkg-config openssl zlib libyaml libev libevdev
```

Ubuntu:
```sh
sudo apt update
sudo apt install git sbcl libreadline-dev libuv1-dev rlwrap pkg-config libssl-dev zlib1g-dev libyaml-dev libev-dev
```

Now you can run make build to get a build. The build script should download all the Quicklisp dependencies for you.
Hactar will be built to ./bin/hactar. Please report any issues you run into and read the development guide.
- Rules
Rules are how the system prompt is changed in response to code.
- Analyzers
Handle parsing code and extracting details like is-react?
- Processors
Take user input and LLM output and do something with it
- Slash Commands
Slash commands are the primary interactive mechanism in the Hactar REPL, e.g. /add for adding files
- Dot Commands
Dot commands operate on the context/prompt itself, e.g. .mod! to make code changes
- Modes
How Hactar changes the behavior of commands based on context and environment
- MCP
Model Context Protocol -- this is the standard for tool calling that LLM providers have settled on.
- ACP
Agent Context Protocol -- An extension of MCP for agents
Hactar is developed on GitHub and the primary route for help is via Discussions and Issues. You can also hop into the Discord and ask questions.
When reporting issues, please include your Hactar version, model, settings, etc. You can get all of this info (version, model, context, settings) with the /dump command.
Hactar extensions are simple Lisp files that get copied into a folder, similar to the extension systems you find in the Emacs or Neovim ecosystems.
To install one, simply clone it onto your path somewhere (e.g. ~/.config/hactar/plugins) and add it to your hactar.lisp:
```lisp
(load "myplugin/core.lisp")
```

You can manage your plugins using git. Here is an example Makefile to update all plugins:
```make
PLUGINS_DIR ?= ~/.config/hactar/plugins

update-plugins:
	@echo "Updating plugins in $(PLUGINS_DIR)..."
	@for dir in $(PLUGINS_DIR)/*; do \
		if [ -d "$$dir/.git" ]; then \
			echo "--> Updating $$dir"; \
			(cd "$$dir" && git pull); \
		fi \
	done
	@echo "Done."

.PHONY: update-plugins
```

Pass the --watch flag to hactar and then use the following syntax in comments:
- AI!: Trigger Hactar to make changes. Describe the changes in the comment line and following lines.
- AI?: Trigger Hactar to answer questions about the code.
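For example, in watch mode a comment might look like this (a sketch; the function and the requested change are made up):

```shell
# AI! Rewrite this function to also strip trailing whitespace.
trim_leading() {
  echo "$1" | sed 's/^ *//'
}
trim_leading "  hello"
```

Hactar sees the AI! comment, reads the surrounding code, and proposes the change.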
Use the scan command:

```sh
hactar scan
```
You can pass the format flag to control what hactar outputs. For example, here is how to generate an org-mode file describing the API of the codebase:
```sh
hactar scan -f org-mode
```

Hactar has three model types:
main/current/default model: This is the default model used for most tasks
cheap-model: This is the model used for programmatic and architecture tasks, for example generating JSON metadata from documentation.
embedding-model: The model used for embedding
You can list available models with:
/models
You can use an AGENTS.md, AGENTS.org, .hactar.guide.md, or .hactar.guide.org in the project's root. It will be added to the context.
The main workflow for hactar is:
1. Run it in a git project:

```sh
hactar
```

2. Add files using /add
3. Type your instructions/prompt
4. Hactar generates git patches and applies them based on the prompt
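Put together, a session might look like this (a sketch; the file name and prompt are made up):

```
$ hactar
/add src/auth.ts
Add rate limiting to the login handler.
```

Hactar replies with SEARCH/REPLACE blocks, applies them, and (with autocommit enabled) commits the result.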
Hactar can be extended with rules. The simplest form of rules is just a file named .hactar.rules. This will be added to the context automatically.
This isn't particularly useful on its own, though. To make rules useful you need to have them triggered by something. Hactar provides a large variety of hooks and triggers for rules. Let's apply a rule when the project stack contains React:
```lisp
(defrule :prefer-custom-hooks
  "Prefer custom hooks over libraries where possible."
  :triggers '(:stack (:react)))
```

Hactar is pretty good at extending itself. You can give rule generation a try with the /gen-rule command. Hactar will attempt to write a rule based on your instructions, then add it to your user.lisp file. Please read any rules it generates first and don't blindly accept them.
A core feature of Hactar is tight integration with a repository of reference docs. Most of the time documentation is automatically added to context as needed. You can manually add docs using /docs-add and you can drop docs from context using /docs-drop (or /drop).
List known documentation relevant to the current project stack. This basically uses the *docs* global. If you want to look up all documentation, not just relevant docs, use /docs-db and /search-add-files.
Add documentation from file, URL, package name (@p/name@ver), or fzf selection. Optionally include metadata:
```
/docs-add <source> -tags=js -covers=react
```

Ask the LLM to find documentation for a query, fetch it, and add it.
Delete all documents from the database (requires confirmation).
Show documentation currently added to the context.
Search documentation database by text query and add selected doc to context.
Guess relevant documentation based on the last added file's content.
Generate documentation for the current context using the LLM.
Modes are how Hactar hard codes knowledge instead of relying on an LLM for every little thing. Modes let Hactar know how to look up API docs and tooling specific to the type of project you are working on. Hactar, for example, can detect when a React project is being worked on and then automatically add the React API docs and a list of links to tutorials. A mode is a collection of prompts, analyzers, processors, hooks, and custom hardcoded dev tooling. Think LSP if it was developed in the LLM era.
You can manually enable and disable modes with /mode. You can list currently enabled modes with /modes.
Analyzers take code files and analyze them using a combination of hard-coded logic and LLMs. By default they are enabled and disabled automatically by modes.
Processors in Hactar handle processing the output from the LLMs.
For example, here is the code that handles search-and-replace responses:
```lisp
(def-processor search-replace-processor (history)
  "Parses and applies SEARCH/REPLACE blocks from the last assistant message."
  (when history
    (let ((last-message (car (last history))))
      (when (string= (cdr (assoc :role last-message)) "assistant")
        (let* ((content (cdr (assoc :content last-message)))
               (blocks (parse-search-replace-blocks content)))
          (when blocks
            (format t "~&Applying ~A SEARCH/REPLACE block(s)...~%" (length blocks))
            (apply-search-replace-blocks blocks)
            (format t "~&Finished applying blocks.~%")))))))
```

Guides are READMEs for agents. A lot like Claude.md or AGENTS.md, but in Hactar they are smart and hooked up to a knowledge repository. You can think of them like developer guides for a stack: they cover everything from best practices and how to write tests for that specific stack to API documentation.
Hactar doesn't just add these files to your context, it includes first class tooling for writing, editing, modifying, and slicing them up. It has special agent features and prompts to intelligently add and modify them.
You can add one to your current project by writing an AGENTS.md, AGENTS.org, .hactar.guide.org, or .hactar.guide.md. By default it assumes you only want to load one, and .hactar.guide files will be prioritized.
The global XDG_CONFIG_DIR/hactar/.hactar.org will be loaded together with any project-specific one.
Print the content of the currently active guide file.
Search docs tagged 'api', select one, and add/update it under the 'APIs' (:apis:) headline.
Search docs tagged 'docs', select one, and add/update it under the 'Documentation' (:docs:) headline.
Search docs tagged 'example', select one, and add/update it under the 'Examples' (:examples:) headline.
List available .guide. files and select one to activate.
Generate or update the .hactar.guide.* file using the LLM.
Update the .hactar.guide.* file using the LLM based on the current context.
Agents in Hactar are instances of Hactar that act on a task in a loop with the REPL. For example, you might have a lint agent that automatically fixes lint issues.
Hactar comes with several built-in agents and is designed to make building new agents quick and easy. Often you can even have Hactar one-shot a new task-specific agent.
You run agents with the command /agent-run or agent.run. Agents are simple functions created using the defagent macro.
Example:
```sh
hactar agent.run cmd "bun run typecheck"
```

Runs a test command in a loop and fixes code with test failures using SEARCH/REPLACE blocks.
You can use gen.project.config to automatically generate a project config with the correct test command. That way you don't need to keep adding stuff like package.json to the context.
Runs a typecheck command in a loop and fixes code with failures using SEARCH/REPLACE blocks.
You can use gen.project.config to automatically generate a project config with the correct typecheck command. That way you don't need to keep adding stuff like package.json to the context.
Runs a lint command in a loop and fixes code with lint failures using SEARCH/REPLACE blocks.
You can use gen.project.config to automatically generate a project config with the correct lint command. That way you don't need to keep adding stuff like package.json to the context.
Enable assistant mode for visual interaction.
Enable TTS audio output for assistant's extractions (used with --assistant).
Author name. Defaults to the value of the HACTAR_AUTHOR environment variable.
Enable all agent-like auto features (--auto-lint, --auto-test, --auto-typecheck).
Enable automatic linting after code changes.
Enable automatic testing after code changes.
Enable automatic type checking after code changes.
List of analyzers to disable (space-separated). [default:]
List of analyzers to enable (space-separated). [default:]
Set the model used for generating embeddings. Defaults to nomic-embed-text.
Gotcha: The embedding model is Ollama-only. Do not prefix the model.
Generate a shell command from the query, execute it, and print its output.
Display usage information and exit.
Port for the HTTP API server [default: 4269]
Turns off all the features that prevent Hactar from destroying your codebase or system. Not available except as a flag. It allows Hactar to run as an agent without being in a VM, without a git repository, and to execute any commands the LLM decides it wants to. Please don't use it.
Project name. Defaults to current directory name.
File path to write assistant's text extractions (used with --assistant).
Display version and exit.
Path to the models configuration file (models.yaml)
Generate a shell command from the query, print it, and copy it to clipboard.
LLM model to use (e.g., ollama/qwen3:14b) [default: ollama/qwen3:14b]
Port for the Slynk server [default: 4005]
Send a query to the LLM, print the result, and exit.
Watch files and make code changes when AI comments are detected.
- A comment with AI! will trigger code changes
- A comment with AI? can be used to ask questions
Use Gemini Pro 2.5 (gemini/gemini-2.5-pro-exp-03-25)
Use free Gemini Pro Experimental via OpenRouter (openrouter/google/gemini-2.5-pro-exp-03-25:free)
Use Anthropic Sonnet model (anthropic/claude-3-7-sonnet-20250219)
Use OpenAI o4-mini model (openai/o4-mini)
Use Deepseek Chat model via OpenRouter (openrouter/deepseek/deepseek-chat-v3-0324)
Use free Deepseek Base model via OpenRouter (openrouter/deepseek/deepseek-v3-base:free)
Hactar commands are divided into three types:
- Slash commands: these are Hactar commands and change the state of Hactar, e.g. /add
- Dot commands: these operate on the context and instruct the LLM. They might, for example, create a new file.
- Hactar subcommands: commands you pass to hactar on startup, e.g. create
Initialize Hactar: clone repo and install default prompts and models.
Display comprehensive help information about Hactar.
Run environment and setup checks and exit with appropriate status.
List all known documentation in the database and select one. In non-interactive mode, prints the path of the selected doc.
Clone the Pro repo (if needed) and copy the selected content DB into db-path. Options: --content, -c VALUE Select which content DB to copy (default: all).
Update the Pro repository by running 'git pull'. Clones first if missing.
Run environment/setup checks (alias for run-all-checks-and-report).
Run an agent by name. Usage: hactar agent.run <agent-name> [agent-args…]
Stop a running agent by its ID. Usage: hactar agent.stop <agent-id>
List currently running agents.
GitHub Copilot API commands: models, authorize, complete.
Generate a shell command from a query, print it, and copy it to the clipboard.
Generate a shell command from a query and execute it immediately.
NPM package management and documentation.
Fetch news from news.ycombinator.com (Hacker News).
Import documentation from various sources (npm, file, http, github, etc.).
Import a text file into the documentation database.
Import documentation (alias of /docs.import).
Import a starter document and automatically tag it with 'starter'.
Run a shell command and optionally add the output to the chat.
Add files or images to the chat. If no arguments are given, uses fzf to select a file. Can provide image descriptions via -descriptions="desc1,desc2"
Run an agent. With no arguments, it shows a selector. Otherwise, runs the agent named in the first argument.
Select and stop a running agent.
Ask questions about the code base without editing any files.
Manually trigger assistant screen analysis with an optional prompt.
Toggle git autocommit on/off.
Set the cheap model to use when cost is a concern.
Clear the chat history.
Make changes and refactors to code.
Complete the provided text using the configured completion model.
Manually trigger chat history compression.
List files in the current context.
Expose the current context to hactar.{pid}.context.org and keep it synchronized.
Copy the last assistant message to the clipboard.
Estimate the cost of sending the current chat history as input.
Create a new project from a starter. Usage: /create <starter> <prompt…>
Create a new Agent project using the AgentStarter guide. Usage: /create.agent [prompt…]
Toggle debug mode for both hactar and llm packages.
Find and select documentation relevant to the current project stack. In non-interactive mode, prints the path of the selected doc.
Add documentation from file, URL, package name (@p/name@ver), or fzf selection. Optionally include metadata: /docs-add <source> -tags=tag1,tag2 -covers=cover1
Ask the LLM to find documentation for a query, fetch it, and add it.
Delete all documents from the database (requires confirmation).
Show documentation currently added to the context.
Search documentation database by text query and add selected doc to context.
Guess relevant documentation based on the last added file's content.
Import a text file into the documentation database. Usage: /docs.import <uri> -tags=tag1,tag2 -covers=cover1
List all known documentation in DB and select one. In non-interactive mode, prints the path of the selected doc.
Set the model to use for generating documentation metadata.
Remove files or images from the chat session.
Remove an image from the context by its path.
Dump settings, context, and debug info
Print out the API keys for each platform.
Dump the raw context that the LLM sees.
Print out the current dot system prompt.
Open an editor to write a prompt.
Generate a .hactar.toml for the current project using the LLM, based on repository context and files added.
Print the content of the currently active guide file.
Search docs tagged 'api', select one, and add/update it under the 'APIs' (:apis:) headline.
Search docs tagged 'docs', select one, and add/update it under the 'Documentation' (:docs:) headline.
Search docs tagged 'example', select one, and add/update it under the 'Examples' (:examples:) headline.
List available .guide. files and select one to activate.
Generate or update the .hactar.guide.* file using the LLM.
Display available commands and their descriptions.
List images currently in the context.
Import documentation from various sources (npm, file, http, github, etc.). Usage: /import <uri> -tags=tag1,tag2 -covers=cover1
Import documentation (alias of /docs.import). Usage: /import.docs <uri> -tags=tag1,tag2 -covers=cover1
Import a starter document and automatically tag it with 'starter'. Usage: /import.starter <uri> -tags=tag1,tag2 -covers=cover1
List all known files and indicate which are included in the chat session.
Switch to a new LLM. Uses fuzzy-select if no model name is provided.
Search the list of available models.
Plays an audio file. Usage: /playaudio <filepath>
Exit the application.
Reload hactar. Clear chat history, empty context, and reload the config.
Re-run a non-daemon watcher command.
Drop all files and clear the chat history.
Run a shell command and optionally add the output to the chat.
Search GitHub for code snippets based on a query. Usage: /search <natural language query for code>
Search for files containing a text pattern using rg, and add them to context.
Set the model used for the /complete command and HTTP endpoint. Uses fuzzy-select if no model name is provided.
Print out the current settings.
Generate a shell command based on a query and prompt for confirmation before running.
Generate a shell command based on a query and execute it immediately.
Search for starters by text query, allow selection, and add the selected starter to the documentation context.
Report on the number of tokens used by the current chat context.
Manually call a defined tool. Usage: /tool-call <tool-name> <json-args-or-key=value-pairs>
List available tools and their descriptions.
View or manage the chat transcript.
Revert the last git commit made by hactar.
List active daemon watchers and stop the selected one.
Print the version information.
List available watchers and start the selected one.
Usage: cat <file1> [file2 …]
Displays content of specified files from the virtual context. The LLM will act like the OS 'cat' command. If a colon is appended to the filename, it is treated as a query on the file. For example:
cat src/components/Chessboard.tsx:imports should return the imports from that file.
Usage: cmd <description of command>
Asks the LLM to generate a shell command based on the description. Prompts for confirmation before running.
Usage: cmd! <description of command>
Asks the LLM to generate a shell command and runs it immediately.
Usage: convert <filepath> <target-format-or-conversion-description>
Convert the content of the file or files from one format to another. Return each file as an org-mode source block. Example: convert src/components/Chessboard.tsx mdx should return Chessboard.tsx as MDX.
Usage: .create <FILE>
Create a new file in the virtual file system. The response should be a SEARCH/REPLACE block with an empty SEARCH section.
Usage: ls [path]
Lists directory contents. Acts like the OS 'ls' command.
When used on a file it should act like the 'cat' command.
Usage: md <file-or-directory> [file-or-directory…]
Like cat/ls but displays content of specified files, formatted as markdown source blocks.
Usage: modify <filename> <description of changes>
Asks the LLM to generate modifications for the specified file. The response should be a SEARCH/REPLACE block, which will be processed automatically.
Usage: org <file-or-directory> [file-or-directory…]
Like cat/ls but displays content of specified files, formatted as Org-mode source blocks.
Usage: set key=value
Set meta details, e.g. set project.description would set the project description.
Usage: .| <dot command 1 with args> | <dot command 2 with args expecting input>
Pipes the text output of the first dot command as the final argument to the second dot command. Example: .| .cat myfile.txt | .modify anotherfile.txt The content is:
Hactar has multiple paths to configure things. This gives you flexibility in how you configure it. Want to just use environment variables? Want to write some config? Want to fully modify everything using Lisp? All three scenarios are covered.
Config is applied in the following order of precedence (lower numbers are overridden by higher numbers):
1. User customization files (~/.config/hactar/user.lisp or .hactar.user.lisp)
2. User config (~/.config/hactar/config.toml)
3. Project configuration file (.hactar.toml)
4. Environment variables
5. CLI flags
These variables can be set in your shell to configure Hactar's behavior.
HACTAR_AUTHOR: Sets the author name for the project, used in generated content or commits.
Example:
export HACTAR_AUTHOR="Your Name"
HACTAR_CONFIG_PATH: Specifies the path to Hactar's configuration directory.
Default: ~/.config/hactar
Example:
export HACTAR_CONFIG_PATH="/path/to/your/hactar_config_dir"
HACTAR_DATA_PATH: Specifies the path to Hactar's data directory (starters, prompts, etc.)
Default: ~/.local/share/hactar
Example:
export HACTAR_DATA_PATH="/path/to/your/hactar_data_directory"
HACTAR_REPO_URL: Specifies the URL to clone the Hactar repo from. Use your own custom Hactar!
Default: git@github.com:hactar-project/hactar.git
HACTAR_REPO_DIR: The folder where Hactar is cloned to.
Default: ~/.local/share/hactar-repo
PIPER_MODEL_PATH: The full path to the Piper TTS model file (e.g., .onnx). This is required for the assistant's audio features (--audio).
Default: ~/.config/hactar/speech/models/en_US-amy-low.onnx
Example:
export PIPER_MODEL_PATH="/path/to/your/model.onnx"
- OPENAI_API_KEY: The API key for OpenAI models
- ANTHROPIC_API_KEY: The API key for Anthropic models
- GEMINI_API_KEY: The API key for Vertex/Google AI Studio
- OPENROUTER_API_KEY: The API key for OpenRouter
- HACTAR_DB_PATH: The path to the sqlite Hactar database. Defaults to XDG_DATA_DIR/hactar/hactar.db
- AGENT_SAFE_ENV: Whether or not the environment is safe for running agents that might do things like delete files. Setting this to true or 1 will also enable --auto-cmds. Hactar will act on its own and potentially destroy systems. Please, for the love of god, don't enable this outside of a VM.
- HACTAR_PRO_PATH: The path to the content repository for pro features, which by default is XDG_DATA_DIR/hactar/pro
- HACTAR_UTILS_PATH: Specifies the directory where Hactar Pro utility scripts are symlinked. If this directory is in your shell's PATH, you can run scripts like scripts-list directly.
- HACTAR_SHELL: The shell to use when running commands. Falls back to $SHELL when not set, and then to bash.
- HACTAR_MODEL: The model to use. Defaults to ollama/qwen3:14b
- HACTAR_CHEAP_MODEL: The model used for cheap parsing tasks. Defaults to ollama/qwen3:14b
- HACTAR_EMBEDDING_MODEL: The model used for generating embeddings. Defaults to ollama/nomic-embed-text. Note: Only the Ollama embedding API is currently supported.
- HACTAR_COMPLETION_MODEL: The model used for completion. Defaults to ollama/qwen3:14b
- HACTAR_DOCS_META_MODEL: The model used for generating the metadata for documentation. Defaults to the value of cheap-model.
- HACTAR_GUIDE_PATH: Path to a plaintext file to include in the context. Use it to override an AGENTS.md file or .hactar.guide.org in the project's root.
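For example, to point the main model at OpenRouter while keeping the cheap and embedding models on their Ollama defaults (model names taken from the defaults above):

```shell
export HACTAR_MODEL="openrouter/deepseek/deepseek-chat-v3-0324"
export HACTAR_CHEAP_MODEL="ollama/qwen3:14b"
export HACTAR_EMBEDDING_MODEL="ollama/nomic-embed-text"
```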
Project-specific config can be placed in the current folder in a file named .hactar.toml.
pro (String): Path to the Hactar Pro content repository.
Overrides: HACTAR_PRO_PATH
Default: XDG_DATA_DIR/hactar/pro
hactar_config (String): Path to Hactar's main configuration directory.
Overrides: HACTAR_CONFIG_PATH
Default: ~/.config/hactar
hactar_data (String): Path to Hactar's main data directory.
Overrides: HACTAR_DATA_PATH
Default: ~/.local/share/hactar
hactar_repo_url (String): URL to the Hactar repo.
Overrides: HACTAR_REPO_URL
Default: git@github.com:hactar-project/hactar.git
hactar_repo_dir (String): Path to clone Hactar to.
Overrides: HACTAR_REPO_DIR
Default: ~/.local/share/hactar-repo
database (String): Path to the Hactar sqlite database file.
Overrides: HACTAR_DB_PATH
Default: XDG_DATA_DIR/hactar/hactar.db
piper_model (String): Full path to the Piper TTS model file for audio features.
Overrides: PIPER_MODEL_PATH
Default: ~/.config/hactar/speech/models/en_US-amy-low.onnx
author (String): The author name for the project, used in generated content or commits. Overrides the HACTAR_AUTHOR environment variable.
language (String): The primary programming language of the project.
Example: language = "python"
stack (Array of Strings): A list of technologies, frameworks, or libraries used in the project.
Example: stack = ["react", "typescript", "vite"]
guide_extension (String): The default file extension for guides generated by /guides-gen.
Default: "org"
Example: guide_extension = "md"
guide_exclude_tags (Array of Strings): A list of Org-mode tags. Headlines containing any of these tags will be excluded from the context provided by the active guide file.
Default: ["nocontext"]
Example: guide_exclude_tags = ["internal", "draft"]
embedding_model (String): Set the model used for generating embeddings.
Default: nomic-embed-text
Example: embedding_model = "nomic-embed-text"
Gotcha: The embedding model is Ollama-only. Do not prefix the model.
guide (String): Path to the guide file to load. Recommended to use AGENTS.md or .hactar.guide.org instead.
test (String): The command used to run the project's test suite. Used by auto-test features and watchers.
Example: test = "npm test"
lint (String): The command used to run the project's linter. Used by auto-lint features.
Example: lint = "npm run lint"
typecheck (String): The command used to run the project's type checker. Used by auto-typecheck features.
Example: typecheck = "npm run typecheck"
safe_env (Boolean): If true, allows the agent to perform potentially destructive operations like executing arbitrary shell commands or deleting files. This is intended for use in controlled environments like VMs.
Overrides: AGENT_SAFE_ENV
Default: false
Warning: Enabling this can lead to data loss.
You can enable or disable specific file analyzers for the project.
name (String): The name of the analyzer to configure (e.g., "package-json").
enable (Boolean): Set to true to enable the analyzer or false to disable it for this project.
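A minimal sketch, using the analyzer name shown above:

```toml
[[analyzers]]
name = "package-json"
enable = false
```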
Configure Hactar's agent-like automation features.
lint (Boolean): Enable/disable automatic linting.
test (Boolean): Enable/disable automatic testing.
typecheck (Boolean): Enable/disable automatic type checking.
docs (Boolean): Enable/disable automatic documentation features.
suggest_commands (Boolean): Enable/disable automatic command suggestion.
cmds (Boolean): Enable/disable automatic execution of shell commands. Warning: This is highly dangerous.
all (Boolean): Enable/disable all auto features at once.
limits (Integer): Set the retry limit for agent loops.
```toml
[paths]
database = "/home/user/dev/my-project/hactar.db"

[project]
author = "Your Name"
language = "typescript"
stack = ["react", "vite", "tailwind"]
guide_extension = "md"
guide_exclude_tags = ["nocontext", "ignore"]

[project.commands]
test = "npm test"
lint = "npm run lint"

[agent]
safe_env = false

[auto]
lint = true
test = true
limits = 3

[[analyzers]]
name = "package-json"
enable = true

[[analyzers]]
name = "react-dependency"
enable = true
```

If you prefer not to use environment variables, you can configure the API keys for each platform in a TOML file:
[api_keys]
openai = "sk-..."
anthropic = "sk-ant-..."
gemini = "..."
openrouter = "sk-or-..."

You can customize anything in Hactar using Lisp. Hactar looks for this file at .hactar.user.lisp in the project root first, then at ~/.config/hactar/user.lisp.
*debug* (Boolean): If t, enables verbose debug output. Default: nil
*git-autocommit* (Boolean): If t, Hactar will automatically create a git commit after applying changes from SEARCH/REPLACE blocks. Default: t
*http-port* (Integer): The port for the HTTP server, which provides API endpoints for integrations. Default: 4269
*mcp-port* (Integer): The port for the MCP server, which provides API endpoints for integrations. Default: 4369
*test-command* (String): The default command to run for the test watcher. Default: "make test"
*transcript-file* (String): The name of the file where the chat history transcript is saved. Default: ".hactar.transcript.json"
*shell* (String): The shell to use for running commands. Default: "bash"
*chat-history-limit* (Integer): The maximum character limit for the chat history before it is automatically compressed. Default: 8000
*multiline-mode* (Boolean): Toggles multiline input mode. Default: nil
*docs-folder* (String): The default folder to look for local documentation files when using /docs-add without arguments. Default: "docs/"
*max-content-chars* (Integer): Maximum character length for a file's content before it gets split (used by some internal functions). Default: 30000
*image-max-size-mb* (Integer): Maximum size in megabytes for an image file before a warning is issued. Default: 1
*guide-warn-chars* (Integer): Character limit for an active guide file's content before a warning is issued. Default: 30000
*guide-max-chars* (Integer): The hard character limit for an active guide file. If a file exceeds this, it cannot be activated. Default: 100000
*guide-file-extension* (String): The default file extension for guides generated with /guides-gen. Default: "org"
*guide-exclude-tags* (List of Strings): A list of tags that exclude headlines from the active guide file context. Default: '("nocontext")
*silent* (Boolean): If t, suppresses all non-essential output, including chat and model responses. Primarily used in execute mode when generating shell commands. Default: nil
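Because these are ordinary Common Lisp special variables, you can override them from your .hactar.user.lisp. A minimal sketch, assuming plain setf is sufficient (the values below are illustrative only; the variables themselves are documented above):

```lisp
;; .hactar.user.lisp -- illustrative overrides of documented variables
(setf *debug* t)                   ; verbose output while hacking on Hactar
(setf *test-command* "npm test")   ; test watcher command for this project
(setf *guide-file-extension* "md") ; generate markdown guides instead of org
(setf *chat-history-limit* 16000)  ; compress chat history later than the default
```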
These are primarily configured via command-line flags but can be set in Lisp.
*assistant-output-file* (Pathname or String): If set, the assistant's text extractions will be written to this file. Default: nil
*assistant-audio-enabled* (Boolean): If t, enables Text-to-Speech (TTS) audio generation for assistant responses. Default: nil
*assistant-audio-muted* (Boolean): If t, temporarily mutes the assistant's audio output. Default: nil
*piper-model-path* (Pathname or String): Path to the Piper TTS model. Can also be set with the PIPER_MODEL_PATH environment variable. Default: path from PIPER_MODEL_PATH or ~/speech/models/en_US-amy-low.onnx
*assistant-previous-image-description* (String): The default text description to use for screenshots taken in assistant mode. Default: "Screenshot of the currently focused window."
You can configure and add models in ~/.config/hactar/models.yaml.
Here is an example with all possible config values:
- name: anthropic/claude-3-7-sonnet-20250219 # the model name
  edit_format: diff # which edit format to use; defaults to diff
  model_name: claude-3-7 # short name for the model
  extra_params: # extra stuff passed in HTTP requests
    extra_headers:
      anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
  max_output_tokens: 8192
  max_input_tokens: 80000
  cache_control: true # whether or not to cache requests
  supports: ["vision"] # an array of things the model supports

As of July 2025, MCP and tooling decrease model performance: the more tools added, the lower the performance. While we have some support for MCP and tool calling, their use is discouraged until this performance decrease is a solved problem. My guess is that as models are trained on tool calling, the tradeoff will swing long term. But for now, it is best to keep MCP usage to an absolute minimum.
Hactar is developed on GitHub and the primary route for help is via Discussions and Issues. You can also hop into the Discord and ask questions.
When reporting issues, please include the Hactar version, model, settings, and so on. You can get all this info (version, model, context, settings) with the /dump command.
Hactar is designed to not trip over other instances. You can launch a bunch of instances and get them running simultaneously.
That said, I discourage agent-style approaches to coding. You will be more productive by treating Hactar as a tool: what you are working on in your head is what Hactar should be working on. Use multiple instances more like workspaces, to maintain different context windows or even branches, but don't use them as agents. Like all LLM tools, Hactar can get stuck banging against a wall, making no progress while it eats up tokens. Hactar is meant to augment you.
org-mode has built-in support and standards for literate workflows. In Markdown, literate features are tacked on. We rely on things like src blocks for adding and removing files from context.
Use hactar check to make sure everything is good in your environment. Read the error messages:
Error starting Slynk server: <details>
The Slynk server, used for live Lisp development, failed to start. This is often because the specified port (default: 4005) is already in use by another application.
Stop the process using the port, or specify a different port for Hactar using the --slynk-port <port> command-line argument.
Failed to copy to clipboard. Neither 'wl-copy' nor 'xclip' found in PATH.
The `/copy` command was used, but Hactar could not find the necessary command-line tools (`wl-copy` for Wayland, `xclip` for X11) to interact with the system clipboard.
Install `wl-copy` or `xclip` using your system's package manager.
API Error: HTTP <status_code> - <reason>
The request to the LLM provider's API (OpenAI, Anthropic, etc.) failed with an HTTP error. Common causes include:
404 Not Found: Incorrect model name specified.
429 Too Many Requests: You have exceeded your API rate limit.
5xx Server Error: A problem on the LLM provider's end.
Verify the model name in your `models.yaml` configuration. Check the provider's status page and your account's rate limits.
Error deleting documents: <details>
An error occurred while trying to delete records from the `documents` table in the PostgreSQL database. This could be due to permissions issues or a database connection problem.
Ensure the database user specified in the environment variables (`HACTAR_DB_USER`, etc.) has DELETE permissions on the `documents` table. Check that the database is running and accessible.
source, title, and content are required fields for docs-create.
An attempt was made to create a document using `/docs-add` without providing the necessary information. This is an internal error and should not typically be seen by users.
Ensure the source file or URL for `/docs-add` is valid and readable.
Failed to generate embedding for document chunk: <title>
When adding a document, Hactar failed to get a vector embedding from the configured Ollama model (e.g., `nomic-embed-text`). This usually means the Ollama server is not running or the embedding model is not available.
Make sure your local Ollama server is running (`ollama serve`) and that you have pulled the required embedding model (`ollama pull nomic-embed-text`).
Error: Failed to find git repository root.
Hactar was started in a directory that is not part of a Git repository. By default, Hactar requires a Git repository to operate safely.
Run Hactar from within a directory that has been initialized with `git init`. Alternatively, you can pass the `--livedangerously` flag to suppress this error, but this is not recommended as Hactar may make irreversible changes.
hactar --livedangerously

Error: Search block not found in <file>
The LLM generated a SEARCH/REPLACE block, but the content in the `SEARCH` section could not be found in the target file. This happens if the file has been modified since it was added to the context, or if the LLM hallucinated the file's content.
Re-add the file to the context with `/add <file>` to provide the LLM with the latest version, then ask for the modification again.
Parent entry with ID '<id>' not found. / Sibling entry with ID '<id>' not found.
An operation in a guide file (e.g., `/guide-add-example`) tried to find a headline with a specific `:ID:` property in its properties drawer, but it was not found. This can happen if the guide file's structure has been manually changed.
Check the specified guide file to ensure the headline with the required tag (e.g., `:examples:`) exists and has an `:ID:` property. If not, you may need to regenerate the headline by re-running the command that created it.
Warning: New org string to insert contains no headlines.
An internal operation tried to insert content into a guide file, but the content to be inserted was not a valid Org-mode headline. This is an internal warning and usually not critical.
No action is typically needed. The operation will be skipped.
Error taking screenshot with niri/wl-paste: <details>
Hactar's assistant mode failed to take a screenshot of the focused window. This can happen if `niri` is not your window manager or if `wl-paste` is not installed.
Currently, assistant mode screenshotting is only supported on the `niri` window manager with `wl-paste` installed.
Error: Piper model '<path>' not found. Cannot generate TTS.
Assistant mode audio is enabled, but the specified Piper TTS model file could not be found.
Ensure the Piper model file exists at the path specified by the `PIPER_MODEL_PATH` environment variable (default: `~/speech/models/en_US-amy-low.onnx`).
Model configuration missing required 'name'/'model_name' field
The `models.yaml` configuration file has an entry that is missing a required field.
Edit `~/.config/hactar/models.yaml` and ensure every model entry has at least a `name` (e.g., `openai/gpt-4o`) and a `model_name` (e.g., `gpt-4o`).
The biggest way to avoid file editing issues is to use a SOTA model. As of July 2025, most of the big models handle diff blocks with 95%+ accuracy. Other models can struggle more.
When you encounter a block that can't be applied, often the simplest solution is to just retry it. You can do this with /retry. If you are using a model that frequently needs retries, you can set auto-retry to true and even increase retry-limit.
By default this is off because, in my experience, bad edits now tend to be caused by overly complex context windows. You often can't get the model to fix a mistake with a retry loop; you need to start over.
Use /drop to drop files.
Use /clear to drop all files and clear history.
Use /reset to return all settings to what they were when you booted Hactar.
Occasionally you will get errors from an API provider. If trying again doesn't resolve the issue, the first step is to check that the provider works with Hactar. You can execute a single prompt from the command line and get a response:
hactar -m "provider/modelname" -e "Write hello world"

Run hactar pro.check to check for any issues with Hactar Pro.
If you are having issues with any of the content or extensions from the pro version of hactar, please email me or ping me on the discord.
Context engineering is the new and cool thing in working with LLMs. Many of the distinguishing features between AI tools come down to how they manage context. Some tools have you manage the context with commands, some do it automatically, and so on.
In Hactar we dramatically simplify context engineering by simply using a plain org-mode file. Want to add a file? You can just insert it. Want to operate on your prompt using, e.g., grep? It is just a plain file, so it just works. All of the agent's automatic context management goes through the same file, which doubles as the context API. A single file is simpler for both humans and machines.
You start by enabling context files with the --context-expose flag:
hactar --context-expose
This will expose a context file in your current folder at .hactar.{pid}.context.org. This context file is the source of truth for context: it contains exactly what is passed to the LLM.
Now you can add and drop files. You can do this by both editing the file and using hactar commands.
Using a command:
/add main.lisp
This will insert a src block into .hactar.{pid}.context.org. You'll get something like this:
* Files
** main.lisp
#+begin_src lisp :tangle "main.lisp"
; main.lisp source here
#+end_src
You can re-organize, create headlines, link to src files, copy and paste documentation, etc. Hactar relies solely on the filename and path (that :tangle part) to determine which files are in context. Hactar fully understands org-mode and will work with whatever you throw at it. Edit and write however you like and the changes will be synced back and forth.
The context is an org-mode file; you can edit it and those changes get propagated back to Hactar. You can add files to context with everything from copy and paste to Hactar commands like /add. Ultimately it all ends up as changes to an org-mode file that gets passed to the LLM as part of the system prompt.
Many times the documentation lookup tools and conversion parts of AI tooling fail. What never fails is a copy and paste. Want to add your API docs? Just copy and paste them into the file!
You can also use the /add command and Hactar will attempt to resolve documentation for whatever URI you pass it.
Example of adding documentation for an npm package:
/add npm:react@19.2.0
This will result in a context file like:
* Files
* API Docs
** React
:PROPERTIES:
:VERSION: 19.2.0
:END:
An org-mode version of the React API docs here

Hactar is unique in that it has tight integration with starter kits. The create command is a sort of universal create-react-app for a huge variety of stacks.
By default you just provide some details and then Hactar will use RAG to retrieve matching starter kits and ask you some generated questions.
Create the app:
hactar create "A chess app. Use react-router and modern stack that can be deployed to cloudflare workers"
Select a starter kit:
You will be presented first with a starter kit selection.
Run scripts:
Hactar will generate documentation and scripts for you. All Hactar apps come with a standard set of start, dev, test, lint, and deploy scripts. If you have direnv set up in your shell, these will be automatically symlinked for you and you can run them with start, test, etc. If not, use ./scripts/start.sh.
Read and Use the Docs
Check out the README.md and the .hactar.guide.org for the developer guide to your new codebase. If you are on Emacs, the org file links will point to docs in the Hactar content repo. An app with complete books, API references, and tutorials right from the get-go!
Hactar pro comes with a suite of scripts that all work together to provide little usability improvements.
You can install them with hactar utils.install. To install all of them, use:
hactar utils.install all
List them with:
hactar utils.list
They get symlinked into a folder configured as HACTAR_UTILS_PATH. If you add this folder to your PATH, you can do things like run scripts-list to select a script in any Hactar project and run it.
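For example, adding the utils folder to your PATH in your shell profile might look like this (the install location shown is hypothetical; use whatever folder you configured as HACTAR_UTILS_PATH):

```shell
# Point HACTAR_UTILS_PATH at the folder the utils get symlinked into,
# then put that folder on PATH so commands like scripts-list resolve.
export HACTAR_UTILS_PATH="$HOME/.local/share/hactar/utils"
export PATH="$HACTAR_UTILS_PATH:$PATH"
```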
Hactar is as good as any SOTA CLI tool at mapping and documenting a codebase.
Navigate to the codebase:
cd path/to/project
Start hactar:
hactar
Scan the codebase:
hactar scan
This will usually take around 30 seconds. If it takes longer, please report it; my goal is for it to always be as fast as possible.
Ask questions: "give me a high level overview of the codebase"
Ask how to build a specific part "how would I add a route for viewing chess games on a board given a fen and query?"
Hactar is a context-engineering-focused tool. A lot of your daily workflow with Hactar will consist of how you manage context.
Add files with /add.
Remove files with /drop.
Add docs with /docs-add.
Add docs automatically with /docs-guess or by passing the flag --auto-docs.
Hactar can use a combination of hardcoded repo maps, RAG, and queries to an LLM to extract chunks of code. You can use this to query the codebase.
Use /find-code <query> to find code using plain language.
Use /find-code <symbol> to find code matching a symbol.
Example:
/find-code Give me all the code that handles authentication

You can work with Hactar in a pair-programming manner by enabling automatic linting, testing, and type checking. When enabled, Hactar runs these checks after applying code changes and attempts to fix any issues it finds.
Enable features with flags (--auto-lint, --auto-test, --auto-typecheck, or --auto-all), slash commands (/auto-lint, etc.), or your .hactar.toml file.
Write code by typing instructions.
Hactar will make changes and then automatically run the configured checks. If a check fails, Hactar will attempt to fix the code and re-run the check, up to the configured retry limit.
Set the --auto-limits flag or limits in your config to control how many times Hactar will try to fix bugs in a loop.
Use /reset when Hactar gets stuck in a loop.
Configure the linter, test, and typecheck commands with the linter-command, test-command, and typecheck-command variables.
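A sketch of what this looks like in .hactar.toml, using only the keys documented in the configuration reference above (the values and commands are illustrative):

```toml
[auto]
lint = true
test = true
typecheck = true
limits = 3 # retry limit for the fix loop

[project.commands]
test = "npm test"
lint = "npm run lint"
typecheck = "npm run typecheck"
```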
Hactar can also automate other parts of your workflow.
When enabled with --auto-docs or /auto-docs, Hactar will automatically manage documentation context for you. It will guess relevant docs based on files you add and keep your project's guide file up-to-date.
When enabled with --auto-suggest-commands or /auto-suggest-commands, Hactar will proactively suggest shell commands it thinks might be useful based on the conversation.
Hactar has tight integration with a content repository, which gives it the unique capability to write documentation for you. It is not like the documentation generated by other agents: Hactar doc generation uses RAG, so the docs it generates are up to date and have fewer hallucinations.
Open Hactar with hactar.
Generate docs with /docs-gen <instructions>.
Generate the developer guide with /guides-gen.
Update the developer guide with /guides-update.
Open Hactar:
hactar
Create a new branch:
git checkout -b feature-awesome-cats
Give Hactar your instructions for the new feature.
Use /plan to write a plan for implementing the feature.
Use /agent-run to have Hactar implement the plan itself.
Use /add to add images to context and /drop to drop images.
Ask Hactar to analyze the image:
What does this image show?
Use images for context:
Fix the errors in the screenshot
Hactar can be launched in multiple instances, but these can trip over each other if you aren't careful. You can prevent them from tripping over each other by using git worktrees.
# Create a new worktree with a new branch
git worktree add ../project-feature-a -b feature-a

# Or create a worktree with an existing branch
git worktree add ../project-bugfix bugfix-123

# Navigate to your worktree
cd ../project-feature-a

# Run hactar
hactar

# List all worktrees
git worktree list

# Remove a worktree
git worktree remove ../project-feature-a

The docs for worktrees: https://git-scm.com/docs/git-worktree
Hactar has tight integration with shells.
Hactar will automatically suggest commands with --auto-suggest-cmds enabled.
You can run shell commands with /sh, and generate one by passing a query: /sh run tests. /sh! <query> can be used to automatically accept and run the resulting command.
You can run commands in a shell pipeline with hactar sh <query> and hactar sh! <query>.
Set AGENT_SAFE_ENV to true and enable --auto-cmds to let Hactar go wild. Warning: This is highly dangerous to data and should only be done in a VM. Don't let your LLM delete your data, please!
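As a sketch, opting a throwaway VM session into full automation might look like this, using the environment variable and flag documented above (do not run this outside a disposable environment):

```shell
# Opt in to the dangerous agent environment (VM only!)
export AGENT_SAFE_ENV=true
# Then launch Hactar with automatic command execution enabled:
# hactar --auto-cmds
```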
In Emacs and vim you can run suggested commands with one keystroke using keybindings.
Hactar can be used in scripts with hactar sh! <query>. It will return a shell command or output that can be chained. For example:
cat cats.txt | hactar sh! "return the cats as json" | jq

Often you will need to keep re-using documentation and references. Hactar includes features that help automate this cycle by parsing your docs in the background.
You can have Hactar generate this for you:
/docs-gen <some extra instructions>
Now attach the guide to context:
/add .hactar.guide.org
As long as the file is attached to context, Hactar will use it as a source of truth. Links will be added to, e.g., point to API references in the Hactar repo. Then when Hactar needs an API reference, it will use that link to look it up.
hactar.el provides some useful wrapping around the hactar create command.
Use M-x hactar-create to create a new project. You will then be asked to select a stack/starter and then asked for more info.
Use C-u M-x hactar-create to edit the starter kit before generating the project.
Use hactar-org-mode keybindings to edit the file:
M-x hactar-insert-docs to insert docs
M-x hactar-lookup-docs to look up relevant docs
M-x hactar-create-finish to save the file and build the project
From any buffer you can use the hactar-transient-menu to add things to the current hactar instance.
Add the current buffer with M-x hactar-add.
Drop the current buffer with M-x hactar-drop.
Add all open files in the window with M-x hactar-add-window.
Drop all the current files in the window with M-x hactar-drop-window.
Clear the history with M-x hactar-clear.
Reset everything with M-x hactar-reset.
Type commands to get autocomplete.
Use M-x hactar-docs-at-point to look up docs for the thing at point.
Use M-x hactar-guide-open to open the guide for the current codebase.
Toggle M-x hactar-auto-docs to automatically update the guide.
Use M-x hactar-docs-find to look up docs.
Use M-x hactar-docs-raw-search to search through all the Hactar docs using ripgrep.
Enable global-auto-revert-mode to see files and documents update automatically
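In your Emacs init that is one line (global-auto-revert-mode is a standard built-in Emacs minor mode, not a Hactar feature):

```lisp
;; Revert buffers automatically when the underlying files change on disk,
;; so Hactar's edits to context and guide files show up immediately.
(global-auto-revert-mode 1)
```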
Starter kits are stored in HACTAR_STARTERS_PATH which by default is ~/.config/hactar/starters. You can edit any of them using org-mode. Use M-x hactar-rag-update-index to update the Hactar database after making changes.
You can add and edit docs in HACTAR_DOCS_PATH which by default is ~/.config/hactar/docs. You can edit any of them using org-mode. Use M-x hactar-rag-update-index to update the Hactar database after making changes.
Use :HactarCreate to create a new project using a starter.
Use :HactarAdd to add the current buffer.
Drop the current buffer with :HactarDrop.
Use :HactarAddAll to add all the files in your Neovim instance.
Use :HactarClear to clear history.
Use :HactarReset to drop all files and clear history.
Use :HactarDocsAtPoint to look up docs for the thing at point.
Use :HactarDocsFind to search for docs.
Use :HactarDocsRawSearch to use ripgrep to search through docs.
Use :HactarGuideOpen to open the developer guide for the project.
Use :HactarToggleAutoDocs to toggle auto-updating the developer guide.
Hactar is designed with minimal security footguns.
Hardcoding where possible makes Hactar more deterministic.
We don't use an LLM just because we can
Commands are parsed and checked against whitelists and blacklists
Hactar never runs commands by default
You must explicitly run commands it suggests
No MCP by default
Bring your own Key
Hactar refuses to operate as an agent unless you explicitly tell it to with environment variables or flags.
By default Hactar acts as an agent when you place it in a VM
Hactar is developed in Common Lisp so that it is maximally hackable and extensible. CL can be a barrier for many, but my hope is that in the LLM era it is less of one, and the benefits outweigh the cons.
The benefit is that Hactar is completely customizable. You could potentially build anything on top of it.
A simple extension to hactar can be done by adding lisp to ~/.config/hactar/user.lisp. Let's add a new function to hactar and a command to print it:
(defun hello-world (name)
  (hactar:output (format nil "Hello World ~A" name)))

(hactar:define-command hello (args)
  "Print hello message to a user."
  (let ((name (format nil "~{~A~^ ~}" args)))
    (hello-world name)))

Here we use the macro define-command to add a command to Hactar. This exposes the hello-world function as the slash command /hello.
To write rules use the defrule macro:
(defrule :prefer-custom-hooks
  "Prefer custom hooks over libraries where possible."
  :triggers '(:stack (:react)))

Hactar is pretty good at extending itself. You can give rule generation a try with the /gen-rule command. Hactar will attempt to write a rule based on your instructions, then add it to your user.lisp file. Please read any rules it generates first and don't blindly accept them.
A plugin in Hactar is just a Lisp file. Instead of auto-loading things, we let you manually determine which plugins you want. You can load a plugin just like you would load a Lisp file:
(load "plugin.lisp")

Hactar uses what is called an analyzer to process your code and determine things like stack, dependencies, lint targets, etc. Analyzers combine a hook (or triggers) with a function. You can write them with defanalyzer. Here is an example of an analyzer that watches for AI comments and then adds those files to the context:
(defun is-ai-comment-event? (pathname event-type)
  "Checks if a file event is relevant for the AI comment analyzer.
Specifically, if the file was added/changed and contains 'AI!' on a line."
  (when (and (member event-type '(:file-added :file-changed))
             (probe-file pathname))
    (let ((content (read-file-content pathname)))
      (when content
        (search "AI!" content)))))

(defanalyzer auto-add-ai-comment-file
    ((*file-event-hook* #'is-ai-comment-event?))
    t
    (pathname event-type)
  "Detects 'AI!' comments in files and automatically adds the file to the context."
  (declare (ignore event-type))
  (debug-log "Auto-add-ai-comment-file: Detected 'AI!' in" pathname)
  (add-file-to-context pathname))

Agents in Hactar are standalone programs that perform autonomous tasks by wrapping Hactar as a library. Unlike the interactive REPL, an agent runs a loop to achieve a specific goal, continuing until a condition is met or it receives a signal to stop (like SIGINT). An agent is simply a Lisp file, making it easy to create and modify.
You can create a new agent using a starter kit with the create.agent subcommand.
To define an agent, you use the defagent macro. This macro sets up the main entry point and loop for your agent.
(defagent my-research-agent (query)
  "An agent that researches a topic and writes a summary."
  (:init (setup-research-environment))
  (:run (perform-research-step query))
  (:stop-condition (research-complete-p))
  (:cleanup (cleanup-research-files)))

Agents can also be controlled via an HTTP/JSON-RPC API, allowing for more complex integrations.
| Method | Endpoint | Description |
|---|---|---|
| POST | /runs | Initiates a new agent run. Requires agent_name, input. Optional: session_id, mode (sync, async, stream). Returns the initial Run object or stream. |
| GET | /runs/{run_id} | Retrieves the current state and details of a specific agent run. |
| POST | /runs/{run_id} | Resumes an agent run in the awaiting state. Requires await_resume data. Optional: mode for the response. |
| POST | /runs/{run_id}/cancel | Requests cancellation of an ongoing agent run. Returns 202 Accepted if cancellation is initiated. |
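As a sketch of a client for this API, here is how a script might build and send the POST /runs request. The field names come from the table above (normalized to snake_case); the base URL, agent name, and helper function names are assumptions for illustration:

```python
import json
import urllib.request

def build_run_request(agent_name, user_input, mode="sync", session_id=None):
    """Build the JSON body for POST /runs: required agent_name and input,
    optional session_id and mode (sync, async, or stream)."""
    body = {"agent_name": agent_name, "input": user_input, "mode": mode}
    if session_id is not None:
        body["session_id"] = session_id
    return body

def start_run(base_url, body):
    # POST the run body to /runs and return the parsed Run object.
    req = urllib.request.Request(
        base_url + "/runs",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example payload (no network call made here):
body = build_run_request("my-research-agent", "Summarize the auth module", mode="async")
print(json.dumps(body, sort_keys=True))
```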
Processors in Hactar handle processing the output from the LLMs.
You write one by using the def-processor macro. You will get the history of the current conversation and the current response will be last on the list.
Here is an example of a processor that handles SEARCH/REPLACE blocks:
(def-processor search-replace-processor (history)
  "Parses and applies SEARCH/REPLACE blocks from the last assistant message."
  (when history
    (let ((last-message (car (last history))))
      (when (string= (cdr (assoc :role last-message)) "assistant")
        (let* ((content (cdr (assoc :content last-message)))
               (blocks (parse-search-replace-blocks content)))
          (when blocks
            (format t "~&Applying ~A SEARCH/REPLACE block(s)...~%" (length blocks))
            (apply-search-replace-blocks blocks)
            (format t "~&Finished applying blocks.~%")))))))

To add a command to Hactar use define-command:
(defun hello-world (name)
  (hactar:output (format nil "Hello World ~A" name)))

(hactar:define-command hello (args)
  "Print hello message to a user."
  (let ((name (format nil "~{~A~^ ~}" args)))
    (hello-world name)))

Hactar uses hooks as the pub/sub method. You can define your own hooks like this:
(nhooks:define-hook-type process-history (function (list) t)
  "Hook run after an LLM response is processed, allowing modification or action based on history.
Handler takes the full chat history.")

(defvar *process-history-hook* (make-instance 'hook-process-history))

Then use them like this:
(nhooks:add-hook *process-history-hook*
                 (make-instance 'nhooks:handler :fn #',name :name ',name))

Writing Dot Commands

Dot commands are a special kind of command that only acts on the LLM text. You can think of them like a virtual file system. Any commands you add will be sent to the LLM as part of the prompt, and the prompt will instruct it how that command should transform the text. An example is a simple ls command that asks an LLM to imagine the contents of a file:

(defdot ls (args)
  "Usage: ls [path]
Lists directory contents. Acts like the OS 'ls' command.
When used on a file it should act like the 'cat' command."
  (let ((full-command (format nil "ls ~{~A~^ ~}" args)))
    (get-llm-response full-command :dot-command-p t)))

Hactar is designed to let you quickly extend import sources. With a development philosophy similar to devdocs, we want you to be able to import any of your common doc sources.
Hactar expects an import source to return three values:
content
title (defaults to the passed uri)
metadata
Only the content is required; metadata will be generated by calling an LLM. Keep in mind, though, that the more you can hardcode, the better for token usage and response times. It is silly to, e.g., pass an entire npm package.json to generate metadata for npm packages. You should return metadata whenever you can. Sources are flexible enough that if you hardcode everything, it is possible to import docs without any LLM round trips.
The simplest source could be a file. We could define one like this:
(defsource file-source
  :pattern "^file:(.+)$"
  :params (filepath)
  :priority 10
  (lambda (filepath)
    (let* ((path (uiop:native-namestring (merge-pathnames filepath *repo-root*))))
      (if (probe-file path)
          (values (uiop:read-file-string path)
                  (file-namestring path))
          (error "File not found: ~A" path)))))

Now you can use it like this:
./hactar import file:~/my-docs/react/myreactdocs.md

This isn't extremely useful. So let's take a look at how we might retrieve a web source:
(defsource http-source
  :pattern "^https?://(.+)$"
  :params (url-path)
  :priority 5
  (lambda (url-path)
    (let* ((url (format nil "http~A://~A"
                        (if (search "https" url-path) "s" "")
                        url-path))
           (content (fetch-url-content url)))
      (if content
          (values content (format nil "Web: ~A" url))
          (error "Failed to fetch URL: ~A" url)))))

The macros you need to know about to write new import commands:
defsource: define a new import source.
defdocsource: define API documentation sources for packages.
A common scenario is giving agents API documentation where we know the source of the markdown docs for a thing, but the URLs change slightly because of versions or domains. We can use the router and the defdocsource macro to define lookup functions for specific documentation and their versions. Here is how we could define the path to a markdown doc for React versions greater than 19:
```lisp
(defdocsource :name "react"
              :version "19.^"
              :platform "npm"
              :uri "file:~/docs/react/MyReact19docs.md")
```

You can set the metadata model using the `*docs-meta-model*` global or the environment variable `HACTAR_META_MODEL`.
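For example, a minimal sketch of pointing metadata generation at a cheaper local model (the model string here is illustrative; use whatever provider/model name your setup supports):

```lisp
;; From Lisp, before running an import:
(setf *docs-meta-model* "ollama/qwen3:14b")

;; Equivalently, from the shell:
;;   HACTAR_META_MODEL=ollama/qwen3:14b ./hactar import npm:react@latest
```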
Hactar tries to make it dramatically easier to add things to context. The primary path to get most things for context is the web, whether that is a search, browsing GitHub, etc. Every time you have to leave your dev tooling to do this is flow lost. We try to keep you in Hactar and your IDE as much as possible. One way we do this is by building wrappers around common web sources so that they can be consumed by Hactar as plaintext.
Engineering plaintext layers onto common web platforms is a whole niche of open source. They are extremely useful, but the barrier to using them is often pretty high. Sure, you can consume your Twitter as plaintext markdown, but good luck setting up all the infrastructure to do that in a couple of hours, much less five minutes. In Hactar the goal is for everything web to be a command away. Want your raw LLM text for a blog post? Just one command. Want the markdown of the HN front page or a subreddit? One command for that, which you can pipe directly. Hactar brings the Unix philosophy to the web. A herculean effort, mostly impossible prior to the LLM era.
To write new web commands you combine two tools: the defwebcommand macro and the generic router tooling in Hactar. It turns out routing is useful for web requests too (Rubyists know this).
Here is an example of an HN webcommand:
```lisp
(defwebcommand hn
  "Fetch news from news.ycombinator.com (Hacker News)."
  (defwebroute hn-newest
    "Fetch newest posts from Hacker News"
    ("newest" &rest args) (args) :priority 10
    (lambda () (get-hn-newest args)))
  (defwebroute hn-top
    "Fetch top posts from Hacker News"
    ("top" &rest args) (args) :priority 10
    (lambda () (get-hn-top args)))
  (def-default-route ()
    (lambda () (get-hn-front-page-md))))
```

Here is a list of some conventions I follow with Hactar code:
Subcommands should be prefixed with a dot, e.g. `hactar pro.update`.
Test extremely thoroughly (tests act as an LLM guardrail).
Prefer hooks over extension
Use macros instead of APIs
This is so we can swap out the underlying way things work. E.g., if you always use the defanalyzer macro, then the internal way analyzers work can change without requiring changes to the syntax. DSLs are the LISP way; use them.
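As a hypothetical sketch of the idea (the exact defanalyzer argument list is illustrative, not Hactar's documented API): callers only ever touch the macro's surface, so the machinery behind it can change freely.

```lisp
;; Hypothetical: argument names here are illustrative.
(defanalyzer todo-analyzer
  :pattern "TODO|FIXME"
  (lambda (file line-number line-text)
    (debug-log "~A:~A ~A" file line-number line-text)))

;; Callers depend only on the defanalyzer syntax; the internals
;; (threads, hooks, a work queue) can be swapped without touching this code.
```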
The release process is:
Tag the current state of main
push the tag
run make build
Upload the output files to releases
You can run the release script for this. By default it will be a dry run. Pass the --live-dangerously flag to actually make the release.
```shell
release --version 0.0.1 --description ./RELEASE_0_1.md
```

Note: These API docs are generated by Hactar and then human-checked. They may be subject to errors. Last updated . It is not necessary to copy and paste them into Hactar projects; Hactar has its own API docs that are auto-updated.
This package provides a robust parser for Org-mode text, converting it into a structured Lisp representation (a plist).
(parse input): Parses Org-mode text from a string or pathname INPUT.
INPUT: A string containing Org-mode text or a pathname to an Org file.
Return Value: A plist representing the parsed document, with two main keys:
:settings: An alist of document settings (e.g., ((:TITLE . "My Document"))).
:entries: A list of parsed headline entries. Each entry is a complex alist containing details like :level, :title, :tags, :props (property drawer), and :section (content).
(select-by-tag parsed-org tags): Selects entries from a parsed Org structure that contain ALL specified tags.
parsed-org: The plist structure returned by org-mode-parser:parse.
tags: A list of strings representing the tags to match (e.g., '("api" "public")).
Return Value: A list of entry alists that match all the given tags.
(insert-child original-org-string parent-id new-org-string): Parses original-org-string, finds the entry with parent-id, parses new-org-string, adjusts its headline levels to be children of the parent, and inserts it after the parent's last descendant.
original-org-string: The string content of the main Org document.
parent-id: The string ID of the parent headline (from its :PROPERTIES: drawer).
new-org-string: The string content of the new Org entry/entries to insert.
Return Value: The reconstructed Org string with the new child entry inserted. Throws an error if parent-id is not found.
(insert-sibling original-org-string sibling-id new-org-string): Similar to insert-child, but adjusts the new entry's level to match the sibling-id entry's level, inserting it as a following sibling.
original-org-string: The string content of the main Org document.
sibling-id: The string ID of the sibling headline.
new-org-string: The string content of the new Org entry/entries to insert.
Return Value: The reconstructed Org string with the new sibling entry inserted.
(org-to-string parsed-org): Converts a parsed Org structure (plist) back into a valid Org-mode formatted string. This is the inverse of parse.
parsed-org: The plist structure returned by org-mode-parser:parse.
Return Value: A string containing the Org-mode representation of the parsed structure.
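Putting the pieces above together, a minimal round trip might look like this (the Org text is made up for illustration):

```lisp
(let* (;; Parse a small Org document
       (doc (parse "* API notes :api:public:
Some section text."))
       ;; Keep only entries tagged with both "api" and "public"
       (hits (select-by-tag doc '("api" "public"))))
  (format t "Matched ~A entr~:@P~%" (length hits))
  ;; Serialize the parsed structure back to Org text
  (org-to-string doc))
```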
This package provides a line-based, lower-level API for manipulating Org-mode files as strings. It is generally recommended to use org-mode-parser for more robust operations, but this package is useful for simpler, faster manipulations.
These functions take an Org string as input and return a modified Org string.
(insert-child org-string target-id new-content-string): Inserts new-content-string as a child of the headline with target-id.
(insert-sibling org-string target-id new-content-string): Inserts new-content-string as a sibling after the headline with target-id.
(delete-headline org-string target-id): Removes the headline with target-id and all its sub-headlines and content.
(select-headlines-by-tag org-string tags): Returns a new Org string containing only the headlines (and their content) that match ALL tags.
(filter-headlines org-string tags-to-filter): Returns a new Org string with headlines matching ANY of tags-to-filter removed.
These functions modify a file in place. They are wrappers around the pure functions that handle reading from and writing to the file. They return T on success and NIL on failure.
(insert-child! filename target-id new-content-string)
(insert-sibling! filename target-id new-content-string)
(delete-headline! filename target-id)
(filter-headlines! filename tags-to-filter)
(get-headline-level line): Returns the numeric level of a headline string, or NIL.
(get-property prop-name lines): Finds the value of a property (e.g., :ID:) within a list of lines representing a property drawer.
(find-headline-region lines target-id): Finds the start and end line indices for the headline with target-id.
(get-tags-from-headline headline-line): Extracts a list of tags (e.g., ("api" "public")) from a headline string.
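A quick sketch of the line helpers on hand-written Org headline strings (the inputs follow standard Org conventions; outputs assume the documented behavior):

```lisp
;; A level-2 headline has two leading stars
(get-headline-level "** Tasks")                  ; numeric level: 2

;; Tags sit at the end of the headline between colons
(get-tags-from-headline "* Notes :api:public:")  ; => ("api" "public")
```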
These variables manage the fundamental state of the application during a session.
*hactar-version*: The version of the Hactar application. (Default: "0.1.0")
*debug-stream*: Stream for debug output. (Default: nil)
*debug*: Enable debug output. (Default: nil)
*file-watcher*: The global file watcher instance. (Default: nil)
*repo-root*: The root directory of the repository being watched. (Default: nil)
*files*: Files currently in the context window. Synchronized with the context file. (Default: nil)
*stack*: A list of technologies used in the stack. (Default: '())
*shell*: The shell being used on the system. (Default: "bash", or from $HACTAR_SHELL/$SHELL)
*language*: The main language being used in the project. (Default: "unknown")
*name*: The name of the project. (Default: nil)
*author*: The author of the project. (Default: nil, or from $HACTAR_AUTHOR)
*repo-map*: A map of all the symbols and tags in the current project. Generated using tree-sitter. (Default: nil)
*current-model*: The current model being used. (Default: nil)
*cheap-model*: The model used for cheap parsing tasks. (Default: ollama/qwen3:14b)
*embedding-model*: The model used for generating embeddings. (Default: nomic-embed-text) Note: Only the Ollama embedding API is currently supported.
*completion-model*: The model used for completion. (Default: ollama/qwen3:14b)
*docs-meta-model*: The model used for generating the metadata for documentation. Defaults to the value of *cheap-model*.
*chat-history*: The chat history. (Default: '())
*chat-history-limit*: Maximum character limit for chat history. (Default: 8000)
*multiline-mode*: Whether multiline mode is enabled. (Default: nil)
*transcript-file*: File to save the chat transcript to. (Default: ".hactar.transcript.json")
*available-models*: List of available models. (Default: nil)
*docs-context*: List of documentation plists currently added to the context. (Default: '())
*docs-folder*: Default folder to look for documentation files. (Default: "docs/")
*hactar-repo-url*: The Git URL for the Hactar project repository. (Default: "git@github.com:hactar-project/hactar.git", or from $HACTAR_REPO_URL)
*hactar-repo-dir*: The local directory for the Hactar project repository. (Default: ~/.local/share/hactar-repo/, or from $HACTAR_REPO_DIR)
*hactar-data-path*: Path to Hactar's data dir. (Default: ~/.local/share/hactar/, or from $HACTAR_DATA_PATH)
*hactar-config-path*: Path to Hactar's configuration directory. (Default: ~/.config/hactar/, or from $HACTAR_CONFIG_PATH)
*db-path*: Path to the SQLite database file. (Default: *hactar-data-path*/hactar.db, or from $HACTAR_DB_PATH)
*hactar-pro-path*: Path to the Hactar Pro content repository. (Default: *hactar-data-path*/pro/, or from $HACTAR_PRO_PATH)
*hactar-starters-agent*: Path to the default Agent starter template (AgentStarter.org). (Default: *hactar-data-path*/starters/AgentStarter.org, or from $HACTAR_STARTERS_AGENT_PATH)
*max-content-chars*: Maximum character length for content before splitting. (Default: 30000)
*git-autocommit*: Enable automatic git commits after applying changes. (Default: t)
*hactar-ignored-paths*: List of regex patterns for paths to ignore (treated as git-ignored). Paths are relative to the repo root. (Default: '("^\\./straight/repos/.*"))
*active-rules*: Hash table storing the text of currently active rules, keyed by rule name. (Default: new hash table)
*active-guide-file*: Pathname of the currently active guide file. (Default: nil)
*images*: List of images currently in the context. Each element is a plist (:path :text :mime-type :base64-data). (Default: '())
*defined-tools*: Hash table storing defined tools. Key: tool name (string). Value: plist (:name :schema :fn :permissions). (Default: new hash table)
*image-max-size-mb*: Maximum size for an image file in megabytes before warning. (Default: 1)
*guide-warn-chars*: Character limit for guide content before warning. (Default: 30000)
*guide-max-chars*: Maximum character limit for guide content. (Default: 100000)
*guide-file-extension*: Default file extension for generated guides (e.g., "org", "md"). (Default: "org")
*guide-exclude-tags*: List of tags that exclude headlines from the active guide file. (Default: '("nocontext"))
*silent*: Suppress non-essential output when T. (Default: nil)
*in-repl*: True if currently inside the interactive REPL. (Default: nil)
*exposed-context-file*: Pathname of the exposed context file (hactar.{pid}.context.org). (Default: nil)
*context-expose-hooks-installed*: Whether context.expose hooks are installed. (Default: nil)
*docs*: Holds documentation available to context.
State related to the agent execution framework.
*agent-definitions*: Hash table storing agent definitions, keyed by name. (Default: new hash table)
*running-agents*: Hash table storing active agent instances, keyed by a unique ID. (Default: new hash table)
*agent-retry-limit*: Default retry limit for agents. (Default: 10)
*live-dangerously*: Set to T to allow agents to run without a safe environment. (Default: nil)
*agent-safe-env*: Set to T if running in a container or other safe environment, allowing agents to run. (Default: nil)
State related to automatic agent-driven features.
*auto-lint*: Enable/disable the automatic linting agent. (Default: nil)
*auto-test*: Enable/disable the automatic testing agent. (Default: nil)
*auto-typecheck*: Enable/disable the automatic type checking agent. (Default: nil)
*auto-docs*: Enable/disable automatic documentation features. (Default: nil)
*auto-suggest-commands*: Enable/disable automatic command suggestion. (Default: nil)
*auto-cmds*: Enable/disable automatic execution of shell commands. (Default: nil)
These variables are specific to the visual assistant mode (--assistant flag).
*assistant-mode-active*: Is the assistant mode currently active? (Default: nil)
*assistant-extraction*: The last text extracted by the assistant mode LLM. (Default: nil)
*assistant-last-screenshot-path*: Pathname of the last screenshot taken by the assistant. (Default: nil)
*assistant-output-file*: Pathname to write assistant extractions to (if --output is used). (Default: nil)
*assistant-audio-enabled*: Is TTS audio generation enabled for assistant mode? (Default: nil)
*assistant-audio-muted*: Is assistant mode audio output currently muted? (Default: nil)
*assistant-last-audio-file*: Pathname of the last TTS audio file generated. (Default: nil)
*piper-model-path*: Path to the Piper TTS model. (Default: ~/.config/hactar/speech/models/en_US-amy-low.onnx, or from $PIPER_MODEL_PATH)
*assistant-initial-delay-done*: Has the initial 30s delay for assistant mode passed? (Default: nil)
*assistant-previous-image-description*: Default description for assistant screenshots. (Default: "Screenshot of the currently focused window.")
State related to the feature that processes AI! comments in source files.
*ai-comment-queue*: Queue of files with AI! comments to process. (Default: '())
*ai-comment-processor-lock*: Lock to ensure single-threaded AI! comment processing. (Default: new lock)
State related to the internal HTTP server for API access.
*http-port*: Port for the HTTP server. (Default: 4269)
*http-server*: Instance of the running Clack server. (Default: nil)
*completion-model*: Model configuration specifically for the /complete command and endpoint. (Default: nil)
State related to the file and process watcher system.
*watcher-definitions*: Hash table storing watcher definitions, keyed by name. (Default: new hash table)
*active-watchers*: Hash table storing active watcher instances. (Default: new hash table)
*test-command*: Default command to run for the test watcher. (Default: "make test")
*lint-command*: Lint command to run for the lint agent (from config or derived from stack). (Default: nil)
*typecheck-command*: Typecheck command to run for the typecheck agent (from config or derived from stack). (Default: nil)
*test-agent-command*: Test command to run for the test agent (from config or derived from stack). (Default: nil)
utils.lisp is where we keep all the lib stuff. Hactar tries to use few dependencies; the tradeoff is a rather large utils file:
(is-port-available-p port &optional (host "127.0.0.1")): Checks if a TCP port is available for binding on the given host. Returns T if available, NIL otherwise.
(parse-metadata-args arg-list): Parses a list of string arguments representing a Lisp plist (e.g., '(":tags" "'(\"tag1\")")) into an actual plist. Used for parsing command-line metadata.
(get-free-args command-name): A workaround to extract free (non-option) arguments for a specific subcommand from the command-line arguments.
(push-end item my-list): Appends an item to the end of a list (non-destructively).
(debug-log &rest args): Logs a message to standard output and *debug-stream* (if set) when *debug* is T.
(find-executable name): Checks if an executable with the given name exists in the system's PATH. Returns T if found, NIL otherwise.
(copy-to-clipboard text): Copies the given text to the system clipboard using wl-copy (for Wayland) or xclip (for X11).
(split-lines text): Splits a string into a list of lines, preserving empty lines.
(join-lines lines): Joins a list of strings into a single string, separated by newlines.
(remove-prefix prefix str): Removes prefix from the beginning of str if it exists.
(extract-md-fenced-code-block s): Parses a string s and returns the first Markdown fenced code block found. The result is an alist containing :lang, :filename, and :contents.
(read-file-content filename): Safely reads the entire content of a file into a string. Returns NIL on error.
(write-file-content filename content): Safely writes content to a file, overwriting it if it exists. Uses UTF-8 encoding.
(to-json alist): Converts a Lisp alist or plist into a JSON string.
(get-models-config-path): Returns the full path to the models.yaml configuration file (typically in ~/.config/hactar/).
(get-prompt-path prompt-filename): Returns the full path to a prompt file located in the user's configuration directory (~/.config/hactar/prompts/).
(get-mime-type pathname): Determines the MIME type of a file based on its extension (e.g., "png" -> "image/png").
(is-image-file? pathname): Returns T if a file is likely an image based on its extension.
(check-image-size pathname): Checks if an image's file size exceeds *image-max-size-mb* and prints a warning if it does.
(resize-and-encode-image pathname): Resizes an image to standardized dimensions based on its aspect ratio (to optimize for vision model input) and returns its Base64-encoded string and MIME type.
(split-content content max-chars): Splits a large string of content into smaller chunks, each no larger than max-chars, attempting to split at paragraph breaks.
(get-language-hint-from-extension extension): Maps a file extension string (e.g., "js") to a language name suitable for Markdown code fences (e.g., "javascript").
(get-file-content file-path): Reads the content of a file using UTF-8 encoding.
(resolve-starter-path starter-name): Resolves the path to a starter file, checking for an environment variable override before falling back to the default location.
(list-git-tracked-files repo-root): Returns a list of all files tracked by Git in the repository.
These functions operate within the context of the current repository (*repo-root*).
(find-git-repo-root start-dir): Finds the root directory of the Git repository by searching upwards from start-dir.
(run-git-command args &key (ignore-error nil)): A low-level helper for running a Git command with the given list of args.
(git-add files): Stages a list of files for the next commit.
(git-commit message): Creates a Git commit with the provided message.
(git-reset-hard revision): Performs a git reset --hard to the specified revision (e.g., "HEAD~1").
(git-check-ignore pathname repo-root): Checks if a file is ignored by Git (via .gitignore or because it's untracked). Returns T if the file is not tracked.
(normalize-completion completion-string): Cleans up a raw response from an LLM. It prioritizes extracting content from the first Markdown code block. If none is found, it removes common prefixes like "Completion:".
(play-audio-file audio-pathname): Plays an audio file using paplay (PulseAudio) or aplay (ALSA).
These functions control which local files and images are included in the LLM's context window.
(add-file-to-context file-path): Adds a text file to the context. The content of this file will be included in subsequent prompts sent to the LLM.
file-path (String or Pathname): The path to the text file to add.
The function will warn if adding the file might exceed the current model's token limit.
(drop-file-from-context file-path): Removes a text file from the context.
file-path (String or Pathname): The path to the text file to remove.
(add-image-to-context image-path &optional text): Adds an image to the context for use with vision-capable models.
image-path (String or Pathname): The path to the image file.
text (String, optional): A textual description of the image to provide additional context to the LLM.
The function will warn if the image size exceeds the configured limit (*image-max-size-mb*). The image is automatically resized and encoded before being sent to the LLM.
(drop-image-from-context image-path): Removes an image from the context.
image-path (String or Pathname): The path to the image file to remove.
(list-context-files): Prints a list of all text files currently included in the context to standard output.
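A typical session using these functions might look like the following sketch (the file paths are examples, not real project files):

```lisp
;; Pull a source file and a mockup image into the context
(add-file-to-context "src/main.lisp")
(add-image-to-context "design/mockup.png" "Dashboard wireframe")

;; See what the LLM will be shown
(list-context-files)

;; Remove the file once the task is done
(drop-file-from-context "src/main.lisp")
```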
Hactar can include structured documentation from its internal database in the context.
(add-doc-to-context doc-plist): Adds a documentation entry to the context.
doc-plist: A plist representing a single document from the database, typically retrieved via docs-find.
(remove-doc-from-context doc-id): Removes a documentation entry from the context by its unique ID.
doc-id (Integer): The ID of the document to remove.
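For example, fetching a document and cycling it through the context (the slug value is made up for illustration):

```lisp
;; Look the document up by slug, then add it to the context
(let ((doc (first (docs-find :slug "react-hooks"))))
  (when doc
    (add-doc-to-context doc)
    ;; ... use it for some prompts, then drop it by ID
    (remove-doc-from-context (getf doc :id))))
```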
These functions assemble the final strings that are sent to the LLM.
(generate-context): The core context generation function. It assembles the full context string by combining:
The repository map (*repo-map*).
Content of all files in *files*, pruned to fit the model's token limit.
Project stack (*stack*), shell (*shell*), language (*language*), etc.
Content of all documents in *docs-context*.
Descriptions of all images in *images*.
Return Value: A single string formatted according to prompts/context.org.
(system-prompt): Returns the appropriate system prompt string based on the current application mode (*assistant-mode-active*). This is the primary function used to get the system prompt for an LLM call.
(default-system-prompt): Generates the standard system prompt by combining the base template (system.default.org) with active rules (*active-rules*), the active guide (*active-guide-file*), and the general context from generate-context.
(dot-system-prompt): Generates the specialized system prompt used for handling "dot commands" (e.g., .cat, .ls). It uses the system.dot-command.org template.
(assistant-mode-system-prompt): Generates the system prompt for assistant mode, which is focused on screen analysis and interaction. It uses the system.assistant.org template.
(get-active-guide-content): Reads the content of the currently active guide file (*active-guide-file*). It performs size checks and filters out any headlines marked with tags from *guide-exclude-tags* (e.g., :nocontext:).
Return Value: A string containing the filtered guide content, or NIL on error or if the guide is too large.
The db functionality in Hactar manages the connection to the SQLite database, which stores documentation, starters, and other persistent data. It uses the sqlite library and the sqlite-vec extension for vector search.
Database connection parameters are configured via environment variables or globals.
*db-path*: Path to the SQLite database file. (Default: *hactar-data-path*/hactar.db, or from $HACTAR_DB_PATH)
These functions manage the global database connection.
(connect-db): Establishes a top-level connection to the database using the configured parameters. It also loads the sqlite-vec extension. This is typically handled by the application startup.
(disconnect-db): Disconnects the top-level database connection.
These are internal helper functions for formatting data for SQL queries.
(format-vector-for-sqlite-vec vector): Formats a Lisp list of numbers into a JSON string suitable for sqlite-vec insertion.
vector: A list of numbers.
(format-array-for-sqlite lisp-list): Formats a Lisp list of strings into a JSON array string for SQLite.
lisp-list: A list of strings.
RAG handling lives in rag.lisp.
(docs-create &key source title content (tags #()) (covers #()) (links_to #()) slug type meta): Creates one or more document entries in the database. It automatically handles content splitting for large documents and generates vector embeddings for searching (using sqlite-vec). If a document with the same source already exists, it will be replaced.
&key:
source (String, required): The origin of the document (e.g., a URL or file path). This is used as a unique identifier for the content.
title (String, required): The title of the document.
content (String, required): The textual content of the document.
tags (List of Strings, optional): A list of tags for categorization (e.g., ("api" "react")).
covers (List of Strings, optional): A list of technologies or topics this document covers (e.g., ("react@18" "typescript")).
links_to (List of Strings, optional): A list of other document IDs this document references.
slug (String, optional): A URL-friendly slug for the document.
type (String, optional): The type of document (e.g., "documentation", "example").
meta (Plist or Alist, optional): A plist or alist of arbitrary metadata to be stored as a JSON object.
Return Value: A list of integer IDs for the newly created document entries.
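A sketch of a docs-create call using the keyword arguments described above (the source URL and title are illustrative, and long-markdown-string is assumed to be bound to the document text):

```lisp
(docs-create :source  "https://example.com/react-hooks"
             :title   "React Hooks Overview"
             :content long-markdown-string   ; assumed bound elsewhere
             :tags    '("api" "react")
             :covers  '("react@18")
             :type    "documentation")
;; Returns a list of the new document IDs (several if the content was split).
```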
(docs-find &key tags covers slug sources text type id (limit 10) (offset 0)): Finds documents in the database based on a combination of metadata filters and semantic text search (using sqlite-vec).
&key:
text (String, optional): If provided, performs a vector similarity search on the document content. Results are ordered by relevance.
tags (List of Strings, optional): Filters for documents that contain all of the specified tags.
covers (List of Strings, optional): Filters for documents that cover all of the specified topics.
sources (List of Strings, optional): Filters for documents that match any of the specified source strings.
id (Integer, optional): Finds a document by its exact unique ID.
slug (String, optional): Finds a document by its exact slug.
type (String, optional): Finds documents by their exact type.
limit (Integer, optional): The maximum number of documents to return. (Default: 10)
offset (Integer, optional): The number of results to skip (for pagination).
Return Value: A list of plists, where each plist represents a document matching the search criteria. The document plist contains keys like :id, :title, :content, :tags, :covers, :source, etc.
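For example, a semantic query combined with a metadata filter (the query and tag values are illustrative):

```lisp
;; Vector-search the docs for hook-related content, restricted to "react" docs
(docs-find :text  "how do react hooks handle state"
           :tags  '("react")
           :limit 5)
;; Returns up to five document plists, ordered by vector-search relevance.
```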
(starters-find &rest args &key tags &allow-other-keys): A convenience wrapper around docs-find that automatically adds the "starter" tag to the search query. It accepts all the same arguments as docs-find.
Hactar uses a bunch of different TUI tools. Where possible we wrap common CLI tools like fzf, and where not we hand-roll our own.
The primary functions for creating interactive selections and prompts.
(select-with-fzf items &key preview-command): Presents a list of items to the user for selection using the external fzf command-line tool. This function requires fzf to be installed and available in the system's PATH.
items (List of Strings): The list of strings to be displayed for selection.
&key: preview-command (String, optional): A shell command string to be used by fzf for generating a preview of the highlighted item. The placeholder {} can be used in the command to represent the selected item.
Return Value: The string of the selected item, or NIL if the user cancels the selection (e.g., by pressing Esc or Ctrl-C).
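A minimal usage sketch (the item list and preview command are examples):

```lisp
(select-with-fzf '("apple" "banana" "cherry")
                 ;; fzf substitutes {} with the highlighted item
                 :preview-command "echo 'Looking at: {}'")
;; Returns the chosen string, or NIL if the user cancels.
```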
(select-doc-with-fzf doc-list): A specialized version of select-with-fzf designed for selecting from a list of documentation plists. It displays the document title for selection and a formatted preview of the full document.
doc-list (List of Plists): A list of document plists, as returned by docs-find. Each plist must contain at least an :id and a :title.
Return Value: The full plist of the selected document, or NIL if the selection is cancelled.
(confirm-action prompt): Prompts the user with a "Yes/No" question and waits for their input.
prompt (String): The question to display to the user (e.g., "Are you sure?").
Return Value:
T if the user enters "y" or "Y".
NIL if the user enters "n" or "N".
The function will re-prompt if any other input is given.
(get-multiline-input): Opens the user's default editor ($EDITOR, falling back to nano) to allow for multi-line text input. This is useful for writing long prompts or messages.
Return Value: A string containing the text entered by the user. Lines starting with # are treated as comments and are removed from the final string.
The following function provides an in-Lisp TUI selector, which is used as a fallback or for environments where fzf is not available. It is not typically called directly.
(fuzzy-select items): Displays a TUI selector built within Common Lisp, without external dependencies. It provides a two-pane view with a filterable list on the left and a preview on the right.
items (List of Plists): A list where each element is a plist containing at least (:item . "display-string") and (:preview . "preview-text").
Return Value: The full plist of the selected item, or NIL if the selection is cancelled.
This file provides a system for importing documentation from various external sources. It uses a registry of "import sources" that match against a URI using regular expressions. This is similar to the `router.lisp` file but is specifically designed for fetching, processing, and ingesting new content into the document database.
When a URI is provided (e.g., "npm:react@latest", file:~/notes.md, "https://github.com/user/repo"), this system finds a matching source, executes its handler to retrieve the content, automatically generates metadata (tags and summary) using an LLM, and then creates a new document.
```lisp
(defstruct import-source
  name         ; Source name (symbol)
  pattern      ; Regex pattern string
  param-names  ; List of parameter names to extract from regex groups
  priority     ; Integer priority (higher = checked first)
  handler)     ; Function that retrieves content: (package version) -> (values content title)
```

This structure defines a single import provider.

name: A unique symbol identifying the source.
pattern: A CL-PPCRE-compatible regular expression string used to match against an input URI.
param-names: A list of symbols that correspond, in order, to the capture groups in the pattern.
priority: An integer. Sources with higher priority numbers are checked before sources with lower numbers.
handler: A function that performs the import. It receives the extracted parameters as arguments and must return two values: the fetched content (as a string) and a title (as a string).
`(defun register-import-source (name pattern param-names priority handler))`: This is the low-level function for manually adding a new import source to the `*import-sources*` registry. It instantiates an `import-source` struct and saves it. The `defsource` macro is the preferred, high-level way to define a source.
```lisp
;; Manually register a source for local wiki files
(register-import-source
 'local-wiki-source
 "^wiki:(.+)$"
 '(:page-name)
 10
 (lambda (page-name)
   (let ((path (format nil "/var/wiki/~A.md" page-name)))
     (values (uiop:read-file-string path) page-name))))
```

`(defun match-import-source (uri))`: This function attempts to find a matching `import-source` for the given `uri` string. It collects all sources from the `*import-sources*` table, sorts them by priority (highest first), and iterates through them. The first source whose `pattern` matches the `uri` is selected.

It returns two values: the matching `import-source` struct and an alist of `(param-name . value)` pairs. If no source matches, it returns `(values nil nil)`.
```lisp
(multiple-value-bind (source params)
    (match-import-source "wiki:Main_Page")
  (when source
    (format t "Matched source: ~A~%" (import-source-name source))
    (format t "Params: ~A~%" params)))
;; Output:
;; Matched source: LOCAL-WIKI-SOURCE
;; Params: ((:PAGE-NAME . "Main_Page"))
```

`(defun generate-doc-metadata (content title source-uri))`: This utility function uses an LLM to automatically generate metadata for new content. It sends the content, title, and source-uri to the LLM with a prompt requesting a comma-separated list of tags and a one-line summary. It then parses the LLM's plain-text response to extract these two pieces of data.
It returns two values: a list of string tags and a single summary string.
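A sketch of consuming those two return values (content is assumed bound to the document text; the title and URI are illustrative):

```lisp
(multiple-value-bind (tags summary)
    (generate-doc-metadata content               ; assumed bound elsewhere
                           "React Hooks Overview"
                           "https://example.com/react-hooks")
  (format t "Tags: ~A~%Summary: ~A~%" tags summary))
```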
`(defun execute-import (uri &key tags covers meta))`: This is the main, high-level function for running the entire import process. It takes a `uri` and optional, manually provided metadata.
Its workflow is as follows:
It calls `match-import-source` with the `uri`.
If a source is found, it executes the source's `handler` with the extracted parameters to get the `content` and `title`.
If `tags` were not provided as a keyword argument, it calls `generate-doc-metadata` to create them automatically.
It then calls `docs-create` to save the content, title, and all metadata to the document database.
It prints a status message to standard output.
If no source matches the `uri`, or if the handler fails, it prints an error message and returns `nil`. On success, it returns the ID(s) of the newly created document(s).
```lisp
;; Import a document and let the LLM generate metadata
(execute-import "npm:react@19.0.0")

;; Import a local file and provide manual metadata
(execute-import "file:~/my-notes.md"
                :tags '("personal" "project-alpha")
                :covers '("notes-alpha"))
```

```lisp
(defun %docs-import-common (args &key extra-tags usage))
```

This is a private helper function used by the CLI commands (`/import`, `/import.starter`, etc.). It handles the common logic of parsing the command arguments, separating the `uri` from the metadata key/value pairs (like `:tags` and `:covers`), and merging any `extra-tags` (e.g., adding "starter") before finally calling `execute-import`.
```lisp
(defmacro defsource (name &rest args))
```

This is the primary, declarative macro for defining a new import source. It is a user-friendly wrapper around `register-import-source` that handles parsing keyword arguments and constructing the handler lambda.
- `name`: The symbol to name this source (e.g., `npm-source`).
- `:pattern`: The regex string to match URIs.
- `:params`: A list of symbols for the regex capture groups. These become the lambda list for the handler body.
- `:priority`: (Optional) An integer priority (default 10).
- `body`: The Lisp forms to execute as the handler. This code must return two values: `(values content title)`.
```lisp
;; Define a source for importing from a specific GitHub repo's "docs" folder
(defsource my-project-docs
  :pattern "^my-project:(.+)$"
  :params (:doc-name)
  :priority 20
  (let* ((url (format nil "https://raw.githubusercontent.com/user/my-project/main/docs/~A.md"
                      doc-name))
         (content (fetch-url-content url)))
    (values content (format nil "My Project: ~A" doc-name))))

;; This can now be imported:
;; (execute-import "my-project:installation")
```

```lisp
(defmacro defdocsource (&key name version platform uri))
```

This macro is a specialized helper that does not define an import source. Instead, it defines a custom documentation route using `defroute` (from `router.lisp`).
This is used to override the default behavior of another import source. For example, the built-in `npm-source` calls `get-npm-docs`, which in turn calls `execute-route`. `defdocsource` injects a high-priority route that `execute-route` will find first, allowing you to point a specific package/version (e.g., "react@19.^") to a specific static `uri` (like a README on a specific branch) instead of letting `npm-source` try to find it dynamically.
```lisp
;; This route will be matched by the router,
;; effectively overriding the default 'get-npm-docs' logic
;; for React 19.
(defdocsource :name "react"
              :version "19.^"
              :platform "npm"
              :uri "https://raw.githubusercontent.com/facebook/react/main/README.md")
```

The file defines several sources by default using `defsource`:
- `npm-source`: Pattern `"npm:([^@]+)@(.+)$"`. Imports documentation for an NPM package by calling `get-npm-docs`.
- `file-source`: Pattern `"file:(.+)$"`. Imports a local file. The path is resolved relative to `*repo-root*`.
- `http-source`: Pattern `"https?://(.+)$"`. A low-priority (5) fallback that fetches content from any `http` or `https` URL.
- `github-repo-source`: Pattern `"https?://github\\.com/([^/]+)/([^/]+)/?$"`. A high-priority (15) source that specifically matches GitHub repository URLs and fetches their main `README.md` file.
This file also defines several user-facing chat commands for triggering the import process.
```lisp
(define-command import (args))
```

The primary command for importing documentation.
Usage: `/import <uri> -tags=tag1 -covers=cover1`
Example: `/import npm:react@latest -tags=framework,ui`
```lisp
(define-command docs-import (args))
(define-command import.docs (args))
```

Aliases for the `/import` command with identical functionality.
```lisp
(define-command import.starter (args))
```

A specialized version of `/import` that automatically adds the `"starter"` tag to any document it imports, in addition to any tags provided manually or generated by the LLM.
Hactar is built around an extensible architecture using hooks and macros. If you are a LISPer, you will feel right at home.
- Hooks: A publish/subscribe system (`nhooks`) that allows different parts of the application to react to events, such as file changes (`*file-event-hook*`) or LLM responses (`*process-history-hook*`).
- Analyzers: Functions that attach to hooks to analyze events and gather information about the project. For example, an analyzer might inspect a `package.json` file to determine the project's dependencies. Defined with `def-analyzer`.
- Rules: Functions that attach to hooks and dynamically modify the LLM's system prompt based on the current context. For example, a rule could add React-specific instructions to the prompt if it detects a React project. Defined with `defrule`.
- Processors: Functions that run after an LLM response is received. They are attached to the `*process-history-hook*` and are responsible for parsing and acting on the LLM's output, such as applying `SEARCH/REPLACE` blocks. Defined with `def-processor`.
- Watchers: Background processes that monitor external commands (like a test suite). Their output can be hooked into by analyzers. Defined with `defwatcher`.
- Tools: Functions that the LLM can decide to call to perform actions or get information. Hactar exposes these tools to the LLM, which can then request to execute them with specific arguments. Defined with `deftool`.
- Commands: User-facing commands callable from the REPL. Slash commands (e.g., `/help`) are for direct user interaction, while dot commands (e.g., `.cat`) are designed to be interpreted by the LLM as part of a prompt. Defined with `define-command` and `defdot`.
A struct holding the configuration for a specific LLM.
- `name` (String): The unique name for the model configuration (e.g., `"openai/gpt-4o-mini"`).
- `provider` (String): The provider name (e.g., `"openai"`), derived from the name.
- `model-name` (String): The actual model identifier used by the provider's API.
- `edit-format` (String): The format for code edits (`"diff"` or `"file"`).
- `use-repo-map` (Boolean): Whether to include the repository map in the context for this model.
- `max-output-tokens` (Integer): The maximum number of tokens the model can generate.
- `max-input-tokens` (Integer): The maximum number of tokens the model can accept as input.
- `input-cost-per-token` (Float): The cost per input token.
- `output-cost-per-token` (Float): The cost per output token.
- `supports` (List of Strings): A list of features the model supports (e.g., `"vision"`).
```lisp
(defstruct web-route
  "Represents a route within a web command."
  name         ; Route name (symbol)
  description  ; Route description (string)
  pattern      ; List pattern to match, e.g., ("newest" &rest args)
  priority     ; Integer priority (higher = checked first)
  handler      ; Function that returns code to execute
  bindings)    ; List of variable names to bind from the pattern
```

A `web-route` is a pattern used to match a web command invocation to a handler. It is the core struct behind web commands in Hactar; invocations like `hactar hn latest` are routed through it.
Use the `defwebroute` macro to construct a `web-route`:

```lisp
(defwebroute hn-top
  "Get the top items from hn"
  ("top" &rest args)
  (args)
  :priority 10
  (lambda () (get-hn-top args)))
```

```lisp
(defstruct web-command
  "Represents a web command with routes."
  name           ; Command name (string)
  description    ; Command description
  routes         ; List of web-route structs
  default-route) ; Default route handler (optional)
```

These macros are the primary way to extend Hactar's functionality.
```lisp
(def-analyzer name hooks enabled (&rest args) &body body)
```

Defines an analyzer function and registers it with the system.
- `name`: A symbol for the analyzer's name.
- `hooks`: A list of hook specifications to attach to. A spec can be a hook variable (e.g., `*file-event-hook*`) or a list `(hook-variable filter-function)`.
- `enabled`: A boolean indicating if the analyzer is enabled by default.
- `args`: The argument list for the analyzer function, which must match the signature of the hooks it attaches to.
- `body`: The code for the analyzer.
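As an illustration, a minimal analyzer might look like the following sketch. The name `package-json-watcher` and the body are hypothetical; the `(pathname event-type)` argument list follows the `*file-event-hook*` handler contract documented in the hooks section.

```lisp
;; Hypothetical sketch: react to package.json changes via *file-event-hook*.
(def-analyzer package-json-watcher
    (*file-event-hook*)        ; hooks to attach to
    t                          ; enabled by default
    (pathname event-type)      ; must match the hook's handler signature
  (when (string= (file-namestring pathname) "package.json")
    (format t "package.json ~A; re-scanning dependencies~%" event-type)))
```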
```lisp
(defrule name hook-spec (&rest args) &body body)
```

Defines a rule that can dynamically add text to the system prompt.
- `name`: A symbol for the rule's name.
- `hook-spec`: The hook specification to attach to (same format as `def-analyzer`).
- `args`: The argument list for the rule function.
- `body`: The code for the rule. It should return a string to be added to the prompt, or `NIL` to remove/deactivate the rule.
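For illustration, a rule might look like this sketch. The rule name and body are hypothetical; it assumes the `*package-json-analyzed-hook*` passes the metadata hashtable described in the hooks section.

```lisp
;; Hypothetical sketch: add React guidance when a React dependency is seen.
(defrule react-hints
    *package-json-analyzed-hook*
    (metadata)
  (when (gethash "react" metadata)
    ;; Returning a string adds it to the system prompt;
    ;; returning NIL would deactivate the rule.
    "This is a React project. Prefer function components and hooks."))
```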
```lisp
(defwatcher name command help &key (daemon nil))
```

Defines a watcher process that can be started by the user.
- `name`: A symbol for the watcher's name.
- `command`: A string or list of strings for the shell command to run.
- `help`: A help string describing the watcher.
- `daemon`: A boolean. If `T`, the watcher runs continuously in the background. If `NIL`, it runs once and exits.
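A watcher definition might look like this sketch (the watcher name and shell command are hypothetical):

```lisp
;; Hypothetical sketch: keep the project's test suite running in the background.
(defwatcher test
    "npm test -- --watch"
    "Runs the project's test suite and streams its output."
    :daemon t)
;; Output lines can then be consumed via *watcher-test-output-hook*.
```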
```lisp
(def-processor name (&rest args) &body body)
```

Defines a processor that runs on every LLM response. It is automatically attached to the `*process-history-hook*`.
- `name`: A symbol for the processor's name.
- `args`: The argument list. For the history hook, this is typically `(history)`.
- `body`: The code for the processor.
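A minimal processor could look like this sketch. The processor name and body are hypothetical, and it assumes history entries are message alists with `:role`/`:content` keys, as used elsewhere in this documentation.

```lisp
;; Hypothetical sketch: log the length of every assistant reply.
(def-processor reply-length-logger (history)
  (let ((last-message (first (last history))))
    (format t "Last reply was ~A characters~%"
            (length (cdr (assoc :content last-message))))))
```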
```lisp
(deftool name schema &body body &key (permissions :confirm))
```

Defines a tool that the LLM can call.
- `name`: A symbol for the tool's name.
- `schema`: A plist describing the tool's name, description, and parameters in a format compatible with the LLM provider (e.g., OpenAI's function calling schema).
- `body`: The Lisp code to execute when the tool is called. The arguments from the LLM are available in a plist bound to `args`.
- `permissions`: `:confirm` (default) requires user confirmation before running, while `:auto` allows the tool to run automatically.
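A tool definition might look like the following sketch. The tool name, the exact shape of the schema plist, and the body are hypothetical; the only documented contracts are that `schema` follows an OpenAI-style function schema and that arguments arrive in the `args` plist.

```lisp
;; Hypothetical sketch: a tool that counts the lines in a file.
;; Schema shape is an assumption based on the OpenAI function-calling format.
(deftool count-lines
    (:name "count_lines"
     :description "Count the lines in a file"
     :parameters (:type "object"
                  :properties (:path (:type "string"))
                  :required ("path")))
  ;; ARGS is the plist of arguments supplied by the LLM.
  (with-open-file (in (getf args :path))
    (loop for line = (read-line in nil)
          while line
          count line)))
```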
```lisp
(define-command name args &body body)
```

Defines a command, available as a slash command and/or a sub-command. Use `:slash` and `:sub` keyword arguments at the end of the body to control availability; `:slash` defaults to `t`, `:sub` defaults to `nil`. Use `:cli-options` to define CLI arguments for the sub-command. When used as a sub-command, the body receives a plist of parsed arguments.
- `name`: A symbol or string for the command name. The leading `/` is added automatically.
- `args`: The argument list for the command function.
- `body`: The code for the command. The first string in the body is used as the docstring.
Options:
- `:slash`: Make it a slash command. Defaults to `t`.
- `:sub`: Make it a sub-command. Defaults to `nil` (i.e., false).
- `:cli-options`: A plist of CLI arguments.
```lisp
(define-command docs-db (args)
  "List all known documentation and select one.
In non-interactive mode, prints the path of the selected doc.
Options: -t/--tags <tag>, -l/--limit <n>"
  (run-docs-db args)
  :cli-options ((:short "t" :long "tags" :description "Filter by tags")
                (:short "l" :long "limit" :description "Limit number of results")))
```

```lisp
(define-sub-command name args &body body)
```

Defines a sub-command for CLI use. Sets `in-repl` to `nil`. `name` can be a symbol or a string (e.g., `my-cmd`, `"my.cmd"`). Supports a `:cli-options` keyword for defining command-line arguments. The command body receives a plist of parsed arguments.
- `name`: A symbol or string for the command name. The leading `/` is added automatically.
- `args`: The argument list for the command function.
- `body`: The code for the command. The first string in the body is used as the docstring.
Options:
- `:cli-options`: A plist of CLI arguments.
```lisp
(define-slash-command name args &body body)
```

Defines a slash command with the given name and arguments, capturing the docstring. `name` can be a symbol or a string (e.g., `my-cmd`, `"my.cmd"`, `"/my.other.cmd"`). Supports a `:cli-options` keyword for defining command-line arguments. The command body receives a plist of parsed arguments if `:cli-options` are present.
- `name`: A symbol or string for the command name. The leading `/` is added automatically.
- `args`: The argument list for the command function.
- `body`: The code for the command. The first string in the body is used as the docstring.
Options:
- `:cli-options`: A plist of CLI arguments.
```lisp
(defdot name args &body body)
```

Defines a dot command (e.g., `.cat`) intended for the LLM to use.
- `name`: A symbol for the command name. The leading `.` is added automatically.
- `args`: The argument list for the command function.
- `body`: The code for the command. The first string in the body is used as the docstring.
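A dot command definition might look like this sketch (the body is hypothetical; `.cat` is the dot command mentioned elsewhere in this documentation):

```lisp
;; Hypothetical sketch: a dot command the LLM can emit to read a file.
(defdot cat (path)
  "Print the contents of PATH into the conversation."
  (format t "~A~%" (uiop:read-file-string path)))
```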
Define an agent that can be run to perform tasks.
The body should contain a docstring (optional) followed by keyword arguments: `:stack` (optional list of strings), `:init` (required form), `:run` (optional form), `:stop-condition` (optional form), and `:cleanup` (optional form).
```lisp
(defmacro defwebcommand (name description &rest route-definitions)
  "Define a web command with routes.")
```

Used for constructing web commands, which let Hactar (and you) retrieve things as plaintext. This is the underlying mechanism behind how Hactar retrieves API docs for npm packages, gets plain RSS feeds for sites, etc.
```lisp
(defwebcommand hn
  "Fetch news from Hacker News"
  (defwebroute ("newest" &rest args) (args)
    :priority 10
    `(get-hn-newest ',args))
  (def-default-route ()
    `(get-hn-front-page-md)))
```

```lisp
(defmacro defwebroute (name description pattern bindings &rest args)
  "Define a route within a defwebcommand.")
```

Defines a route within a `defwebcommand`. `name` is a symbol identifying this route, and `description` is a string documenting what it does. `pattern` is matched against the command arguments, and `bindings` specifies which variables from the pattern to bind. The body should return code to execute. Accepts a `:priority` keyword (default 10) followed by body forms.
```lisp
(defwebroute newest-hn
  "Newest items on HN"
  ("top" &rest args)
  (args)
  :priority 10
  (lambda () (get-hn-top args)))
```

Note: you must use the `defwebroute` macro inside of `defwebcommand`; this ensures the scoping ends up correct.
```lisp
(defmacro def-default-route (bindings &body body)
  "Define a default route for a web command (when no other routes match).")
```

`defdoc` is a macro for defining documentation in the current context/Hactar instance. It is the primary macro used to integrate knowledge bases with Hactar. We use a combination of hooks, regexes, ML models, etc. to add docs to context, but it ultimately just boils down to pushing a plist onto the docs global.
```lisp
(defdoc "Example Doc" "file:docs/example.txt"
  :tags '("example" "test")
  :covers '("example-cover"))
```

```lisp
(get-llm-response prompt &key stream custom-system-prompt add-to-history dot-command-p)
```

The main function for interacting with the current LLM.
- `prompt` (String): The user's prompt.
- `:stream` (Boolean): Whether to stream the response.
- `:custom-system-prompt` (String): An override for the system prompt.
- `:add-to-history` (Boolean): Whether to add this turn to the chat history.
- `:dot-command-p` (Boolean): If `T`, uses the specialized dot-command system prompt.
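For example, a one-off query that does not pollute the chat history might look like this sketch (the prompt text is illustrative):

```lisp
;; Ask the current LLM without streaming and without recording the turn.
(get-llm-response "Explain this stack trace in one sentence."
                  :stream nil
                  :add-to-history nil)
```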
```lisp
(add-to-chat-history role content &key tool-calls tool_call_id name)
```

Adds a message to the global `*chat-history*`.
- `role` (String): `"user"`, `"assistant"`, or `"tool"`.
- `content` (String): The text content of the message.
- `&key`: Optional parameters for tool-related messages.
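A minimal usage sketch (the message text is illustrative):

```lisp
;; Record a user turn, then the assistant's reply.
(add-to-chat-history "user" "How do I run the tests?")
(add-to-chat-history "assistant" "Run `npm test` from the repo root.")
```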
Extensions can attach to these hooks to react to events.
- `*file-event-hook*`: Fired on file system changes. Handler receives `(pathname event-type)`.
- `*package-json-analyzed-hook*`: Fired after a `package.json` file is analyzed. Handler receives `(metadata-hashtable)`.
- `*process-history-hook*`: Fired after any LLM response is added to the history. Handler receives `(full-chat-history)`.
- `*assistant-extraction-hook*`: Fired when the assistant mode extracts new text from the screen. Handler receives `(extraction-text)`.
- `*watcher-<name>-output-hook*`: A dynamic hook created for each watcher defined with `defwatcher`. For a watcher named `test`, the hook is `*watcher-test-output-hook*`. Handler receives `(active-watcher-struct line-of-output)`.
- `*context-file-added-hook*`: Called when a new file is added to context. The handler is called with the file path (string).
- `*context-file-dropped-hook*`: Called when a file is dropped from context. The handler is called with the file path (string).
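Since the hook system is `nhooks`, a plain function can also be attached directly, outside of `def-analyzer`. This is a sketch assuming the standard `nhooks:add-hook` API; the function name and body are hypothetical.

```lisp
;; Hypothetical sketch: attach a plain named function to a hook via nhooks.
(defun log-context-file (path)
  (format t "Added to context: ~A~%" path))

(nhooks:add-hook *context-file-added-hook* #'log-context-file)
```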
Hactar uses a custom-rolled library called `llm` for abstracting over LLM APIs. It supports OpenAI, Anthropic, Google Gemini, Ollama, and OpenRouter, with streaming and non-streaming responses, vision and image capabilities, and tool use.
API keys are loaded from environment variables.
- `*openai-api-key*`: Loaded from `OPENAI_API_KEY`.
- `*anthropic-api-key*`: Loaded from `ANTHROPIC_API_KEY`.
- `*gemini-api-key*`: Loaded from `GEMINI_API_KEY`.
- `*openrouter-api-key*`: Loaded from `OPENROUTER_API_KEY`.
- `*read-timeout*`: (Default: `120`) The timeout in seconds for HTTP requests to the LLM providers.
- `*debug*`: (Default: `nil`) When `T`, enables debug logging to standard output.
- `*debug-stream*`: (Default: `nil`) When set to a stream, detailed debug logs are written to it.
- `*default-system-prompt*`: (Default: `"You are a helpful assistant."`) The default system prompt used if no other is provided.
```lisp
(complete type messages &rest args &key (max-context 32000) images tools &allow-other-keys)
```

Dispatches a completion request to the appropriate provider-specific function.
- `type`: A keyword specifying the provider. One of `:openai`, `:ollama`, `:anthropic`, `:openrouter`, or `:gemini`.
- `messages`: A list of message alists, e.g., `'(((:role . "user") (:content . "Hello")))`.
- `:stream` (Boolean): If `T`, the function returns an `llm-stream-reader` object for streaming the response. If `NIL` (default), it returns the full response text.
- `:model` (String): The specific model name to use (e.g., `"gpt-4o-mini"`).
- `:system-prompt` (String): A system prompt to override the default.
- `:max-tokens` (Integer): The maximum number of tokens to generate in the response.
- `:max-context` (Integer): The context window size for the model.
- `:images` (List): A list of image plists for vision-capable models. Each plist should contain `:base64-data` and `:mime-type`.
- `:tools` (List): A list of tool definitions for the LLM to use.
Return Value (Non-streaming): Returns three values:

1. `response-text` (String): The content of the assistant's response.
2. `tool-calls-list` (List): A list of tool calls made by the model, or `NIL`.
3. `full-message-history` (List): The complete list of messages sent to and received from the API.
Return Value (Streaming): Returns two values:

1. An `llm-stream-reader` instance.
2. The list of messages that were sent to the API.
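Putting the dispatcher together with the documented return values, a non-streaming call might look like this sketch (the provider, model, and prompt are illustrative):

```lisp
;; Dispatch a non-streaming completion to Anthropic via the generic entry point.
(multiple-value-bind (text tool-calls history)
    (llm:complete :anthropic
                  '(((:role . "user") (:content . "Say hello.")))
                  :model "claude-3-haiku-20240307"
                  :max-tokens 64)
  (declare (ignore tool-calls history))
  text)
```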
Signature:

```lisp
(openai-complete messages
                 &key (api-key *openai-api-key*)
                      (model "gpt-4o-mini")
                      (endpoint "https://api.openai.com/v1/chat/completions")
                      (system-prompt *default-system-prompt*)
                      (max-tokens 1024)
                      (max-context 32000) ; ignored
                      (stream nil)
                      (response-format nil)
                      (images nil)
                      (tools nil)
                      (extra-headers nil)
                 &allow-other-keys)
```

- Prepares messages and inserts a `system` message (if not present).
- If `images` are provided, the last user message becomes a list of content parts: first a text part `((:type . "text") (:content . user-text))`, then one or more image parts with data URIs via `image_url`.
- Builds the OpenAI Chat Completions payload: `:model`, `:messages`, `"max_tokens"`, `:stream`; optionally `"response_format"`, e.g., `("type" . "json_object")`; optionally `:tools` and `:tool_choice "auto"`.
- Auth header: `Authorization: Bearer <api-key>`.

Returns:

- Streaming: `(llm-stream-reader processed-messages)`
- Non-streaming: `(content tool-calls updated-history)`
Non-streaming:

```lisp
(multiple-value-bind (text tool-calls history)
    (openai-complete "Write a haiku about the moon."
                     :api-key (uiop:getenv "OPENAI_API_KEY")
                     :model "gpt-4o-mini"
                     :max-tokens 200)
  (declare (ignore tool-calls history))
  (format t "~A~%" text))
```

With images:

```lisp
(let* ((imgs (list '((:base64-data . "BASE64...") (:mime-type . "image/png"))))
       (msgs '(((:role . "user") (:content . "Describe this image and its mood.")))))
  (openai-complete msgs :images imgs))
```

Streaming:

```lisp
(let ((prompt "Stream a short poem about the ocean."))
  (multiple-value-bind (reader history)
      (llm:openai-complete prompt
                           :model "gpt-4o-mini"
                           :stream t
                           ;; Optional: explicitly pass the API key; by default
                           ;; it uses *openai-api-key* from the environment
                           ;; :api-key (uiop:getenv "OPENAI_API_KEY")
                           )
    (declare (ignore history))
    (unwind-protect
         (loop for chunk = (llm:read-next-chunk reader)
               while chunk
               do (princ chunk)      ; print as it streams
                  (force-output))
      (when reader
        (llm:close-reader reader)))))
```

Signature:
```lisp
(ollama-complete messages
                 &key (model "llama3")
                      (endpoint "http://localhost:11434/api/chat")
                      (system-prompt *default-system-prompt*)
                      (stream nil)
                      (response-format nil)
                      (max-tokens 1024)   ; used via options.num_predict if enabled
                      (max-context 32000) ; sent as options.num_ctx
                      (images nil)
                      (tools nil)
                      (extra-headers nil)
                 &allow-other-keys)
```

- Prepares messages and inserts a `system` message (if not present).
- If `images` are provided, the last user message gets an additional key `:images` with a list of base64 strings.
- Payload: `"model"`, `"messages"`, `"stream"`, and an `"options"` map with `"num_ctx"` set from `:max-context` (optionally enable `"num_predict"` by uncommenting it in the code).
- Optional `"format"` from `:response-format` (e.g., `"json"`).
- Optional `:tools` forwarded (provider support may vary).
- No auth header by default (local server).

Returns:

- Streaming: `(llm-stream-reader processed-messages)`
- Non-streaming: `(content nil updated-history)`; `tool-calls` is currently `nil`.
Non-streaming:

```lisp
(let* ((imgs (list '((:base64-data . "BASE64...") (:mime-type . "image/png"))))
       (msgs '(((:role . "user") (:content . "Describe what you see.")))))
  (ollama-complete msgs :images imgs))
```

Streaming:

```lisp
(let ((prompt "Stream a short poem about mountain winds."))
  (multiple-value-bind (reader history)
      (llm:ollama-complete prompt
                           :model "llama3"
                           :endpoint "http://localhost:11434/api/chat"
                           :stream t)
    (declare (ignore history))
    (unwind-protect
         (loop for chunk = (llm:read-next-chunk reader)
               while chunk
               do (princ chunk)
                  (force-output))
      (when reader
        (llm:close-reader reader)))))
```

Signature:
```lisp
(anthropic-complete messages
                    &key (api-key *anthropic-api-key*)
                         (model "claude-3-haiku-20240307")
                         (endpoint "https://api.anthropic.com/v1/messages")
                         (system-prompt *default-system-prompt*)
                         (max-tokens 1024)
                         (max-context 32000) ; ignored
                         (stream nil)
                         (images nil)
                         (tools nil)
                         (extra-headers nil)
                    &allow-other-keys)
```

- Prepares messages WITHOUT auto-inserting a `system` message. If `system-prompt` is non-empty, it is sent via the top-level `"system"` payload field.
- If `images` are provided, the last user message content becomes a sequence of blocks: one or more image blocks (`(:type "image")` with a base64 source), followed by a text block (`(:type "text")` with the user text).
- Payload includes: `"model"`, `"messages"`, `"max_tokens"`, `"stream"`; optionally `"system"` when present; optionally `:tools` and `"tool_choice" . (:type . "auto")`.
- Headers: `"x-api-key"`, `"anthropic-version"` `"2023-06-01"`, and `"anthropic-beta"` `"tools-2024-04-04"` (required for tools).

Returns:

- Streaming: `(llm-stream-reader processed-messages)`
- Non-streaming: `(content tool-calls updated-history)`; `tool-calls` contains content blocks of type `"tool_use"`, if any.
Non-streaming:

```lisp
(multiple-value-bind (text tool-calls history)
    (anthropic-complete '(((:role . "user")
                           (:content . "Summarize the key points of the Agile Manifesto.")))
                        :api-key (uiop:getenv "ANTHROPIC_API_KEY")
                        :model "claude-3-haiku-20240307"
                        :max-tokens 300)
  (declare (ignore tool-calls history))
  (format t "~A~%" text))
```

Streaming:

```lisp
(let ((prompt "Stream a short poem about autumn leaves."))
  (multiple-value-bind (reader history)
      (llm:anthropic-complete prompt
                              :model "claude-3-haiku-20240307"
                              :stream t
                              ;; Optional: explicitly pass the API key; by default
                              ;; it uses *anthropic-api-key* from the environment
                              ;; :api-key (uiop:getenv "ANTHROPIC_API_KEY")
                              )
    (declare (ignore history))
    (unwind-protect
         (loop for chunk = (llm:read-next-chunk reader)
               while chunk
               do (princ chunk)
                  (force-output))
      (when reader
        (llm:close-reader reader)))))
```

Signature:
```lisp
(openrouter-complete messages
                     &key (api-key *openrouter-api-key*)
                          (model "mistralai/mistral-7b-instruct:free")
                          (endpoint "https://openrouter.ai/api/v1/chat/completions")
                          (system-prompt *default-system-prompt*)
                          (max-tokens 1024)
                          (max-context 32000) ; ignored
                          (stream nil)
                          (response-format nil)
                          (images nil)
                          (tools nil)
                          (extra-headers nil)
                     &allow-other-keys)
```

- Same message/image handling as OpenAI (OpenRouter uses an OpenAI-compatible format).
- Payload fields parallel OpenAI: `"max_tokens"`, `"response_format"`, `:tools`, `:tool_choice "auto"`.
- Headers: `Authorization: Bearer <api-key>`; optional recommended headers (uncomment in code): `HTTP-Referer`, `X-Title`.

Returns:

- Streaming: `(llm-stream-reader processed-messages)`
- Non-streaming: `(content tool-calls updated-history)`
Non-streaming:

```lisp
(openrouter-complete '(((:role . "user")
                        (:content . "List three creative uses for paperclips.")))
                     :api-key (uiop:getenv "OPENROUTER_API_KEY")
                     :model "meta-llama/llama-3.1-8b-instruct")
```

Streaming:

```lisp
(let ((prompt "Stream a short poem about sunrise."))
  (multiple-value-bind (reader history)
      (llm:openrouter-complete prompt
                               :model "mistralai/mistral-7b-instruct:free"
                               :stream t
                               ;; Optional: explicitly pass the API key; by default
                               ;; it uses *openrouter-api-key* from the environment
                               ;; :api-key (uiop:getenv "OPENROUTER_API_KEY")
                               ;; Optional: send recommended headers
                               ;; :extra-headers '(("HTTP-Referer" . "https://your.app/")
                               ;;                  ("X-Title" . "your-app-name"))
                               )
    (declare (ignore history))
    (unwind-protect
         (loop for chunk = (llm:read-next-chunk reader)
               while chunk
               do (princ chunk)
                  (force-output))
      (when reader
        (llm:close-reader reader)))))
```

Signature:
```lisp
(gemini-complete messages
                 &key (api-key *gemini-api-key*)
                      (model "gemini-1.5-flash")
                      (endpoint "https://generativelanguage.googleapis.com/v1beta/models")
                      (system-prompt *default-system-prompt*)
                      (max-tokens 2048)
                      (stream nil)
                      (max-context 32000) ; ignored
                      (images nil)
                      (tools nil)
                      (extra-headers nil)
                 &allow-other-keys)
```

- Prepares messages WITHOUT auto-inserting a `system` message.
- Converts messages to Gemini's `contents` format: each message becomes `(("role" . "user"|"model") ("parts" . #(...)))`. Role mapping: `"assistant"` becomes `"model"`; all others become `"user"`. Each `parts` vector starts with a text part `(("text" . content))`.
- If `images` are provided, one or more `inline_data` parts are prepended to the LAST user message only: `(("inline_data" . (("mime_type" . ...) ("data" . base64))))`. The text part follows the image parts.
- System prompt: sent via the top-level `"system_instruction"` field with a `parts` vector.
- Generation config: sent via `"generationConfig"` => `("maxOutputTokens" . max-tokens)`.
- Endpoint/action: non-streaming `.../<model>:generateContent?key=<API_KEY>`; streaming `.../<model>:streamGenerateContent?key=<API_KEY>&alt=sse`.
- Content-Type header: `"application/json"`.

Returns:

- Streaming: `(llm-stream-reader processed-messages)`
- Non-streaming: `(content function-calls updated-history)`; `function-calls` are extracted from parts with `"function_call"`, if any.
Refer to the llm:complete documentation for common keyword arguments.
Non-streaming:

```lisp
(multiple-value-bind (text calls history)
    (gemini-complete '(((:role . "user")
                        (:content . "Explain quantum entanglement like I am five.")))
                     :api-key (uiop:getenv "GEMINI_API_KEY")
                     :model "gemini-1.5-flash"
                     :max-tokens 256)
  (declare (ignore calls history))
  (format t "~A~%" text))
```

Streaming:

```lisp
(let ((prompt "Stream a short poem about starlight."))
  (multiple-value-bind (reader history)
      (llm:gemini-complete prompt
                           :model "gemini-1.5-flash"
                           :stream t
                           ;; Optional: explicitly pass the API key; by default
                           ;; it uses *gemini-api-key* from the environment
                           ;; :api-key (uiop:getenv "GEMINI_API_KEY")
                           )
    (declare (ignore history))
    (unwind-protect
         (loop for chunk = (llm:read-next-chunk reader)
               while chunk
               do (princ chunk)
                  (force-output))
      (when reader
        (llm:close-reader reader)))))
```

When `stream` is `T` in a completion call, the API returns a stream reader object to consume the response in chunks.
A structure representing an active LLM stream. It is not meant to be instantiated directly by the user.
- `http-stream`: The underlying network stream.
- `provider`: The provider keyword (e.g., `:openai`).
- `closed-p`: A boolean indicating if the stream is closed.
- `tool-call-buffer`: Internal buffer for assembling tool calls.
```lisp
(read-next-chunk reader)
```

Reads the next available text chunk from the `llm-stream-reader` instance.

- `reader`: An `llm-stream-reader` object.

Return Value:

- A string containing the next piece of the response.
- `NIL` if the stream is finished.
```lisp
(close-reader reader)
```

Manually closes the stream reader and its underlying network connection. This is called automatically when the stream ends.

```lisp
(llm-stream-reader-closed-p reader)
```

Returns `T` if the stream reader is closed, `NIL` otherwise.

```lisp
(llm-stream-reader-provider reader)
```

Returns the provider keyword (e.g., `:openai`) for the given stream reader.
```lisp
(ollama-embed text &key model endpoint extra-headers)
```

Generates a vector embedding for the given text using an Ollama-compatible API.

- `text` (String): The input text to embed.
- `:model` (String): The name of the embedding model to use (default: `"nomic-embed-text"`).
- `:endpoint` (String): The API endpoint (default: `"http://localhost:11434/api/embeddings"`).

Return Value:

- A list of numbers representing the embedding vector.
- `NIL` on error.
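A usage sketch, assuming a local Ollama server with the default `nomic-embed-text` model pulled (the input text is illustrative):

```lisp
;; Generate an embedding locally via Ollama.
(let ((vec (ollama-embed "Hactar is an augmented coding assistant."
                         :model "nomic-embed-text")))
  (when vec
    (length vec))) ; the embedding dimensionality, or NIL on error
```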
Hactar Pro is designed as a simple git repo. Installation is simply a clone and copy:

```shell
hactar pro.install
```

This will clone and copy the Hactar Pro content and extensions. The pro extensions end up symlinked to `HACTAR_CONFIG_PATH`, which defaults to `~/.config/hactar/pro`, and the content repo is cloned to `HACTAR_PRO_PATH`, which defaults to `XDG_DATA_DIR/hactar/pro`.
To update Hactar Pro, run `hactar pro.update`, or just manually clone and run `hactar pro.check`.
All Hactar Pro-specific features are scoped under `hactar pro`.
Installs Hactar Pro features. You will, of course, need to have a pro membership and have your git configured to use the same GitHub account you signed up with.
Options: --content, -c VALUE Select which content DB to copy (default: all)
Updates the Hactar Pro repo.
Checks for issues with environment variables, dependencies, packages, etc.
The core philosophy of Hactar is augmented human coding. Hactar is designed to work with you and supercharge you as a coder; it is not designed to be an agent. This puts us squarely in the genre of tools like Aider, gpt.el, parrot.nvim, minuet, etc. If you like working in the CLI and hacking on your own developer tooling, then Hactar is the tool for you. If you are seeking an agentic vibe coding tool, then Hactar is probably not the best fit.
The following maps common AI features to their Hactar equivalents, where they exist:
| Feature | Hactar Feature Name |
|---|---|
| rules | .hactar.rules.lisp |
| Claude.md | .hactar.guide.org or .hactar.guider.md or Agent.md |
| Feature | Hactar | Claude | Aider | Cursor CLI | amp | plandex |
|---|---|---|---|---|---|---|
| out of the box multiple models support | yes | no | yes | yes | yes | yes |
| open source | yes | no | yes | no | no | yes |
| careful about token usage | yes | no | yes | no | yes | yes |
| rules | yes | yes | no | yes | yes | no |
| guides (e.g Agent.md) | yes | yes | no | yes | yes | unknown |
| spec driven/plans | no | yes | no | no | unknown | yes |
| unix philosophy, e.g., composable commands | yes | yes | yes | no | yes | no |
Perhaps the biggest con with Claude is token usage. When not used with the $200 Max plan, Claude is generally the most expensive CLI out there. Claude takes advantage of being subsidized by Anthropic and tends to go wild with pushing everything into context. Hactar, in contrast, tries very hard to minimize token usage.
This philosophy of token usage leads to larger DX choices. Claude for instance, won't even work out of the box with other providers. Claude doesn't emphasize tools for dealing with context because the size of the context is not a worry. Hactar conversely, provides robust tooling and extension points for dealing with context.
These are all tradeoffs, though. Claude is wonderful because you can get a lot of performance by just throwing all the code into a prompt. The easiest way for an AI to figure out how your codebase operates is to just read your code. The biggest tradeoff is that Claude drops dramatically in quality when used with models that struggle with large context windows. Claude's approach will work with any model, but Hactar will perform well with any model.
Claude is not open source and is primarily integrated with other systems via composition. Hactar encourages extension instead. If you want to hook into Claude, you have only limited options like shell hooks at your disposal. With Hactar, you can literally change anything about it with a simple Lisp file.
Philosophy-wise and feature-wise, Aider is the most similar tool to Hactar. In fact, I daily drove Aider before writing Hactar. Hactar has a few main distinguishing differences:
Hactar borrows a lot of philosophy from the Emacs and LISP communities. In Aider, you write plugins; in Hactar, you extend it by modifying it. If you are a LISPer, Hactar is a tool designed for you.
Aider puts a lot of UX features into its CLI. We assume that Hactar will primarily be used within something else, so instead of rolling our own TUI we mostly rely on formats that a wrapping tool can render. For example, there is no markdown highlighting in Hactar: why bother bloating the codebase when your editor will handle markdown syntax for you?
Hactar is a no-hype, no-AI-bullshit tool. Agents are incapable of building software that meets professional requirements, so we don't focus on features that enable it. We don't build hype-generating features that only work in demos. Hactar features are designed to enhance you as a developer, not pretend to replace you so we can get VC dollars and then shut down. Hactar is a real tool I use daily to increase my productivity as a developer. And all future features will always be hyperfocused on making writing code easier.
That said, you can use Hactar to quickly build your vibe coding Lovable competitor. Hactar will make you a more productive dev.
Hactar focuses on augmented coding, while Cursor is hyper-focused on being an agent and the original vibe coding tool. The biggest difference, feature-wise, is that Hactar is composable. You can chain Hactar with other commands in the CLI; for example, `tail -f app.log | hactar -e! "Analyze this log file"`. Cursor, in contrast, is designed as an agent: the VSCode experience brought to your terminal. Cursor is for vibe coding; Hactar is for real work.
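To sketch that composability: the `-e!` flag comes from the example above, but the prompts, the `git diff` pipeline, and the install guard are illustrative, not documented usage.

```shell
# Illustrative pipelines. Only `hactar -e!` is taken from the docs above;
# the rest is hypothetical composition. Guarded so the sketch is runnable
# even on a machine without hactar installed.
if command -v hactar >/dev/null 2>&1; then
  tail -n 50 app.log | hactar -e! "Analyze this log file"
  git diff | hactar -e! "Review this diff for obvious bugs"
else
  echo "hactar not installed"
fi
```

The point is that Hactar is just another filter in a pipe: anything that writes to stdout can feed it context.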
Hactar is designed around Unix and LISPer philosophies. Hactar is meant to be a tool you compose into your workflows and integrate with tools you have spent decades mastering. AI IDEs like Cursor are designed to include everything you need in one package. This lowers the barrier, but the tradeoff is that you throw out the tools experienced developers have spent decades mastering.
A major selling point of AI IDEs like Cursor is replacing high-barrier tasks with a consistent interface: chat. Instead of, for example, writing scripts, you provide the AI with tools and let it figure out how to write Bash for you. For someone who can't write Bash, this is fine; for someone who can, it is replaced with a broken, worse version. And many IDEs just lock you out of better tooling.
In general, AI IDEs optimize for hype and demos. They do not care about developer productivity; they care about entertaining VCs. Hactar cares about professional software engineering and is designed to enhance the things you have spent decades learning.
Currently, we don't support the include syntax. This will be added in the first quarter of 2026. See the roadmap for more details.
Hactar has not solved the context window and compression problem. As with any AI tool, you will need to make context-engineering decisions yourself: choosing which files to add and which to drop. If the model seems confused or is missing things, drop some stuff from context.
The agent ecosystem has a lot of emerging standards. Choosing a tool that works well for you requires evaluating those features. We try not to AI-bullshit you, so here is a quick reference to help you evaluate things.
tools. We take the suggested-shell-commands approach to tool usage. It gives the best performance and token cost.
multiple instances. Hactar will avoid tripping over itself. You get context bound to instances, and each instance's port is exposed in standard files so you can connect to any Hactar instance via HTTP, MCP APIs, etc.
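A hypothetical sketch of what connecting to a running instance could look like. The port-file path below is an assumption standing in for whatever standard file your Hactar instance actually writes; only the "port exposed in a standard file" mechanism comes from the docs.

```shell
# Hypothetical: ~/.hactar/instance.port is an assumed location, not a
# documented path. Falls back to an arbitrary default if the file is absent.
port=$(cat "$HOME/.hactar/instance.port" 2>/dev/null || echo 8080)
echo "connect to http://localhost:${port}"
```

From there, any HTTP client (curl, your editor, another Hactar instance) can talk to that port.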
literate programming. Hactar exposes all context as a single org-mode file. If you want, you can write and interact with Hactar entirely by making edits to that file.
virtual commands. A few LLM tools have played with the idea of using the prompt as a filesystem and storing state in the conversation. Hactar treats this workflow as a first-class citizen using what are called dot commands. Dot commands can be used to chain together Hactar instances and manipulate the entire state of the prompt using LLMs.
Infinitely hackable with a real plugin system. No shell commands or hooks for extensions. Hactar is written in Common LISP; you can extend and modify anything in Hactar at runtime.
Guides (aka skills). Hactar is built around the idea of using knowledge repositories. Guides are skills, but with more features, built on top of org-mode.
automatic documentation. Hactar Pro comes with a devdocs-esque library. Hactar can automatically detect which API docs it needs and add them, or let you add them manually.
emacs, neovim, and CLI. I have daily driven the CLI as my primary computing interface for decades. Hactar treats the CLI-first DX as first class. In Hactar, if there is a plaintext Unix-philosophy way of doing something, that is the way it is done. This makes Hactar feel like a tool for real devs instead of a Vscode-extension AI hype startup building extensions that won't be maintained once their vim guy joins big corp.
skills. Hactar includes a layer for using guides as skills. You can have Hactar convert guides to skills, install guides as skills, and use them with Claude, etc.
AGENTS.md. An AGENTS.md will be automatically included in context. We don't support the include syntax yet; however, we do support org-transclusion in Emacs.
CLAUDE.md. You can automatically include CLAUDE.md files with the HACTAR_GUIDE_PATH env var.
mcp. Some partial support is included for MCP. That said, it degrades performance, and using it is not recommended.
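The CLAUDE.md mechanism can be sketched like this; the HACTAR_GUIDE_PATH name comes from the docs above, but the path is purely illustrative.

```shell
# HACTAR_GUIDE_PATH is the documented env var; this path is an example,
# not a required location.
export HACTAR_GUIDE_PATH="/path/to/project/CLAUDE.md"
echo "$HACTAR_GUIDE_PATH"
```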
full AGENTS.md support. We intend to support all of the include syntax and other emerging LLM-agent-specific markdown extensions.
full mcp support is not planned. MCP just seems to be bad for performance; in the best case, it eats token usage. There are better ways to do tool calling. I reserve the right to change my mind, though.