fwdslsh

Slash a path through the bloat

Sharp tools for hackers who remember when software didn't fight you. Zero deps, instant setup, no frameworks to babysit.

Built for the Command Line

Small, sharp programs that do one thing well

~/project
$ hyphn init # Share agents & skills
✓ Initialized ~/.hyphn with default sources
$ gather https://docs.example.com # Crawl site to Markdown
✓ Processed 42 pages in 1.2s
$ disclose session-context # Bundle context for AI
✓ Created context bundle (12k tokens)
$ dispatch start # Secure agent sandbox
✓ Dispatch environment active on port 3030

hyphn

Share agents, commands & skills

CLI for sharing extensible assets across the OpenCode and Claude Code platforms.

$ hyphn init
$ hyphn setup
$ hyphn sources add ~/team-hyphn

Why we built it: AI coding tools shouldn't silo your custom workflows. Share them.

rabit

Manifest-based navigation for AI

A simple convention for publishing content: a .burrow.json manifest that both humans and AI agents can reliably navigate.

$ cat .burrow.json
$ rabit validate
$ rabit fetch

Why we built it: AI agents need structured paths, not HTML soup.
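The idea is easiest to see in a manifest. The field names below are illustrative, not taken from the rabit spec - but in spirit, a .burrow.json is a small JSON index telling an agent what lives where:

```json
{
  "name": "example-docs",
  "description": "Documentation for the example project",
  "entries": [
    { "path": "getting-started.md", "title": "Getting Started" },
    { "path": "api/reference.md",   "title": "API Reference" }
  ]
}
```

An agent reads one predictable file instead of scraping navigation markup.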

dispatch

Secure sandboxed execution

Containerized dev environment for safely running Claude AI and CLI agents without risking your host system.

$ dispatch init
$ dispatch start
$ dispatch attach

Why we built it: AI assistants should run in an isolated sandbox, not with free rein over your host system.

disclose

Learning retrieval & context assembly

CLI tool for retrieving and bundling relevant context from the Hyph3n learning system with token budget management.

$ disclose learnings --query "auth"
$ disclose session-context
$ disclose pack

Why we built it: AI needs the right context, not everything.
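"Token budget management" boils down to fitting the most relevant items into a fixed window. A minimal sketch of that idea - this is not disclose's actual algorithm, and the 4-characters-per-token estimate is a rough heuristic:

```python
def pack_context(snippets, budget_tokens):
    """Greedily pack scored snippets into a token budget.

    snippets: list of (score, text) pairs; higher score = more relevant.
    Token cost is estimated at roughly 4 characters per token.
    """
    bundle, used = [], 0
    for score, text in sorted(snippets, key=lambda s: -s[0]):
        cost = max(1, len(text) // 4)  # crude token estimate
        if used + cost <= budget_tokens:
            bundle.append(text)
            used += cost
    return bundle, used

snippets = [
    (0.9, "auth: tokens are refreshed by the session middleware"),
    (0.4, "logging: use structured JSON logs in production"),
    (0.8, "auth: failed logins are rate-limited per IP"),
]
bundle, used = pack_context(snippets, budget_tokens=30)
```

With a 30-token budget, the two high-scoring auth snippets fit and the logging one is dropped - the right context, not everything.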

gather

Multi-source crawler for web, Git, and feeds

Crawl websites, download Git repos, ingest feeds - all converted to clean Markdown.

$ gather https://docs.example.com
$ gather github.com/owner/repo/tree/main/docs

Why we built it: Documentation migration shouldn't require frameworks.

inform

High-performance web content crawler

Crawl websites, extract main content, and convert to clean Markdown. Powered by Bun for maximum performance with zero dependencies.

$ inform https://docs.example.com
$ inform github.com/owner/repo/tree/main/docs
$ inform --include "*.md" --exclude "temp/*"

Why we built it: Documentation migration and content extraction shouldn't require heavy frameworks. Fast, focused, effective.
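The core of any HTML-to-Markdown pipeline is a walk over the tag stream. Here is a toy, stdlib-only sketch of that technique - an illustration, not inform's actual converter (which runs on Bun):

```python
from html.parser import HTMLParser

class MarkdownExtractor(HTMLParser):
    """Toy HTML-to-Markdown converter: headings, paragraphs, links only."""

    def __init__(self):
        super().__init__()
        self.out = []
        self.href = None

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.out.append("\n" + "#" * int(tag[1]) + " ")  # h2 -> "## "
        elif tag == "p":
            self.out.append("\n")
        elif tag == "a":
            self.href = dict(attrs).get("href", "")
            self.out.append("[")

    def handle_endtag(self, tag):
        if tag == "a":
            self.out.append(f"]({self.href})")
            self.href = None

    def handle_data(self, data):
        self.out.append(data)

    def markdown(self):
        return "".join(self.out).strip()

ex = MarkdownExtractor()
ex.feed('<h1>Docs</h1><p>See the <a href="/api">API</a> guide.</p>')
```

Here `ex.markdown()` yields `# Docs` followed by `See the [API](/api) guide.` - the whole trick is state plus string assembly, no framework required.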

catalog

AI-ready documentation indexer

Generate llms.txt and llms-full.txt from Markdown/HTML directories. Perfect for AI-powered documentation workflows and searchable knowledge bases.

$ catalog --input docs --output build
$ catalog --sitemap --base-url https://docs.example.com
$ catalog --optional "drafts/**/*" --validate

Why we built it: AI tools need structured, indexed content. We made it trivial to prepare documentation for AI consumption.
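The llms.txt idea is simple enough to sketch: walk a directory of Markdown, pull each page's first heading, and emit one link line per page. A minimal sketch under those assumptions - catalog's real output and options (llms-full.txt, --optional, --validate) go well beyond this:

```python
from pathlib import Path

def build_llms_txt(docs_dir, base_url, site_title):
    """Emit a minimal llms.txt: an H1 title, then one Markdown link per page.

    Uses each file's first '# ' heading as link text, falling back to the
    filename stem when a page has no heading.
    """
    lines = [f"# {site_title}", "", "## Docs"]
    for path in sorted(Path(docs_dir).rglob("*.md")):
        title = path.stem
        for line in path.read_text().splitlines():
            if line.startswith("# "):
                title = line[2:].strip()
                break
        rel = path.relative_to(docs_dir).as_posix()
        lines.append(f"- [{title}]({base_url}/{rel})")
    return "\n".join(lines) + "\n"
```

Point it at a docs tree and the result is a single AI-readable index of the whole site.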

The Path Forward

Old-school principles, new-school execution

We're not building the next unicorn framework. We're building tools for hackers who value craft over hype.

Ship binaries, not ecosystems: Download once, run everywhere. No package manager roulette or dependency hell.

Hack with the grain: Work with web standards and POSIX conventions instead of reinventing everything.

Performance starts in your head: Tools should amplify human intelligence, not fight it with ceremony and complexity.

The / isn't just our logo - it's a commitment to cutting through the cruft.