About

Built for Builders

ALIN was born from a belief that AI tools should work the way professionals actually think — through delegation, coordination, and trust.

The Origin

ALIN began as a personal frustration. Every AI chat tool on the market offered the same thing: a single text box connected to a single model. You type, it responds, you type again. For simple questions, that's fine. But for real work — building a website, conducting deep research, managing a codebase, orchestrating multi-step creative projects — a chatbox is not enough. It's like trying to run a company through a single walkie-talkie.

ALIN was built to solve that. Not by making a slightly better chatbot, but by rethinking the entire interaction model from the ground up. What if instead of one conversation thread, you had six specialized workstations? What if the AI could maintain context across sessions through a structured memory system? What if complex tasks could be delegated — actually delegated, with plans, timelines, checkpoints, and receipts — instead of typed one message at a time?

The Philosophy: Delegation, Not Surrender

ALIN's core principle is bounded autonomy. Most AI tools fall into one of two camps: either they do too little (basic chat — you have to manage every step yourself) or they try to do too much (autonomous agents with no oversight that hallucinate, make wrong assumptions, and produce unreliable output). ALIN takes the middle path.

When you give ALIN a complex task, it generates a professional execution plan — phases, time budgets, pod assignments, quality targets — and shows it to you before anything runs. You can approve it, modify it, or reject it. Once approved, specialized Agent Pods execute the work with checkpoint gating: at defined intervals, ALIN pauses, reports progress, and asks whether to continue. At the end, a detailed receipt documents every decision made, every assumption taken, every file produced.

This is delegation in the real sense. You maintain oversight without micromanaging, and you can trust the output because you can inspect the process. In ALIN's governance system this is called "bounded autonomy": the AI is free to make decisions within defined parameters, but it never operates without accountability.
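The plan-and-checkpoint flow described above can be sketched in TypeScript. This is an illustrative model only; the field names and the gating rule are assumptions, not ALIN's actual schema:

```typescript
// Hypothetical shape of a Time-Budgeted Work Order plan.
// Real field names in ALIN's TBWO executor may differ.
interface Phase {
  name: string;
  pod: string;            // which Agent Pod executes this phase
  timeBudgetMin: number;  // time budget, in minutes
  qualityTarget: number;  // e.g. 9 means 9/10
}

interface WorkOrder {
  goal: string;
  phases: Phase[];
  checkpointEvery: number; // pause after this many completed phases
}

type CheckpointDecision = "continue" | "pause-for-user";

// Checkpoint gating: at defined intervals, pause and report progress
// instead of running the whole plan unattended.
function gate(order: WorkOrder, completedPhases: number): CheckpointDecision {
  if (completedPhases === 0) return "continue";
  return completedPhases % order.checkpointEvery === 0
    ? "pause-for-user"
    : "continue";
}
```

The key design point is that the gate is deterministic and sits outside the model: the user, not the AI, decides what happens at each pause.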

The Architecture: Hub-and-Spoke

ALIN is organized as a hub-and-spoke network. At the center is the Intelligence Hub — the shared context layer that maintains conversation history, the 8-layer memory system, user preferences, and the self-model. Orbiting this hub are six specialized stations, each purpose-built for a distinct workflow:

Chat Station handles natural conversation with multi-model streaming (Claude and OpenAI simultaneously), file uploads, code blocks, and branching conversations.

Code Lab provides a full development environment with sandboxed execution, file system access, and AI-powered code generation.

TBWO Command is the orchestration center for Time-Budgeted Work Orders: complex autonomous tasks with pod-based execution.

Research Hub conducts deep web research with source evaluation, citation tracking, and structured intelligence reports.

Image Studio connects to DALL-E 3 for visual creation with style direction and iterative refinement.

Voice Room enables hands-free interaction through Whisper speech-to-text and ElevenLabs text-to-speech with emotion-aware modulation.

All six stations share the same context. Start a conversation in Chat, reference it in Research, use the findings in Code Lab, review everything in TBWO Command. The memory system ensures nothing is lost between sessions — ALIN remembers your preferences, your project details, and its own performance patterns.
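The shared-context idea can be illustrated with a minimal TypeScript sketch. The station names come from the text above; the hub API itself is a hypothetical simplification, not ALIN's implementation:

```typescript
// Hub-and-spoke sketch: every station reads and writes through one
// shared context object, so work started in one station is visible
// to all the others.
type StationId = "chat" | "codeLab" | "tbwo" | "research" | "image" | "voice";

interface ContextEntry {
  station: StationId;
  content: string;
}

class IntelligenceHub {
  private entries: ContextEntry[] = [];

  // A station records what it learned or produced.
  record(station: StationId, content: string): void {
    this.entries.push({ station, content });
  }

  // Any station can read the full shared history.
  history(): readonly ContextEntry[] {
    return this.entries;
  }
}

// Usage: a conversation starts in Chat, findings land in Research,
// and Code Lab sees both.
const hub = new IntelligenceHub();
hub.record("chat", "User wants a landing page for a bakery");
hub.record("research", "Top competitors use warm color palettes");
```

The point of the pattern is that context lives in the hub, not in any one station, which is what makes "start in Chat, reference it in Research, use it in Code Lab" possible.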

The Canon

ALIN operates under a canonical governance system called the ALIN Canon. The Canon establishes three foundational principles: one identity (ALIN never fragments into conflicting sub-personalities), one memory (all stations share the same knowledge base), and one continuity (sessions build on each other rather than starting fresh). The Canon also defines ALIN's trust framework — a structured system of confidence levels, escalation rules, and accountability requirements that ensures the AI becomes more trustworthy as it becomes more capable.

The Canon isn't just documentation. It's enforced in code. The system prompt, memory consolidation rules, checkpoint gating logic, and receipt generation are all implementations of Canon principles. When ALIN makes a decision autonomously, the Canon determines whether that decision requires user approval, can be auto-accepted based on memory, or should be flagged for review.
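The three-way routing the Canon performs (require approval, auto-accept from memory, or flag for review) can be sketched as a small decision function. The inputs and thresholds here are assumptions for illustration, not the Canon's actual rules:

```typescript
// Hypothetical sketch of Canon-style decision routing for an
// autonomous action ALIN wants to take.
interface Decision {
  confidence: number;     // 0..1, supplied by the self-model
  seenInMemory: boolean;  // a matching past user approval exists
  irreversible: boolean;  // e.g. deleting files or spending budget
}

type Route = "require-approval" | "auto-accept" | "flag-for-review";

function routeDecision(d: Decision): Route {
  // Accountability first: irreversible actions always go to the user.
  if (d.irreversible) return "require-approval";
  // Auto-accept only proven, remembered patterns at high confidence.
  if (d.seenInMemory && d.confidence >= 0.8) return "auto-accept";
  // Everything else is surfaced for review rather than silently run.
  return "flag-for-review";
}
```

Encoding governance as a pure function like this is one way "enforced in code" can look: the rule is testable and sits in the execution path, not in a document.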

The Self-Model

What makes ALIN fundamentally different from other AI tools is that it learns from its own execution. Every TBWO produces structured data: what worked, what failed, how many user corrections occurred, which tools were reliable, which approaches produced the highest quality output. This data feeds into ALIN's self-model — a persistent understanding of its own capabilities and limitations.

After enough usage, ALIN doesn't just remember what you prefer. It remembers what it's good at. It knows which types of tasks it handles reliably at 9/10 quality and which ones consistently need user intervention. It adjusts its confidence signals accordingly — high confidence on proven patterns, lower confidence on novel territory. This isn't a gimmick. It's the difference between a tool that makes the same mistakes forever and one that genuinely improves over time.
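A self-model of this kind can be sketched as a per-task-type outcome log with a derived confidence signal. The structure, sample threshold, and cutoffs below are illustrative assumptions, not ALIN's actual self-model:

```typescript
// Hypothetical self-model sketch: record structured outcomes per task
// type, then derive a confidence signal from observed quality and
// how often the user had to intervene.
interface Outcome {
  quality: number;     // 0..10 rating of the result
  corrections: number; // user interventions during the task
}

class SelfModel {
  private byTask = new Map<string, Outcome[]>();

  recordOutcome(taskType: string, o: Outcome): void {
    const list = this.byTask.get(taskType) ?? [];
    list.push(o);
    this.byTask.set(taskType, list);
  }

  // High confidence only on proven patterns: enough samples, high
  // average quality, few corrections. Novel territory stays low.
  confidence(taskType: string): "high" | "medium" | "low" {
    const list = this.byTask.get(taskType);
    if (!list || list.length < 3) return "low";
    const avgQuality = list.reduce((s, o) => s + o.quality, 0) / list.length;
    const avgCorrections = list.reduce((s, o) => s + o.corrections, 0) / list.length;
    if (avgQuality >= 9 && avgCorrections < 1) return "high";
    return avgQuality >= 7 ? "medium" : "low";
  }
}
```

This is the mechanical core of "remembers what it's good at": confidence is computed from execution history rather than asserted by the model.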

By the Numbers

ALIN represents over 64,000 lines of production TypeScript across 214 source files. The server backbone runs on Node.js with Express and SQLite, handling 40+ REST API endpoints for persistence, 15+ self-model endpoints for learning, and a full SSE streaming proxy for both Anthropic and OpenAI APIs. The TBWO executor alone is 3,654 lines — a complete autonomous task orchestration engine with parallel pod scheduling, checkpoint gating, artifact passing, and receipt generation.

The Future

ALIN is currently in public beta with a 5-phase roadmap toward full deployment. Phase 1 (server backbone and streaming) is complete. Phase 2 (intelligent execution with universal plan generation) is in progress. Phases 3-5 will bring unified intelligence with intent detection, self-awareness with dynamic system prompts, and hardware-aware pod scheduling. The goal is a product where you describe an outcome and ALIN handles everything else — planning, execution, delivery, and learning — with full transparency at every step.