ALIN Docs

Everything you need to get started with and master the ALIN platform.

Getting Started

ALIN is available as a desktop application (full feature set with file system access, code execution, and hardware monitoring) and as a web application (chat, research, image generation, and TBWO planning). To get started with the desktop app, download the installer for your platform from the download page, run it, and sign in with your account. The web app is accessible at your ALIN instance URL after signing in.

On first launch, ALIN will walk you through initial setup: connecting your API keys (if self-hosted), choosing your default model, and optionally importing existing conversations. The Home Dashboard shows all six stations with live status previews — click any station card to enter that workspace.

Architecture Overview

ALIN uses a hub-and-spoke architecture. The Intelligence Hub (center) maintains shared state: conversations, the 8-layer memory system, user preferences, the self-model, and the Canon governance rules. Six specialized stations orbit this hub, each with its own UI and interaction model but sharing the same context layer.
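
As a rough picture, the shared context layer can be sketched as a single typed object that every station reads. The field and type names below are illustrative assumptions, not ALIN's actual schema.

```typescript
// Rough sketch of the shared context layer; names are illustrative assumptions.
interface HubContext {
  conversations: Conversation[];        // chat threads, visible to every station
  memory: MemoryEntry[];                // entries from the 8-layer memory system
  preferences: UserPreferences;         // user-level defaults and settings
  selfModel: Record<string, unknown>;   // Layer 7: ALIN's model of its own behavior
  canonRules: string[];                 // Canon governance rules applied everywhere
}

interface Conversation { id: string; title: string }
interface MemoryEntry { layer: number; content: string }
interface UserPreferences { defaultModel: string }
```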

On the backend, ALIN runs a Node.js/Express server with SQLite for persistence. All AI API calls route through server-side proxy endpoints (/api/chat/stream and /api/chat/continue) — API keys never touch the browser. The server handles authentication via JWT tokens, plan enforcement via middleware, and rate limiting per user tier.
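
The important property is that the browser only ever talks to ALIN's own server. A minimal sketch of what such a proxy route could look like in Express follows; the request body shape is an assumption, and the real routes also run the auth, plan, and rate-limit middleware described elsewhere in these docs.

```typescript
import express from "express";
import { Readable } from "node:stream";

const app = express();
app.use(express.json());

// Hedged sketch of the proxy pattern: the provider key stays in process.env
// on the server and is never sent to the browser.
app.post("/api/chat/stream", async (req, res) => {
  const upstream = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY ?? "", // never leaves the server
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({ ...req.body, stream: true }),
  });

  // Relay the provider's SSE stream back to the browser unchanged.
  res.status(upstream.status);
  res.setHeader("Content-Type", "text/event-stream");
  Readable.fromWeb(upstream.body as any).pipe(res);
});

app.listen(Number(process.env.PORT) || 3002);
```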

On the frontend, ALIN is built with React + TypeScript + Zustand for state management. The component architecture follows the station model: each station has its own view component, but shares common UI primitives (InputArea, MessageBubble, Sidebar, etc.) through the design system.
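
As an illustration of the station model in state management, a Zustand store might be shaped like the sketch below. The slice names are assumptions, not ALIN's actual state shape.

```typescript
import { create } from "zustand";

// Illustrative store following the station model; names are assumptions.
type Station = "chat" | "codeLab" | "tbwo" | "research" | "image" | "voice";

interface AppState {
  activeStation: Station;
  conversations: { id: string; title: string }[];
  setStation: (station: Station) => void;
  addConversation: (title: string) => void;
}

export const useAppStore = create<AppState>((set) => ({
  activeStation: "chat",
  conversations: [],
  setStation: (station) => set({ activeStation: station }),
  addConversation: (title) =>
    set((state) => ({
      conversations: [...state.conversations, { id: crypto.randomUUID(), title }],
    })),
}));
```

Each station view component reads only the slices it needs, while shared primitives like InputArea and MessageBubble stay station-agnostic.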

Stations

Chat Station

The primary interface for conversational AI. Supports multi-model streaming (Claude and OpenAI), file uploads with preview, code blocks with syntax highlighting, markdown rendering, message branching, conversation pinning, and response regeneration. Chat operates in two modes: Direct Mode (fast tool-assisted responses where the model chains tool calls freely) and Sprint Mode (full TBWO execution for complex tasks). The intent detector automatically classifies incoming messages, or you can override manually.
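
Conceptually, the mode decision is a small classifier sitting in front of the two execution paths. The sketch below uses assumed names and a deliberately naive heuristic; the real intent detector is more involved.

```typescript
// Illustrative sketch of mode routing; function names are assumptions.
type ChatMode = "direct" | "sprint";

function detectMode(message: string, userOverride?: ChatMode): ChatMode {
  if (userOverride) return userOverride; // manual override always wins
  const sprintSignals = ["build", "create a full", "multi-step", "project"];
  const looksComplex = sprintSignals.some((s) => message.toLowerCase().includes(s));
  return looksComplex ? "sprint" : "direct";
}

async function handleMessage(message: string, userOverride?: ChatMode) {
  const mode = detectMode(message, userOverride);
  return mode === "direct"
    ? runDirectMode(message)  // fast, tool-assisted streaming response
    : runSprintMode(message); // full TBWO plan-and-execute flow
}

// Placeholders for the two execution paths described above.
declare function runDirectMode(message: string): Promise<string>;
declare function runSprintMode(message: string): Promise<string>;
```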

Code Lab

A development environment with sandboxed code execution (Node.js, Python, shell), file system browsing, syntax highlighting, and AI-powered code generation. Code Lab integrates with the file system through the server's /api/files/* and /api/code/execute endpoints. Git operations are available through /api/git/execute. Desktop app only — requires local file system access.
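
For example, a client-side call to the execution endpoint might look like the following. The request body shape ({ language, code }) is an assumption for illustration; consult the actual API for field names.

```typescript
// Hypothetical request to the sandboxed execution endpoint.
async function runSnippet(token: string) {
  const res = await fetch("/api/code/execute", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`, // JWT from the auth flow
    },
    body: JSON.stringify({
      language: "python",
      code: "print(2 + 2)",
    }),
  });
  return res.json(); // expected to contain stdout, stderr, and exit status
}
```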

TBWO Command

The orchestration center for Time-Budgeted Work Orders. See the TBWO System section below for full documentation. Pro plan and above.

Research Hub

Deep web research powered by Brave Search (via server proxy at /api/search/brave) with DuckDuckGo fallback. The research station synthesizes results, tracks citations, evaluates source quality, and can generate structured intelligence reports from 20+ sources. Results are stored in the memory system for cross-session reference.
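
The fallback behavior can be pictured as a simple try-then-degrade wrapper on the server. The helper names below are hypothetical, not ALIN's actual functions.

```typescript
// Illustrative Brave-first, DuckDuckGo-fallback pattern.
interface SearchResult { title: string; url: string; snippet: string }

async function research(query: string): Promise<SearchResult[]> {
  try {
    return await searchBrave(query);   // primary: Brave Search API via proxy
  } catch (err) {
    console.warn("Brave search failed, falling back to DuckDuckGo", err);
    return searchDuckDuckGo(query);    // degraded but still useful
  }
}

// Hypothetical helpers standing in for the real provider integrations.
declare function searchBrave(query: string): Promise<SearchResult[]>;
declare function searchDuckDuckGo(query: string): Promise<SearchResult[]>;
```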

Image Studio

Visual creation with DALL-E 3 integration. Supports prompt refinement, style direction, image gallery management, and iterative generation. Generated images are stored as artifacts and can be referenced in other stations. Pro plan and above.

Voice Room

Hands-free interaction through Whisper speech-to-text and ElevenLabs text-to-speech. Supports emotion-aware voice synthesis with adaptive modulation based on conversation context. Voice commands can control any station. Desktop app recommended for best microphone access.

TBWO System

Time-Budgeted Work Orders are ALIN's framework for autonomous task execution. A TBWO consists of the following (a type sketch follows the list):

Objective: What you want built or accomplished, described in natural language.

Time Budget: How long ALIN should spend. This directly controls complexity — a 10-minute budget produces a quick result, while a 45-minute budget produces a comprehensive one with more phases, more polish iterations, and higher-quality output.

Quality Target: The quality level you expect (draft, standard, polished, production). This affects how many review passes and refinement cycles are included in the plan.

Plan: ALIN generates an execution plan with phases, tasks, role assignments, and time estimates. The plan is domain-aware — a website sprint follows a different phase structure than a research report or a codebase audit. You review and approve the plan before execution begins.

Agent Pods: Specialized AI agents assigned to specific roles (Design, Frontend, Backend, QA, Copy, Research, Orchestrator). Pods execute in parallel where possible, pass artifacts to each other through the artifact bus, and communicate through a pub/sub message system with priority levels and correlation IDs.

Checkpoints: At defined intervals, execution pauses for progress review. Checkpoints show what's been completed, what's next, and any decisions that were made autonomously. You can adjust the plan, add instructions, or abort.

Receipts: After completion, a detailed receipt documents every decision, assumption, file produced, and tool called, along with the time spent and a quality assessment. Receipts are the foundation of ALIN's trust model — you can always see exactly what happened and why.
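
Putting the pieces together, a work order can be sketched as a TypeScript type. Every field name below is an illustrative assumption rather than ALIN's actual schema.

```typescript
// Illustrative sketch of a Time-Budgeted Work Order; field names are assumptions.
interface TBWO {
  objective: string;                 // natural-language goal
  timeBudgetMinutes: number;         // drives depth, phase count, and polish
  qualityTarget: "draft" | "standard" | "polished" | "production";
  plan: Phase[];                     // reviewed and approved before execution
  pods: AgentPod[];                  // role-scoped agents working in parallel
  checkpoints: Checkpoint[];         // pause points for progress review
  receipt?: Receipt;                 // filled in after completion
}

type Role = "Design" | "Frontend" | "Backend" | "QA" | "Copy" | "Research" | "Orchestrator";

interface Phase { name: string; tasks: string[]; role: Role; estimateMinutes: number }
interface AgentPod { role: Role; status: "idle" | "running" | "done" }
interface Checkpoint { atMinute: number; completed: string[]; next: string[]; decisions: string[] }
interface Receipt {
  decisions: string[];
  assumptions: string[];
  filesProduced: string[];
  toolCalls: string[];
  minutesSpent: number;
  qualityScore: number;
}
```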

8-Layer Memory System

ALIN's memory is structured into 8 layers with different retention policies and purposes (a type sketch follows the layer descriptions):

Layer 0 — Immediate: Current session context. Volatile, cleared when the session ends. This is the conversation you're having right now.

Layer 1 — Short-Term: 24-hour working memory. Auto-expires. Stores recent decisions, tool results, and temporary notes that are useful for follow-up but don't need permanent storage.

Layer 2 — Working: Project-scoped memory. Persists for 7 days or until the project closes. Stores project-specific context like file structures, naming conventions, and in-progress decisions.

Layer 3 — Episodic: Permanent, timestamped experiences. Stores what happened in each session — not the full conversation, but structured summaries of actions taken, decisions made, and outcomes achieved.

Layer 4 — Semantic: Permanent factual knowledge. Stores learned facts about your codebase, your organization, technical details, and domain knowledge that ALIN has accumulated across sessions.

Layer 5 — Relational: User model and preference graph. Stores your communication style, technical skill level, preferred frameworks, formatting preferences, and interaction patterns.

Layer 6 — Notes: User-created explicit memories. Things you've told ALIN to remember permanently — API keys (encrypted), project specifications, team member names, recurring instructions.

Layer 7 — Self-Model: ALIN's model of its own behavior. Stores execution outcome patterns, tool reliability data, promoted correction patterns, and operational insights. This layer is what makes ALIN improve over time.
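
A compact way to picture the layer model is as a numbered enum plus a retention policy per entry. This is an illustrative sketch mirroring the descriptions above, not ALIN's storage schema.

```typescript
// Illustrative sketch of the 8-layer model; names follow the docs above.
enum MemoryLayer {
  Immediate = 0,  // current session, volatile
  ShortTerm = 1,  // 24-hour working memory, auto-expires
  Working = 2,    // project-scoped, ~7 days or until the project closes
  Episodic = 3,   // permanent, timestamped session summaries
  Semantic = 4,   // permanent factual knowledge
  Relational = 5, // user model and preference graph
  Notes = 6,      // explicit user-created memories
  SelfModel = 7,  // ALIN's model of its own behavior
}

interface MemoryEntry {
  layer: MemoryLayer;
  content: string;
  createdAt: Date;
  expiresAt?: Date; // undefined for the permanent layers (3 through 7)
}
```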

Self-Model and Training

ALIN has two training systems that can be independently enabled in Settings:

Personal Learning tracks your corrections and ALIN's execution outcomes. When you override ALIN's choices 3+ times in the same direction, the pattern is promoted to the self-model (Layer 7) and ALIN adjusts its behavior permanently. Tool reliability is tracked per-tool — if a tool fails frequently, ALIN learns to use alternatives or add error handling. Execution outcomes (quality scores, user edit counts, phase success rates) are aggregated by task type so ALIN knows which types of work it handles well and which need more caution.
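
The promotion rule can be sketched as a counter keyed by correction pattern. The threshold of three matches the description above; the function and field names are assumptions.

```typescript
// Illustrative sketch of correction-pattern promotion; names are assumptions.
interface Correction { pattern: string; direction: string }

const PROMOTION_THRESHOLD = 3;
const counts = new Map<string, number>();

function recordCorrection(c: Correction, promote: (c: Correction) => void) {
  const key = `${c.pattern}:${c.direction}`;
  const n = (counts.get(key) ?? 0) + 1;
  counts.set(key, n);
  // After 3+ overrides in the same direction, the pattern is written to the
  // self-model (Layer 7) and behavior changes permanently.
  if (n >= PROMOTION_THRESHOLD) promote(c);
}
```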

Community Learning (opt-in) anonymizes execution traces and uploads them to help all ALIN users. When enabled, ALIN strips personal content from execution data and shares the structural patterns — tool sequences, timing, retry counts, quality scores — so the planner can learn from the collective experience of all users running similar tasks. Users who contribute get access to the collective intelligence; users who opt out still get personal learning but not community insights.

Authentication and Plans

ALIN uses JWT-based authentication. Tokens expire after 7 days and are refreshed on each session start. Passwords are hashed with bcrypt (10 rounds). The first user to sign up on a new instance automatically receives the Pro plan.
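
In practice that combination usually looks like a bcrypt hash at rest and a signed JWT with a 7-day expiry. The sketch below uses the bcryptjs and jsonwebtoken packages as stand-ins; ALIN's actual implementation may use different libraries or payload fields.

```typescript
import bcrypt from "bcryptjs";
import jwt from "jsonwebtoken";

const SALT_ROUNDS = 10; // 10 rounds, matching the docs
const JWT_SECRET = process.env.JWT_SECRET ?? "change-me-in-production";

async function hashPassword(plain: string): Promise<string> {
  return bcrypt.hash(plain, SALT_ROUNDS);
}

async function login(userId: string, plain: string, storedHash: string) {
  const ok = await bcrypt.compare(plain, storedHash);
  if (!ok) throw new Error("invalid credentials");
  // Token expires after 7 days and is refreshed on each session start.
  return jwt.sign({ sub: userId }, JWT_SECRET, { expiresIn: "7d" });
}
```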

Plan enforcement happens at two levels: server-side middleware rejects unauthorized requests (model access, rate limits, feature gating), and client-side capability detection hides UI elements that aren't available on the user's plan. The server is always the authority — client-side gating is purely for UX.
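
A server-side gate of this kind is typically a small piece of Express middleware. The plan names and request shape below are assumptions for illustration.

```typescript
import type { Request, Response, NextFunction } from "express";

// Illustrative plan-gating middleware; plan names and req.user shape are assumptions.
type Plan = "free" | "pro" | "enterprise";
const PLAN_RANK: Record<Plan, number> = { free: 0, pro: 1, enterprise: 2 };

function requirePlan(minimum: Plan) {
  return (req: Request & { user?: { plan: Plan } }, res: Response, next: NextFunction) => {
    const plan = req.user?.plan ?? "free";
    if (PLAN_RANK[plan] < PLAN_RANK[minimum]) {
      // The server is the authority; the client only hides the UI.
      return res.status(403).json({ error: `Requires ${minimum} plan or above` });
    }
    next();
  };
}

// Example usage (hypothetical route): TBWO endpoints are Pro and above.
// app.post("/api/tbwo/create", requireAuth, requirePlan("pro"), handler);
```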

Configuration

Server configuration is handled through environment variables in .env:

ANTHROPIC_API_KEY — Your Anthropic API key (required for Claude models)
OPENAI_API_KEY — Your OpenAI API key (required for GPT models)
BRAVE_API_KEY — Your Brave Search API key (required for Research station)
JWT_SECRET — Secret key for signing JWT tokens (change in production)
PORT — Server port (default: 3002)
DOWNLOAD_URL_WIN / DOWNLOAD_URL_MAC / DOWNLOAD_URL_LINUX — Desktop app download URLs
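
For example, a minimal .env for a self-hosted instance might look like this (all values are placeholders):

```
ANTHROPIC_API_KEY=sk-ant-xxxxxxxx
OPENAI_API_KEY=sk-xxxxxxxx
BRAVE_API_KEY=BSAxxxxxxxx
JWT_SECRET=replace-with-a-long-random-string
PORT=3002
DOWNLOAD_URL_WIN=https://example.com/alin-setup.exe
DOWNLOAD_URL_MAC=https://example.com/alin.dmg
DOWNLOAD_URL_LINUX=https://example.com/alin.AppImage
```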