Introduction

What is AfferLab?

AfferLab is a local-first AI chat client where conversation behavior and context management are programmable.

A different way to build AI conversations.

Most AI applications treat conversations as fixed systems. You can choose a model, but the application decides how prompts are built, how context is managed, and how tools are used.

AfferLab takes a different approach.

The platform provides the infrastructure — models, storage, and tools — while strategies define how conversations work.


Conversations Are Context Systems

Large language models are fundamentally context-limited systems: every response depends on what information is placed into the prompt. This means every AI application must constantly answer questions like:

  • What conversation history should be included?
  • What information should be dropped to fit the token limit?
  • When should external knowledge be retrieved?

Most applications hide this logic deep inside the product.

AfferLab exposes it.
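One way to picture the history-selection question is a small function that keeps only the most recent messages fitting within a token budget. This is an illustrative sketch with a crude token estimate, not AfferLab's actual implementation:

```typescript
type Message = { role: "user" | "assistant"; content: string };

// Rough token estimate (~4 characters per token), for illustration only.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Walk backwards from the newest message, keeping what fits in the budget.
function selectHistory(history: Message[], budget: number): Message[] {
  const selected: Message[] = [];
  let used = 0;
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(history[i].content);
    if (used + cost > budget) break;
    selected.unshift(history[i]);
    used += cost;
  }
  return selected;
}
```

Trade-offs like this (recency-first truncation versus summarization versus retrieval) are exactly the decisions most applications bury in product code.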


Programmable Chat Flow

In AfferLab, the behavior of a conversation is defined by a strategy.

A strategy is a small TypeScript module that controls how the system interacts with the model.

Strategies define the flow of a conversation, including:

  • how prompts are constructed
  • which context is included
  • how history is selected
  • when tools are used
  • when memories are written or retrieved
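As a rough sketch of the idea (the interface and property names below are assumptions for illustration, not AfferLab's published strategy API), a minimal strategy module might look like this:

```typescript
type Message = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical shape of the per-turn context handed to a strategy.
interface TurnContext {
  history: Message[];
  userInput: string;
}

// Hypothetical strategy contract: given a turn, produce the prompt.
interface Strategy {
  buildPrompt(ctx: TurnContext): Message[];
}

// A minimal strategy: fixed system prompt, naive recent-history window,
// then the new user turn.
const simpleStrategy: Strategy = {
  buildPrompt(ctx) {
    return [
      { role: "system", content: "You are a helpful assistant." },
      ...ctx.history.slice(-10),
      { role: "user", content: ctx.userInput },
    ];
  },
};
```

A more sophisticated strategy could swap the `slice(-10)` window for summarization or semantic retrieval without the host application changing at all.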

Because strategies run inside an isolated engine, they can extend the system without modifying the core application.


Local-First by Design

AfferLab is designed around a local-first architecture.

Conversation state, settings, strategies, and memory metadata are stored locally. Attachments are also managed through local storage paths and local ingest flows.

This keeps the system fast and private while still allowing cloud models when needed.

The platform handles the heavy infrastructure work. Strategies focus only on defining conversation behavior.


System Architecture

AfferLab is organized into three layers.

Host Platform

Handles the UI, database, model APIs, and system lifecycle.

Strategy Engine

Runs programmable strategies inside isolated worker threads.

Memory & Ingest System

Manages document ingestion, vector indexing, and semantic search.
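The three layers could be pictured as interfaces. The names below are illustrative assumptions, not taken from AfferLab's codebase, and the in-memory class is a toy stand-in that treats the ingested path string as the document itself:

```typescript
// Host platform: UI, database, model APIs, lifecycle.
interface HostPlatform {
  sendToModel(prompt: string): Promise<string>;
  save(key: string, value: unknown): void;
}

// Strategy engine: runs strategies in isolated worker threads.
interface StrategyEngine {
  run(strategyId: string, input: string): Promise<string>;
}

// Memory & ingest system: ingestion, indexing, semantic search.
interface MemorySystem {
  ingest(documentPath: string): Promise<void>;
  search(query: string, k: number): Promise<string[]>;
}

// Toy stand-in: substring match instead of vector search, for illustration.
class InMemoryMemory implements MemorySystem {
  private docs: string[] = [];
  async ingest(documentPath: string): Promise<void> {
    this.docs.push(documentPath);
  }
  async search(query: string, k: number): Promise<string[]> {
    return this.docs.filter((d) => d.includes(query)).slice(0, k);
  }
}
```

Separating the layers this way is what lets strategies extend the system without touching the host platform.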
