Guides
Step-by-step guides for installing, configuring, and extending SYNTHOS.
Installation
Prerequisites
Python ≥ 3.9, pip, and git. Optionally: an Ollama / llama.cpp / vLLM server for model evaluation.
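To confirm that the interpreter on your PATH meets the version floor, a quick stdlib-only check (no SYNTHOS imports needed):

```python
import sys

# SYNTHOS requires Python 3.9 or newer.
if sys.version_info < (3, 9):
    raise SystemExit(f"Python 3.9+ required, found {sys.version.split()[0]}")
print("Python version OK:", sys.version.split()[0])
```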
One-liner install (recommended)
# macOS / Linux / WSL — one command does everything
curl -fsSL https://synthos.dev/install.sh | bash

# Or with wget
wget -qO- https://synthos.dev/install.sh | bash
Alternative: pip install
pip install synthos
Alternative: from source
git clone https://github.com/synthos-ai/synthos
cd synthos
pip install -e ".[dev]"
Verify
synthos version   # → synthos 3.0.0
synthos status    # pipeline health-check
synthos --help    # full CLI reference
First run
On first run, SYNTHOS auto-creates ~/.config/synthos/config.yaml with sensible defaults (the install script also creates it for you). Edit it with `synthos config --edit`.
Configuration
Config location
~/.config/synthos/config.yaml — created automatically on first run.
Edit interactively
synthos config --edit # opens in $EDITOR
Key settings
verbose: 1              # 0=quiet, 1=normal, 2=verbose
models:
  - name: llama3.1-8b
    provider: ollama
    base_url: http://localhost:11434
    model_id: llama3.1
    ctx_len: 8192
crypto:
  mode: cbc             # ecb | cbc | ctr
  authenticate: true
lattice_rows: 4
lattice_cols: 4
attention_heads: 8
context_window: 512

Multiple models
Add as many model entries as you want under the `models` list. Each will be benchmarked by `synthos eval`.
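If you script against the config file, note that it is ordinary YAML. A minimal sketch of reading the `models` list, assuming PyYAML is installed (SYNTHOS reads the file internally; this is only for your own tooling):

```python
import yaml

# Parse a config fragment shaped like the one above (normally you would
# read ~/.config/synthos/config.yaml from disk instead of a string).
raw = """
models:
  - name: llama3.1-8b
    provider: ollama
    model_id: llama3.1
  - name: phi3-mini
    provider: ollama
    model_id: phi3
"""
cfg = yaml.safe_load(raw)
names = [m["name"] for m in cfg["models"]]
print(names)  # → ['llama3.1-8b', 'phi3-mini']
```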
Using the TUI
Launch
synthos # or: synthos tui
Tabs
Chat — talk to the assistant, with a layer-trace sidebar.
Settings — adjust max_words, search queries, max sources.
Eval — tok/s, latency, mode, word count per run, plus aggregate stats.
Status — live subsystem dashboard.
Crypto — STC cipher overview.
Keybindings
Ctrl+Q  Quit
Ctrl+T  Cycle tabs
Ctrl+B  Cycle verbosity (quiet → normal → verbose)
Ctrl+S  Jump to Status tab
Ctrl+E  Jump to Eval tab
Ctrl+Y  Copy last response
Chat commands
/help             Show all commands
/status           System status
/eval             Show eval panel
/settings         Show settings panel
/set words 2000   Set response length
/set queries 6    Set search depth
/set sources 3    Set max sources
/copy             Copy last response
/clear            Clear chat log
Headless alternative
If you don't want the Textual TUI, use `synthos chat` for a plain Rich REPL with the same commands.
Model Evaluation
Prerequisites
A running inference server: Ollama, llama.cpp, vLLM, LM Studio, or any OpenAI-compatible endpoint.
Configure endpoints
models:
  - name: llama3.1-8b
    provider: ollama
    base_url: http://localhost:11434
    model_id: llama3.1
    params_b: 8.0
    quant: Q4_K_M
  - name: phi3-mini
    provider: ollama
    base_url: http://localhost:11434
    model_id: phi3
    params_b: 3.8

Run benchmarks
synthos eval                    # all suites
synthos eval -s pattern_match   # specific suite
synthos eval --json             # machine-readable
Built-in suites
pattern_match — 10 prompts testing regex-exact accuracy.
coherence — 3 prompts testing variable binding and echo.
speed_stress — 1 long-form generation for throughput.
Scoring formula
Composite = 0.35·quality + 0.30·speed + 0.20·coherence + 0.15·cost_efficiency
Quality = 0.7·exact_match_rate + 0.3·BLEU-1
Speed = min(TPS / 100, 1.0)
Cost efficiency = 1 / (1 + cost_per_Mtok)
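The formula above can be sketched in plain Python; the function below is an illustrative re-implementation for sanity-checking scores, not the code SYNTHOS runs internally:

```python
def composite_score(exact_match_rate, bleu1, tps, coherence, cost_per_mtok):
    """Illustrative re-implementation of the eval scoring formula."""
    quality = 0.7 * exact_match_rate + 0.3 * bleu1
    speed = min(tps / 100, 1.0)                # saturates at 100 tok/s
    cost_efficiency = 1 / (1 + cost_per_mtok)  # free local model → 1.0
    return (0.35 * quality + 0.30 * speed
            + 0.20 * coherence + 0.15 * cost_efficiency)

# Example: perfect accuracy, 50 tok/s, coherence 0.9, zero-cost local model
print(round(composite_score(1.0, 1.0, 50, 0.9, 0.0), 3))  # → 0.83
```

Note that speed contributes nothing beyond 100 tok/s, so past that point only quality, coherence, and cost move the composite.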
Extending SYNTHOS
Custom primitives
from synthos.layers.lpe import PrimitiveRegistry, Primitive, PrimitiveType
reg = PrimitiveRegistry()
reg.primitives["C001"] = Primitive(
id="C001", name="CUSTOM",
regex=r"[A-Z]{2,}",
geometric_form="◆",
description="Uppercase tokens",
primitive_type=PrimitiveType.CHARACTER_CLASS,
examples=["AI", "ML"],
composition_rules=[]
)

Custom lattice cells
from synthos.layers.gpl import GeometricParseLattice, LatticeCell, CellType
lattice = GeometricParseLattice()
lattice.add_cell(LatticeCell(
cell_id="ZZ99",
pattern=r"(?P<CUSTOM>\w+)",
cell_type=CellType.MATCH,
edges_out=["AA00"],
weight=25, row=3, col=3
))

Custom grammar rules
from synthos.layers.rge import RecursiveGrammarEngine, ProductionRule, ProductionType
rge = RecursiveGrammarEngine()
rge.add_production_rule(ProductionRule(
rule_name="MY_RULE",
ebnf_pattern="TOKEN+",
regex_pattern=r"(?P<MY_RULE>[A-Z]+)",
production_type=ProductionType.NONTERMINAL,
geometric_form="──[MY]──"
))

Custom attention heads
from synthos.layers.tam import TopologicalAttentionMesh, AttentionHead, ScopeType, AttentionType
tam = TopologicalAttentionMesh()
tam.add_attention_head(AttentionHead(
head_id=8,
query_pattern=r"[A-Z]+",
key_pattern=r"\w+",
value_pattern=r".+?",
scope=ScopeType.GLOBAL,
attention_type=AttentionType.PARTIAL_OVERLAP
))
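None of the SYNTHOS classes above are needed to sanity-check the regular expressions you register; the stdlib `re` module is enough. The patterns here are copied from the examples above:

```python
import re

# Primitive C001: runs of two or more uppercase letters
assert re.findall(r"[A-Z]{2,}", "AI beats ML at NLP") == ["AI", "ML", "NLP"]

# Grammar rule MY_RULE: named capture group is retrievable by name
m = re.search(r"(?P<MY_RULE>[A-Z]+)", "run FAST now")
assert m.group("MY_RULE") == "FAST"

# Attention head query pattern: anchored match stops at the first lowercase
assert re.match(r"[A-Z]+", "ABCdef").group() == "ABC"

print("all pattern checks passed")
```

Testing patterns this way before registering them catches most mistakes (greedy vs. non-greedy, missing anchors) without running the full pipeline.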