Apollo model family
Loom Labs provides the Apollo family of models tuned for different latency, cost, and capability trade-offs, as well as the Daedalus family optimized for code generation and reasoning. All Apollo models support a 128k token context window and are available in the Loom AI API as Apollo-1-2B, Apollo-1-4B, and Apollo-1-8B.
The Daedalus family (Daedalus-1-2B, Daedalus-1-8B) is purpose-built for code reasoning, program synthesis, and debugging workflows.
Apollo-1 2B
- Model ID: Apollo-1-2B
- Availability: Free (Beta)
- Context window: 128k tokens
- Recommended use-cases: classification, short summaries, simple chatbots, low-cost inference
Example
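A minimal sketch of a classification call against an OpenAI-style `/chat/completions` endpoint. The base URL, endpoint path, and `LOOM_API_KEY` environment variable below are placeholders, not confirmed values; check the Loom AI API reference for the exact endpoint and authentication scheme.

```python
import os
import requests

# Placeholder base URL and env var name; substitute the values from your Loom AI API settings.
BASE_URL = "https://api.loomlabs.example/v1"
API_KEY = os.environ["LOOM_API_KEY"]

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "Apollo-1-2B",
        "messages": [
            {"role": "system", "content": "Classify the sentiment of the user message as positive, negative, or neutral. Reply with one word."},
            {"role": "user", "content": "The checkout flow was fast and painless."},
        ],
        "max_tokens": 5,
        "temperature": 0,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])  # e.g. "positive"
```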
Apollo-1 4B
- Model ID: Apollo-1-4B
- Availability: Free (Beta)
- Context window: 128k tokens
- Recommended use-cases: content generation, multi-turn dialogue, question answering
Example
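A sketch of a multi-turn exchange using the OpenAI Python SDK pointed at a Loom endpoint, which is possible because the API returns OpenAI-like shapes (see the compatibility note below). The base URL and key variable are assumptions.

```python
import os
from openai import OpenAI

# The OpenAI SDK can target any OpenAI-compatible endpoint via base_url.
# The URL and env var below are placeholders, not confirmed values.
client = OpenAI(
    base_url="https://api.loomlabs.example/v1",
    api_key=os.environ["LOOM_API_KEY"],
)

messages = [{"role": "user", "content": "What is a context window?"}]
first = client.chat.completions.create(model="Apollo-1-4B", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# The follow-up turn reuses the accumulated conversation history.
messages.append({"role": "user", "content": "How does 128k tokens compare to a typical novel?"})
second = client.chat.completions.create(model="Apollo-1-4B", messages=messages)
print(second.choices[0].message.content)
```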
Apollo-1 8B
- Model ID: Apollo-1-8B
- Availability: Free (Beta)
- Context window: 128k tokens
- Recommended use-cases: advanced content generation, deep reasoning, long-form analysis
Example
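A sketch of a long-form analysis request, again assuming an OpenAI-compatible chat endpoint at a placeholder base URL. The 128k-token context window leaves room for large documents; the file name below is illustrative.

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.loomlabs.example/v1",  # placeholder endpoint
    api_key=os.environ["LOOM_API_KEY"],
)

# Illustrative input document; a long report fits comfortably in the 128k-token window.
report = open("quarterly_report.txt").read()

resp = client.chat.completions.create(
    model="Apollo-1-8B",
    messages=[
        {"role": "system", "content": "You are a careful analyst. Produce a structured summary with risks and open questions."},
        {"role": "user", "content": report},
    ],
    max_tokens=1500,
)
print(resp.choices[0].message.content)
```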
All Apollo models currently return OpenAI-compatible response shapes and work with common OpenAI-style SDKs. Choose a model based on your latency and cost trade-offs.
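Because the response shape is the same across the family, switching models is typically just a matter of changing the `model` string. A sketch (placeholder base URL and env var, as above):

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.loomlabs.example/v1",  # placeholder endpoint
    api_key=os.environ["LOOM_API_KEY"],
)

def ask(model: str, prompt: str) -> str:
    """Send the same prompt to any Apollo model; only the model ID changes."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

prompt = "Summarize the trade-offs between latency and model capability in two sentences."
for model in ("Apollo-1-2B", "Apollo-1-4B", "Apollo-1-8B"):
    print(model, "->", ask(model, prompt))
```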
Daedalus-1 2B
- Model ID: Daedalus-1-2B
- Availability: Free (Beta)
- Context window: 128k tokens (code-optimized)
- Recommended use-cases: code generation, bug fixing, algorithmic reasoning, unit-test synthesis
Example: Generate a Python function
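A sketch of a code-generation request, assuming the same OpenAI-compatible chat endpoint and placeholder base URL used in the Apollo examples.

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.loomlabs.example/v1",  # placeholder endpoint
    api_key=os.environ["LOOM_API_KEY"],
)

resp = client.chat.completions.create(
    model="Daedalus-1-2B",
    messages=[
        {"role": "system", "content": "Return only Python code, no prose."},
        {"role": "user", "content": "Write a function merge_intervals(intervals) that merges overlapping [start, end] pairs and returns the result sorted by start."},
    ],
    temperature=0,
)
print(resp.choices[0].message.content)
```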
Daedalus-1 8B
- Model ID: Daedalus-1-8B
- Availability: Free (Beta)
- Context window: 128k tokens (code-optimized)
- Recommended use-cases: large-scale code synthesis, complex debugging, instruction-following on codebases
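Example: Debug a failing function

A sketch of a debugging request, assuming the same OpenAI-compatible chat endpoint and placeholder base URL as the examples above; the buggy snippet is illustrative.

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.loomlabs.example/v1",  # placeholder endpoint
    api_key=os.environ["LOOM_API_KEY"],
)

# Illustrative buggy code: the average near the end of the list divides by the full
# window size even when the slice is shorter.
buggy_code = '''
def moving_average(xs, window):
    out = []
    for i in range(len(xs)):
        out.append(sum(xs[i:i + window]) / window)
    return out
'''

resp = client.chat.completions.create(
    model="Daedalus-1-8B",
    messages=[
        {"role": "system", "content": "You are a debugging assistant. Explain the bug, then return the corrected code."},
        {"role": "user", "content": f"This function returns wrong values near the end of the list:\n{buggy_code}"},
    ],
)
print(resp.choices[0].message.content)
```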
