AI Companion
Advanced topic: Real-time AI-generated musical accompaniment
Overview
The AI Companion listens to your playing and generates complementary musical parts in real time using machine learning models.
Features
- Multi-track — A sequence-to-sequence model generates bass, chords, pads, lead, and drums
- Context-aware — Adapts to your key, tempo, and playing style
- Real-time — Low-latency inference using CoreML
- Optional — Enable it only when needed; it never affects core routing
How It Works
1. The AI Companion listens to the incoming MIDI stream
2. It analyzes the musical context (key, chords, rhythm)
3. It generates complementary parts
4. It routes the output to the designated destinations (sketched below)
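In outline, the loop behaves like the sketch below. Every name in it (MidiEvent, ContextAnalyzer, model.generate, router.send) is a hypothetical stand-in for illustration, not the app's actual API; in the app itself this work happens inside the CoreML inference path.

```python
from dataclasses import dataclass

@dataclass
class MidiEvent:
    note: int       # MIDI note number (0-127)
    velocity: int   # 0 means note-off
    time: float     # seconds since the stream started

class ContextAnalyzer:
    """Keeps a sliding window of recent events and derives simple features."""
    def __init__(self, window: float = 4.0):
        self.window = window
        self.events: list[MidiEvent] = []

    def update(self, event: MidiEvent) -> None:
        self.events.append(event)
        # Drop events that have fallen out of the analysis window.
        cutoff = event.time - self.window
        self.events = [e for e in self.events if e.time >= cutoff]

    def features(self) -> list[int]:
        # Placeholder: a real analyzer would estimate key, chords, and
        # rhythm; here we just collect the currently sounding pitch classes.
        return sorted({e.note % 12 for e in self.events if e.velocity > 0})

def run_companion(stream, model, router) -> None:
    """The four steps above: listen, analyze, generate, route."""
    analyzer = ContextAnalyzer()
    for event in stream:                              # 1. listen
        analyzer.update(event)                        # 2. analyze
        parts = model.generate(analyzer.features())   # 3. generate
        for destination, notes in parts.items():      # 4. route
            router.send(destination, notes)
```

In this sketch, the sliding analysis window bounds how much history influences each prediction, which is one simple way to stay responsive when your key or rhythm changes.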
Enabling AI Companion
Settings → Advanced → AI Companion
- Enable — Turn on AI processing
- Model — Select the trained model to use
- Destinations — Choose where the AI output is routed
- Responsiveness — Trade off latency against quality
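Taken together, these options map onto a state shape like the following sketch. The type names, keys, and defaults are illustrative assumptions, not the app's actual preference schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class Responsiveness(Enum):
    LOW_LATENCY = "low_latency"     # smaller model, faster inference
    BALANCED = "balanced"
    HIGH_QUALITY = "high_quality"   # larger model, higher latency

@dataclass
class AICompanionSettings:
    enabled: bool = False                                   # Enable
    model_name: str = "default"                             # Model
    destinations: list[str] = field(default_factory=list)   # Destinations
    responsiveness: Responsiveness = Responsiveness.BALANCED
```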
Training (Advanced)
AI models are trained using PyTorch and converted to CoreML. See the project repository for technical details:
- AI Design Documentation (in repository)
- Training Schema Documentation (in repository)
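As a rough illustration of that pipeline, the standard coremltools route is to trace the PyTorch model with TorchScript and convert the traced graph. The toy architecture, input shape, and output file name below are placeholders; the repository documents describe the real models and schema.

```python
import torch
import coremltools as ct

class TinyCompanion(torch.nn.Module):
    """Toy stand-in for the real sequence-to-sequence model."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(128, 128)

    def forward(self, x):
        return self.net(x)

model = TinyCompanion().eval()
example = torch.zeros(1, 128)
traced = torch.jit.trace(model, example)   # TorchScript graph for conversion

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="context", shape=example.shape)],
    convert_to="mlprogram",                # modern CoreML model format
)
mlmodel.save("AICompanion.mlpackage")      # hypothetical output name
```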
Performance
The AI Companion runs on the Neural Engine (Apple Silicon) or the GPU:
- Latency: 10–50 ms, depending on model size
- CPU usage: Minimal (inference runs on dedicated hardware)
- Memory: ~200 MB for models
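Which of these units actually runs the model is decided at load time through Core ML's compute-unit setting (MLModelConfiguration.computeUnits in Swift). The sketch below shows the equivalent flag in coremltools; the file path and input name are hypothetical and assume the conversion example above.

```python
import numpy as np
import coremltools as ct

# Restrict inference to the CPU and Neural Engine.
model = ct.models.MLModel(
    "AICompanion.mlpackage",                  # hypothetical path
    compute_units=ct.ComputeUnit.CPU_AND_NE,
)

# Shape must match the converted model's "context" input.
prediction = model.predict({"context": np.zeros((1, 128), dtype=np.float32)})
```

Passing ct.ComputeUnit.ALL instead lets Core ML choose among the CPU, GPU, and Neural Engine automatically, which matches the behavior described above.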
Next Steps
- NeuroScript routing model — Integrate AI signals without impacting routing
- Performance Optimization — Minimize AI latency
