AI Companion

Advanced topic: Real-time AI-generated musical accompaniment

Overview

The AI Companion listens to your playing and generates complementary musical parts in real time using machine learning models.

Features

  • Multi-track generation — A sequence-to-sequence model produces bass, chords, pads, lead, and drums
  • Context-aware — Adapts to your key, tempo, and playing style
  • Real-time — Low-latency inference using CoreML
  • Optional — Enable only when needed, never affects core routing

How It Works

  1. Listens to the incoming MIDI stream
  2. Analyzes the musical context (key, chords, rhythm)
  3. Generates complementary parts
  4. Outputs them to the designated destinations (see the sketch below)
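
The sketch below shows one plausible shape for these four stages in Swift. All type, method, and feature names here (NoteEvent, CompanionPipeline, "noteSequence") are illustrative assumptions, not the app's actual API:

    import CoreML

    /// A single incoming MIDI note event (illustrative type).
    struct NoteEvent {
        let pitch: UInt8      // MIDI note number, 0-127
        let velocity: UInt8   // 0 means note-off
        let timestamp: Double // seconds (unused in this sketch)
    }

    /// Hypothetical pipeline mirroring the four steps above.
    final class CompanionPipeline {
        private var recentEvents: [NoteEvent] = []
        private let model: MLModel

        init(model: MLModel) { self.model = model }

        // 1. Listen: buffer the incoming MIDI stream (last 256 events).
        func receive(_ event: NoteEvent) {
            recentEvents.append(event)
            if recentEvents.count > 256 { recentEvents.removeFirst() }
        }

        // 2. Analyze: a crude key estimate from a pitch-class histogram.
        //    (Chord and rhythm analysis are omitted here.)
        func estimateKey() -> Int {
            var histogram = [Int](repeating: 0, count: 12)
            for event in recentEvents where event.velocity > 0 {
                histogram[Int(event.pitch) % 12] += 1
            }
            return histogram.indices.max { histogram[$0] < histogram[$1] } ?? 0
        }

        // 3. Generate: run the CoreML model on the buffered sequence.
        //    The feature name "noteSequence" is an assumption.
        func generate() throws -> MLFeatureProvider {
            let sequence = try MLMultiArray(shape: [256], dataType: .float32)
            for (i, event) in recentEvents.enumerated() {
                sequence[i] = NSNumber(value: Float(event.pitch))
            }
            let input = try MLDictionaryFeatureProvider(
                dictionary: ["noteSequence": MLFeatureValue(multiArray: sequence)])
            return try model.prediction(from: input)
        }

        // 4. Output: decoding the prediction back into MIDI events and
        //    routing them to the chosen destinations is left to the host app.
    }

Threading and the decoding of model output back into MIDI notes are omitted for brevity.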

Enabling AI Companion

Settings → Advanced → AI Companion

  • Enable — Turn on AI processing
  • Model — Select trained model
  • Destinations — Route AI output
  • Responsiveness — Latency vs. quality trade-off (see the sketch below)
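
As a rough illustration, these options map onto a settings structure along the following lines; the names are hypothetical, not the app's real configuration API:

    // Illustrative only: a plausible shape for the settings above.
    struct AICompanionSettings {
        enum Responsiveness {
            case lowLatency   // smaller model, faster inference
            case balanced
            case highQuality  // larger model, higher latency
        }

        var isEnabled = false                 // Enable
        var modelName = "default"             // Model
        var destinations: [String] = []       // Destinations (output ports)
        var responsiveness: Responsiveness = .balanced
    }

In practice, Responsiveness selects among model sizes, trading the 10-50ms latency range quoted under Performance against output quality.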

Training (Advanced)

AI models are trained using PyTorch and converted to CoreML. See the project repository for technical details:

  • AI Design Documentation (in repository)
  • Training Schema Documentation (in repository)
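
Once converted, a model can be compiled and loaded with the standard CoreML APIs. A minimal Swift sketch, with a placeholder path:

    import CoreML

    // Compile the converted model (produces an .mlmodelc bundle), then load it.
    let sourceURL = URL(fileURLWithPath: "/path/to/Companion.mlmodel")
    let compiledURL = try MLModel.compileModel(at: sourceURL)
    let model = try MLModel(contentsOf: compiledURL)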

Performance

AI Companion runs on the Neural Engine (Apple Silicon) or the GPU:

  • Latency: 10-50ms depending on model size
  • CPU usage: Minimal (dedicated hardware acceleration)
  • Memory: ~200MB for models
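
Which hardware runs inference can be steered through CoreML's standard MLModelConfiguration. A small sketch, with a placeholder model path:

    import CoreML

    // Allow the Neural Engine, GPU, and CPU; CoreML picks the best available.
    let config = MLModelConfiguration()
    config.computeUnits = .all
    // config.computeUnits = .cpuAndGPU  // e.g. to bypass the Neural Engine

    let modelURL = URL(fileURLWithPath: "/path/to/Companion.mlmodelc")
    let model = try MLModel(contentsOf: modelURL, configuration: config)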

Built with ❤️ for musicians