The Training Data Problem: Why AI Gets Norah Jones Right but Fails on Nina Simone

I tested Moises AI stem separation on two piano-driven recordings: a 2016 Norah Jones studio session and a 1962 Nina Simone live performance captured in a single room. The contrast exposes a fundamental constraint of deep learning for audio: a model can only separate what it has been trained on. Contemporary studio recordings with standard instrumentation? Near-perfect isolation. Vintage live recordings from single-room acoustic spaces? Bass and piano merge into mush. This analysis examines how training-set bias (DSD100), artifact generation, and bottom-up learning shape performance: even sophisticated neural networks can't separate sources they've never experienced.
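A toy sketch of why bass and piano merge, assuming the common mask-based approach to stem separation (this is an illustration, not Moises AI's actual pipeline): separators typically predict a weight for each time-frequency bin, so when two sources dump energy into the same bins, as an upright bass and a piano's left hand do in a reverberant single-room recording, no mask can pull them apart.

```python
import numpy as np

# Hypothetical minimal example: two "instruments" playing the same low note.
# In a time-frequency (mask-based) separator, each bin gets one weight,
# so energy from both sources lands in a single indivisible bin.
sr = 8000                                  # sample rate (Hz)
t = np.arange(sr) / sr                     # one second of samples
bass = np.sin(2 * np.pi * 110 * t)         # bass playing A2 (110 Hz)
piano = np.sin(2 * np.pi * 110 * t + 0.7)  # piano striking the same A2
mix = bass + piano

spectrum = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(sr, 1 / sr)
peak = freqs[np.argmax(spectrum)]
print(f"dominant bin: {peak:.0f} Hz")      # both sources collapse into one bin
```

The mix's spectrum shows a single peak at 110 Hz; nothing in the time-frequency representation distinguishes the two contributors, which is the situation a model trained mostly on well-separated studio stems rarely encounters.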

Phil Conil - Berklee College of Music