
Your Voice Is a Biomarker

5 min read

Last week, MIT Technology Review ran a story about Patrick Darling, a 32-year-old musician with ALS who lost his ability to sing — and got it back through an AI voice clone trained on old recordings. It's a beautiful, emotional story. But it buries the more important one.

Before Darling lost his voice entirely, it changed. His bandmates noticed something was off years before his diagnosis. His speech slowed. His articulation shifted. The music was still there, but the instrument was quietly breaking down.

That's not a metaphor. That's a biomarker.

What's Actually in Your Voice

Your voice carries information that goes far beyond the words you're saying. Neurologists have known this for decades — they can identify Parkinson's disease from speech patterns years before other symptoms appear. But now AI can detect patterns in the acoustic signal that even trained specialists miss.

Consider what happens when you speak (a rough extraction sketch follows this list):

  • Fundamental frequency and jitter: How your vocal cords vibrate reveals information about muscle control, which reflects both neurological function and emotional state
  • Articulation patterns: The precise timing of consonants and vowels depends on complex coordination between your brain, diaphragm, tongue, and lips
  • Prosody: The rhythm and melody of speech — affected by both cognitive load and neurological conditions
  • Breathing patterns: Captured in the subtle pauses and volume variations that most people never notice
  • Voice quality: The harmonic structure that makes your voice uniquely yours — and changes with age, health, and mental state
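
To make this concrete, here is a rough sketch of how a few of these signals can be pulled out of a recording using the open-source librosa library. The thresholds, the jitter approximation, and the file name are illustrative placeholders, not a validated clinical pipeline.

    # Rough sketch: extract a handful of the acoustic features listed above
    # from a short speech recording. Thresholds and the WAV path are
    # illustrative placeholders, not a validated clinical pipeline.
    import numpy as np
    import librosa

    def rough_voice_features(wav_path: str) -> dict:
        y, sr = librosa.load(wav_path, sr=16000, mono=True)

        # Fundamental frequency (F0) per frame, via the pYIN pitch tracker.
        f0, voiced, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
        f0_voiced = f0[voiced & ~np.isnan(f0)]

        # Frame-level "jitter" proxy: relative variation of consecutive pitch
        # periods. Real jitter is measured cycle by cycle (e.g. in Praat);
        # this frame-based version is only a rough stand-in.
        periods = 1.0 / f0_voiced
        jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)

        # Energy envelope: low-energy frames are a crude proxy for pauses,
        # which relate to breathing and prosodic timing.
        rms = librosa.feature.rms(y=y)[0]
        pause_ratio = float(np.mean(rms < 0.05 * rms.max()))

        return {
            "f0_mean_hz": float(np.mean(f0_voiced)),
            "f0_std_hz": float(np.std(f0_voiced)),
            "jitter_proxy": float(jitter),
            "pause_ratio": pause_ratio,
        }

    print(rough_voice_features("checkin.wav"))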

Each of these acoustic features correlates with different aspects of health. Depression changes prosody. Anxiety affects breathing patterns. Cognitive decline shows up in semantic fluency and word-finding delays. Neurological conditions like Huntington's or ALS alter motor control in ways that appear first in speech.

This isn't theoretical. Studies have shown that AI models can detect early-stage Alzheimer's from brief speech samples with 78% accuracy. They can identify depression from acoustic features alone — often before patients report symptoms to their doctors. They can predict Parkinson's disease progression by analyzing subtle changes in voice quality over time.

The Signal You Didn't Know You Were Broadcasting

What's remarkable is how much of this signal is unconscious. You don't decide to have vocal jitter when you're anxious. You don't choose to alter your fundamental frequency when you're depressed. These changes happen at the intersection of your neurology, physiology, and psychology — and they're encoded in every word you speak.

This creates both an opportunity and a privacy challenge. Your voice is simultaneously the most natural interface for communicating with machines and a rich source of deeply personal health information.

At Adalyon, where I serve as CTO, we're working on the opportunity side: using speech-based digital biomarkers to help researchers run better clinical trials. Instead of relying only on infrequent clinic visits and subjective questionnaires, we can continuously monitor participants' wellbeing through the speech they produce during regular check-ins.

The technology works by extracting hundreds of acoustic and linguistic features from short speech samples, then using machine learning to map those features to validated clinical assessments. When someone speaks for 60 seconds about their day, our models can estimate their current level of anxiety, depression, cognitive function, and other wellbeing measures — often more reliably than traditional pen-and-paper tests.
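
To make that pipeline concrete, here is a toy sketch of the second step: mapping a table of per-sample features to a validated score, with a PHQ-9 depression total as the example target. The dataset, feature names, and model choice are hypothetical illustrations of the general approach, not Adalyon's actual models.

    # Toy sketch: learn a mapping from speech features to a clinical score.
    # Each row of the (hypothetical) CSV is one 60-second check-in with its
    # acoustic features and the questionnaire score collected alongside it.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("checkins_with_phq9.csv")  # hypothetical dataset
    feature_cols = ["f0_mean_hz", "f0_std_hz", "jitter_proxy", "pause_ratio"]
    X, y = df[feature_cols], df["phq9_total"]

    model = GradientBoostingRegressor(random_state=0)

    # Cross-validated error gives a first sense of how closely 60 seconds
    # of speech tracks the questionnaire score.
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"cross-validated MAE: {mae:.2f} PHQ-9 points")

    model.fit(X, y)                    # refit on all samples
    print(model.predict(X.iloc[[0]]))  # estimate a score for a new check-in

In practice the feature set is far larger and the targets span multiple validated instruments, but the shape of the problem is the same: supervised learning from speech-derived features to clinical measurements.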

But this same capability raises profound questions. If your voice reveals your mental health status, should employers be allowed to monitor it? If AI can detect early signs of neurological decline, who gets access to that information? If your phone can hear depression in your voice, should it alert your doctor — or your insurance company?

The Intimacy of Speech

There's something uniquely personal about the human voice. Unlike a blood test or an MRI scan, speech is something we share freely, constantly, unconsciously. It's the most natural thing we do — and now it's become one of the most revealing.

This intimacy is what makes speech biomarkers so powerful and so concerning. The same qualities that make voice-based health monitoring scalable and unobtrusive also make it invasive. Every phone call, every video meeting, every conversation with a smart speaker becomes a potential health screening.

The technical barriers to widespread voice-based health monitoring are dropping rapidly. Modern smartphones have the computational power to run sophisticated acoustic analysis models locally. Cloud-based APIs make it trivial to add speech biomarker extraction to any application. The bottleneck is no longer technical — it's ethical and regulatory.
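
For a sense of what "runs locally" looks like in code, here is a minimal sketch of on-device inference with ONNX Runtime, which ships builds for both desktop and mobile, assuming a small acoustic model has already been exported to ONNX. The model file, its input, and the four-feature vector are hypothetical.

    # Minimal sketch: score a feature vector with a locally stored model
    # using ONNX Runtime. The model file and its expected input shape are
    # hypothetical; no audio leaves the device in this setup.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("voice_wellbeing.onnx",
                                   providers=["CPUExecutionProvider"])

    # One sample of the (hypothetical) features computed on-device.
    features = np.array([[180.0, 22.5, 0.012, 0.31]], dtype=np.float32)

    input_name = session.get_inputs()[0].name
    outputs = session.run(None, {input_name: features})
    print("estimated wellbeing score:", outputs[0])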

What This Means for You

We're entering a world where your voice will reveal more about your health than you intend to share, whether you want it to or not. The question is whether we'll build systems that respect the intimacy of that information.

The optimistic scenario looks like this: speech biomarkers become a routine part of healthcare, helping doctors detect problems early and monitor treatments more precisely. AI assistants become genuinely helpful by understanding when you're stressed or tired. Clinical research accelerates because we can measure outcomes continuously instead of only during clinic visits.

The pessimistic scenario looks different: employers screen job candidates by analyzing their speech patterns during interviews. Insurance companies adjust premiums based on vocal biomarkers. Mental health information leaks through every phone call. The most natural human interface becomes a surveillance tool.

Which future we get depends on the choices we make now about privacy, consent, and the governance of AI systems that can hear more than we intended to say.


Riko Nyberg is CTO at Adalyon, building speech-based digital biomarkers for CNS disorders, and a PhD researcher at Aalto University studying how large language models measure human wellbeing.