AI note-taking apps are incredibly useful. They save hours of manual transcription, provide fair overviews of what was discussed, extract actionable insights, and benefit everyone in the meeting. But here's the problem: if you don't ask for consent first, you could be recording people unlawfully.
The California lawsuit against Otter.ai cuts straight to the issue: the company records everyone in meetings—including people who have never signed up for the service—without proper consent. Then it uses those voices to train its AI models. In states like California where two-party consent is required, this creates serious legal exposure.
It's not a technology problem; it's a human complacency problem. We ignore legal risks and give no thought to the implications of recording everything said in meetings.
Otter's defence is predictable: Users are responsible for getting consent. Their Terms of Service state that users "are solely responsible for providing any notices to, and consent from, individuals in connection with any recordings as required under applicable law."
The data training issue runs deeper than simple recording. Otter admits to using "de-identified audio recordings" to train its AI, but won't explain what de-identification actually means. Research shows that even sophisticated anonymisation often fails—voices are as unique as fingerprints. When companies claim data is "de-identified," they're often just making it slightly harder to connect back to individuals, not impossible.
Traditional call recording technology kept recordings with the person doing the recording. AI services change this completely—now the vendor has access to everything. Your confidential client calls, sensitive business discussions, and private conversations become training data for someone else's AI model. Here's the uncomfortable truth: do any of us non-technical people truly know where our data is stored when we use cloud systems daily? Or who potentially has access to it? We click "agree" on terms we don't read for services we don't fully understand.
For lawyers, doctors, and anyone else handling sensitive information, these apps must surely be a non-starter.
While cloud-based AI services create these privacy nightmares, a different approach is gaining ground: running AI models entirely on your own hardware. No data leaves your building. No third-party training. No vendor access to your conversations.
Local AI deployment solves the core problems exposed by the Otter lawsuit. Your data stays on your servers. You control who has access. You decide how it's used. The trade-off is complexity and cost, but for organisations handling sensitive information, that trade-off increasingly makes sense.
The technical barriers are disappearing fast. Tools like LM Studio and AnythingLLM have turned local AI deployment from a specialised technical project into something most IT departments can handle.
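To give a sense of how simple the tooling has become, here is a minimal sketch of querying a locally hosted model. It assumes LM Studio's built-in server is running with its default OpenAI-compatible endpoint on port 1234 and that a suitable model has already been downloaded; the model identifier, port, and file name are assumptions you would adjust to your own setup.

```python
# Minimal sketch: summarising a meeting transcript with a locally hosted model.
# Assumes LM Studio (or any OpenAI-compatible local server) is listening on
# localhost:1234 and a model has already been downloaded through its UI.
from openai import OpenAI

# Point the standard OpenAI client at the local server; no data leaves the machine.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed-locally")

transcript = open("meeting_transcript.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="llama-3-8b-instruct",  # assumed identifier; use whatever your server reports
    messages=[
        {"role": "system", "content": "Summarise meetings into decisions and action items."},
        {"role": "user", "content": transcript},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```

Because only the base URL changes, the same few lines can later target a cloud service when a document isn't sensitive, which is what makes the hybrid approach discussed further down straightforward.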
The models themselves have reached serious capability levels. Meta's LLaMA 3 comes in 8 billion and 70 billion parameter versions that can handle complex document analysis and reasoning. These aren't toy models—they're production-ready AI that can run on hardware you own.
Google's Gemma 2 and Mistral's Mixtral 8x22B show how major players are embracing open-source distribution. Mixtral's Mixture-of-Experts architecture is particularly clever: it activates only 39 billion of its 141 billion parameters per token, delivering large-model quality without the full inference cost of a dense model that size. Even OpenAI has given in and joined the open-source family with gpt-oss.
Context handling has improved dramatically—some models now support up to 128,000 tokens, enabling the extended document analysis that enterprise users actually need. This isn't academic; it's practical AI that could partially replace cloud services for many use cases.
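To make that concrete, here is a rough sketch of checking whether a document fits inside a 128,000-token window before sending it to a local model in one go. The four-characters-per-token ratio is a crude English-text rule of thumb, not the tokenizer any particular model actually uses.

```python
# Rough sketch: will this document fit in a 128k-token context window?
# The 4-characters-per-token ratio is a crude heuristic for English text,
# not the real tokenizer of any specific model.
CONTEXT_WINDOW = 128_000
CHARS_PER_TOKEN = 4  # rule-of-thumb estimate

def fits_in_context(text: str, reserve_for_reply: int = 2_000) -> bool:
    """Estimate token count and leave headroom for the model's answer."""
    estimated_tokens = len(text) // CHARS_PER_TOKEN
    return estimated_tokens + reserve_for_reply <= CONTEXT_WINDOW

with open("contract.txt", encoding="utf-8") as f:
    document = f.read()

if fits_in_context(document):
    print("Send in one request.")
else:
    print("Split the document and analyse it in sections.")
```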
Local deployment isn't free. You need serious hardware—powerful GPUs, substantial RAM, fast storage. A capable setup might cost £10,000-£50,000 upfront, plus £40-£160 monthly in electricity. For smaller organisations, this creates an immediate barrier to entry.
Then there's maintenance. Models consume tens of gigabytes of storage. Updates require technical knowledge. Performance optimisation isn't automatic. You're trading cloud convenience for data control—a worthwhile trade-off for many organisations, but not a trivial one.
The sensible approach might be hybrid: use local AI for sensitive contexts where data control matters most, and cloud services for everything else. Match your privacy requirements to your technology choices rather than picking one solution for everything. Unification is not always the right path.
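As an illustration of that matching, here is a hedged sketch of a router that sends anything flagged sensitive to a local endpoint and everything else to a cloud provider. The sensitivity flag, endpoint URLs, and model names are illustrative assumptions, not a recommended configuration.

```python
# Sketch of a hybrid router: sensitive material stays on-premises, the rest may
# go to a cloud provider. Endpoints, model names and the sensitivity flag are
# assumptions for illustration only.
import os
from openai import OpenAI

LOCAL = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed-locally")
CLOUD = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # hypothetical cloud account

def summarise(text: str, sensitive: bool) -> str:
    """Route the request based on how confidential the material is."""
    client = LOCAL if sensitive else CLOUD
    model = "llama-3-8b-instruct" if sensitive else "gpt-4o-mini"  # assumed names
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"Summarise:\n\n{text}"}],
    )
    return response.choices[0].message.content

# Example: a privileged client call stays local; routine stand-up notes may not need to.
print(summarise("Privileged discussion with client...", sensitive=True))
print(summarise("Weekly stand-up notes...", sensitive=False))
```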
The Otter lawsuit won't kill AI note-taking, but it may force the industry to confront its privacy problems. Companies that have been ignoring consent requirements and using customer data without clear permission are about to face expensive legal reality. We can't retreat to a Luddite past, but individuals and companies have to be aware of the potential risks of trusting technology completely. Foisting unwanted recording on people, which some claim amounts to wiretapping, is not the way forward.
If you're using AI note-taking tools now, audit how they handle data. Read the privacy policies. Understand what training they're doing with your recordings.
The local AI option is becoming practical for organisations with serious privacy requirements and technical capability. It's not right for everyone, but it's a working alternative to sending your data to someone else's servers.
Legal accountability is slowly catching up with technological capability. But don't blame the tool—AI note-taking apps are genuinely helpful when used properly. The problem is human complacency: ignoring legal requirements, not asking for consent, and failing to consider the implications of recording every word in sensitive conversations.