Last November Las Vegas hosted HLTH, the largest health innovation conference in the U.S. I live here, so no flight required, just a short drive to the Venetian. Over four days I sat through panels, cornered founders in hallways, and collected a prize on behalf of Parallel Learning: the Mental & Behavioral Health Rising Star award at the Digital Health Awards 2025. I’d been sharing observations with my team each evening, and what follows is that thread cleaned up. Five dispatches on what AI is doing to healthcare, and where it’s falling short.

During a session titled “Authentic Connections, Artificial World,” a panelist quoted Dara Treseder, Chief Marketing Officer at Autodesk: “AI is raising the floor, but it’s actually human ingenuity that’s going to raise the ceiling.” HLTH this year was anchored by two themes, AI and Women’s Health, and this line captured the undercurrent. AI keeps lifting baselines: faster drafts, cleaner templates, sharper diagnostic tools. Creative leaps, novel care models, clinical judgment calls? Still ours.

The gap between AI enthusiasm and AI results remains enormous. A report from MIT published in August 2025 found that 95% of generative AI pilots at companies fail to deliver measurable business impact. Healthcare is no different: plenty of pilots, very few scaled deployments that change patient care. The risk isn’t replacement. It’s wasting years competing with AI at the floor instead of building where only humans can reach.

One theme stuck with me: empathy as a design principle. It traced back to Geoffrey Hinton, one of the so-called Godfathers of AI, and a framing he introduced at the Ai4 Conference earlier that summer: “We need AI mothers rather than AI assistants. An assistant is someone you can fire. You can’t fire your mother.”

Instead of designing AI with competitive logic, the kind that optimizes to win or control, Hinton argues we should embed what he called maternal instincts: systems that care, protect, and preserve. Humans are the child; AI is the mother. It’s the only model we have, he noted, of a more intelligent thing being controlled by a less intelligent one. The idea reframes AI safety from control to care: rather than designing systems to obey us, design them to want our wellbeing.

I run an AI company that builds tools for children with learning differences, so the maternal framing isn’t abstract to me. Every product decision we make at Parallel Learning encodes an answer to Hinton’s question: is this tool an assistant you can fire, or something that genuinely protects the people using it? A therapy tool built to optimize engagement metrics is a different animal from one built to optimize patient outcomes. Same technology, different mothers.

Unlike the chatbots we interact with directly, ambient AI operates invisibly. It listens, observes, and adapts without being asked: systems working in the background, learning patterns, anticipating needs through context, voice, or environment. This is agentic AI in practice, autonomous agents that act proactively across systems, from homes to clinics to workplaces.

At Parallel Learning, the application is immediate. Picture speech therapy where AI listens during a session, drafts the clinical notes, and queues the next steps. The clinician stays present; the paperwork handles itself. We’re heading toward a world where AI isn’t an app you open but an atmosphere you inhabit. For companies that own their teleconferencing infrastructure end-to-end, as we do, the opportunity to build ambient clinical intelligence is already within reach.
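To make the shape of that loop concrete, here is a toy sketch in Python. It is not production code, and every name in it (`SessionSegment`, `draft_progress_note`, the crude speaker-based sorting) is invented for illustration; a real system would sit behind a HIPAA-compliant speech-to-text and language-model pipeline. The one detail that matters is the review flag: nothing the AI drafts becomes a record until a clinician signs off.

```python
from dataclasses import dataclass, field

@dataclass
class SessionSegment:
    speaker: str  # "clinician" or "client"
    text: str     # one transcribed utterance from the session audio

@dataclass
class DraftNote:
    observations: list[str] = field(default_factory=list)
    next_steps: list[str] = field(default_factory=list)
    requires_clinician_review: bool = True  # a licensed human always signs off

def draft_progress_note(segments: list[SessionSegment]) -> DraftNote:
    """Turn an ambient session transcript into a draft note for clinician review.

    A production system would call a compliant language model here; this
    stand-in just sorts utterances by speaker to show the overall shape.
    """
    note = DraftNote()
    for seg in segments:
        if seg.speaker == "clinician":
            note.next_steps.append(seg.text)    # e.g. assigned home practice
        else:
            note.observations.append(seg.text)  # raw client speech to summarize
    return note

if __name__ == "__main__":
    session = [
        SessionSegment("client", "wed wabbit... red rabbit... red rabbit"),
        SessionSegment("clinician", "Good self-correction. Next week: /r/ blends in phrases."),
    ]
    draft = draft_progress_note(session)
    assert draft.requires_clinician_review  # the clinician stays in the loop
    print(draft.observations, draft.next_steps)
```

The plumbing is fake; the architecture is the point. The AI produces a draft and a queue, and the human stays present in the session and in the loop.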
One of the more sobering moments came from Jon Cohen, CEO of Talkspace, on the rise of AI “friend” chatbots and their effects on teenagers. Talkspace was three to six months from releasing its own HIPAA-protected conversational agent, which gave his warnings particular weight.

The numbers are striking. A Common Sense Media survey from July 2025 found that 72% of U.S. teens have used AI companions, with over half using them regularly. One-third said they’ve discussed serious or important matters with bots instead of turning to peers or adults. Nearly a third reported that AI conversations feel as satisfying as human ones.

Psychiatrists call it an illusion of safe, endless validation: “friends” that never judge, never disagree, never leave. The result is emotional dependency, distorted self-perception, and difficulty managing real relationships. Some researchers are mapping the pattern onto addiction frameworks. Tolerance. Withdrawal. Relapse.

AI companionship among minors is being framed as a public health concern, not a tech trend. Where it mimics care without responsibility, it risks harm. For providers and caregivers working with kids, staying current isn’t optional. The kids are already using these tools.

The conference sharpened a distinction I shared with my team on the last day, one that anyone working in digital mental health needs to internalize.

Clinical oversight means AI used inside legitimate healthcare systems, supervised by licensed professionals. These tools follow evidence-based protocols, meet HIPAA or FDA standards, and act as clinical support. A therapist-in-the-loop monitors outcomes and keeps the AI within safe, accountable bounds.

Clinical off-roading is the opposite: AI that simulates therapy or emotional care without oversight, regulation, or data protection. These chatbots can mimic empathy, reinforce biases, mishandle crises, and expose sensitive data. Experts call it a “shadow care system.”

Regulators are catching up. Illinois became the first state to explicitly regulate AI in psychotherapy, passing the Wellness and Oversight for Psychological Resources Act in August 2025. The law prohibits AI from making independent therapeutic decisions or generating treatment plans without licensed professional review. California followed with narrower measures around disclosure and anti-impersonation. New governance frameworks, including one from the Multi-Regional Clinical Trials Center at Harvard, are pushing for human-led oversight wherever AI touches therapeutic domains.

Remember the MIT stat from the top of this piece, the 95% of AI pilots failing to deliver? Most of those failures are execution problems. In mental health, the failure mode is different: an unregulated chatbot that “works,” in the sense that it keeps users engaged, might be the most dangerous outcome of all.

The floor and the ceiling matter here too. Regulation sets the floor. What we choose to build above it, the care we encode, the oversight we insist on, that’s where the ceiling goes.