I’ve given some version of this talk three times in the past two months. Different audiences, different industries, same pattern. Someone in the room asks, “Where do we start with AI?” and someone else answers with a strategy deck, a vendor shortlist, or a pilot project. Wrong answer every time. AI adoption starts with a person. One person with enough organizational authority to make it everyone’s incentive, not just the engineering team’s curiosity. I call that person a sponsor, and without one, your best AI initiatives will die in procurement.

I run product, engineering, data, and security at a healthcare company that sells special education services to K-12 school districts across the United States. On paper, we’re one of the last companies that should be leading on AI. Healthcare compliance, K-12 bureaucracy, a remote team spread across time zones. Every instinct says “wait and see.” We went the other direction. Not because we had a strategy deck. Because we had executive sponsorship that turned AI from an engineering experiment into a company initiative.

Here’s what happens without a sponsor. One team adopts a tool. Another team blocks it. Legal reviews drag past the tool’s relevance window. Security concerns become blanket vetoes rather than informed decisions. No budget gets allocated for experimentation because nobody owns the outcome. I’ve watched this pattern repeat at companies three times our size, and I’ve watched it paralyze startups with twelve people.

With a sponsor, the dynamic changes. There’s a clear policy that teaches rather than restricts. Budget for tokens, tools, and training. Legal and IT become enablers instead of gatekeepers. Adoption becomes a company initiative rather than a side project someone squeezes in between quarterly planning sessions.

AI fluency is becoming a filter. When I grew my team recently, I was struck by how many candidates across product, engineering, IT, and security asked about our AI tooling during interviews. Not out of curiosity. To decide whether they wanted to work here. One senior engineer told me he was “freaking out” when he misunderstood and thought we weren’t paying for premium AI coding seats. The best people want to go where they can move fast, and right now, moving fast means access to tools that eliminate the repetitive parts of knowledge work.

The instinct at most companies is to restrict. Set monthly token budgets per employee. Limit AI tool access to specific roles. Require approval for every new tool. Monitor usage as a cost center. Every one of these instincts is wrong.

Gallup’s Q3 2025 data shows the diffusion curve: 45 percent of employees already use AI at work at least a few times a year, but daily use sits around 10 percent. The gap between casual exposure and deep integration is where the talent fight is happening. Your best people don’t want to be in the 45 percent. They want to be in the 10 percent, and they’ll go where the company lets them.

Restricting usage trains people not to experiment. The cost of tokens is dropping quarter over quarter. The cost of a slow team is not. Treat AI access like internet access: a baseline, not a perk. Define clear policies on data handling and security, then let people run.

I keep running into founders who think AI is expensive. So I started doing the math in front of them. A knowledge worker produces roughly three million tokens of output per year and consumes about sixteen million. The raw API cost to match that output? A few hundred dollars. Even with agentic overhead and a supervision tax, you’re at 0.5 to 2 percent of what you’d pay a person: fifty to two hundred times cheaper at raw throughput.
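
If you want to run the math yourself, here’s a minimal sketch. The token volumes come from the estimate above; the per-token prices, the agentic overhead multiplier, and the salary figure are illustrative assumptions, not vendor quotes, so swap in your own numbers.

```python
# Back-of-the-envelope: annual cost of matching one knowledge worker's token
# throughput via an API. Token volumes are from the estimate above; prices,
# overhead multiplier, and salary are illustrative assumptions.

OUTPUT_TOKENS = 3_000_000        # tokens a knowledge worker produces per year
INPUT_TOKENS = 16_000_000        # tokens a knowledge worker consumes per year

PRICE_PER_M_INPUT = 10.00        # assumed $ per million input tokens
PRICE_PER_M_OUTPUT = 40.00       # assumed $ per million output tokens

AGENTIC_OVERHEAD = 5             # assumed multiplier for retries, tool calls, drafts
ANNUAL_COST_OF_PERSON = 120_000  # illustrative fully loaded annual cost

raw = (INPUT_TOKENS / 1e6) * PRICE_PER_M_INPUT \
    + (OUTPUT_TOKENS / 1e6) * PRICE_PER_M_OUTPUT
agentic = raw * AGENTIC_OVERHEAD

print(f"raw API cost:     ${raw:,.0f}/year")                       # $280
print(f"with overhead:    ${agentic:,.0f}/year")                   # $1,400
print(f"share of salary:  {agentic / ANNUAL_COST_OF_PERSON:.1%}")  # 1.2%
```

At those assumptions, the fully loaded agentic cost lands around 1.2 percent of the salary, squarely inside the 0.5 to 2 percent range. The prices would have to be off by an order of magnitude before the conclusion changes.
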
The question isn’t whether AI is cheaper for production work. The question is what percentage of a knowledge worker’s value is actually token-producible. Judgment, relationships, persistent context: those aren’t. But first drafts, data pulls, formatting, research synthesis? That pays for itself in weeks. I’d rather invest in my existing people than hire more. Give them speed. Give them tools that make the boring parts disappear.

Legal and IT will never have the natural incentive to push AI adoption. Their job is risk management, and AI reads as risk. That’s not a criticism. It’s structural. If you wait for legal and IT to greenlight AI initiatives, you’ll wait forever, because the calculus never favors them saying yes.

McKinsey put numbers to the gap in early 2025: 92 percent of companies plan to increase AI investment, but only 1 percent of leaders classify their organizations as mature in AI deployment. Capital is flowing in faster than organizations know how to spend it. The bottleneck isn’t budget. It’s organizational will.

Sponsorship means someone at the executive level makes it their job to push technology forward across every department, legal, IT, and finance included. A clear policy that teaches rather than restricts. The budget conversation before someone asks for it.

At our company, I’m not the only sponsor. Our CEO is equally invested, which matters. When questions come up about permissions, security structure, or rollout strategy, those conversations happen at the executive level with people who have context on both the opportunity and the risk. That’s different from a team lead asking for permission to try a new tool.

The worst version of this I see is what I call the “great status quo.” Everyone feels like they’re doing their job. No one is incentivized to do better. Skepticism propagates from the top down, and nothing changes. If you’re a founder reading this, that’s the scenario you should be most afraid of. Not that AI will disrupt your industry. That you’ll be standing still while it does.

Sponsorship is structural, not performative. Someone owns the AI policy document. Not a strategy deck that sits in a shared drive. A living document that defines what tools are approved, what data can flow where, what the security boundaries are, and what the escalation path looks like when someone wants to try something new.

There’s a budget line item. Tokens, seats, training. Not buried in engineering’s discretionary budget. Visible to the company as an investment.

There’s a feedback loop. When someone discovers a workflow improvement, there’s a mechanism to share it. When something breaks or produces a bad result, there’s a mechanism to report it without fear. We run an internal Slack channel where AI wins and failures get equal airtime. That channel has done more for adoption than any training session we could design.

And someone is willing to have the uncomfortable conversations with department heads who are skeptical. Not to override them. To understand their concerns, address the real ones, and call out the ones that are just inertia wearing a security costume.

In February 2026, the U.S. Department of Labor released TEN 07-25, a framework that formally positions AI literacy as a foundational workforce skill alongside reading, math, and digital fluency. The framework didn’t arrive in isolation.
It follows Executive Order 14277, which created a White House Task Force on AI education, and TEGL 03-25, which opened WIOA funding for AI skills development. America’s Talent Strategy, co-issued across Labor, Commerce, and Education, made AI literacy a strategic pillar. TEN 07-25 delivers the operational framework.

The DOL defines AI literacy as “a foundational set of competencies that enable individuals to use and evaluate AI technologies responsibly, with a primary focus on generative AI.” Five content areas: understand AI principles, explore uses, direct AI effectively, evaluate outputs, use responsibly. The most important delivery principle is experiential learning embedded in actual job context, not abstracted into a classroom.

The convergence is international. The EU’s AI Act creates a binding legal obligation under Article 4 for providers and deployers of AI systems to ensure sufficient AI literacy among staff. Finland’s “Elements of AI” has reached over two million learners across 170 countries. Singapore and South Korea are running national literacy programs at population scale.

The DOL’s framework is voluntary guidance, not regulation. But the policy chain is unusually tight for workforce guidance, and it’s explicitly tied to WIOA funding. Federal training dollars will follow this framework. If your company isn’t teaching its own people, you’re behind the policy curve, not just the technology curve.

Context-embedded learning. That’s the DOL delivery principle that matters most for founders, and the one I’ve seen work firsthand. You can’t teach people AI literacy in a webinar. You teach it by sitting next to them, watching their workflow, and showing them where a tool can eliminate thirty minutes of manual work. We call this shadowing, and it’s the most effective adoption initiative we’ve run.

Top-down mandates to “use AI” with no context don’t work. Training sessions disconnected from real workflows don’t stick. Expecting adoption without giving people time to experiment produces resentment, not results. Measuring AI usage by tool-adoption metrics rather than output quality incentivizes the wrong behavior. What works: an engineer or AI-literate team member sits with people in their actual workflow, spots acceleration opportunities, and shows them what’s possible.

The World Economic Forum’s 2025 Future of Jobs Report found that analytical thinking remains the most sought-after competency, cited as essential by seven in ten employers. AI doesn’t replace thinking. It amplifies the thinking you already do. If you lack judgment, AI gives you faster bad judgment.

The most skeptical person on our team, someone in operations who used to wake up at 4 AM for manual processes, became the company’s most enthusiastic AI advocate after someone showed her how voice-to-text could eliminate hours of SOP documentation. She didn’t need a strategy deck. She needed someone to meet her where she was. She now trains other people on the workflow.

If you’re a founder who’s been waiting for the right time to push AI adoption, the right time was six months ago. The second-best time is now. Make yourself the sponsor if nobody else will. Then go sit with your team and watch how they work, because the highest-leverage applications are never the ones you predicted from your desk.