Today, the U.S. Department of Labor released a Training and Employment Notice that formally positions AI literacy alongside reading, math, and digital fluency as a foundational workforce skill. I shared it with my team this morning because the framing matters: this isn’t a suggestion. It’s the federal government telling every workforce board, community college, and employer-training program in the country to start teaching people how to use AI.

I’ve spent the last year building AI into our engineering workflows at Parallel Learning and watching the gap widen between people who treat these tools as a curiosity and people who treat them as infrastructure. That gap is about to become a career-defining fault line. McKinsey found that 92 percent of companies plan to increase AI investment, but only 1 percent of leaders classify their organizations as mature in AI deployment. Capital is flowing in faster than people know how to spend it.

The framework comes through TEN 07-25, signed by Henry Mack (Assistant Secretary) and Taylor Stockton (Chief Innovation Officer). It defines AI literacy as “a foundational set of competencies that enable individuals to use and evaluate AI technologies responsibly, with a primary focus on generative AI, which is increasingly central to the modern workplace.” Five content areas, seven delivery principles. The content areas: understand AI principles, explore AI uses, direct AI effectively, evaluate AI outputs, and use AI responsibly.

The delivery principles are where the document gets interesting. Experiential learning comes first. Embedded in job context, not abstracted into a classroom. Building complementary human skills alongside AI skills. Addressing prerequisites like digital literacy and broadband access. And designing for agility, because any curriculum that takes two years to update is already obsolete.

This isn’t the DOL’s first move. Executive Order 14277 in April 2025 created a White House Task Force on AI education.
Subsequent guidance opened WIOA funding for AI skills, and America’s Talent Strategy made AI literacy a strategic pillar across Labor, Commerce, and Education. TEN 07-25 delivers the actual framework. The policy chain is unusually tight for workforce guidance. And it’s explicitly tied to a competitiveness narrative: reindustrialization, the Golden Age, winning the AI race. Whether you buy that framing or not, the operational signal is clear. Federal training dollars will follow this framework.

The DOL’s five content areas are sensible enough: understand AI, explore its uses, direct it, evaluate its outputs, use it responsibly. What makes the framework worth reading are the delivery principles.

“Enable experiential learning” isn’t pedagogy jargon. The research consistently shows that AI literacy developed through hands-on use in real tasks transfers to the workplace. Abstract training doesn’t. A 2025 study conducted within a Navy robotics training program found that scenario-based AI tasks predicted applied literacy far better than knowledge tests.

Context-embedded learning is equally important. A clinician learning AI through clinical workflows will retain and apply more than someone sitting through a generic “Introduction to AI” course. Same for engineers, customer service reps, operations managers. The DOL’s insistence on industry-specific, occupation-relevant training reflects what practitioners have been saying for years.

The complementary human skills principle deserves its own emphasis. The World Economic Forum’s 2025 Future of Jobs Report found that analytical thinking, not AI skills per se, remains the most sought-after competency. Seven in ten employers cite it as essential. AI doesn’t replace thinking. It amplifies the thinking you already do. If you lack judgment, AI gives you faster bad judgment.

Here’s where I see these principles play out daily. Directing AI effectively is where the widest variance shows up between teammates.
Prompting is a skill, not a personality trait. Some people get mediocre results and conclude the tool is mediocre. Others get strong results from the same tool because they’ve learned how to ask. And evaluating outputs is the hardest competency to teach and the easiest to skip, because AI outputs look polished even when they’re wrong. The DOL is right to make evaluation a standalone content area rather than burying it under “responsible use.”

Designing for agility is the right response to the shelf-life problem. AI tools update every few months. Curricula update every few years. The DOL is telling training providers to build modular, updateable programs. Correct in principle, but it demands organizational capacity that most workforce boards don’t have.

The DOL’s focus on generative AI is strategically understandable but too narrow. Workers don’t only encounter AI as users of ChatGPT or Copilot. They encounter it as subjects of hiring algorithms, scheduling systems, productivity monitoring, and benefits-determination tools. The majority of large companies now use AI-enabled hiring tools. Brookings research found that interacting with biased AI made respondents more likely to make biased decisions themselves. The framework teaches you to use AI responsibly. It doesn’t teach you what to do when AI is used on you.

The EU’s AI Act takes a broader view. Article 4 creates a legal obligation for providers and deployers of AI systems to ensure sufficient AI literacy among staff. The EU treats AI literacy as organizational accountability, not just individual skill. The DOL framework does address employers and training providers, but because it’s voluntary guidance rather than binding regulation, the practical burden of building AI competence falls more heavily on workers themselves.

There’s a gap around governance literacy too. Understanding regulatory environments, rights of redress, and accountability mechanisms doesn’t appear in the DOL’s five content areas.
For workers in healthcare, education, or financial services, those are daily operational realities. And the “use AI responsibly” content area addresses individual behaviors without touching structural questions. Who bears the cost when AI makes an error in a clinical workflow? Who audits the AI that scores your performance review? These aren’t edge cases anymore. They’re the questions that connect the framework’s gaps to the industries where the stakes are highest.

I work at a company that sits at the intersection of healthcare and education. Our clinicians assess children for learning disabilities. Our engineers build tools that support those assessments. Our product managers translate between both worlds. AI literacy here isn’t abstract workforce development. It’s clinical safety.

When a speech-language pathologist uses AI to help draft a progress report, the evaluation skill matters more than the prompting skill. Is the generated language clinically accurate? Does it reflect the specific child? Does it comply with IDEA documentation requirements? A generic AI literacy program won’t answer those questions. Domain-embedded training will.

When an engineer deploys a model that flags students as at-risk, the responsible-use dimension goes well beyond “don’t share sensitive data.” It includes bias, false positive rates, and what happens when an algorithm gets the recommendation wrong. We’ve written about why algorithmic identification of learning disabilities performs at the level of chance. AI literacy for our team means understanding why that happens and building the guardrails that prevent it.

The DOL’s delivery principle about context-embedded learning is the most important line in the entire framework. But it requires organizations to invest in building role-specific AI training rather than handing everyone the same webinar.
The $2.75 billion Digital Equity Act program, which would have built the prerequisite infrastructure for AI literacy in underserved communities, was cancelled in May 2025. The DOL framework, which assumes that infrastructure exists, arrived ten months later.

The DOL acknowledges prerequisites: digital literacy, device access, broadband connectivity. The National Skills Coalition found that one-third of workers lack foundational digital skills, with disproportionate impact on workers of color, low-income individuals, and rural residents. Brookings identified 6.1 million workers facing both high AI exposure and low adaptive capacity. Disproportionately lower-income, less-educated, non-metropolitan. The workers most likely to be displaced by AI systems they can’t use, contest, or understand.

If AI literacy becomes a baseline employability requirement, and it will, then the digital divide becomes the AI divide. The DOL’s framework is voluntary. It relies on market incentives and WIOA funding flexibility. Historically, that model underinvests in training for the people who need it most. The EU’s binding mandate creates an accountability floor. The U.S. approach creates an opportunity ceiling that not everyone can reach.

AI literacy requirements will enter job descriptions and hiring criteria across professional services, healthcare, finance, and government. The DOL framework accelerates a process that employer demand was already driving. Gallup’s Q3 2025 data already shows the diffusion curve: 45 percent of employees use AI at least a few times a year, but daily use sits around 10 percent. The gap between casual exposure and deep integration is where this framework will either succeed or become compliance theater.

The competency model will tier out. The DOL’s baseline is necessary but not sufficient. Organizations will need to distinguish between general AI literacy for everyone, job-context AI fluency for specific roles, and builder-level capability for technical teams.
The framework acknowledges this implicitly but doesn’t operationalize it.

And the bigger shift will come from the tools themselves. As AI becomes more autonomous, literacy won’t mean knowing how to write a good prompt. It’ll mean knowing how to supervise a multi-step AI workflow: setting constraints, monitoring progress, auditing outputs, verifying provenance. The DOL’s emphasis on evaluation and accountability already points in this direction, even if the current examples don’t reach it yet.

Whether the framework’s measurement of AI literacy captures actual capability or just training completion rates will determine whether this produces a more competent workforce or just a more credentialed one.
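For technical teams, that supervision pattern is concrete enough to sketch. The following is a minimal, hypothetical Python illustration, not any real agent API: every name is invented for the example. It shows the shape of the loop the paragraph above describes, with constraints checked before a step runs, outputs audited after, and completed steps recorded for provenance.

```python
# Hypothetical sketch of a supervised multi-step AI workflow.
# All class, function, and rule names are illustrative, not a real library.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SupervisedWorkflow:
    constraints: list  # predicates a proposed step name must satisfy
    audits: list       # predicates a step's output must satisfy
    provenance: list = field(default_factory=list)  # audit trail of completed steps

    def run_step(self, name: str, action: Callable[[], str]) -> str:
        # Constraint check: refuse a step that violates any rule before it runs.
        for check in self.constraints:
            if not check(name):
                raise PermissionError(f"step '{name}' blocked by constraint")
        output = action()
        # Audit check: review the output instead of passing it through silently.
        approved = all(audit(output) for audit in self.audits)
        # Provenance: record what ran, what it produced, and the verdict.
        self.provenance.append({"step": name, "output": output, "approved": approved})
        if not approved:
            raise ValueError(f"step '{name}' output failed audit")
        return output

wf = SupervisedWorkflow(
    constraints=[lambda step: step != "delete_records"],   # forbid a destructive step
    audits=[lambda out: "unverified" not in out],          # reject unvetted claims
)
wf.run_step("draft_summary", lambda: "summary based on cited sources")
print(len(wf.provenance))  # each completed step leaves an entry in the trail
```

The design choice worth noting is that constraints run before the step and audits run after, which is exactly the split between directing AI and evaluating its outputs that the framework names as separate competencies.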