White House Office of Science and Technology Policy Director Michael Kratsios unveiled a national AI framework on March 20, 2026, to establish federal authority over a growing patchwork of state regulations. White House officials shared the legislative outline exclusively with congressional leadership as part of an effort to codify a consistent national standard. The framework aims to reduce regulatory uncertainty for developers while preserving American technological leadership in the global race for computational dominance.
Kratsios emphasized the urgency of the proposal in private interviews. Executive branch officials want Congress to advance the measure within the current calendar year. Republican leaders in the House and Senate appear largely aligned with the administration's vision for a centralized regulatory structure. Pressure is mounting on lawmakers to act as state legislatures in California and Texas implement their own divergent requirements for machine learning systems.
David Sacks, the White House AI and crypto czar, described the initiative as the One Rulebook plan. President Trump initiated this directive in December through an executive order focused on simplifying industry requirements. Sacks argued that 50 discordant regulatory regimes would stifle innovation and provide an opening for international competitors. The proposal explicitly calls for federal preemption of state laws that impose what the administration defines as undue burdens on tech companies.
White House AI Framework Targets Four Core Priorities
Administration officials have organized the legislative pillars around the "four C's" of regulation: child safety, communities, creators, and censorship. Protecting free speech and preventing algorithmic bias against political viewpoints are central components of the text. White House sources indicate the framework was designed to ensure AI development does not become a tool for silencing dissenting opinions.
Policy disagreements on Capitol Hill persist despite the administration's push for speed. Senate Commerce Committee members are currently reviewing the Cruz AI framework as a potential alternative or supplement to the White House plan. At the same time, House Energy and Commerce Chair Brett Guthrie expressed optimism that the federal proposal aligns with his focus on deployment and safeguards. Guthrie remains the primary figure for AI legislation in the House of Representatives.
But the focus on federal preemption is still a major hurdle for bipartisan consensus. Democratic lawmakers often favor allowing states to set more stringent privacy and safety standards than the federal floor. Current state laws already cover diverse areas like biometric data protection and AI-generated deepfakes in elections. The federal framework seeks to replace these local statutes with a single set of rules enforced at the national level.
Anthropic Faces Scrutiny Over Foreign Workforce Risks
Pentagon leadership is simultaneously mounting a legal defense against Anthropic after designating the firm a supply chain risk. Undersecretary Emil Michael filed a declaration on March 17 stating that the company relies heavily on foreign talent to build its large language models. The Defense Department expressed particular concern about employees who are Chinese nationals, since such workers are subject to the PRC's National Intelligence Law, which requires citizens to support state intelligence operations.
"Anthropic employs a large number of foreign nationals to build and support its LLM products, including many from the People's Republic of China (PRC), and the use of those workers increases the degree of adversarial risk should those employees comply with the PRC's National Intelligence Law."
The Pentagon further argued that other major AI laboratories maintain stronger security assurances than Anthropic. Defense officials claim the leadership at competing firms has demonstrated more consistent and trustworthy behavior in national security partnerships. Anthropic is suing the federal government to overturn the risk designation and block directives that would force agencies to drop its products. The company argues the designation is an overreach that ignores its internal safety protocols.
Yet the Department of War continues to use Anthropic tools for certain non-classified functions. Military officials are prepared to extend deadlines for offboarding the company's software if a sudden transition proves disruptive to ongoing operations. This indicates a complex dependency on private sector innovation even when security concerns are flagged at the highest levels. The legal battle in federal court will likely determine how the government vets future AI service providers.
House Homeland Security Committee Holds Private AI Sessions
House Homeland Security Committee members met behind closed doors with Anthropic co-founder Jack Clark on Wednesday. Three sources familiar with the session described the tone as friendly despite the company's ongoing litigation against the Pentagon. Discussion topics focused on model distillation, a process that shrinks massive AI systems into smaller and more efficient versions. Smaller models are easier to deploy on mobile devices and edge computing hardware.
Separately, lawmakers examined the efficacy of export controls on advanced semiconductors and model weights. The committee had originally planned for a public hearing with CEO-level witnesses from Google and Quantum Xchange. Executives successfully lobbied to move these discussions into private roundtables to enable more substantive dialogue on sensitive technical vulnerabilities. Committee spokesperson Anna Holland confirmed that more private industry roundtables are scheduled for the coming months.
Workforce policy remains a pivot point for these legislative ambitions.
According to talent tracking data, foreign-born researchers make up 38-40% of the top AI talent working at U.S. institutions. This reliance on global expertise creates a paradox for federal regulators who want to secure the supply chain without alienating the workforce. If the government imposes overly restrictive hiring mandates, critics warn, the most capable engineers will migrate to labs in Europe or Asia. The White House framework must balance these human capital needs against the strictures of national security law.
Still, the administration maintains that a unified federal policy is the only way to sustain American dominance. To that end, the White House is urging Congress to pass the framework before the midterm election cycle begins in earnest. David Sacks and Michael Kratsios are coordinating with the House Energy and Commerce Committee to refine the technical definitions within the bill. Early drafts suggest the legislation will grant the executive branch broad powers to define what constitutes a national security risk in the AI sector.
National security remains the primary friction point for these legislative ambitions.
In turn, the outcome of the Anthropic lawsuit will set a precedent for how the Pentagon evaluates other Silicon Valley firms. If the court upholds the supply chain risk designation based on the nationality of a workforce, the entire tech industry may face a radical restructuring of its hiring practices. Most major labs utilize a similar global talent pool to maintain their competitive edge. The intersection of labor policy and national defense has never been more contentious in the digital age.
The Elite Tribune Perspective
Does the White House truly believe that a single rulebook can contain the explosive complexity of artificial intelligence, or is this merely a grab for federal control over state-level consumer protections? The administration's push to preempt state laws under the guise of national standard-setting is a transparent attempt to shield tech giants from the more aggressive safety mandates emerging in California. While David Sacks argues that a patchwork of laws stifles innovation, history shows that states often serve as the only effective check on corporate overreach when the federal government is paralyzed by industry lobbying.
The Pentagon's attack on Anthropic over its foreign workforce adds a xenophobic layer to this regulatory theater, suggesting that the government is more interested in policing the origins of engineers than the outputs of their algorithms. If we disqualify 40 percent of the top talent pool based on their passports, we aren't protecting the supply chain; we are committing intellectual suicide. The framework's emphasis on preventing censorship is a coded promise to ensure that algorithmic moderation favors the current administration's ideological allies. Congress must resist the urge to rubber-stamp this centralization of power.
Real safety requires diverse, overlapping layers of oversight, not a singular, easily manipulated federal clearinghouse.