March 27, 2026
As AI lobbying hits record levels, federal policy is leaning toward flexibility and innovation. This leaves HR leaders to close the risk gap inside their own organizations.
Inside the Beltway, AI is one of the hottest topics in town. Lobbying firms pulled in almost $92 million from AI-related issues in the first three quarters of 2025 alone, per a Bloomberg Government analysis, and full-year totals reached nearly $130 million as influence spending accelerated into 2026. Bloomberg also reported that tech giants and AI vendors spent over $100 million pushing for light-touch national rules.
For HR leaders, that power play in Washington is not an abstract policy story. It shapes the extent of the organization’s exposure when deploying AI for hiring, performance management, monitoring and workforce planning.
At the federal level, the Trump administration has focused on accelerating U.S. AI dominance and minimizing “cumbersome regulation,” including an executive order aimed at curbing state attempts to regulate AI piecemeal.
That push, backed by industry lobbying, has coincided with the revocation of Biden-era AI directives and the scaling back of EEOC and DOL guidance on AI bias and inclusive hiring, according to a brief from Wiley Rein LLP, leaving employers with a thinner federal playbook.
States, however, are not waiting. Legislatures in places like California, Colorado, Illinois and Texas are moving ahead with AI‑specific employment rules that treat tools used in hiring and personnel decisions as high‑risk.
In these jurisdictions, employers are confronting requirements for notices and consent, impact assessments, bias testing and appeal rights for candidates and employees. Some proposals also reach into electronic monitoring, requiring advance written notice before rolling out AI‑enabled productivity tracking or surveillance.
The result is a widening gap between a permissive, innovation‑driven federal posture and a patchwork of tougher state standards. For CHROs, that gap translates into risk.
First, there is liability. Even without a comprehensive federal AI statute, existing employment and civil‑rights laws already apply to algorithmic decisions.
Regulators and plaintiffs’ attorneys increasingly treat AI‑driven tools as extensions of the employer, not as neutral third‑party systems, Britney Torres, co-chair of Littler’s AI & Technology Practice Group, told HR Executive.
A biased screening model can scale discrimination across thousands of applicants in a way no individual hiring manager ever could. When something goes wrong, it is HR and the employer’s brand that end up on the hook.
Second, there is compliance complexity, according to a brief from Foley & Lardner LLP. HR teams now have to map their workforce footprint against divergent state obligations, align notice and consent workflows, and maintain documentation of how AI influences personnel decisions.