Reports that Google asked US staff to use Nayya, a third-party AI benefits tool, sparked backlash and privacy concerns. Google later clarified that use is optional. The episode highlights the need for clear consent, strong data governance, and human oversight when automating health data.
On October 9, 2025, reports revealed that Google had told US staff to use Nayya, an AI-powered third-party tool, to enroll in health plans. Some employees read the internal guidance to mean that refusing the tool could affect their benefits eligibility. After internal and external reporting, Google clarified that using Nayya is optional and will not change eligibility. The episode raises a core question for employers: how can organizations deliver the efficiency of employee benefits automation while protecting health data privacy and consent?
Employers increasingly adopt AI for tasks such as benefits enrollment, eligibility checks, and personalized plan recommendations. AI-driven automation promises faster decisions and lower administrative costs, but these systems often process extremely sensitive personal information, including medical conditions, prescriptions, and claims histories. Health data privacy demands extra care because misuse can lead to discrimination, legal exposure, and reputational harm. When an employer integrates a third-party AI vendor into benefits workflows, employees need clear communication about what data is shared, how it is used, and what control they retain.
The episode illustrates several practical lessons for teams deploying employee benefits automation and other high-risk AI. Below are key principles that should guide any rollout.
Consent is not a checkbox. Vague or coercive policy language erodes trust. Employers should follow consent best practices and implement clear consent management for AI tools, ensuring employees can make an informed choice, or decline, without penalty.
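As a hedged illustration, the sketch below models what "without penalty" means structurally: a default-deny consent store where data flows only on an explicit, unrevoked grant. The record shape, purpose strings, and the `may_share` helper are hypothetical, invented for this example rather than drawn from Nayya or any vendor's API.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class ConsentRecord:
    """One explicit, revocable grant for a single named purpose; never inferred."""
    employee_id: str
    purpose: str                      # e.g. "share_enrollment_data_with_vendor"
    granted_at: datetime
    revoked_at: datetime | None = None

    @property
    def active(self) -> bool:
        return self.revoked_at is None


def may_share(records: list[ConsentRecord], employee_id: str, purpose: str) -> bool:
    """Default-deny: share only when an explicit, still-active grant exists.

    Note what is absent: nothing here reads or writes benefits eligibility,
    so withholding or revoking consent cannot penalize the employee.
    """
    return any(
        r.employee_id == employee_id and r.purpose == purpose and r.active
        for r in records
    )
```

The design choice worth copying is the default: the absence of a record means no sharing, rather than sharing until someone objects.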
When transferring health data to a vendor, require narrow-purpose contracts that forbid secondary uses, including model training or unrelated analytics, unless the employee gives explicit permission. Apply data minimization, robust encryption, and strict retention limits to reduce risk.
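A minimal sketch of two of those controls, data minimization and retention, assuming a hypothetical contractual allow-list and a 90-day retention term (both values are invented for illustration, not taken from any real contract):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical allow-list: only the fields the vendor contract actually names.
VENDOR_ALLOWED_FIELDS = {"employee_id", "plan_tier", "dependents_count", "state"}

# Illustrative contractual retention limit for vendor-side copies.
RETENTION = timedelta(days=90)


def minimize_for_vendor(record: dict) -> dict:
    """Strip everything not on the contractual allow-list before transfer.

    Diagnoses, prescriptions, and claims history never leave the employer's
    boundary under this filter unless a separate explicit consent adds them.
    """
    return {k: v for k, v in record.items() if k in VENDOR_ALLOWED_FIELDS}


def is_expired(stored_at: datetime) -> bool:
    """Retention check: flags vendor-side copies that are due for deletion."""
    return datetime.now(timezone.utc) - stored_at > RETENTION
```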
Publish plain-language privacy notices and a privacy impact assessment before deployment. Document what data is collected, how long it is retained, who can access it, and how AI decisions are made. These steps support compliance with AI privacy regulations and data protection standards.
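One lightweight way to keep that documentation reviewable is a machine-readable summary that backs the plain-language notice and can be diffed when the system changes. The sketch below is purely illustrative; every field name and value is an assumption, not a regulatory template.

```python
# Hypothetical machine-readable summary backing a plain-language privacy notice.
PRIVACY_IMPACT_ASSESSMENT = {
    "system": "benefits-enrollment-assistant",
    "data_collected": ["employee_id", "plan_tier", "dependents_count", "state"],
    "data_excluded": ["diagnoses", "prescriptions", "claims_history"],
    "retention_days": 90,
    "access_roles": ["benefits-admin", "employee-self"],
    "automated_decisions": "plan recommendations only; enrollment requires "
                           "employee confirmation",
    "human_review": "available on request via the appeals process",
    "last_reviewed": "2025-10-01",
}
```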
Automated recommendations should include human review for edge cases, an appeals mechanism, and an easy opt-out that does not affect benefits eligibility. Human oversight prevents harmful automated outcomes and supports fair decision making.
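In code, this reduces to a routing decision in which opt-out always takes precedence and uncertainty escalates to a person. The confidence threshold, flags, and route names below are hypothetical, chosen only to make the precedence order concrete:

```python
from enum import Enum


class Route(Enum):
    AUTO = "auto_recommend"          # ship the AI recommendation
    HUMAN = "human_review"           # queue for a human reviewer
    OPT_OUT = "standard_enrollment"  # ordinary flow, no AI involvement


CONFIDENCE_FLOOR = 0.85  # illustrative threshold for auto-recommendation


def route_recommendation(confidence: float, opted_out: bool, is_edge_case: bool) -> Route:
    """Decide whether a recommendation ships automatically.

    Opt-out wins unconditionally and routes to the ordinary enrollment flow,
    with no effect on eligibility. Flagged edge cases and low-confidence
    outputs go to a human reviewer; appeals re-enter the same review queue.
    """
    if opted_out:
        return Route.OPT_OUT
    if is_edge_case or confidence < CONFIDENCE_FLOOR:
        return Route.HUMAN
    return Route.AUTO
```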
The backlash at Google shows that technical benefits alone do not overcome poor communication. Rolling out automation requires clear, repeated messaging, accessible explanations of tradeoffs, and direct channels for employee questions.
This episode fits a broader pattern in AI, where convenience can outpace governance. Treat sensitive AI deployments as enterprise risk projects rather than routine product integrations. A strong approach combines minimal friction for users with maximal clarity about rights, data governance, and consent.
Google’s clarification underscores a clear lesson: deploying AI in HR without transparent consent and robust governance erodes trust, even at technology-savvy organizations. As more employers explore automation for benefits and other sensitive functions, the test will be whether they can deliver efficiency without compromising employee rights. Will organizations update their rollout playbooks to include privacy impact assessments, consent management, and human oversight, or will similar controversies become routine? That is the issue to watch.