Google’s Push to Use Nayya AI for Benefits Enrollment Raises Privacy Alarms

Google asked US employees to use Nayya, an AI benefits advisor, prompting concerns that sensitive health information could be shared with a third party. Google clarified participation is optional. Key lessons include vendor risk management, consent lifecycle management, and transparency.

On October 9, 2025, reports surfaced that Google asked US employees to use Nayya, an AI-powered benefits advisor, to enroll in health coverage. The move sparked concern because Nayya analyzes plan details and personal, health-related inputs to produce recommendations. That raised alarms about employee data privacy, consent management, and how third-party AI vendors handle sensitive information.

Why AI is moving into benefits administration

Benefits selection is complex and data-heavy. AI in HR promises to simplify choices by analyzing plan features, costs, and user inputs to recommend the best match for an individual. Employers see benefits administration automation as a way to reduce friction, increase enrollment accuracy, and improve the employee experience. At the same time, AI-driven recommendations often rely on sensitive, health-related attributes, so data minimization and privacy by design are essential.

Key details and findings

  • Internal guidance initially suggested employees would need to use Nayya to enroll, raising questions about whether consent felt voluntary.
  • Google later clarified that participation is optional and that certain data sharing is not required for benefits eligibility.
  • Nayya analyzes both plan features and personal inputs to generate personalized recommendations, which creates concerns about storage, aggregation, and secondary uses of data.
  • Coverage across national and tech press highlighted public interest in workplace privacy, vendor risk management, and the governance of AI in HR.

Three practical takeaways

  1. Third-party AI vendor risk is rising as organizations outsource benefits decisioning. Vendor due diligence and data processing agreements are critical.
  2. Consent lifecycle management matters. Opt-in and opt-out mechanisms must be explicit, separable from employment requirements, and easy to exercise.
  3. Transparency builds trust. Plain-language notices, privacy impact assessments, and clear statements on retention and permitted uses reduce legal and reputational risk.

Implications for employers

Privacy and legal risk are heightened when health-related employee data is involved. Even if participation is optional, unclear policies and broad vendor permissions can create regulatory exposure under state privacy laws and health privacy frameworks. Consent is not just a checkbox: if a tool is presented as effectively required for enrollment, the legitimacy of that consent is undermined. Employers should treat consent as part of a governance program that includes independent audits, privacy impact assessments, and ongoing monitoring of vendor behavior.

Operational and reputational trade-offs

AI-driven benefits automation can reduce administrative load, but it requires procurement work to vet AI vendors, contractual safeguards such as purpose limitation and access controls, and governance to detect misuse. Public disputes about data use can affect employee morale and brand reputation. For employers competing for talent, trust around personal data handling is a retention factor.

Practical steps employers should consider

  • Publish clear, plain-language explanations of what data is collected, why it is needed, and how long it will be retained.
  • Apply data minimization and avoid collecting health details unless strictly necessary for the enrollment decision.
  • Include robust contractual safeguards with vendors, including encryption, access controls, prohibitions on secondary uses, and clear data processing agreements.
  • Offer genuine opt-in and opt-out pathways that do not penalize employees or affect benefits eligibility.
  • Conduct independent audits or privacy impact assessments and share non-confidential summaries with staff to build trust.
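The data minimization step above can be made concrete in code. The sketch below shows one way an employer might strip a payload down to an explicit allowlist of fields before anything is sent to an outside vendor; the field names and the `minimize_payload` helper are illustrative assumptions, not part of any real Nayya or Google integration.

```python
# Hypothetical sketch of data minimization before sending enrollment
# inputs to a third-party benefits vendor. All field names are invented
# for illustration; a real integration would define its allowlist from
# the vendor's data processing agreement.

ALLOWED_FIELDS = {"plan_tier", "coverage_start", "dependents_count", "zip_code"}

def minimize_payload(employee_record: dict) -> dict:
    """Keep only the fields strictly needed for the enrollment decision,
    dropping health details and other sensitive attributes."""
    return {k: v for k, v in employee_record.items() if k in ALLOWED_FIELDS}

record = {
    "plan_tier": "gold",
    "coverage_start": "2026-01-01",
    "dependents_count": 2,
    "zip_code": "94043",
    "diagnosis_codes": ["E11.9"],    # sensitive: should not leave the employer
    "prescriptions": ["metformin"],  # sensitive: should not leave the employer
}

print(minimize_payload(record))
```

An explicit allowlist is safer than a blocklist here: any new field added to the employee record is excluded by default until someone deliberately decides it is necessary for enrollment.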

Conclusion

Google's Nayya episode is a cautionary moment for any organization integrating AI into HR processes. The technical benefits of personalized recommendations are real, but the stakes rise when systems touch health data. Employers that want to deploy AI in benefits administration should prioritize explicit consent, tight contractual controls, data minimization, and transparent communications. Without those protections, short-term convenience could bring longer-term legal and trust costs.
