A LayerX study reported on October 7, 2025, found that 45% of employees use generative AI at work and that many copy and paste corporate data into consumer tools like ChatGPT. About 22% of pastes included sensitive data, creating regulatory and compliance exposure. Firms should adopt SSO, DLP, and approved AI services.
A LayerX security study, reported by The Register on October 7, 2025, reveals widespread enterprise data leakage to consumer generative AI services. The study shows that roughly 45 percent of employees use generative AI for work tasks, and many copy and paste corporate content into chat-based tools such as ChatGPT. Around 22 percent of those pastes contained sensitive information, such as personally identifiable information or payment data, creating clear generative AI security risks for organizations.
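DLP tools typically catch pastes like these by pattern-matching for common sensitive-data formats before the content leaves the browser or endpoint. A minimal sketch in Python, assuming simple regex heuristics for email addresses and payment card numbers (real DLP products layer many more detectors, including keyword lists and ML classifiers):

```python
import re

# Heuristic patterns for two common sensitive-data types (illustrative only).
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum to cut false positives on card-like digit runs."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def classify_paste(text: str) -> list[str]:
    """Return labels for sensitive data found in a pasted snippet."""
    findings = []
    if EMAIL_RE.search(text):
        findings.append("email")
    for match in CARD_RE.finditer(text):
        if luhn_valid(match.group()):
            findings.append("payment_card")
            break
    return findings

print(classify_paste("Contact jane.doe@example.com, card 4111 1111 1111 1111"))
# → ['email', 'payment_card']
```

A monitored enterprise gateway would run checks like this on every outbound paste to an AI service and block or alert on hits.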
Shadow IT describes tools used inside organizations without explicit approval or oversight. When employees turn to consumer AI tools from personal accounts to automate tasks, they create shadow AI risk. Consumer services are easy to access and familiar from personal use, which is why convenience often trumps controls. That behavior drives enterprise data leakage, erodes visibility, and raises compliance concerns.
These findings show that generative AI adoption without governance creates exposure across regulatory compliance, operational security, and reputation. Feeding PII or payment data into consumer models can violate data protection laws such as GDPR and sector-specific rules. Using personal accounts reduces monitoring and impairs incident response. And when enterprise automation depends on model outputs, those pipelines can be tainted if the underlying data was uploaded insecurely.
LayerX recommends a mix of technical controls, policy changes, and user-centered alternatives to manage generative AI risk. Key measures include:

- Enforcing single sign-on (SSO) so employees cannot reach AI services through unmanaged personal accounts.
- Deploying data loss prevention (DLP) to detect and block sensitive data in pastes and uploads.
- Offering approved, enterprise-grade AI services so convenience no longer pushes staff toward consumer tools.
- Updating acceptable-use policies to spell out which data may be shared with AI tools.
Generative AI is now a routine part of many employees' workflows, which makes this a governance problem as much as a technical one. Organizations should assume some staff will use consumer AI and then channel that activity into monitored, enterprise-grade platforms. Combining SSO, DLP, and usable corporate AI options will reduce leakage while preserving the productivity gains automation offers.
What to watch next: regulators are increasing scrutiny of AI data handling, and vendors are responding with enterprise privacy controls. As a minimal immediate step, audit employee AI usage now and prioritize deployment of single sign-on, data loss prevention, and approved AI services to protect sensitive data and sustain secure automation.
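Such an audit can start as simply as tallying proxy-log requests to known consumer AI domains per user. A minimal sketch, assuming a hypothetical space-separated log format of `timestamp user domain` and an illustrative (not exhaustive) domain list:

```python
from collections import Counter

# Illustrative consumer AI domains; tailor this list to your environment.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

def audit_ai_usage(log_lines):
    """Count requests to consumer AI services per user.

    Assumes each line looks like: '2025-10-07T09:15:02 alice chatgpt.com'.
    """
    usage = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed lines
        _, user, domain = parts
        if domain in AI_DOMAINS:
            usage[user] += 1
    return usage

logs = [
    "2025-10-07T09:15:02 alice chatgpt.com",
    "2025-10-07T09:16:40 bob intranet.corp.example",
    "2025-10-07T09:17:11 alice claude.ai",
]
print(audit_ai_usage(logs))  # → Counter({'alice': 2})
```

The per-user counts give a quick picture of where shadow AI usage is concentrated, which helps prioritize SSO and DLP rollout.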
Call to action: Audit your employee AI usage today and schedule a review of DLP and SSO options for generative AI.