AI Shadow IT Exposes Company Secrets in ChatGPT

A LayerX study reported on October 7, 2025 finds that 45% of employees use generative AI at work, and many copy and paste corporate data into consumer tools like ChatGPT. About 22% of those pastes included sensitive data, creating regulatory and compliance exposure. Firms should adopt SSO, DLP, and approved AI services.

A LayerX security study, reported by The Register on October 7, 2025, reveals widespread enterprise data leakage to consumer generative AI services. The study shows that roughly 45 percent of employees use generative AI for work tasks, and many copy and paste corporate content into chat-based tools such as ChatGPT. Around 22 percent of those pastes contained sensitive information, such as personally identifiable information or payment data, creating clear generative AI security risks for organizations.

Background on shadow AI and enterprise risk

Shadow IT describes tools used inside organizations without explicit approval or oversight. When employees turn to consumer AI tools from personal accounts to automate tasks, they create shadow AI risk. Consumer services are easy to access and familiar from personal use, which is why convenience often trumps controls. That behavior drives enterprise data leakage, erodes visibility, and raises compliance concerns.

Key findings from the LayerX analysis

  • Adoption: about 45 percent of employees reported using generative AI for work.
  • Copy and paste behavior: 77 percent of AI users have pasted company data into chat-based tools.
  • Sensitive content: about 22 percent of those pastes included PII or payment card data.
  • Account control: roughly 82 percent of pastes came from unmanaged personal accounts rather than enterprise managed AI instances.
  • File uploads: nearly 40 percent of uploaded files to these tools contained PII or PCI data.
  • Tool distribution: ChatGPT accounted for over 90 percent of usage while Microsoft Copilot represented about 2 percent, so most leakage goes to consumer services rather than corporate controlled automation platforms.

Plain language clarifications

  • PII means personally identifiable information, such as names, email addresses, or government identifiers.
  • PCI refers to payment card information, like credit card numbers.
  • SSO means single sign on, which enables centralized account control for enterprise software and helps reduce account sprawl.
  • DLP means data loss prevention, which can detect and block sensitive data leaving the organization.
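
To make the DLP idea concrete, here is a minimal sketch of the kind of pattern matching a DLP filter might apply to outbound text such as an AI prompt. The function name and patterns are illustrative assumptions, far simpler than the detectors a production DLP product would use:

```python
import re

# Hypothetical detectors: an email address and a run of 13-16 digits
# that looks like a payment card number. Real DLP engines combine many
# more patterns with context checks and validation.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_like": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_pii(text: str) -> list[str]:
    """Return the labels of sensitive-data patterns found in text."""
    return [label for label, pat in PATTERNS.items() if pat.search(text)]
```

A gateway could run a check like this before a paste or upload leaves the browser, and block or warn when the returned list is non-empty.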

Implications for automation governance and compliance

These findings show that generative AI adoption without governance creates exposure across regulatory compliance, operational security, and reputation. Feeding PII or payment data into consumer models can trigger data protection violations under laws such as GDPR and sector-specific rules. Using personal accounts reduces monitoring and impairs incident response. When enterprise automation depends on model outputs, those pipelines can be tainted if the underlying data was uploaded insecurely.

Practical steps to reduce enterprise data leakage

LayerX recommends a mix of technical controls, policy changes, and user-centered alternatives to manage generative AI risk. Key measures include:

  • Enforce enterprise-only accounts and single sign on for approved AI services so employees do not use personal accounts for work.
  • Deploy data loss prevention and monitoring tailored to AI inputs to scan prompts and uploads for PII, PCI, and other sensitive markers.
  • Offer approved enterprise-grade AI services with contractual data handling guarantees and clear automation governance.
  • Educate staff with role-specific guidance on what must not be pasted into external AI tools, and run regular audits of employee AI usage.
  • Remove incentives to use personal tools by improving the speed, usability, and accessibility of sanctioned options.
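
When scanning prompts for payment card data as described above, plain digit-pattern matching produces many false positives (phone numbers, order IDs). A common refinement is to validate candidate numbers with the Luhn checksum, which genuine card numbers satisfy. A minimal sketch (the function name is illustrative):

```python
def luhn_valid(number: str) -> bool:
    """Check a candidate card number with the Luhn checksum:
    from the right, double every second digit, subtract 9 from
    results over 9, and require the total to be divisible by 10."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    if not 13 <= len(digits) <= 19:  # typical card number lengths
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

Flagging only digit runs that pass this check keeps a DLP rule for PCI data useful without drowning reviewers in noise.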

Conclusion and next steps

Generative AI is now a routine part of many employees' workflows, which makes this a governance problem as much as a technical one. Organizations should assume some staff will use consumer AI and then channel that activity into monitored, enterprise-grade platforms. Combining SSO, DLP, and usable corporate AI options will reduce leakage and preserve the productivity gains automation offers.

What to watch next: regulators are increasing scrutiny of AI data handling, and vendors are responding with enterprise privacy controls. As a minimal immediate step, audit employee AI usage now and prioritize deployment of single sign on, data loss prevention, and approved AI services to protect sensitive data and sustain secure automation.

Call to action: Audit your employee AI usage today and schedule a review of DLP and SSO options for generative AI.
