# Article Name

Do LLMs Increase Shadow AI Risk?

# Article Summary

Learn how unchecked LLM adoption fuels Shadow AI, exposing sensitive data and compliance gaps, and how to mitigate the risk today.

# Original HTML URL on Toriihq.com

https://www.toriihq.com/articles/llm-shadow-ai-risk

# Details

Generative AI tools are appearing in offices faster than security teams can react, and the chat-style interfaces feel so benign that employees weave them into everyday work without thinking about the corporate data they hand over. Constant hype only fuels that impulse and leaves risk reviews trailing behind.

Unsanctioned prompts now shape everything from code checks to customer emails, and every time someone pastes PII or proprietary snippets into ChatGPT, they trigger a disclosure that most monitoring tools miss. Browser extensions and personal credit-card accounts let these apps skirt SSO and procurement gates, so security teams often discover the exposure long after the fact.

Organizations can protect productivity while staying safe by spotting Shadow AI early, mapping its risk surface, and then rolling out controls designed specifically for conversational data.

## Why are LLMs fertile ground for Shadow AI?

Large language models reach offices faster than any business app in recent memory. With nothing more than a browser and a personal email address, an employee can spin up ChatGPT [https://openai.com/chatgpt] or Claude within seconds, skipping purchase orders and installs. Security teams suddenly face a scenario where one prompt can exit the company before anyone knows the new tool is in use.

The appeal shows up clearly in a typical workday:

- Consumer log-ins bypass single sign-on altogether, which means identity checks and MFA rules never even get the chance to fire.
- Personal credit-card charges under fifty dollars a month slide past finance approval thresholds and avoid vendor review.
- Browser extensions tuck LLM access inside Outlook, Google Docs, or Jira, blending traffic with ordinary HTTPS.
- Marketing posts promise “10x productivity,” pushing staff to test the tool right now instead of waiting for a risk review.

Older SaaS apps needed an installer or at least an OAuth handshake; that single hoop gave security teams a chance to spot and catalog the service. Conversational AI disrupts that rhythm: the payload is plain text, and plain text feels harmless. Employees see only a chatbot and paste in draft press releases, client questions, or raw sales numbers without hesitation.

The mix of stealth sign-ups and data-hungry chatbots turns Shadow IT into full-blown Shadow AI. Encrypted WebSocket sessions conceal prompts from inline proxies, and usage-based billing means finance sees the spend only after terabytes of context have left the house. When procurement finally asks for a contract, engineering teams may have wired external models into CI pipelines, forcing legal to race after a risk already running in production.
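The visibility gap is not hopeless, though. Even a coarse pass over outbound proxy logs can surface first-time LLM use before procurement ever hears about it. Below is a minimal sketch, assuming a CSV export with timestamp, user, and domain columns and a hand-maintained watchlist of consumer AI domains; both the log format and the domain list are illustrative assumptions, not a standard.

```python
import csv
from collections import defaultdict

# Hand-maintained watchlist of consumer AI endpoints (illustrative, not exhaustive).
AI_DOMAINS = {
    "chat.openai.com", "api.openai.com",
    "claude.ai", "api.anthropic.com",
    "gemini.google.com",
}

def ai_usage_by_user(log_path: str) -> dict[str, list[str]]:
    """Return {user: [AI domains contacted]} from a proxy-log CSV.

    Assumes columns: timestamp, user, domain. Adapt to your proxy's export.
    """
    hits = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower()
            if domain in AI_DOMAINS:
                hits[row["user"]].add(domain)
    return {user: sorted(domains) for user, domains in hits.items()}

if __name__ == "__main__":
    for user, domains in ai_usage_by_user("proxy_log.csv").items():
        print(f"{user} reached unsanctioned AI endpoints: {', '.join(domains)}")
```

A static watchlist goes stale quickly, which is the same version-drift problem discussed later in this article, so treat the output as a tripwire that starts conversations rather than as a control.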
## How are employees using LLMs without governance?

Shadow AI weaves into day-to-day tasks, yet most internal policies never even reference it.

Within most organizations, developers are the first to lean on shadow AI tools. A coder stumped by a flaky API drops an entire repo into GitHub Copilot [https://github.com], grabs a quick patch, then merges without noticing that the model now stores every proprietary comment and commit. A 2024 Stack Overflow poll found 64 percent of contributors lean on AI assistants weekly.

Marketing teams tend to jump in soon after the engineers. Pressed for highly tailored outreach, reps paste customer names, job titles, and purchase notes into ChatGPT [https://chat.openai.com] to smooth the wording. The free tier feels quicker than updating CRM merge fields, so a steady stream of PII flows beyond company walls.

On the data side, analysts reach for their own shortcuts. When a spreadsheet error blocks a forecast, they paste the entire workbook formula chain into ChatGPT [https://chat.openai.com] for debugging. That file often contains unreleased revenue numbers and supplier terms. Cisco’s 2023 data-privacy benchmark reported that 52 percent of knowledge workers admit sharing sensitive business data with external AI tools to save time.

Security teams keep missing the same entry points:

- Browser add-ons that wedge prompt bars beside Gmail and Jira screens
- Home-grown Slack bots wired to the OpenAI API with default settings (see the sketch at the end of this section)
- No-code Zapier zaps chaining spreadsheet updates to LLM summaries
- Mobile keyboard apps silently calling remote models while employees text clients

Few of these routes honor single sign-on, so logs show a bland HTTPS call marked “general web.” CASBs may flag the domain once, yet every follow-up request rides the same CDN and blends with normal traffic.

Most employees assume they are helping the business with each prompt they type. They want a faster ticket close, a sharper subject line, a cleaner macro. Still, intent does not equal control, and each unsupervised prompt widens a visibility gap that security teams struggle to close.
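The Slack-bot entry point above deserves special attention because of how little code it takes. Here is a minimal sketch of such a home-grown bot, assuming the `slack_bolt` and `openai` Python packages with tokens supplied through environment variables; the model name and port are placeholders.

```python
import os

from openai import OpenAI
from slack_bolt import App

# Default settings, personal workspace tokens, no data controls.
app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

@app.event("app_mention")
def answer(event, say):
    # Whatever a teammate typed - customer names, source code, deal terms -
    # is forwarded verbatim to an external model, unlogged and unredacted.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": event["text"]}],
    )
    say(reply.choices[0].message.content)

if __name__ == "__main__":
    app.start(port=3000)  # one command and the backchannel is live
```

Roughly twenty lines and one personal API key put the bot in a channel, and nothing in this flow touches SSO, procurement, or DLP.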
## Where do exposure and compliance gaps arise with LLM use?

Unsanctioned prompts travel farther and linger longer than most teams realize. When an employee drops source code or a customer roster into OpenAI [https://openai.com], that text leaves the corporate boundary right away and often lands on a server the company doesn't control. A 2023 Cyberhaven study found 11 percent of knowledge workers pasted sensitive data into ChatGPT during its first month on the corporate network, so exposure grows quickly.

The leap from friendly chatbot to compliance headache shows up in several concrete ways:

- Trade secrets can resurface because many vendors retain prompt data for up to 30 days or use it for fine-tuning.
- GDPR Articles 44–46 restrict personal-data transfers outside the EU without adequate safeguards, yet most large LLM APIs still terminate in the United States by default.
- Customer NDAs often forbid sharing identifiable details with external parties, and an LLM provider counts as one.
- Protected health information entered into a public model can put a covered entity in immediate violation of HIPAA.
- Prompt history cannot be retrieved during e-discovery, leaving legal teams unable to prove what left the building.

Contract language about data retention often slips past even diligent reviewers. Anthropic [https://www.anthropic.com] says it “may store snippets to improve service,” and Microsoft’s Azure OpenAI provides an opt-out few users notice.

Security teams lose chain of custody as soon as the chat window closes. If legal later requests the prompt that contained a departed employee’s code, no trustworthy record exists unless the endpoint logged it on the way out.

Shadow AI spreads quietly between audits and creates surprises at the worst possible moment. It shows up one paste at a time, then balloons into a compliance gap big enough to delay product launches or trigger fines.

## Why do Shadow IT controls fail for generative AI?

Traditional cloud access security brokers were built for file uploads, not chatty streams of tokens that shift with every keystroke. They spot ZIP files heading to Dropbox, yet an engineer who pastes source code into a ChatGPT prompt uses the same HTTPS tunnel and never triggers a rule. Security teams often learn about the traffic only after finance notices an unexpected OpenAI charge on the corporate card.

Even tight URL filtering can't see through the encrypted WebSockets that many LLM plugins open once installed. A marketing manager installs a Chrome extension that promises instant headline drafts; the add-on calls home through an innocent-looking domain, and the prompt payload hides inside TLS. Tools like Zscaler [https://www.zscaler.com] tag the destination as “AI,” but they lack the context to tell whether the content violates policy, so most alerts pile up in an ignored queue.

Across many enterprises, three recurring technical gaps undermine data-protection strategies:

- Streaming scrambles context. DLP tools look for full phrases or credit-card numbers, yet chat traffic moves in token-sized fragments like tok763 and tok122, so a secret split across chunks never matches the regex.
- Billing signals lag. Procurement measures gigabytes, while LLMs bill per 1,000 tokens, so costs spike before anyone notices.
- Extension sprawl hides risk. Browser stores push weekly updates, and each new version can ship a different remote model without notice or audit.

Adding to the blind spot, model providers update their systems quickly and quietly. OpenAI expanded GPT-4 Turbo’s context window twice in a single quarter, which changed how much data one call could leak without the endpoint URL ever changing. Version drift also breaks static allow lists; yesterday’s “safe” API now retains prompts for thirty days because the privacy terms changed in a point release.

Legacy controls assume code stays put, yet generative AI lives in permanent beta. Until monitoring tools can read inside the conversation, organizations will keep chasing shadows instead of managing risk.

## How can firms balance innovation and risk with LLMs?

Every employee should know which LLM tools are safe before they click anything at work. Clear guidance shrinks the attack surface fast. Publish a short “approved LLM” catalog that scores each model on data terms, log retention, and hosting region. Keep the list brief and update it every quarter so people actually read it.

Enterprise contracts deserve the same meticulous attention as network traffic. For every sanctioned provider, require language that bans training on your prompts, sets 30-day log deletion, and grants audit rights over subprocessors. Legal writes the paper, but security supplies the non-negotiables; otherwise the document turns into marketing copy vendors sign without consequence.

Make safe behavior the default, not a scavenger hunt, by placing controls where work happens:

- Deploy browser add-ons that flag credit-card numbers, patient IDs, or source code before the prompt leaves the laptop.
- Route all external LLM requests through a secure gateway that strips or hashes detected secrets, then stores a clean copy for audit (a redaction sketch follows this list).
- Enforce role-based keys so finance can reach a model tuned on public filings while R&D stays on a local instance.
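To make the gateway idea concrete, here is a minimal sketch of the redaction step such a gateway could run before a prompt leaves the network. The three regex detectors and the hash-tag format are illustrative assumptions; a production gateway would rely on tuned, validated classifiers rather than a handful of patterns.

```python
import hashlib
import re

# Illustrative detectors only; real gateways use tuned, validated classifiers.
PATTERNS = {
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def redact(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace detected secrets with stable hash tags before the prompt leaves.

    Returns the clean prompt plus a tag-to-original map that stays in the
    internal audit store and is never sent to the model provider.
    """
    vault: dict[str, str] = {}

    def _swap(match: re.Match) -> str:
        secret = match.group(0)
        tag = "<redacted:" + hashlib.sha256(secret.encode()).hexdigest()[:8] + ">"
        vault[tag] = secret
        return tag

    for pattern in PATTERNS.values():
        prompt = pattern.sub(_swap, prompt)
    return prompt, vault

clean, vault = redact(
    "Card 4111 1111 1111 1111 for jane@acme.com, key AKIA0123456789ABCDEF"
)
print(clean)  # Card <redacted:...> for <redacted:...>, key <redacted:...>
```

Hashing instead of deleting keeps the prompt coherent for the model while letting auditors map each tag back to the original value held in the audit vault.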
Internal sandboxes keep curiosity from leaking data onto the public internet. Stand up a small model on in-house GPUs, seed it with anonymized documents, and invite teams to experiment. Retrieval-augmented generation with on-prem embeddings lets the bot answer policy questions without ever sending the policy outside.

Treat governance as an ongoing program, not a checkbox exercise. A committee spanning security, legal, and compliance, plus two rotating business leads, meets every six weeks to review logs, vendor requests, and policy gaps. It tracks three numbers: the percentage of prompts hitting approved endpoints, incidents per thousand prompts, and average turnaround on vendor decisions. When those metrics improve, innovation can move quickly without hiding in the shadows.

## Conclusion

Unchecked use of large language models now hangs over nearly every company. One-click sign-ups, easy browser logins, and raw speed tempt employees to paste code, customer details, and trade secrets into public chatbots before security can even spot the traffic. The result eclipses old Shadow IT, widens compliance gaps, and pushes security, legal, and business leaders to set policy, add guardrails, and share ownership.

Leaders can capture value without losing control by treating LLMs as managed enterprise platforms. Approve usage in the open, record every prompt, and coach employees on safe workflows instead of chasing risky backchannels.

## Audit your company's SaaS usage today

If you're interested in learning more about SaaS Management, let us know. Torii's SaaS Management Platform can help you:

- Find hidden apps: Use AI to scan your entire company for unauthorized apps. Happens in real-time and is constantly running in the background.
- Cut costs: Save money by removing unused licenses and duplicate tools.
- Implement IT automation: Automate your IT tasks to save time and reduce errors, like offboarding and onboarding automation.
- Get contract renewal alerts: Ensure you don't miss important contract renewals.

Torii is the industry's first all-in-one SaaS Management Platform, providing a single source of truth across Finance, IT, and Security. Learn more by visiting Torii [https://www.toriihq.com].