# Article Name
Find Every App - SaaS, AI Discovery, and Governance Guide

# Article Summary
Practical guide to continuous SaaS and AI discovery, centralizing visibility, prioritizing risks, and automating review workflows

# Original HTML URL on Toriihq.com
https://www.toriihq.com/articles/discover-shadow-ai-apps

# Details
How did one company uncover 600-plus apps after expecting 175 to 200, while app counts rose 20 percent across four of five company sizes? Discover practical ways to find every app, including fast-growing AI tools from 10 to 25-person teams, then decide what to keep, secure, or cut. Learn how to centralize visibility, streamline reviews, and eliminate surprise spend while reducing risk.

In this video, John Baker, Sr Manager of Digital Strategy at Torii, and Rachel MacDonald, Senior Enterprise Customer Success Manager at Torii, share field-tested tactics to tame shadow AI. See how centralized discovery outperforms manual audits, why consumption-based pricing changes cost control, and what to ask vendors about models, training data, and opt-outs. Follow two plug-and-play workflows to triage backlogs and new signups using Slack, tags, PII-aware forms, ticketing, and audit trails, then use dynamic views and a simple framework to align teams and prioritize risk. A must-watch for IT, security, and procurement leaders who need to uncover blind spots, streamline governance, and cut SaaS waste before renewals hit.

This article was originally a video (YouTube link here [https://www.youtube.com/watch?v=2YFLeiMiXKQ]). Below is the full transcript:

Today’s webinar is titled Find Every App, a practical guide to SaaS and AI discovery. We have shifted the focus from simply finding applications to answering what to do with apps once they are discovered, and how to think about AI apps in particular. For housekeeping, we recommend using computer audio rather than a telephone connection.
All microphones are muted to limit background noise, and if you have questions, please use the Q&A section at the bottom of the screen.

We ran a quick poll asking how confident people feel in their current processes for managing shadow AI, and many respondents reported low confidence. That response aligns with broader uncertainty in the field. At Torii we published a SaaS benchmark report that analyzed aggregated data across SaaS ecosystems. For four out of five organization size groups, the number of applications increased by about 20 percent over the past year, which was surprising. Rachel has seen this trend in the field, and she attributes it in part to renewed innovation driven by AI. That renewed activity is bringing many new applications into enterprise environments.

When we dug into the data, we found that many of the newly discovered shadow IT apps are relatively small AI-powered tools, often created by teams of 10 to 25 employees. In prior years, shadow IT more often consisted of well-known productivity or travel apps, which were easier to monitor. Research from Gartner emphasizes the importance of centralized visibility into the SaaS ecosystem to mitigate cyber attacks. That finding prompted us to consider how shadow AI differs from traditional shadow IT and where the similarities lie.

There are several key differences to keep in mind. Shadow AI tools are often standalone products, but they can also be embedded features or add-ons within other applications. More importantly, many AI tools train on or reuse company data, and users are often incentivized to feed significant amounts of data into them to get value. Consumption-based pricing is increasingly common for AI tools, which differs from the traditional subscription-per-user model and may present new cost-management challenges. These dynamics introduce additional security, privacy, and budgetary considerations.
A trend Rachel is seeing is that IT and security teams are taking more control of apps that were previously managed by other teams. For example, enterprise platforms such as Salesforce are adding AI features, which pushes governance and review processes toward centralized teams. There is an incentive misalignment between teams focused on outcomes, such as marketing, and teams focused on risk, such as IT and infosec. Balancing productivity gains with potential liabilities is an organizational challenge, and decisions will depend on company culture and context.

We will now move into tactics. Discovery is the starting point: you cannot secure, manage, or optimize what you cannot see. Many organizations attempt discovery using network monitoring, identity data, CASB, endpoint management, expense reviews, or employee surveys, but these approaches are often incomplete and quickly become out of date. Without automation or a designated tool, discovery can become a full-time job. At Torii we offer continuous discovery, and we encourage teams unsure about their coverage to run a trial and see what real-time discovery can reveal. An example from our field work illustrates the gap: a customer who believed they had 175 to 200 applications discovered over 600 applications after onboarding Torii. That discrepancy highlights the scale of unknown apps in many environments.

I developed a framework to think about shadow IT and shadow AI, which I called REPACK MY, not because the name is perfect, but because it encapsulates key elements: real-time detection, cross-functional engagement, prioritization of risks, continuous improvement, centralized data, maintained records, and aligned teams. Rachel sees centralizing data as one of the biggest practical hurdles. Security teams often have separate tools and may be surprised by the additional data available in a centralized solution, which creates opportunities for better collaboration.
We will now describe two practical workflows: addressing an existing backlog of applications, and handling newly discovered apps.

For existing apps, one approach is to use a custom application field to track whether generative AI is present, and to trigger a workflow when that field is not set. If a primary application owner is defined, the workflow can delegate the initial review to that owner via a Slack message. The owner receives instructions to contact the vendor and to copy security and privacy teams, and a ticket is created in the ticketing system for follow-up. The vendor inquiry should ask clear questions about AI usage, including whether AI capabilities are provided, how they are integrated and maintained, whether third-party models or external engines are used, which models are in use, whether customer data will be used to train models, whether an opt-out is offered, and how security and privacy are ensured.

If no primary owner is set, you can prioritize applications using tags for features such as AI-generated content, development tools, or compliance attributes like GDPR, SOC 2, and ISO 27001. Those tags help identify the highest-priority apps for review. Rachel noted that Torii can automate parts of this process, including sending vendor emails when vendor contact information is available in the system. Automation reduces manual work and helps scale these reviews. A centralized tool also provides an audit trail, so you can review historical workflow runs, vendor responses, and assessment outcomes. That historical context is valuable for compliance and repeated assessments.

For newly discovered apps, we recommend sending a Slack notification that includes surface-level data, such as compliance status and relevant tags.
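The existing-app branching just described can be sketched as a small routing function. This is only an illustration of the decision logic, not Torii's actual workflow engine; the field names (`gen_ai_present`, `primary_owner`), tag values, and action strings are all assumptions.

```python
# Hypothetical sketch of the backlog-review workflow described above.
# Field names, tags, and action labels are illustrative, not product API.

HIGH_PRIORITY_TAGS = {"ai-generated-content", "development-tools"}
COMPLIANCE_TAGS = {"gdpr", "soc-2", "iso-27001"}

def triage_existing_app(app: dict) -> list[str]:
    """Return the workflow actions to take for one cataloged app."""
    actions: list[str] = []
    if app.get("gen_ai_present") is not None:
        # The generative-AI field is already set; no review is triggered.
        return actions

    owner = app.get("primary_owner")
    if owner:
        # Delegate the initial review to the owner and open a follow-up ticket.
        actions.append(f"slack: ask {owner} to send the vendor AI questionnaire")
        actions.append("ticket: open follow-up in ticketing system")
    else:
        # No owner defined: rank by feature and compliance tags instead.
        tags = set(app.get("tags", []))
        if tags & HIGH_PRIORITY_TAGS:
            actions.append("queue: high-priority review")
        elif tags & COMPLIANCE_TAGS:
            actions.append("queue: compliance-attribute review")
        else:
            actions.append("queue: standard review")
    return actions
```

In a real workflow tool the returned actions would map to Slack and ticketing integrations; keeping the branching in one pure function makes the prioritization policy easy to audit and adjust.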
At the same time, send a brief form to the user who signed up, asking about current usage, the purpose of the app, the designated owner, whether personally identifiable information will be uploaded, and whether the app will be integrated with existing systems. Responses to that form determine the next steps. If a user plans to upload PII, treat the app as high priority, ask the user to refrain from use until a security review is complete, and change the application status to security assessment. If the app is AI-enabled without PII, treat it as a medium priority and move it to evaluation. Other apps can be queued for later review, depending on your resources.

You can save custom views in the application catalog to create a dynamic list of AI-tagged tools or other feature-based groups. That saved view becomes a working list that grows as new apps are discovered, and it supports repeated assessment workflows. Creating focused views helps onboard other teams by showing them only the apps and priorities that matter to them, rather than presenting the entire catalog at once. This approach reduces friction and fosters collaboration. There are many building blocks you can combine, including custom application details and branching logic, so you can design prioritization workflows that reflect your organization’s risk tolerance and capacity to act.

A few closing points: perfection is the enemy of good, so start with practical processes rather than waiting for a perfect system. Company culture determines which controls are appropriate, and policies should be tailored to your context. Finally, we are early in the evolution of AI in enterprise software, so engaging now will yield compounding benefits.

A question came up about application categories and whether AI or AI-enabled could become a category. We recently released customizable categories for enterprise customers, which gives you more control over how apps are classified.
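The intake-form triage described earlier (PII means high priority and a usage hold, AI without PII means medium priority, everything else is queued) can be sketched the same way. The answer keys, priority labels, and status names here are assumptions for illustration, not product-defined constants.

```python
# Illustrative sketch of the new-app intake triage described above.
# Form keys, priorities, and status values are hypothetical.

def triage_new_app(form: dict) -> tuple[str, str]:
    """Map intake-form answers to a (priority, next_status) pair."""
    if form.get("will_upload_pii"):
        # PII planned: ask the user to pause usage and route to security.
        return ("high", "security assessment")
    if form.get("ai_enabled"):
        # AI without PII: evaluate, but no immediate usage hold.
        return ("medium", "evaluation")
    # Everything else waits in the queue, depending on team capacity.
    return ("low", "queued for later review")
```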
However, AI is generally a mechanism rather than a primary use case, so it may be better represented by tags or custom fields than by a single top-level category. One practical approach is to add AI-related fields to procurement and renewal processes and capture that information in contract management systems. If those fields are synced into your discovery platform, you can maintain centralized records of which managed applications have AI components.

Thank you to everyone who joined. If you have account-specific questions, please reach out to your customer success manager for detailed assistance. Thank you, Rachel, for joining the discussion. Goodbye, and take care.