Shadow AI: The IT Industry's Latest Cybersecurity Challenge
Saugat Sindhu, Senior Partner and Advisory Services Leader – CRS, Wipro Limited
As companies rapidly deploy Artificial Intelligence (AI) and Generative AI (GenAI) to boost productivity and efficiency, the responsibility to ensure safe and secure execution becomes paramount. The necessary guardrails are still a work in progress. IT firms, eager to grab the lead in the AI race, may find it tempting to incorporate GenAI into daily operations without fully considering the risks. As a result, ‘Shadow AI’ has emerged as a prominent cyberthreat that the IT industry is grappling with today.
What is Shadow AI?
New technologies are accompanied by new risks, which is something companies have learned to expect. Although the term sounds nefarious, shadow AI is a simpler and more ubiquitous issue – employees using personal accounts to deploy AI tools rather than their official ones. It refers to the unauthorized use of AI technology in a professional workspace, outside of the control, visibility, and regulation of a firm’s IT department. For example, employees of a major consumer electronics corporation found themselves in hot water after entering proprietary code into a popular GenAI chatbot while looking to generate content. The code was leaked, leading the firm to ban its employees from using the GenAI tool in question. In-house marketing teams are also vulnerable, as untracked use of AI tools for content creation can result in leaks of sensitive customer data and the production of low-quality material.
Forrester’s 2024 Predictions Report highlighted this growing problem, revealing a potential ‘shadow AI pandemic’ this year, brought about by over 60 percent of employees using their personal AI tools at work, leading to rampant security challenges, especially where data is concerned. The issue becomes even more pressing within the Indian IT market, where nearly half (46 percent) of companies believe that corporate data hosted on the cloud is sensitive in nature. This ranges from financial information and customer details to employee data, intellectual property, and legal documents.
What is Shadow AI Being Used For?
In the absence of security protocols, employees can tap into a plethora of unmoderated yet easily accessible AI software and solutions that are used for:
• Chatbots and virtual assistants that fortify client engagement by addressing frequently asked questions, queries, and requests.
• Content creation by Communications/Marketing teams, such as video animation, graphics, blog writing, and article production.
• Refining and reducing repetitive manual tasks across departments such as Human Resources or Accounts.
• Examining massive amounts of raw data to provide actionable insights or customized messages tailored to different stakeholder behavior.
What Risks Come with Shadow AI?
With external threats escalating, shadow AI becomes an even more serious problem since it can directly transgress basic internal security measures and systems. The risks of shadow AI include, but are not limited to:
• Legal and compliance violations: Laws on AI use can vary wildly across geographical locations and industries. If detected, unapproved use of AI tools can lead to violations of rules such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), etc.
• Data breaches and malware attacks: Quick-fix GenAI solutions may lack basic encryption or safeguards, leaving user data vulnerable to easy extraction by malware.
• Operational inefficiency: Shadow AI is often the result of employees looking to accelerate basic tasks, but fragmented data management or improper resource allocation can have the opposite effect. Inaccurate or inconsistent data can produce incorrect results that require duplicate or additional effort to rectify.
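One common control against the data-leak risk above is scanning prompts for sensitive material before they ever reach an external GenAI tool. The sketch below is a minimal, illustrative example of that idea; the pattern names and regular expressions are assumptions for demonstration, not a real data-loss-prevention policy, which would be far broader and typically enforced at the network layer.

```python
import re

# Illustrative patterns only; a production DLP policy would cover many
# more data classes (credentials, health records, source code, etc.).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"(?i)\b(?:api[_-]?key|secret)\b\s*[:=]\s*\S+"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize this: contact alice@example.com, api_key = sk-123abc"
findings = scan_prompt(prompt)
if findings:
    print("Blocked: prompt contains", findings)
```

In practice a check like this would sit in a sanctioned gateway in front of approved AI tools, so that flagged prompts are blocked and logged rather than silently forwarded.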
How can Businesses Protect Themselves from Shadow AI and its Risks?
The rise of shadow AI highlights the importance of adopting the principle of ‘Security by Design’. Proactively integrating access control and restrictions across all stages of the software and hardware development process helps reduce flaws that can be exploited by external threats or misused internally to access unauthorized devices and data. Addressing these vulnerabilities at inception makes IT systems more resilient to breaches, as opposed to reactive measures that are deployed only once a leak has taken place.
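At its simplest, the access-control element of Security by Design can be pictured as an allowlist: only sanctioned AI endpoints are reachable, and everything else is blocked and logged. The hostnames below are hypothetical placeholders; real deployments would enforce such a list at a secure web gateway or proxy rather than in application code.

```python
# Hypothetical allowlist of sanctioned AI endpoints (placeholder names).
APPROVED_AI_TOOLS = {
    "copilot.company.internal",
    "approved-llm.vendor.example",
}

def is_request_allowed(hostname: str) -> bool:
    """Security-by-design check: only sanctioned AI endpoints pass."""
    return hostname.lower() in APPROVED_AI_TOOLS

for host in ["approved-llm.vendor.example", "random-chatbot.example"]:
    status = "allow" if is_request_allowed(host) else "block and log"
    print(f"{host}: {status}")
```

The design point is that the default is deny: unapproved tools never receive corporate data in the first place, rather than being discovered after a leak.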
Given these risks, it is essential to understand how shadow AI is being used within organizations. The key to steering clear of the pitfalls of shadow AI is to proactively inculcate a culture of secure and regulated AI use from the very start. Once a workforce becomes accustomed to the habit of resorting to shortcuts and convenient but unprotected AI tools, it can be a difficult cycle to break. Fostering an environment where employees are equipped to safeguard themselves from the dangers of AI misuse is the most effective measure against shadow AI.
Leaders should encourage proper business practices by providing regulated, approved tools, software, and solutions. Educating employees through awareness campaigns, workshops, and sessions with cybersecurity experts can also help develop a positive work culture.
Recommended steps to prevent shadow AI from running rampant within an organization include:
• Top-level management must lead by example when it comes to endorsing policies and practices that should be followed across all levels of an organization.
• Establish and grow employee-facing communication channels, from e-modules to newsletters, that spread awareness.
• Create clear policies and guidelines, with strict consequences for violations that discourage repeat offenses.
• Use real-world examples and situations to highlight the potential downsides of shadow AI.
Leaders across businesses must emphasize responsible AI and take responsibility for propagating it at all levels of their organizations. To instill this culture, new roles are being created at the top levels of companies. We are witnessing an increasing number of firms establish AI task forces or governance officers who oversee the development of an environment where AI is used safely and securely. With the number of AI-related cybersecurity threats poised to keep increasing, companies must remain vigilant and take forward-looking steps from the early stages to mitigate all possible breaches.