The Dangers of Using Public Generative AI Tools in the Workplace

Several employees have entered confidential information, such as customer data, into publicly accessible generative AI systems.

[Image: a lock with wheels turning inside. Caption: Employees risk inputting sensitive data into AI tools.]

Imagine you’re standing at the edge of a cliff, holding a precious document. The wind is howling, threatening to rip it from your grasp. You know that letting go could have disastrous consequences, but for some reason, you still decide to throw it into a swirling vortex below. That is essentially what some employees are doing when they willingly input sensitive data into publicly available generative artificial intelligence (AI) tools.

Customer information, sales figures, financial data, and personally identifiable information like email addresses and phone numbers are all at risk. A recent study conducted by Veritas Technologies revealed that 39% of employees recognize the potential for a leak of sensitive data when using public generative AI tools, while 38% fear that these tools could produce incorrect, inaccurate, or unhelpful information. Additionally, 37% of respondents cited compliance risks, and 19% believed that the technology could negatively impact productivity.
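
To make that exposure concrete, here is a minimal sketch of a pre-submission check: a short script that scans a draft prompt for obvious PII, such as email addresses and phone numbers, before it is pasted into a public AI tool. The regular expressions, function name, and blocking behavior are illustrative assumptions, not a description of any real data loss prevention product.

```python
import re

# Minimal sketch of a regex-based PII check run before a prompt leaves the
# organization. Patterns and behavior are illustrative assumptions only;
# this is not a complete data loss prevention solution.
PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone number": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def find_pii(prompt: str) -> list[str]:
    """Return the categories of obvious PII detected in a draft prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    draft = "Write a follow-up email to jane.doe@example.com, reachable at +1 555 010 7788."
    hits = find_pii(draft)
    if hits:
        print("Hold on, this prompt appears to contain:", ", ".join(hits))
    else:
        print("No obvious PII found; still review before submitting.")
```

A production-grade control would go much further (named-entity detection, allow-lists, audit logging), but even a lightweight gate like this surfaces the risk at the moment the data is about to leave the organization.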

🔥 Hot Tip: Want to learn how to use AI responsibly? Check out this article for some valuable insights!

Despite these concerns, a surprising 57% of employees admitted to using public generative AI tools in the office at least once a week, with 22.3% even relying on the technology daily. However, not everyone is taking this risk. Approximately 28% of people reported not using these tools at all.

So why are employees willingly putting sensitive data in harm’s way? The study found that 42% of respondents used generative AI tools for research and analysis, while 41% used them to write email messages and memos, and 40% hoped to improve their writing skills.

But what kinds of data do employees believe can provide business value when entered into these public generative AI tools? The results showed that 30% of employees pointed to customer information, such as references, bank details, and addresses, while 29% cited sales figures. Additionally, 28% highlighted financial information, and 25% indicated personally identifiable information. It was also discovered that 22% of workers referred to confidential HR data, and 17% mentioned confidential company information. Remarkably, 27% of respondents did not believe that inputting any sensitive information into these tools could yield value for the business.

Even more concerning is the fact that 31% of employees admitted to entering sensitive data into these tools, with 5% unsure if they had done so. On the positive side, 64% of respondents claimed they did not input any sensitive data into public generative AI tools.

🔥 Hot Tip: Are you considering a career in AI? Check out these 5 steps to pivot your career successfully!

But it’s not all doom and gloom. There are some benefits to using generative AI in the workplace. When asked about the benefits to their organization, 48% of respondents highlighted faster access to information. Forty percent mentioned higher productivity, 39% believed that generative AI could replace mundane tasks, and 34% thought that it helped generate new ideas.

Interestingly, 53% of employees considered it unfair that some colleagues have access to and use generative AI tools, seeing it as an unfair advantage. In fact, 40% believed that those who used these tools should be required to teach others on their team or in their department. Some employees even said that colleagues who used these tools should be reported to their line manager or face disciplinary action.

Surprisingly, only 36% of respondents reported having formal guidance and policies on the use of public generative AI tools at work. This lack of clear policies leaves employees exposed to potential risks. However, a significant majority (90%) of respondents agreed that it was important to have guidelines and policies on the use of emerging technology in the workplace. Sixty-eight percent believed that everyone should know the “right way” to adopt generative AI.

Risks Will Escalate as Generative AI Use Climbs

The security risks associated with generative AI could escalate as its adoption increases. According to IBM’s X-Force Threat Intelligence Index 2024, attackers are expected to strike key platforms at scale once a single generative AI technology approaches 50% market share or the market consolidates to three or fewer technologies. In other words, as generative AI becomes more prevalent, cyber criminals will seize the opportunity to exploit vulnerabilities and launch targeted attacks.

🌟 In-Depth Analysis: To gain a deeper understanding of the risks and future developments of generative AI, read this comprehensive report.

It is crucial for organizations to secure their AI models before threat actors scale their activities. Cyber criminals have already started targeting AI technologies, with more than 800,000 posts about AI and GPT (Generative Pre-trained Transformer) appearing on dark web forums in 2023. As adversaries continue to optimize their attacks, identity-based threats will grow.

IBM emphasized the need for a holistic approach to security in the age of generative AI. Enterprises must recognize that securing their AI models takes more than novel tactics; the underlying infrastructure those models run on has to be protected as well. By getting the security fundamentals right, such as defending against identity-based attacks and the abuse of valid accounts, businesses can mitigate the risks associated with generative AI.

🎥 Video Addition: To grasp the urgency of securing AI models, watch this enlightening video that delves into the importance of AI security.

In 2023, there was a 266% increase in attacks involving malware designed to steal personally identifiable information. European organizations were the most targeted, accounting for 32% of incidents globally, though Asia-Pacific and North America also experienced significant impacts. A staggering 70% of attacks targeted critical infrastructure organizations, and 85% of those incidents could have been mitigated through patching, multi-factor authentication, or applying least-privilege principles.
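
As a rough illustration of what the last two of those controls look like in practice, the sketch below audits a few hypothetical account records and flags anything missing multi-factor authentication or holding roles beyond its documented need. The records, field names, and role lists are assumptions made for this example; a real audit would pull from an identity provider or directory service.

```python
# Minimal sketch of an account-hygiene audit covering two basic controls:
# multi-factor authentication and least privilege. All data below is
# hypothetical; a real audit would query an identity provider.
accounts = [
    {"user": "alice", "mfa_enabled": True, "roles": {"analyst"}},
    {"user": "bob", "mfa_enabled": False, "roles": {"analyst", "admin"}},
]

# Roles each account actually needs for its documented job function (assumed).
required_roles = {
    "alice": {"analyst"},
    "bob": {"analyst"},
}

for account in accounts:
    user = account["user"]
    if not account["mfa_enabled"]:
        print(f"{user}: multi-factor authentication is not enabled")
    excess = account["roles"] - required_roles.get(user, set())
    if excess:
        print(f"{user}: roles beyond documented need: {sorted(excess)}")
```

Checks like these are simple to automate, which underscores how preventable many of the incidents in the IBM data appear to have been.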

🌍 Global Impact: To explore how different regions were targeted by cyber attacks, view these insightful statistics.

Staying up to date with the latest trends and news is crucial for anyone working with these tools. Here are some additional resources for further reading:

  1. Walmart Debuts Generative AI Search and AI Replenishment Features at CES
  2. Critical 2024 AI Policy Blueprint for Safeguarding Workplace Risks
  3. China’s Generative Video Race Heats Up
  4. Microsoft Expands EU Data Localization Efforts
  5. Anecdotes Lands $25M to Expand Its Risk Management Compliance Business

So, next time you’re tempted to toss sensitive data into the jaws of public generative AI tools, remember the risks and consequences. Share this article with your colleagues to raise awareness about the importance of using emerging technologies responsibly and securely. Together, we can protect our data and navigate the world of AI with confidence.

Let’s Connect: What are your thoughts on the risks and benefits of public generative AI tools in the workplace? Share your experiences in the comments below and join the discussion on our social media platforms!


References

  1. image: Link
  2. Walmart Debuts Generative AI Search and AI Replenishment Features at CES: Link
  3. Critical 2024 AI Policy Blueprint for Safeguarding Workplace Risks: Link
  4. China’s Generative Video Race Heats Up: Link
  5. Microsoft Expands EU Data Localization Efforts: Link
  6. Anecdotes Lands $25M to Expand Its Risk Management Compliance Business: Link
  7. Five ways to use AI responsibly: Link
  8. The best AI chatbots: Link
  9. Want to work in AI? How to pivot your career in 5 steps: Link
  10. IBM’s X-Force Threat Intelligence Index 2024: Link
  11. How renaissance technologists are connecting the dots between AI and business: Link
  12. So you want to work in AI? How tech professionals can survive and thrive at work in the time of AI: Link
  13. How tech professionals can survive and thrive at work in the time of AI: Link
  14. Have 10 hours? IBM will train you in AI fundamentals – for free: Link
  15. China-backed Volt Typhoon hackers lurked in US critical infrastructure for at least five years: Link