New survey shows most people want regulation for AI due to distrust

Most American adults harbor reservations about artificial intelligence (AI) and its potential misuse, according to a recent survey from the MITRE Corporation and the Harris Poll. The findings indicate that mounting scandals surrounding AI-generated malware and disinformation are eroding public trust and may pave the way for AI regulation.

The survey, which polled 2,063 U.S. adults, reveals that only 39% of respondents believe today’s AI technology is “safe and secure,” down nine percentage points from the previous survey the two organizations conducted in November 2022.

Concerns about deepfakes and other artificially engineered content were foremost in the minds of respondents, with 82% expressing worry about these phenomena. Additionally, 80% feared the potential use of AI in malware attacks. The majority of participants voiced concerns over AI’s involvement in identity theft, data harvesting, job displacement, and more.

Interestingly, the survey highlights that growing concerns about AI cut across various demographic groups. While 90% of baby boomers expressed worries about the impact of deepfakes, 72% of Gen Z members shared the same concerns.

Although younger people are generally less suspicious of AI and more likely to integrate it into their daily lives, reservations remain high over whether the industry should take more proactive measures to protect the public and whether AI should be subject to regulation.


The decline in support can be attributed to months of negative media coverage of generative AI tools such as ChatGPT and Bing Chat, along with the controversies surrounding those products. As stories of misinformation, data breaches, and malware continue to accumulate, public acceptance of the forthcoming AI future appears to be waning.

Respondents were also asked whether the government should regulate AI, with 85% expressing support for such measures, up three percentage points from the previous survey. Likewise, 85% agreed that making AI safe and secure for public use should be a nationwide effort spanning industry, government, and academia. Furthermore, 72% believed that the federal government should allocate more time and funding to AI security research and development.

The prevailing anxiety regarding AI’s potential to facilitate malware attacks is intriguing. We recently interviewed a group of cybersecurity experts on this very topic, and their consensus was that while AI could be employed in malware, it is not currently a particularly formidable tool. Some experts asserted that AI’s ability to generate effective malware code was weak, while others argued that hackers were more likely to find better exploits in public repositories than rely on AI for assistance.

However, the growing skepticism surrounding AI could shape the industry’s actions and spur companies like OpenAI to invest more in safeguarding the public from their products. With such overwhelming public backing, it would not be surprising to see governments move to implement AI regulation sooner rather than later.