As artificial intelligence becomes increasingly embedded in workplace operations, new data highlights significant cybersecurity vulnerabilities linked to its widespread use. According to a recent study by Cybernews, around 75% of employees now use AI tools, particularly AI chatbots, to complete work-related tasks. While this trend is widely credited with enhancing productivity, it may also be creating new avenues for data breaches and cyberattacks.
The concern is compounded by the lack of formal guidance in many workplaces. Only 14% of organizations currently have official AI usage policies in place, according to the report. This absence of oversight has led to a rise in unmonitored AI adoption, which could inadvertently expose businesses to risks such as credential theft, data leaks, and infrastructure vulnerabilities.
Cybernews Digital Index Highlights Alarming Security Gaps
In February 2025, Cybernews researchers evaluated 52 of the most visited AI web tools using Semrush traffic data. The findings, published in the Cybernews Business Digital Index, underscore a troubling pattern of security lapses across these platforms:
84% of the analyzed AI tools had experienced at least one data breach.
36% reported a breach within the previous 30 days alone.
93% exhibited misconfigurations of SSL/TLS, the encryption protocols essential for secure data transmission.
91% showed weaknesses in system hosting and infrastructure management.
51% had been linked to stolen corporate credentials.
44% of the companies behind these tools showed signs of employee password reuse.
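Password reuse of the kind flagged in the last figure is typically detected by comparing credentials recovered from separate breach dumps and grouping accounts that share an identical password. A minimal sketch of that grouping step, using hypothetical plaintext credentials (real audits compare leaked hashes, not plaintext):

```python
import hashlib
from collections import defaultdict

def find_reused_passwords(credentials: dict[str, str]) -> list[set[str]]:
    """Group accounts that share an identical password.

    `credentials` maps account name -> password, as might be recovered
    from a breach dump. Hashing each password and bucketing accounts by
    digest surfaces every group of accounts reusing the same secret.
    """
    by_digest: defaultdict[str, set[str]] = defaultdict(set)
    for account, password in credentials.items():
        digest = hashlib.sha256(password.encode()).hexdigest()
        by_digest[digest].add(account)
    # Only groups with more than one account indicate reuse.
    return [accounts for accounts in by_digest.values() if len(accounts) > 1]

# Hypothetical credential dump (names and passwords are illustrative):
leak = {
    "alice@corp.example": "Spring2024!",
    "bob@corp.example": "Spring2024!",
    "carol@corp.example": "uniqu3-passphrase",
}
# find_reused_passwords(leak) -> [{"alice@corp.example", "bob@corp.example"}]
```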
While some platforms received favorable security ratings, with 33% earning an A, others fared much worse: 41% received a D or F, indicating high to critical risk levels.
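The SSL/TLS weaknesses the researchers describe commonly involve servers still negotiating obsolete protocol versions. A minimal sketch using Python's standard ssl module that probes a server and classifies the negotiated protocol (the host name is a placeholder, and the acceptable-version cutoff reflects current best practice rather than anything in the report):

```python
import socket
import ssl

def classify_tls_version(version) -> str:
    """TLS 1.2 and 1.3 are current best practice; anything else is flagged."""
    return "ok" if version in {"TLSv1.2", "TLSv1.3"} else "misconfigured"

def probe_tls(host: str, port: int = 443) -> dict:
    """Connect to a host and report its negotiated TLS version with a verdict."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            version = tls.version()  # e.g. "TLSv1.3"
            return {
                "host": host,
                "tls_version": version,
                "verdict": classify_tls_version(version),
            }

# Example with a placeholder host: probe_tls("example.com")
```

A full audit would also check certificate expiry and cipher-suite strength; this sketch covers only the protocol-version check.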
Vincentas Baubonis, Head of Security Research at Cybernews, warns that the real danger lies in complacency.
"What is mostly concerning is the false sense of security many users and businesses may have," Baubonis said. "High average scores don't mean tools are entirely safe; one weak link in your workflow can become the attacker's entry point. Once inside, a threat actor can move laterally through systems, exfiltrate sensitive company data, access customer information, or even deploy ransomware, causing operational and reputational damage."
The Need for Proactive AI Governance
Experts urge companies to take a proactive approach by developing clear AI policies, monitoring tool usage, and educating employees about best practices for secure integration. As AI continues to evolve and gain traction in the workplace, organizations that fail to address its associated risks may find themselves vulnerable to increasingly sophisticated cyber threats.