• AiNews.com
  • Posts
  • Slack AI Feature Raises Privacy Concerns: Security Firm Exposes Risks

Slack AI Feature Raises Privacy Concerns: Security Firm Exposes Risks

A realistic image highlighting the security concerns of Slack's AI features. The central focus is a digital screen displaying a Slack chat interface with a red warning symbol next to a private message, indicating a breach of privacy. Surrounding the screen are elements like digital locks and binary code, representing security vulnerabilities. The background subtly incorporates AI elements, such as circuit board patterns, to emphasize the technology aspect. The overall color scheme uses dark tones with red and yellow accents to convey a serious, cautionary tone

Image Source: ChatGPT

Slack AI Feature Raises Privacy Concerns: Security Firm Exposes Risks

Slack, the popular workplace instant messaging app, has introduced a suite of optional AI features designed to enhance productivity by providing quick summaries of conversations. However, according to a report by the security firm PromptArmor, these features come with significant security risks. The report highlights that Slack’s AI has access to private direct messages (DMs) and file uploads, which could be exploited to breach user privacy by phishing other users.

Potential Security Flaws Exposed by PromptArmor

PromptArmor’s investigation revealed two major issues with Slack’s AI. First, the AI system is intentionally designed to scrape data from private user conversations and file uploads. Second, a technique known as "prompt injection" can be used to manipulate Slack’s AI into generating malicious links, potentially enabling phishing attacks within Slack channels. This discovery raises concerns about the vulnerability of private conversations in the app. According to PromptArmor's blog, the security firm alerted Slack to the issue before publicly sharing their findings in the blog post.

Slack’s Response to the Security Breach

Following the publication of PromptArmor's findings, Slack’s parent company, SalesForce, acknowledged the issue and stated that it had been addressed. A SalesForce spokesperson explained that under very specific circumstances, a malicious actor within the same Slack workspace could exploit the AI to phish for sensitive information. The company claims to have deployed a patch to resolve the issue and assured that there is no evidence of unauthorized access to customer data at this time.

The Importance of AI Transparency in Everyday Apps

This incident underscores the need for transparency in the AI features offered by the apps we use regularly. Users are encouraged to review the stated AI policies of their frequently used applications to understand potential risks better and ensure their data remains secure.

For more technical details, you can read the full PromptArmor blog post here.