- AiNews.com
- Posts
- Slack AI Feature Raises Privacy Concerns: Security Firm Exposes Risks
Slack AI Feature Raises Privacy Concerns: Security Firm Exposes Risks
Image Source: ChatGPT
Slack AI Feature Raises Privacy Concerns: Security Firm Exposes Risks
Slack, the popular workplace instant messaging app, has introduced a suite of optional AI features designed to enhance productivity by providing quick summaries of conversations. However, according to a report by the security firm PromptArmor, these features come with significant security risks. The report highlights that Slack’s AI has access to private direct messages (DMs) and file uploads, which could be exploited to breach user privacy by phishing other users.
Potential Security Flaws Exposed by PromptArmor
PromptArmor’s investigation revealed two major issues with Slack’s AI. First, the AI system is intentionally designed to scrape data from private user conversations and file uploads. Second, a technique known as "prompt injection" can be used to manipulate Slack’s AI into generating malicious links, potentially enabling phishing attacks within Slack channels. This discovery raises concerns about the vulnerability of private conversations in the app. According to PromptArmor's blog, the security firm alerted Slack to the issue before publicly sharing their findings in the blog post.
Slack’s Response to the Security Breach
Following the publication of PromptArmor's findings, Slack’s parent company, SalesForce, acknowledged the issue and stated that it had been addressed. A SalesForce spokesperson explained that under very specific circumstances, a malicious actor within the same Slack workspace could exploit the AI to phish for sensitive information. The company claims to have deployed a patch to resolve the issue and assured that there is no evidence of unauthorized access to customer data at this time.
The Importance of AI Transparency in Everyday Apps
This incident underscores the need for transparency in the AI features offered by the apps we use regularly. Users are encouraged to review the stated AI policies of their frequently used applications to understand potential risks better and ensure their data remains secure.
For more technical details, you can read the full PromptArmor blog post here.