
Zoom & Otter AI Assistants: Privacy Risks & Unintended Consequences

Image: An AI-generated graphic of a virtual meeting being transcribed by an AI assistant, with symbols of exposed confidential information and the text "AI Assistants in the Workplace: Privacy at Risk."

Image Source: ChatGPT-4o


Corporate assistants have traditionally been trusted to keep company secrets and gossip confidential. As artificial intelligence tools take on more of their tasks, however, that discretion is being lost. AI-powered tools such as Otter.ai and Zoom AI Companion are increasingly used to transcribe and record workplace meetings, but they lack the human instinct to keep sensitive information private.

A Case of AI Gone Wrong

Engineer Alex Bilzerian recently shared a story on X (formerly Twitter) about an AI mishap following a Zoom meeting with venture capital investors. After Bilzerian logged off, Otter.ai automatically emailed him a transcript that captured the investors' private post-meeting conversation, including candid discussion of their firm's strategic failures and cooked metrics. The unintentional breach led Bilzerian to cancel the deal, highlighting the risks of AI's inability to "read the room."

Widespread Adoption of AI in Workplace Tools

The use of AI tools in corporate settings is growing rapidly, with companies like Salesforce, Microsoft, Google, and Slack integrating AI features into their products. These tools are designed to automate tasks like summarizing conversations, creating daily recaps, and even helping with customer service. However, many users overlook the potential for AI to accidentally expose sensitive information.

The Risk of AI-Powered Transcriptions

In some cases, AI tools like Otter.ai have captured conversations that were never meant to be shared. Software designer Isaac Naor, for example, received a transcript of a meeting that included comments a participant had made about him while muted; she was unaware the remarks had been captured, and Naor explained that he felt too uncomfortable to bring it up. Privacy advocate Naomi Brockwell has also raised concerns about how constant recording and AI transcription can erode privacy in the workplace, leaving employees vulnerable to lawsuits, retaliation, or embarrassment.

The Need for More User Awareness and Controls

AI companies like Otter and Zoom have responded by emphasizing the importance of user control. Both companies recommend adjusting settings to avoid automatic sharing of transcripts or meeting summaries. Otter also suggests asking for consent before using transcription tools in meetings, while Zoom’s AI Companion feature includes notifications when summaries are being recorded.

AI’s Impact on Workplace Culture

AI tools can create awkward situations in meetings as well. Small business owner Rob Bezdjian, for example, declined to share proprietary details with potential investors after they insisted on using Otter to record the conversation, and the deal fell apart. These incidents show that while AI tools can enhance productivity, they can also disrupt human interactions and decision-making.

The Role of Companies in AI Implementation

Experts like Hatim Rahman, an associate professor at Northwestern University's Kellogg School of Management, argue that companies must take responsibility for ensuring that AI tools don't lead to unintended consequences. Features like auto-sharing of transcripts should include more built-in friction to avoid exposing sensitive information, particularly when users leave meetings or are unaware of the tool's capabilities.

What This Means for Workplace AI and Privacy

As AI becomes more integrated into the workplace, companies and employees must be vigilant about privacy risks. While AI assistants can boost productivity, they lack the judgment and discretion needed to handle sensitive information. Moving forward, both users and organizations will need to strike a balance between leveraging AI’s capabilities and protecting workplace privacy. Companies should implement more safeguards and ensure that employees are well-informed about the potential consequences of using AI tools, or risk unintended privacy breaches that could have serious implications.