Here is a guest post from our US sister organisation, NNEDV, reprinted with permission under licence from the Safety Net Project.

Safety Net focuses on the intersection of technology and abuse, and we work to address how that abuse impacts the safety, privacy, accessibility, and civil rights of survivors. Our team provides expert training and technical assistance, creates and disseminates resources, and influences conversations on technology abuse and safety globally. Safety Net is a project of the National Network to End Domestic Violence.

A Recommendation for Advocates Concerned About AI Meeting Assistants

Several organisations have contacted Safety Net about recent changes in Zoom’s privacy policies, and we have followed much of the reporting about these changes and updates from Zoom. Achieving clarity on Zoom’s privacy policies is crucial since advocates and service providers often discuss confidential information that they are obliged to protect. Disclosure of that information could put survivors at risk, and the nature of AI models and training processes implies that confidential information could be exposed in unpredictable ways.

Of most concern to people is the new language of Zoom’s policy sections 10.2 and 10.4, which implies that Zoom has broad rights to collect user content (video, audio, and text) for use in training AI models. See here and here for more detailed discussions of these concerns. Zoom’s response, given here, has been that “we do not use audio, video, or chat content for training our models without customer consent.”

“Consent” from Zoom’s perspective seems to be obtained when the administrator of a Zoom account has enabled Zoom IQ Chat Compose or Meeting Summary. Once enabled, participants joining a meeting will see a consent notice screen that may look something like this:


From a victim advocate confidentiality perspective, the most obvious problem with AI training is the human review component that is part of the training process. As Zoom states here (emphasis mine):

Zoom’s use of your Zoom IQ feature data, when shared by your account owner or admin, will include automated processing methods and may include manual review. Automated processing applies methods that utilise artificial intelligence technologies to provide predictions and improve the accuracy of automated responses. Zoom may retain your data for up to 18 months for automated processing. Manual review occurs when Zoom employees, or contractors working on Zoom’s behalf, manually review these inputs and automated responses to provide human feedback to improve accuracy and quality. Zoom may retain data for manual review for a longer period.

The manual review that is a required part of AI training means that a Zoom employee, or a contractor working on Zoom’s behalf, could have access to any information shared in a Zoom meeting where Zoom IQ features are enabled, and any confidential information shared in that meeting could be disclosed. Although Zoom is very clear that meeting participants retain “ownership” of shared content, that ownership is irrelevant to an advocate’s confidentiality obligations and to the safety of survivors.

As such, we strongly recommend that advocates refuse to participate in any meetings where “Meeting Summary” or other Zoom IQ features are enabled, if it is possible that any confidential or otherwise sensitive information may be discussed. If organisations want to use the service for meetings that do not involve survivor information or sensitive content in any way, that is the decision of the organisation based on the comfort level of the staff involved.

Regarding Microsoft Teams, Microsoft has had “Intelligent Recap” and other AI-based tools available on its Teams Premium tiers since February. Those tools include Microsoft 365 Copilot (Copilot for Teams) and Live Captions, and Microsoft also makes an Azure Cognitive Services (its AI platform) software development kit available to developers who build third-party products for the Teams platform. Like other AI products, these tools rely on human review of their outputs to make them more effective, and it is not clear how Microsoft will use user-generated content in that process. While Microsoft’s Privacy Policy was not updated when these products were released (though it has been updated recently), substantial data-security and privacy concerns at Microsoft have been reported this year.

We also recommend avoiding Teams meetings where Intelligent Recap or Live Captions are enabled for conversations that may involve confidential or sensitive information. Teams Premium is not as widely used among non-profits as Zoom, however, so these features may not come up for many service providers.

As artificial intelligence continues to advance and become integrated into numerous products and services we rely on, it becomes increasingly important for us as service providers and advocates to be well-informed about how it may impact survivors. It’s imperative that we remain vigilant to ensure that technologies do not inadvertently harm those who are most vulnerable.

In light of this, we suggest that you stay informed and talk with the administrator of your organisation’s Zoom account about disabling these features, or keeping them disabled if they have not already been enabled. Express the concerns highlighted above and emphasise how these features might compromise the safety and well-being of survivors who depend on these services.

Chad Sniffen