AI digital assistants are becoming increasingly popular in video conferencing. Their integration into platforms like Teams and Zoom is convenient, providing excellent summaries of the topics discussed. However, they raise cybersecurity concerns about the data being collected and exposed to the third parties providing these services. This article explores the risks of inviting digital AI assistants onto your video calls, focusing on sensitive data exposure and data privacy.
AI digital assistants in video conferencing have access to any critical data discussed or shared in those sessions. They process conversations and chat histories and create video recordings, storing this data in third-party cloud environments. That exposes any sensitive information discussed or shared on these calls. Platforms like Fathom promise data protection and encrypted communications between participants, and their privacy policies build in robust, though not complete, protections. Even so, many risks remain.
Privacy policies like Fathom’s are crucial: they state the company’s intention not to misuse your data. However, loopholes exist. Fathom’s own Data Privacy policy states:
[Fathom] Marketing. We do not rent, sell, or share information about you with nonaffiliated companies for their direct marketing purposes, unless we have your permission. We do not sell the content of your meetings or information about your meeting attendees to anyone.
Does this mean companies affiliated with Fathom can obtain transcripts of “de-identified” data? Critical, sensitive, or even regulated data might still be accessible to third parties without your direct consent or knowledge. A thorough understanding of a digital AI assistant’s data privacy policy is vital to your data’s security.
Video conferencing involves sharing sensitive information, and AI digital assistants may unintentionally capture it. Think back to your most recent video meetings: did you discuss anything mission critical with your team? Were there confidential discussions or private data? Would you be comfortable sharing that with an unknown third-party digital assistant? To CyberHoot’s vCISOs, the risk of critical and sensitive data leakage is significant.
AI assistants process data to enhance the video conferencing experience. However, this processing is a double-edged sword. It benefits the recipients of the meeting summary, but at what cost? Is the data shared with the AI assistant vendor’s affiliated partners, and does that expose your data?
Using AI in regulated industries can complicate compliance. Industries like healthcare and finance have strict regulations, and an AI assistant’s data handling may not align with them, putting you in violation of disclosure laws. We strongly advise blocking the adoption of AI technologies such as digital AI assistant video plugins until more is known about how they process and share the data entrusted to them for summarization.
AI assistants record your voice and automatically transcribe it into text, summaries, and more. However, the recordings themselves are stored in the digital assistant’s cloud. With the volume of recordings available, digital assistants are likely to be targeted by advanced threat actors seeking to clone our voices and use them in extortion scenarios, as outlined in the Grandchildren Ransomware attack we wrote about earlier this year.
If you insist on using digital AI assistants, follow these best practices:
In this article, we examined the risks of AI assistants in video conferencing. Digital assistants attending video calls seem like a breakthrough use of AI, and the power and accuracy of their summaries have fueled a meteoric rise in popularity. However, that convenience comes with serious data privacy risks: critical and sensitive data is being exposed to these newly minted AI assistants. Understanding and mitigating these risks is crucial to your data protection and cybersecurity. As we embrace new AI technologies, a cautious approach and staying informed will be key to protecting your sensitive, critical, or regulated data from exposure.