Zoom updated its terms and conditions — again — on Friday following persistent criticism related to language that allowed the company to use customer data to train its AI systems.
The latest iteration of the terms allows Zoom to access customer content for legal, security and safety purposes, but the company will not use any audio, video, chat or screen-sharing data to train its own or third-party AI models.
The industry’s sudden shift to adopt generative AI has created tension between vendors that want data to bolster their offerings and enterprises deeply concerned about security.
For some analysts, Zoom's swift response was a welcome change.
“It’s always welcome when a SaaS generative AI vendor offering provides a level of transparency with regards to the data,” Bill Wong, principal research director at Info-Tech Research Group, said in an email.
Others weren’t as forgiving. “With Zoom, the damage is done,” said Shiva Nathan, founder and CEO of app development startup Onymos. “We spent the time to read the terms of service, and we said we’re not going to be with Zoom.”
Despite the differences in perception, there was a common sentiment: the struggle to balance innovation and privacy is bigger than just Zoom. Because generative AI is still nascent, vendors are learning as they go and responding to the needs of an increasingly data-conscious pool of enterprise customers.
One vendor's approach to AI privacy guardrails
At low-code software and platform provider Iterate.ai, Head of Applied Machine Learning and Platform Shomron Jacob knows the importance of balancing privacy concerns against the desire to enhance offerings with the latest capabilities.
The vendor supports enterprise customers, such as Circle K, Ulta Beauty, Jockey and Pampered Chef, and initially used generic data to power its AI capabilities. When large language models began to prove valuable, Iterate.ai pivoted.
“The kind of data we need now to do innovation is very different, so we had to put things in place to deal with privacy concerns,” Jacob said.
Customers were comfortable granting Iterate.ai access to their data provided the company assured them that only they would have access to it, Jacob said.
Part of the solution relied on where data was housed. The company moved from relying mostly on the cloud to building models that could run locally, on premises, so customers could own the whole process without data privacy and confidentiality concerns.
“That gave them what they wanted because all the data was sitting in their environment, in their offices,” Jacob said.
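The article does not name the models or tooling involved, but the pattern Jacob describes, with inference running entirely inside the customer's environment, might look roughly like the Python sketch below. It assumes an open-weight model served through the Hugging Face transformers library, purely as an illustration rather than a description of Iterate.ai's actual stack.

```python
# Hypothetical sketch: serving a language model entirely on premises so prompts
# and documents never leave the customer's environment. The model path and the
# choice of library (Hugging Face transformers) are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "/opt/models/local-llm"  # open-weight model stored on the customer's own hardware

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")

def answer(prompt: str, max_new_tokens: int = 200) -> str:
    """Generate a completion locally, with no external API call."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Drop the prompt tokens and return only the newly generated text.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)

print(answer("Summarize our internal returns-policy changes:"))
```

Because both the weights and the prompts stay on customer-controlled hardware, no customer content is exposed to a third-party service in the first place.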
The company has additional security and privacy guardrails in place, such as data encryption, access control, data minimization, retention policies and clear guidelines for opting in or out, according to Jacob. Some of the guardrails were put in place five years ago when the company started; others were added or enhanced as recently as March.
While vendors should have security and privacy safeguards, CIOs, CISOs and tech leaders craft the first level of protection. Data classification is key when sharing data with vendors, Jacob said.
“You want to make sure you classify it correctly so that people like us who are building solutions can put the right amount of processes around access control,” Jacob said.
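The article stops at the principle, but classification-driven access control of the kind Jacob describes can be pictured with a short, purely hypothetical sketch. The sensitivity labels, roles and clearance rules below are illustrative assumptions, not Iterate.ai's actual policy.

```python
# Illustrative only: sensitivity labels drive which data a given role (including
# a vendor's ML pipeline) is allowed to see, supporting data minimization.
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Highest sensitivity each role is cleared to handle (hypothetical roles).
ROLE_CLEARANCE = {
    "vendor_ml_pipeline": Sensitivity.INTERNAL,
    "customer_analyst": Sensitivity.CONFIDENTIAL,
    "customer_admin": Sensitivity.RESTRICTED,
}

def can_access(role: str, label: Sensitivity) -> bool:
    """Allow access only when the record's label is within the role's clearance."""
    clearance = ROLE_CLEARANCE.get(role)
    return clearance is not None and label.value <= clearance.value

record = {"text": "Customer purchase history export", "label": Sensitivity.CONFIDENTIAL}

if can_access("vendor_ml_pipeline", record["label"]):
    pass  # safe to include in the dataset shared with the vendor
else:
    print("Withheld from vendor pipeline:", record["text"])  # data minimization in action
```

The point of classifying data before it is shared is exactly this: the vendor can then wire the right access controls around each class rather than treating everything as equally shareable.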
Data ownership and transparency
Businesses that want access to vendors’ generative AI tools should also encourage leaders to ask tough questions when processes are unclear.
When an enterprise enters into an agreement with a SaaS vendor, Wong said, CIOs should have a comprehensive understanding of who owns the data, how it will be used, who it will be shared with, how long it will be stored and how concerns will be addressed.
While the expanding use of generative AI technology has led vendors to try to organically grow systems with customer data, blurring the line between innovation and privacy, some industry experts cite the SaaS model as a secondary culprit behind the latest privacy pain point.
“Zoom is the model for what is wrong with SaaS today,” Nathan said. “They rent you software with their right hand and take your data with their left.”
The appeal of the current SaaS model is speed and cost, but as enterprises grow more aware of, and cautious about, the data they are giving away, that allure could wither if vendors fail to adjust.
For organizations currently using generative AI, data ownership, data privacy and intellectual property ownership are among the top concerns, according to an August Alteryx survey of 300 data leaders. Nearly half of the organizations that have not yet implemented generative AI cite data privacy concerns as the reason, the report found.
“CIOs have to fundamentally look at the software as a service that the business consumes,” Nathan said. “In the same way that you look at your security posture, you need to look at your data posture.”