
Meta, TikTok, YouTube and Twitter dodge questions on privacy and national security

Image Credits: Alex Wong/Getty Images

Executives from Meta, TikTok, YouTube and Twitter testified before the Senate Homeland Security Committee on Wednesday at a hearing convened to explore social media’s impact on national security.

Present at the hearing were Meta CPO Chris Cox, YouTube CPO Neal Mohan, TikTok COO Vanessa Pappas and Jay Sullivan, Twitter’s GM of Consumer and Revenue.

During the hearing, the companies refused to give clear answers to a number of questions, including the number of employees they have working full-time on trust and safety, and whether any of them focus on moderating non-English content.

Twitter’s Jay Sullivan said the company has 2,200 people working on trust and safety “across Twitter,” without specifying whether those employees also do other kinds of work. He also pushed back on claims by whistleblower Peiter “Mudge” Zatko that Twitter lied to the FTC, saying that “Twitter disputes the allegations.”

Meta’s Chris Cox, in turn, said the company has more than 40,000 people working on trust and safety issues, without clarifying where they are located or what exactly their jobs entail.

In her first appearance before Congress, TikTok’s Vanessa Pappas declined to confirm any connection to China, claiming that ByteDance, TikTok’s Chinese parent company, is “distributed” and doesn’t have a headquarters at all.

“TikTok does not operate in China,” Pappas said more than once.

Pappas also categorically denied BuzzFeed News reports that employees regularly accessed private user data, even though that reporting was drawn from leaked audio recordings, and similarly denied that the app uses biometric data.

Last year, the app quietly updated its privacy policy to allow TikTok to collect “faceprints and voiceprints,” raising privacy concerns about what that data actually entails and how it is being used.

“We do not use any sort of a facial, voice, audio or body recognition that would identify an individual,” Pappas told Sen. Kyrsten Sinema, adding that facial recognition is used only for augmented reality effects in creators’ videos.

Questions about the platforms’ handling of serious matters such as domestic extremism, vaccine misinformation and other content moderation issues were likewise met with abstract answers that focused mainly on the companies’ existing policies.
