Identity Review | Global Tech Think Tank
Keep up with the digital identity landscape.
Congress’ House Energy & Commerce Committee recently held its first hearing on the American Data Privacy and Protection Act, which would establish a national standard for what data companies can gather from consumers and how they can use it. Supporters of the bill argue that federal privacy legislation has been overdue for years. With so many ways individuals use technology, it’s important to understand how privacy can be violated, even inadvertently. To learn more, we’ve compiled a list of companies with some of the worst records on consumer privacy:
1. Tim Hortons
The Office of the Privacy Commissioner of Canada and three other privacy commissioners found that fast-food chain Tim Hortons used the GPS in customers’ mobile phones to track their locations at all times; however, the company has not been penalized under privacy laws. The office’s report characterized this as an invasive attempt to advertise to customers, contradicting the company’s statements that it gathered location data only while the app was open.
2. Lyra Health
Lyra Health, a mental health startup that provides services for large companies such as Google, Facebook, and Starbucks, has come under fire for its use of patient data. The company sent out questionnaires about symptoms such as anxiety or irritability, then used patients’ answers to demonstrate how effective its services were. Although Lyra Health has said that outcomes data is important for showing prospective employer clients the effectiveness of its services, some patients were not aware their data would be shared. Lyra’s therapists were also unaware that data was shared with client companies to assess how well Lyra Health served their employees.
3. Crisis Text Line
Politico reported that Crisis Text Line, a global mental health support texting service, shared collected data with its sister company, Loris.ai, “to create and market customer service software.” Though the data was claimed to be “anonymized,” ethics and privacy experts point out that past anonymized datasets have been traced back to individuals. As a result, Crisis Text Line ended its relationship with Loris and requested that Loris delete all shared data.
4. BetterHelp
In an analysis of the privacy protections of mental health apps, Mozilla researchers found that BetterHelp had some of the worst privacy practices. The company states that when consumers use BetterHelp’s services, it “can share some data with a number of third parties including advertisers.” Furthermore, a 2020 report revealed that BetterHelp shared the metadata of every message with Facebook. Even though BetterHelp does not share the contents of messages, this means Facebook could know “what time of day a user was going to therapy, their approximate location, and how long they were chatting on the app.”
6. Remote Learning Apps
Human Rights Watch researchers found that almost 90 percent of the educational learning tools they analyzed were designed to be able to send data to ad-technology companies. For instance, according to the researchers, the learning app Schoology contains code “that would have allowed it to extract a unique identifier from the student’s phone, known as an advertising ID.” Marketers could use these IDs to track people across apps beyond Schoology to “build a profile on what products they might want to purchase.” PowerSchool, the app’s developer, claims that the app does not collect advertising IDs. However, Schoology’s policies say it uses third-party tools to target ads “to users based on their browsing history on other websites or on other devices.”
7. Zoom
Zoom Video Communications paid $85 million to settle a lawsuit alleging that it violated users’ privacy by sharing data with Facebook, Google, and LinkedIn, and by letting hackers “Zoom bomb” meetings. As part of the settlement, Zoom also agreed to “bolster its security practices.” However, the company has recently come under fire for plans to add an “emotional AI” to its services, which would capture data on users’ expressions to infer their emotions. Fight for the Future and Access Now have urged Zoom to drop the feature, citing data security risks and users’ privacy rights.
Are there other companies with poor consumer privacy practices that we missed? Email us at firstname.lastname@example.org. Find us on Twitter.
ABOUT THE WRITER
Katherine Zhang is a contributor to Identity Review from the University of California, Berkeley.
Do you have information to share with Identity Review? Email us at email@example.com. Find us on Twitter.