Worldcoin just officially launched. Here’s why it’s already being investigated.

Tech policy

The project is backed by some of tech's biggest stars, but four countries are probing its privacy practices.

By Tate Ryan-Mosley

This article is from The Technocrat, MIT Technology Review's weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

It’s possible you’ve heard the name Worldcoin recently. It’s been getting a ton of attention—some good, some … not so good.

It’s a project that claims to use cryptocurrency to distribute money across the world, though its bigger ambition is to create a global identity system called “World ID” that relies on individuals’ unique biometric data to prove that they are humans. It officially launched on July 24 in more than 20 countries, and Sam Altman, the CEO of OpenAI and one of the biggest tech celebrities right now, is one of the cofounders of the project.

The company makes big, idealistic promises: that it can deliver a form of universal basic income through technology to make the world a better and more equitable place, while offering a way to verify your humanity in a digital future filled with nonhuman intelligence, which it calls “proof of personhood.” If you’re thinking this sounds like a potential privacy nightmare, you’re not alone.

Luckily, we have someone I’d consider the Worldcoin expert on staff here at MIT Technology Review. Last year investigative reporter Eileen Guo, with freelancer Adi Renaldi, dug into the company and found that Worldcoin’s operations were far from living up to its lofty goals and that it was collecting sensitive biometric data from many vulnerable people in exchange for cash.

As they wrote:

“Our investigation revealed wide gaps between Worldcoin’s public messaging, which focused on protecting privacy, and what users experienced. We found that the company’s representatives used deceptive marketing practices, collected more personal data than it acknowledged, and failed to obtain meaningful informed consent.”

What’s more, the company was using test users’ sensitive, but anonymized, data to train artificial intelligence models, but Eileen and Adi found that individuals did not know their data was being used that way.

I highly recommend you read their investigation—which builds on more than 35 interviews with Worldcoin executives, contractors, and test users recruited primarily in developing countries—to better understand how the company was handling sensitive personal data and how its idealistic rhetoric compared with the realities on the ground.

Given their reporting, it’s no surprise that regulators in at least four countries have already launched investigations into the project, citing concerns with its privacy practices. The company claims it has already scanned nearly 2.2 million “unique humans” into its database, which was primarily built during an extended test period over the last two years.

So I asked Eileen: What really has changed since her investigation? How do we make sense of the latest news?

Since her story, Worldcoin CEO Alex Blania has told other outlets that the company has changed many of its data collection and privacy practices, though there are reasons to be skeptical. The company hasn’t specified exactly how it’s done this, beyond saying it has stopped some of the most exploitative and deceptive recruitment tactics.

In emails Eileen recently exchanged with Worldcoin, a spokesperson was vague about how the company was handling personal data, saying that “the Worldcoin Foundation complies with all laws and regulations governing the processing of personal data in the markets where Worldcoin is available, including the General Data Protection Regulation (‘GDPR’) … The project will continue to cooperate with governing bodies on requests for more information about its privacy and data protection practices.”

The spokesperson added, “It is important to stress that The Worldcoin Foundation and its contributor Tools for Humanity never have and never will sell users’ personal data.”

But, Eileen notes, we (again) have nothing but the company’s word that this is true. That’s one reason we should keep a close eye on what government investigators start to uncover about Worldcoin.

The legality of Worldcoin's biometric data collection is at the heart of an investigation the French government launched into Worldcoin and a probe by a German data protection agency, which has been investigating Worldcoin since November of last year, according to Reuters. On July 25, the Information Commissioner’s Office in the UK put out a statement that it will be “making enquiries” into the company. Then on August 2, Kenya’s Office of Data Protection suspended the project in the country, saying it will investigate whether Worldcoin is in compliance with the country’s Data Protection Act.

Importantly, a core objective of the Worldcoin project is to perfect its “proof of personhood” methodology, which requires a lot of data to train AI models. If its proof-of-personhood system becomes widely adopted, this could be quite lucrative for its investors, particularly during an AI gold rush like the one we’re seeing now.

The company announced this week that it will allow other companies and governments to deploy its identity system.

“Worldcoin’s proposed identity solution is problematic whether or not other companies and governments use it. Of course, it would be worse if it were used more broadly without so many key questions being answered,” says Eileen. “But I think at this stage, it’s clever marketing to try to convince everyone to get scanned and sign up so that they can achieve the ‘fastest’ and ‘biggest onboarding into crypto and Web3’ to date, as Blania told me last year.”

Eileen points out that Worldcoin has also not yet clarified whether it still uses the biometric data it collects to train its artificial intelligence models, or whether it has deleted the biometric data it already collected from test users and was using in training, as it told MIT Technology Review it would do before launch.

“I haven’t seen anything that suggests that they’ve actually stopped training their algorithms—or that they ever would,” Eileen says. “I mean, that’s the point of AI, right? That it’s supposed to get smarter.”

What else I'm reading

Meta’s oversight board, which issues independently drafted and binding policies, is reviewing how the company is handling misinformation about abortion. Currently, the company’s moderation decisions are a bit of a mess, according to this nice explainer-y piece in Slate. We should expect the board to issue new abortion-information-specific policies in the coming weeks.

At the end of July, Twitter rebranded to X, in a strange, unsurprising-yet-surprising move by its new czar Elon. I loved Casey Newton’s obituary-style take, in which he argues that Musk’s $44 billion investment was really just a wasteful act of “cultural vandalism.”

Nobel-winning economist Joseph Stiglitz is worried that AI will worsen inequality, and he spoke with Scientific American about how we might get off the path we currently seem to be on. Well worth a read!

What I learned this week

Bots on social media are likely being supercharged by ChatGPT. Researchers from Indiana University have released a preprint paper that shows a Twitter botnet of over 1,000 accounts, which the researchers call fox8, “that appears to employ ChatGPT to generate human-like content.” The botnet promoted fake-news websites and stolen images, and it’s an alarming preview of a social media environment fueled by AI and machine-generated misinformation. Tech Policy Press wrote a great quick analysis of the findings, which I’d recommend checking out.

Additional reporting from Eileen Guo.
