
Think AI tools aren't harvesting your data? Guess again

When you use a “free” product like ChatGPT, your personal data is the product — and governments across the globe are using that to their advantage.

The meteoric ascent of generative artificial intelligence has created a bona fide technology sensation thanks to user-focused products such as OpenAI’s ChatGPT, DALL-E and Lensa. But the boom in user-friendly AI has arrived in conjunction with users seemingly ignoring, or being left in the dark about, the privacy risks these products pose.

In the midst of all this hype, however, international governments and major tech figures are starting to sound the alarm. Citing privacy and security concerns, Italy just placed a temporary ban on ChatGPT, potentially inspiring a similar block in Germany. In the private sector, hundreds of AI researchers and tech leaders, including Elon Musk and Steve Wozniak, signed an open letter urging a six-month moratorium on AI development beyond the scope of GPT-4.

The relatively swift action to try to rein in irresponsible AI development is commendable, but the wider landscape of threats that AI poses to data privacy and security goes beyond one model or developer. Although no one wants to rain on the parade of AI’s paradigm-shifting capabilities, tackling its shortcomings head-on now is necessary to avoid the consequences becoming catastrophic.

AI’s data privacy storm

While it would be easy to say that OpenAI and other Big Tech-fueled AI projects are solely responsible for AI’s data privacy problem, the subject had been broached long before it entered the mainstream. Scandals surrounding data privacy in AI happened well before this crackdown on ChatGPT; they just mostly occurred out of the public eye.

Just last year, Clearview AI, an AI-based facial recognition firm reportedly used by thousands of governments and law enforcement agencies with limited public knowledge, was banned from selling facial recognition technology to private businesses in the United States. Clearview was also fined $9.4 million in the United Kingdom over its illegal facial recognition database. Who’s to say that consumer-focused visual AI projects such as Midjourney can’t be used for similar purposes?

The problem is that they already have been. A slew of recent deepfake scandals involving pornography and fake news created with consumer-level AI products has only heightened the urgency to protect users from nefarious AI usage. Deepfakes take digital mimicry from a hypothetical concept to a very real threat for everyday people and influential public figures alike.

Related: Elizabeth Warren wants the police at your door in 2024

Generative AI models fundamentally rely on new and existing data to build and strengthen their capabilities and usability. It’s part of the reason ChatGPT is so impressive. That said, a model that relies on new data inputs needs somewhere to get that data from, and part of it will inevitably include the personal data of the people using the model. And that trove of data can easily be misused if centralized entities, governments or hackers get ahold of it.

So, with a limited scope of comprehensive regulation and conflicting opinions around AI development, what can companies and users working with these products do now?

What companies and users can do

The fact that governments and other developers are raising flags around AI now actually indicates progress from the glacial pace of regulation for Web2 applications and crypto. But raising flags isn’t the same thing as oversight, so maintaining a sense of urgency without being alarmist is essential to create effective regulations before it’s too late.

Italy’s ChatGPT ban is not the first strike governments have taken against AI. Both the EU and Brazil are passing acts to sanction certain types of AI usage and development. Likewise, generative AI’s potential to enable data breaches has sparked early legislative action from the Canadian government.

The issue of AI data breaches is severe enough that OpenAI itself has had to step in. If you opened ChatGPT a couple of weeks ago, you might have noticed that the chat history feature was turned off. OpenAI temporarily disabled the feature because of a serious privacy flaw in which strangers’ conversation titles were exposed and, for some users, payment information was revealed.

Related: Don’t be surprised if AI tries to sabotage your crypto

While OpenAI effectively extinguished this fire, it can be hard to trust programs spearheaded by Web2 giants slashing their AI ethics teams to preemptively do the right thing.

At an industrywide level, an AI development strategy that focuses more on federated machine learning would also boost data privacy. Federated learning is a collaborative AI technique in which models are trained without the raw data ever being centralized: multiple independent parties each train the model on their own data sets locally and share only the resulting model updates.
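One way to make the idea concrete is a toy federated averaging (FedAvg-style) loop. Everything here (the linear model, the synthetic client data and the round counts) is an illustrative assumption, not a production recipe; the point is simply that only model weights, never raw records, leave each client:

```python
# Minimal sketch of federated averaging: each client trains a simple
# linear model on its own private data, and the server averages the
# returned weights. Raw data never leaves the client.
import numpy as np


def local_update(weights, X, y, lr=0.1, epochs=20):
    """One client's training pass (linear regression via gradient descent)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w


def federated_average(client_weights, client_sizes):
    """Server step: average client models, weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))


rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # ground truth for the synthetic task
global_w = np.zeros(2)

# Three clients, each holding a private dataset of a different size.
clients = []
for n in (50, 80, 30):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=n)
    clients.append((X, y))

for _ in range(10):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print(np.round(global_w, 2))  # converges toward the true weights
```

Real deployments add secure aggregation and differential privacy on top of this, since even shared model updates can leak information about the underlying data.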

On the user front, becoming an AI Luddite and forgoing using any of these programs altogether is unnecessary, and will likely be impossible quite soon. But there are ways to be smarter about what generative AI you grant access to in daily life. For companies and small businesses incorporating AI products into their operations, being vigilant about what data you feed the algorithm is even more vital.

The evergreen saying that when you use a free product, your personal data is the product still applies to AI. Keeping that in mind may cause you to reconsider which AI projects you spend your time on and what you actually use them for. If you’ve participated in every single social media trend that involves feeding photos of yourself to a shady AI-powered website, consider skipping the next one.

ChatGPT reached 100 million users just two months after its launch, a staggering figure that clearly indicates our digital future will involve AI. But despite these numbers, AI isn’t ubiquitous quite yet. Regulators and companies should use that to their advantage to proactively create frameworks for responsible and secure AI development instead of chasing after projects once they get too big to control. As it stands now, generative AI development is not balanced between protection and progress, but there is still time to find the right path to ensure user information and privacy remain at the forefront.

Ryan Paterson is the president of Unplugged. Prior to taking the reins at Unplugged, he served as the founder, president and CEO of IST Research from 2008 to 2020. He exited IST Research with the sale of the company in September 2020. He also served two tours at the Defense Advanced Research Projects Agency and 12 years in the United States Marine Corps.
Erik Prince is an entrepreneur, philanthropist and Navy SEAL veteran with business interests in Europe, Africa, the Middle East and North America. He served as the founder and chairman of Frontier Resource Group and as the founder of Blackwater USA — a provider of global security, training and logistics solutions to the U.S. government and other entities — before selling the company in 2010.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.


