Generative AI: Making It Easier for Scammers and Thwarting Them at the Same Time
May 18, 2023

Before generative AI was publicly available, running a disinformation campaign effectively required significant resources. This new technology has made it easier to create fake news stories, social media posts, and other types of disinformation quickly and at a much lower cost. Generative AI can create content that is almost indistinguishable from human-created content, making it difficult for people to detect when they are being exposed to false information. A fake story that looks and reads like a legitimate news article can spread quickly and easily through social media, where it can reach millions of people within just a few hours.
Generative AI has also opened new possibilities for fraud and social engineering scams, allowing scammers to program chatbots that convincingly mimic human interaction. These chatbots can analyze the messages they receive and generate human-like responses free of the tell-tale language and grammar errors associated with older chat scams. AI can also be an effective tool against these threats. It can power identity verification and authentication using multimodal biometrics, confirming that users are genuine humans and are who they claim to be. While no technology can truly prevent someone from falling for a generative AI scam and giving away personal information, it can help limit how that information is used.
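To make the multimodal biometrics idea concrete, the sketch below shows one way a verification check might combine scores from several signals, such as a face match, a voiceprint match, and a liveness check, into a single accept-or-reject decision. The class and function names, weights, and thresholds are illustrative assumptions for this example only, not the API of any particular identity verification product.

```python
# Minimal sketch of multimodal biometric score fusion for identity verification.
# All names, weights, and thresholds below are illustrative assumptions, not a
# reference to any specific vendor's system.

from dataclasses import dataclass


@dataclass
class BiometricScores:
    face_match: float   # similarity of a selfie to the enrolled face template (0-1)
    voice_match: float  # similarity of a voice sample to the enrolled voiceprint (0-1)
    liveness: float     # confidence the sample came from a live person, not a replay or deepfake (0-1)


def verify_identity(scores: BiometricScores,
                    weights=(0.45, 0.35, 0.20),
                    threshold: float = 0.80) -> bool:
    """Combine modality scores into a single accept/reject decision.

    A real deployment would calibrate weights and thresholds against measured
    false-accept and false-reject rates; the numbers here are placeholders.
    """
    fused = (weights[0] * scores.face_match
             + weights[1] * scores.voice_match
             + weights[2] * scores.liveness)
    # Require both an adequate fused score and a minimum liveness score,
    # so a strong face match alone cannot pass a spoofed session.
    return fused >= threshold and scores.liveness >= 0.5


# Example: a genuine user with strong face and voice matches passes,
# while a session with a weak liveness score is rejected.
print(verify_identity(BiometricScores(face_match=0.93, voice_match=0.88, liveness=0.9)))  # True
print(verify_identity(BiometricScores(face_match=0.95, voice_match=0.90, liveness=0.3)))  # False
```

Requiring a minimum liveness score alongside the fused score reflects the point above: even a convincing deepfake of a face or voice should fail if the system cannot confirm a live human is present.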