Generative AI is dramatically exacerbating the problem of bad apps and fake reviews

Hello and welcome to Eye on AI.

Today, I’m giving you an exclusive look at new research that shows how generative AI is rapidly increasing the number of fake online reviews, tricking users into downloading malware-infected apps and duping advertisers into placing ads on those malicious apps.

The fraud analysis team at DoubleVerify – a company that provides advertisers, marketplaces, and publishers with tools and research to help them detect fraud and protect their brands – is today releasing a study detailing how generative AI tools are being used at scale to create fraudulent app reviews faster and more easily than ever before. The researchers told Eye on AI that they found tens of thousands of AI-generated fake reviews propping up thousands of malicious apps across major app stores, including Apple’s App Store, the Google Play Store, and connected TV app stores.

“[AI] basically allows the scale of fake reviews to escalate much faster,” Gilit Saporta, senior director of fraud analysis at DoubleVerify, told Eye on AI.

Fraudulent reviews have long been a problem online, especially on e-commerce platforms like Amazon. Earlier this month, the FTC adopted rules banning fake reviews and related deceptive practices, such as buying reviews, misrepresenting authentic reviews on a company’s own website, and buying fake followers or engagement on social media.

The final rules also explicitly ban AI-generated reviews. According to DoubleVerify, these have increasingly flooded Amazon, TripAdvisor, and virtually every other site that hosts reviews since generative AI tools became readily available. In its new findings, the company’s fraud researchers describe how generative AI is supercharging the already widespread problem in app stores specifically. In 2024, the company identified more than three times as many apps with AI-generated fake reviews as it did during the same period in 2023. Some of the reviews contain obvious phrases that indicate they were generated by AI (“I’m a language model”), but others read as authentic and would be difficult for users to spot, Saporta said. Only by analyzing reviews at scale was her team able to detect subtler indicators of AI generation, such as the same phrases repeated over and over again.
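DoubleVerify hasn’t published its detection methodology, but the repeated-phrase signal Saporta describes is easy to illustrate. Here’s a minimal Python sketch, assuming an app’s reviews are available as plain strings; the function names, the five-word phrase length, and the ten-review cutoff are illustrative assumptions, not DoubleVerify’s actual parameters.

```python
from collections import Counter

def ngrams(text, n=5):
    """Yield lowercase word n-grams from a single review."""
    words = text.lower().split()
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def repeated_phrases(reviews, n=5, min_reviews=10):
    """Return phrases that recur verbatim across many reviews of one app.

    Counting each phrase at most once per review keeps a single long
    review from dominating the tally. The n-gram length and min_reviews
    cutoff are illustrative guesses, not DoubleVerify's real thresholds.
    """
    counts = Counter()
    for review in reviews:
        counts.update(set(ngrams(review, n)))
    return [(phrase, c) for phrase, c in counts.most_common() if c >= min_reviews]

# Toy example: the same sentence pasted into a dozen "different" reviews.
reviews = ["Amazing app, works perfectly and saved me so much time"] * 12
for phrase, count in repeated_phrases(reviews):
    print(f"{count:>3}x  {phrase}")
```

A signal like this only surfaces when you look across an app’s whole review corpus, which is why individual users reading one review at a time are unlikely to notice it.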

Malicious apps legitimized by AI-generated reviews typically download malware onto users’ devices to collect data, or request intrusive permissions, such as the ability to run in the background unnoticed.

Many of the apps “target the most vulnerable parts of our society,” Saporta said, such as seniors (magnifying glass apps, flashlight apps) and children (apps promising free coins, gems, and the like in popular kids’ mobile games). Specific apps DoubleVerify found to have significant numbers of AI-generated reviews include a Fire TV app called Wotcho TV and two Google Play Store apps, My AI Chatbot and Brain E-Books.

DoubleVerify also found that malicious apps hosting audio content lean especially heavily on AI-generated reviews. Advertisers pay a premium for audio ads, so the scheme depends on making an app appear legitimate to both users and advertisers. Once downloaded, these apps install malware that simulates audio playback or plays audio in the background of the device without the user’s knowledge (draining battery and inflating data usage). This lets the app developer fraudulently charge advertisers for fake listens.

In some cases, the malicious app developers themselves use tools like ChatGPT to quickly churn out five-star reviews. In other cases, they outsource the task to gig economy workers. One telltale sign is an app with roughly 90% five-star reviews, 10% one-star reviews, and almost nothing in between.
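For illustration, here’s a minimal Python sketch of that rating-distribution heuristic. The 90/10 split comes straight from the finding above; the function name and the tolerance value are illustrative assumptions.

```python
def suspicious_rating_split(star_counts, tolerance=0.05):
    """Flag the bimodal pattern described above: roughly 90% five-star
    reviews, 10% one-star reviews, and almost nothing in between.
    The tolerance is an illustrative assumption for how close an app's
    distribution must be to the 90/10 split to raise a flag.
    """
    total = sum(star_counts.values())
    if total == 0:
        return False
    five_star = star_counts.get(5, 0) / total
    one_star = star_counts.get(1, 0) / total
    middle = 1.0 - five_star - one_star  # share of 2-, 3-, and 4-star reviews
    return (abs(five_star - 0.90) <= tolerance
            and abs(one_star - 0.10) <= tolerance
            and middle <= tolerance)

# Example: 900 five-star, 95 one-star, and only 5 three-star reviews.
print(suspicious_rating_split({5: 900, 1: 95, 3: 5}))  # True
```

Of course, a legitimate but polarizing app could show a similar split, so in practice a check like this would be one input among many rather than proof of fraud on its own.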

“I think for someone who doesn’t know that the app has some suspicious patterns, it would be very difficult to find the ratings generated by the AI,” Saporta said, adding that app stores are aware of the problem and DoubleVerify is working with them to flag problematic apps.

AI companies boast that generative AI models make writing easier, but that comes at a price. The explosion of AI-generated reviews is similar to the way AI tools have made it easier for hackers to write faster and more convincing phishing emails. Educators report that students are outsourcing their writing to ChatGPT, and hiring managers report being overwhelmed by a flood of low-quality resumes written with AI tools. DoubleVerify also tracks how malicious actors are using AI to create bogus e-commerce websites for companies that don’t actually exist.

Technology often aims to lower the barrier to entry, but can it lower it too much?

And here is more AI news.

Sage Lazzaro
[email protected]
sagelazzaro.com

AI IN THE NEWS

California lawmakers pass sweeping AI law. That’s according to The New York Times. As Jeremy Kahn wrote in Tuesday’s newsletter, the bill – known as SB 1047 – is designed to prevent catastrophic harms from AI and has divided the AI community. After being watered down following lobbying from the tech industry, the version of the bill passed yesterday would require AI companies to safety-test the largest AI models before release and would allow the state’s attorney general to sue model developers for serious harms caused by their technologies. If enacted, the bill would likely become de facto regulation for the U.S., marking a major shift for a country that still lacks national AI regulation. For more on the bill, Fortune’s Jenn Brice has a helpful explainer.

Nvidia beats expectations with record second-quarter profits. As Fortune’s Alexei Oreskovic reported, the chipmaker posted revenue of $30 billion for the three months ended July 28, up 122% year over year and well above Wall Street’s already optimistic forecast of $28.9 billion. More importantly, Nvidia confirmed that customers will begin receiving its next-generation Blackwell AI chips in the fourth quarter. Rumors that the chip’s release could be delayed had been circulating, adding pressure ahead of yesterday’s earnings report.

Uber is partnering with Wayve to develop mapless self-driving cars. The companies today announced a partnership and a strategic investment by Uber in Wayve, one of the autonomous driving companies taking a newer “mapless” approach that relies heavily on AI and is designed to let autonomous vehicles operate without geofenced boundaries. With Uber’s backing, Wayve intends to accelerate its collaboration with automakers to equip consumer vehicles with Level 2+ advanced driver assistance and Level 3 automated driving features. The company will also work toward Level 4 autonomous vehicles – the level at which vehicles can drive entirely without human intervention under certain conditions – which Uber would deploy in the future. Uber cofounder and former CEO Travis Kalanick spoke publicly about replacing drivers with automated vehicles as early as 2014, and the company invested more than $1 billion in autonomous technology before eventually selling its self-driving division to Aurora. Last week, I wrote about the role of AI in self-driving cars (and why self-driving car makers aren’t buying into the AI hype); you can check out the story here.

New research provides initial empirical evidence that racist linguistic stereotypes prevail in LLMs. Researchers from Stanford University, the University of Oxford, the University of Chicago, and other institutions published a paper yesterday in Nature detailing how AI language models show a particular bias against speakers of African American English. Specifically, the researchers describe a discrepancy between what language models overtly say about African Americans and what they covertly associate with them through dialect. They also argue that current techniques designed to mitigate racial bias in language models (such as alignment with human preferences) actually exacerbate this discrepancy by obscuring the racism in LLMs. Finally, they found that the models are more likely to “suggest that speakers of African American English be assigned less prestigious jobs, convicted of crimes, and sentenced to death” than speakers of Standard American English. These would be serious decisions to leave to an LLM, and they’re exactly the kinds of use cases many legislators want to regulate. The EU AI Act, for example, names employment and justice processes as “high-risk” areas where the use of AI is subject to stricter guardrails and transparency requirements.

FORTUNE ON AI

Google now uses AI to moderate staff meetings and employees say the questions aren’t that difficult —by Marco Quiroz-Gutierrez

Wall Street’s AI darling Super Micro delayed its earnings results while under the microscope of short sellers —by Will Daniel

Meta has abandoned efforts to make custom chips for its upcoming AR glasses —by Kali Hays

Klarna employs 1,800 people and hopes AI will make them redundant —by Ryan Hogg

Why Honeywell relies so heavily on artificial intelligence —by John Kell

AI CALENDAR

10-11 September: The AI Conference, San Francisco

10-12 September: AI Hardware & Edge AI Summit, San Jose, California.

17-19 September: Dreamforce, San Francisco

25-26 September: Meta Connect in Menlo Park, California.

22-23 October: TedAI, San Francisco

28-30 October: Voice and AI, Arlington, Va.

19-22 November: Microsoft Ignite, Chicago, Illinois.

2-6 December: AWS re:Invent, Las Vegas, Nevada.

8-12 December: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia

9-10 December: Fortune Brainstorm AI San Francisco (register here)

KEEPING AN EYE ON AI NUMBERS

1,914

That’s how many equity deals AI companies with B2B business models have raised so far in 2024 (as of August 9), compared with just 214 deals for AI companies with B2C business models, according to CB Insights.

The number continues a trend (B2B AI deals have far outpaced B2C deals for at least four years, according to the data) and reflects the same strategy the major AI players are pursuing. For example, while OpenAI’s ChatGPT is about as consumer-focused as AI gets, the company’s investments and acquisitions are geared primarily toward accelerating its enterprise strategy. That makes sense: businesses have readier access to the resources needed to experiment with AI.

