When online platforms violate their own privacy policies to sell your photos, have no fear: They just might have to pay an undisclosed settlement fee 12 years later. (Who says justice is dead?) According to Reuters, AI company Clarifai says it has deleted 3 million profile photos taken from dating site OkCupid in 2014. It follows a settlement reached last month between the FTC and Match Group, OkCupid's owner.

The Delaware-based Clarifai reportedly certified the data deletion to the FTC on April 7. The company also confirmed to US Representative Lori Trahan (D-MA) that it deleted any models trained on the data. Clarifai told the representative's office that it hadn't shared the data with third parties.

The FTC opened the investigation in 2019, after The New York Times reported that Clarifai had built a training database using OkCupid dating profile photos. The behavior was a direct violation of OkCupid's privacy policy. Court documents reviewed by Reuters reveal that Clarifai asked OkCupid executives for the data in 2014. Apparently, they obliged.

"We're collecting data now and just realized that OkCupid must have a HUGE amount of awesome data for this," Clarifai founder Matthew Zeiler wrote in an email to OkCupid co-founder Maxwell Krohn. The AI startup used the dating site's images to build a facial recognition service that can identify a person's age, gender and race. (Another brilliant and totally ethical idea from Clarifai, tapping into unsecured city surveillance cameras without authorization, was reportedly shuttered.)

Zeiler suggested to The New York Times in 2019 that people needed to, well, get over it. "There has to be some level of trust with tech companies like Clarifai to put powerful technology to good use, and get comfortable with that," the AI founder declared. Some of OkCupid's founders were reportedly investors in Clarifai.
As part of the settlement, the FTC "permanently prohibited" OkCupid from misrepresenting its data collection and privacy controls. TechCrunch notes how strange it is to use that as a penalty, given that FTC rules already bar that behavior.
Meta is facing a new lawsuit over its advertising practices. The nonprofit group Consumer Federation of America (CFA) has filed a proposed class-action suit against Meta for "failing to protect users" from scam ads on Facebook and Instagram. The lawsuit, which was first reported by Wired, alleges that Meta has run afoul of consumer protection laws in Washington, D.C. by misleading Facebook and Instagram users about scams on its apps, and that the company has "chased profits rather than protecting its users."

The filing includes numerous examples of alleged scam ads that CFA says it found in Meta's ad library. These include ads promoting a "free government iPhone," as well as those claiming to offer $1,400 checks to people born in certain years. Many of the ads use AI videos, according to CFA.

Meta's advertising practices have been in the spotlight since last year, when Reuters reported on internal documents indicating the company was making billions of dollars from ads promoting scams and banned goods. The report also highlighted how Meta's own processes have at times made it harder for its own employees to fight malicious advertisers.

"Meta claims it is doing all it can to crack down on scam advertising on its platforms," CFA's lawsuit states. "But in reality, Meta has knowingly taken steps and adopted policies that pad its bottom line at the expense of its users' safety and well-being. In fact, rather than prohibiting advertisers who the company itself has determined pose a higher risk to its users (as other tech companies like Google have), Meta just charges these advertisers more. The perverse result is that the riskier the advertiser, the more money Meta makes."

CFA's allegations "misrepresent the reality of our work and we will fight them," a Meta spokesperson said in a statement.
"We aggressively combat scams across our platforms to protect people and businesses — last year alone, we removed over 159 million scam ads, 92% of which we took down before anyone reported them, and took down 10.9 million accounts on Facebook and Instagram associated with criminal scam centers. We fight scams because they are bad for business — people don't want them, advertisers don't want them, and we don't want them either.” This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-has-misled-users-about-scam-ads-on-facebook-and-instagram-lawsuit-says-193220235.html?src=rss
OpenAI is rolling out the latest version of its AI-powered image generator with new "thinking capabilities," allowing it to search the web to help it create multiple images from a single prompt. On Tuesday, OpenAI announced that ChatGPT Images 2.0 can now create more "sophisticated" images, with improvements to its ability to follow instructions, preserve […]