New York is the latest state to take a stand against prediction markets. Attorney General Letitia James has sued Coinbase Financial Markets and Gemini Titan on charges that both are illegally running unlicensed gambling operations. The suit also claims that these prediction markets violate state laws that prohibit betting on games involving New York college sports teams. "Gambling by another name is still gambling, and it is not exempt from regulation under our state laws and Constitution," James said. "Gemini and Coinbase’s so-called prediction markets are just illegal gambling operations, exposing young people to addictive platforms that lack the necessary guardrails."

Multiple states have taken similar actions over the proliferation of prediction markets, but they may face a new roadblock at the federal level. Earlier this month, the US Commodity Futures Trading Commission sued three of the states that have charged prediction markets with running unlicensed gambling. The CFTC claimed that it should be the sole regulator for prediction markets and called the efforts by Arizona, Connecticut and Illinois an overreach of authority.
Florida Attorney General James Uthmeier has announced that the state's Office of Statewide Prosecution has opened a criminal investigation into OpenAI and ChatGPT. The investigation was opened because the suspect in a 2025 mass shooting at Florida State University reportedly used ChatGPT in the lead-up to the attack. Per Uthmeier, "Florida law states that anyone who aids, abets, or counsels someone in the commission of a crime, and that crime is committed or attempted, may be considered a principal to the crime." That means the responses ChatGPT provided to the shooter could be interpreted as the AI assistant aiding and abetting his actions. Or at least that's what Florida seems interested in arguing.

OpenAI provided the following statement when asked to comment on the Florida investigation:

Last year's mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime. After learning of the incident, we identified a ChatGPT account believed to be associated with the suspect and proactively shared this information with law enforcement. We continue to cooperate with authorities. In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity. ChatGPT is a general-purpose tool used by hundreds of millions of people every day for legitimate purposes. We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise.

As part of the investigation, Florida has subpoenaed OpenAI for information on "all policies and internal training materials" related to how the company handles users who threaten to harm others or themselves, as well as how OpenAI responds to law enforcement. The state is also asking OpenAI to share its organizational chart and any publicly released statements on the shooting. "Florida is leading the way in cracking down on AI's use in criminal behavior, and if ChatGPT were a person, it would be facing charges for murder," Uthmeier said. "This criminal investigation will determine whether OpenAI bears criminal responsibility for ChatGPT's actions in the shooting at Florida State University last year."

Florida’s investigation isn’t the first time OpenAI has been connected to a mass shooting. Canadian regulators called for OpenAI to change how it approaches threats of harm following a Wall Street Journal report that claimed the company flagged the account of a Canadian shooting suspect in 2025 but failed to bring their threats to law enforcement. The company agreed to new policies around how it works with Canadian law enforcement in March. Separately, OpenAI is still in the midst of a wrongful death lawsuit filed in 2025 over the role it may have played in the suicide of a teenage user.
OpenAI is rolling out the latest version of its AI-powered image generator with new "thinking capabilities," allowing it to search the web to help it create multiple images from a single prompt. On Tuesday, OpenAI announced that ChatGPT Images 2.0 can now create more "sophisticated" images, with improvements to its ability to follow instructions, preserve […]
When online platforms violate their own privacy policies to sell your photos, have no fear: They just might have to pay an undisclosed settlement fee 12 years later. (Who says justice is dead?) According to Reuters, AI company Clarifai says it has deleted 3 million profile photos taken from dating site OkCupid in 2014. The move follows a settlement reached last month between the FTC and Match Group, OkCupid's owner.

The Delaware-based Clarifai reportedly certified the data deletion to the FTC on April 7. The company also confirmed to US Representative Lori Trahan (D-MA) that it deleted any models trained on the data. Clarifai told the representative's office that it hadn't shared the data with third parties.

The FTC opened its investigation in 2019, after The New York Times reported that Clarifai had built a training database using OkCupid dating profile photos, a direct violation of OkCupid’s privacy policy. Court documents reviewed by Reuters reveal that Clarifai asked OkCupid executives for the data in 2014. Apparently, they obliged. "We're collecting data now and just realized that OkCupid must have a HUGE amount of awesome data for this," Clarifai founder Matthew Zeiler wrote in an email to OkCupid co-founder Maxwell Krohn. The AI startup used the dating site's images to build a facial recognition service that can identify a person's age, gender and race. (Another brilliant and totally ethical idea from Clarifai, tapping into unsecured city surveillance cameras without authorization, was reportedly shuttered.)

Zeiler suggested to The New York Times in 2019 that people needed to, well, get over it. "There has to be some level of trust with tech companies like Clarifai to put powerful technology to good use, and get comfortable with that," the AI founder declared. Some of OkCupid's founders were reportedly investors in Clarifai.

As part of the settlement, the FTC "permanently prohibited" OkCupid from misrepresenting its data collection and privacy controls. TechCrunch notes how strange it is to use that as a penalty, given that FTC rules already bar that behavior.