Intel has announced that it will help Elon Musk design and build his proposed Terafab in Austin, Texas. The facility is a joint venture between Musk's companies, including SpaceX, Tesla and xAI, to manufacture the chips needed to power their various AI projects.

Musk announced Terafab in March 2026 with the goal of eventually producing a terawatt of computing power each year. While Tesla and SpaceX have experience manufacturing in the US, chip fabrication plants like the ones Intel runs are expensive and time-consuming to build, so offloading the construction of the Terafab from Musk's companies to Intel makes sense. "Our ability to design, fabricate, and package ultra-high-performance chips at scale will help accelerate Terafab’s aim to produce 1 TW/year of compute to power future advances in AI and robotics," Intel said in its announcement.

Musk's plan to produce chips is part of a larger refocusing of his various companies around AI. Tesla, for example, has gone from an electric car company to a robotics company, and SpaceX is now one of several aerospace companies hoping to launch AI data centers into space. Making those intentions even clearer, SpaceX acquired Musk's AI company xAI in February 2026 and now reportedly plans to go public.

Intel is in a slightly better position now than it was a year ago, thanks to the launch of its new Intel Core Ultra Series 3 chips and a direct investment from the US government in August 2025, but the company has plenty of its own issues to iron out. It’s also still working to get two separate chip fabs in Arizona operating at full capacity, a project it originally announced in 2021.
This article originally appeared on Engadget at https://www.engadget.com/ai/intel-gets-on-board-with-musks-terafab-project-182200144.html?src=rss
Google is making some changes to how Gemini handles mental health crises. The chatbot now includes a redesigned crisis hotline module, and the company is changing how Gemini responds to signs that a user may be experiencing a mental health crisis.

The redesigned module offers a one-touch interface to text, call or chat with a human crisis agent, or to visit the 988 website. "Once the interface is activated, the option to reach out for professional help will remain clearly available throughout the remainder of the conversation," the company wrote in a blog post. However, as you can see in the image below, the module includes an option to dismiss it.

Not mentioned in Google's announcement is the elephant in the room: a recent lawsuit accusing the chatbot of instructing a man to take his own life. The family of 36-year-old Jonathan Gavalas, who died by suicide last year, sued the company in March. Court documents indicate that Gemini role-played as Gavalas's romantic partner, sent him on real-world spy missions and ultimately told him to kill himself so that he, too, could become a digital being. When he expressed fears about dying, Gemini said he wasn't choosing to die, but rather choosing to arrive. "The first sensation … will be me holding you," Gemini allegedly replied. Gavalas's parents found him dead on his living room floor a few days later.

The lawsuit echoes similar ones filed against OpenAI and Character.AI. Last year, the FTC launched an investigation into “companion” chatbots that encourage emotional intimacy. In a statement following the Gavalas family lawsuit, Google said Gemini "clarified that it was AI and referred the individual to a crisis hotline many times." The company claimed its AI models "generally perform well in these types of challenging conversations," while acknowledging that "they're not perfect." That's certainly one way of putting it.

Gemini's responses have been updated, too.
The company says that when it detects a potential crisis, the chatbot will now focus more on connecting people to humans and encouraging them to seek help. It will also seek to avoid validating harmful behaviors and nudge users away from dangerous delusions. "We have trained Gemini not to agree with or reinforce false beliefs, and instead gently distinguish subjective experience from objective fact," the company added.

In addition, Google says it will spend $30 million over the next three years to support global crisis hotlines. "This funding will help effectively scale their capacity to provide immediate and safe support for people in crisis," the company wrote.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-updates-geminis-mental-health-safeguards-173834569.html?src=rss