Intel has unveiled its new Core Series 3 chips, the official name for the Wildcat Lake series intended for mainstream and value-oriented laptops. Built on the same Intel 18A process as the Core Ultra Series 3, the new chips are significantly more powerful than the previous generation and promise "exceptional battery life" and "boosted AI-ready performance."

Intel says the Core Series 3 offers up to 47 percent better single-thread performance, 41 percent better multi-thread performance and 2.8x better GPU AI performance compared to a five-year-old PC. Stacked up against the last-gen Intel Core 7 150U, the new mobile chip draws up to 64 percent less processor power and delivers 2.7x the GPU AI performance. In other words, expect more grunt and improved efficiency.

At the top end of the lineup sits the six-core Intel Core 7 360, which has a P-core max turbo frequency of 4.8GHz and an NPU rated at 17 TOPS. Performance scales down through the other six-core options, and there's also a five-core Core 3 processor at the entry level with a more modest GPU. Intel promises all-day battery life, rated at 12.5 hours of office work and 18.5 hours of Netflix streaming. As for connectivity, there's support for Wi-Fi 7, Bluetooth 6 and two Thunderbolt 4 ports.

The Core Series 3 chips will make their way into a variety of laptops throughout 2026, including Acer's Aspire Go 14, 15 and 16, the ASUS Vivobook 14/15/17 and the ExpertBook B5 Flip, B3 G2 and P3 G2. The likes of Dell, Samsung and Lenovo will announce their own Core Series 3 devices in the near future.

This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/intel-launches-new-core-series-3-chips-for-mainstream-laptops-164821846.html?src=rss
Your Google Photos library could soon influence the kind of images you can generate with Gemini. After letting users personalize the AI assistant's responses with data from Gmail, Search and YouTube, Google says it's bringing that same "Personal Intelligence" to Nano Banana 2 to make it easier for users to create personalized images with the AI model.

The goal is to have the data affiliated with your Google account (your YouTube history, emails, Google Photos and so on) provide context to Nano Banana 2 so you don't have to. Rather than prompting Gemini's image generation model with information about you or photos of your belongings, a direction to "create a picture of my desert island essentials" should produce an image that includes the things you care about without any extra context. Similarly, if you use labels in Google Photos to identify people or pets, you can tell Gemini to "create a hand-drawn illustration of mom," and it should be able to use those Google Photos labels to find the right reference photo and create an image of the right person.

If Gemini creates images that don't look right, you can still send a follow-up prompt to refine the result, or select a new source image from Google Photos with the "+" button. Google says you can also click the "Sources" button to view which images the AI referenced in the first place, or ask it directly for the attribution and sources used for a specific image.

Personalized user data is one of the unique advantages Google has over companies offering competing AI assistants, so expanding Personal Intelligence to an already popular feature like image generation is a natural way to build on that lead. For now, this more personalized version of Nano Banana 2 is available in the Gemini app for eligible AI Pro and AI Ultra subscribers. Google says the feature will come to Gemini in Chrome and other users "soon."
This article originally appeared on Engadget at https://www.engadget.com/ai/gemini-can-now-draw-on-your-google-data-to-personalize-the-images-it-generates-160000269.html?src=rss