If the AI lovefest of Google I/O 2025 were a TV show, you might be tempted to call it It's Always Sunny in Mountain View. (It's not, by the way, especially in the fog-filled month of May, even if the company's confidence in booking an outdoor amphitheater suggests otherwise).
But here's a better sitcom analogy for the event that added AI Mode to all U.S. search results, whether we want it or not. It's The Good Place, in which our late heroes are repeatedly assured that they've gone to a better world. A place where everything is fine, all is as it seems, and search quality just keeps getting better.
Don't worry about ever-present and increasing AI hallucinations here in the Good Place, where the word "hallucination" isn't even used. Forget about the one live demo (among a dozen prerecorded ones) that went spectacularly wrong, where two presenters failed to translate each other's words via their Google AI glasses. Responsible AI, the focus of Google I/O 2023, went unmentioned.
Skyrocketing AI data center usage contributing to global warming? In the Good Place, Google AI is fighting global warming by helping to pinpoint wildfires. Don't think too hard about that one.
And as for that whole Hollywood strike that lasted nearly a year, largely over creatives' concerns about studios using AI? Fuggedaboutit. Creative folk love AI in the Good Place — just listen to the testimonials from the filmmakers and musicians Google has cherry-picked!
Is Google search the Bad Place?
Meanwhile, SEO experts warn, search results continue to get worse with AI Mode-style overviews. According to internal memos obtained in the ongoing Department of Justice lawsuit, which the DOJ just won, Google has a perverse incentive to make them that way.
"If users don't get what they want the first time, they have to search again," says Lily Ray, VP of SEO Strategy at Amsive, and the author of a recent viral LinkedIn post, Google AI Overviews Have a Major Spam Problem. "So if you're serving them multiple AI overviews because they have to search multiple times, Google can then say 'we have more people using AI every day.' It's like, 'yeah, but there's no way to turn it off.'"
Or to put it in the plot-pivoting words of The Good Place's Eleanor Shellstrop (Kristen Bell): "Wait a minute. This is the Bad Place!"
Google has a roughly 90 percent share of the search market, after all; it can afford to make the product worse by using AI so long as investors keep juicing the stocks of AI-heavy companies. It can pass itself off as the Good Place for search, while under the hood it's anything but.
Like Eleanor Shellstrop, however, users know what's up. Google search results have "kind of become the laughingstock of the Internet," Ray says. "Whenever Google communicates about AI Overviews, they say 'our users really love it.' But then when you read Google Forum, it's always like 'how do I turn this thing off, I want Google search back.'" (Even turning AI search off, according to one Google Forum user, doesn't turn it off.)
As in The Good Place, this awareness may not do a lick of good. If investors continue to reward Google for frothy presentations filled with cool-sounding AI features, there's no incentive for quality control. As often appears to be the case in 2025, we have to get used to living in separate realities.
So users may breathe a little easier knowing that Google stock fell by 1.5 percent in the aftermath of I/O (and is down 12 percent in 2025 as a whole). Is that enough to nudge the company to pay attention?
It would take Google I/O levels of Pollyanna optimism to think so. Instead, let's draw our attention to a Good Place fact you won't find easily in AI Mode. It took demon Michael (Ted Danson) 300 years to stop resetting Eleanor's memories every time she realized she was in the Bad Place.
So, only a few hundred years to go before Google is working for us again. Everything is fine.
Connections: Sports Edition is a new version of the popular New York Times word game that seeks to test the knowledge of sports fans.
Like the original Connections, the game is all about finding the "common threads between words." And just like Wordle, Connections resets after midnight and each new set of words gets trickier and trickier—so we've served up some hints and tips to get you over the hurdle.
If you just want to be told today's puzzle, you can jump to the end of this article for the latest Connections solution. But if you'd rather solve it yourself, keep reading for some clues, tips, and strategies to assist you.
What is Connections Sports Edition?
The NYT's latest daily word game has launched in association with The Athletic, the New York Times property that provides the publication's sports coverage. Connections can be played on both web browsers and mobile devices and requires players to group four words that share something in common.
Each puzzle features 16 words, and each grouping of words is split into four categories. These sets could comprise anything from book titles to software to country names. Even though multiple words will seem like they fit together, there's only one correct answer.
If a player gets all four words in a set correct, those words are removed from the board. Guess wrong and it counts as a mistake—players can make up to four mistakes before the game ends.
Players can also rearrange and shuffle the board to make spotting connections easier. Additionally, each group is color-coded with yellow being the easiest, followed by green, blue, and purple. Like Wordle, you can share the results with your friends on social media.
Here's a hint for today's Connections Sports Edition categories
Want a hint about the categories without being told the categories? Then give these a try:
Yellow: Time periods during a game
Green: Football lingo
Blue: Good pass
Purple: Basketball shoes
Need a little extra help? Today's connections fall into the following categories:
Yellow: Sections of a Game
Green: Football Defensive Terms
Blue: Words Used to Describe a Good Pass
Purple: Eponymous Basketball Shoes, Minus the S
Looking for Wordle today? Here's the answer to today's Wordle.
Ready for the answers? This is your last chance to turn back and solve today's puzzle before we reveal the solutions.
Drumroll, please!
The solution to today's Connections Sports Edition #240 is...
What is the answer to Connections Sports Edition today
Sections of a Game - HALF, INNING, PERIOD, QUARTER
Football Defensive Terms - 4-3, 46, NICKEL, PREVENT
Words Used to Describe a Good Pass - APPLE, ASSIST, DIME, DISH
Eponymous Basketball Shoes, Minus the S - CHUCK, JORDAN, LEBRON, PENNY
Don't feel down if you didn't manage to guess it this time. There will be new Connections for you to stretch your brain with tomorrow, and we'll be back again to guide you with more helpful hints.
Are you also playing NYT Strands? See hints and answers for today's Strands.
If you're looking for more puzzles, Mashable's got games now! Check out our games hub for Mahjong, Sudoku, free crossword, and more.
Not the day you're after? Here's the solution to yesterday's Connections.
Connections is one of the most popular New York Times word games to have captured the public's attention. The game is all about finding the "common threads between words." And just like Wordle, Connections resets after midnight and each new set of words gets trickier and trickier—so we've served up some hints and tips to get you over the hurdle.
If you just want to be told today's puzzle, you can jump to the end of this article for today's Connections solution. But if you'd rather solve it yourself, keep reading for some clues, tips, and strategies to assist you.
What is Connections?
The NYT's latest daily word game has become a social media hit. The Times credits associate puzzle editor Wyna Liu with helping to create the new word game and bring it to the publication's Games section. Connections can be played on both web browsers and mobile devices and requires players to group four words that share something in common.
Each puzzle features 16 words, and each grouping of words is split into four categories. These sets could comprise anything from book titles to software to country names. Even though multiple words will seem like they fit together, there's only one correct answer.
If a player gets all four words in a set correct, those words are removed from the board. Guess wrong and it counts as a mistake—players can make up to four mistakes before the game ends.
Players can also rearrange and shuffle the board to make spotting connections easier. Additionally, each group is color-coded with yellow being the easiest, followed by green, blue, and purple. Like Wordle, you can share the results with your friends on social media.
Here's a hint for today's Connections categories
Want a hint about the categories without being told the categories? Then give these a try:
Yellow: To stop someone or something
Green: Found on an Apple computer
Blue: Ways you'd take a remedy
Purple: They flip open and closed
Need a little extra help? Today's connections fall into the following categories:
Yellow: Prohibit, as entry
Green: Folders on a Mac
Blue: Medicine formats
Purple: Things that open like a clam
Looking for Wordle today? Here's the answer to today's Wordle.
Ready for the answers? This is your last chance to turn back and solve today's puzzle before we reveal the solutions.
Drumroll, please!
The solution to today's Connections #709 is...
What is the answer to Connections today
Prohibit, as entry: BAR, BLOCK, DENY, REFUSE
Folders on a Mac: DESKTOP, MUSIC, PICTURES, TRASH
Medicine formats: CREAM, PATCH, SPRAY, TABLET
Things that open like a clam: CLAM, COMPACT, LAPTOP, WAFFLE IRON
Don't feel down if you didn't manage to guess it this time. There will be new Connections for you to stretch your brain with tomorrow, and we'll be back again to guide you with more helpful hints.
Are you also playing NYT Strands? See hints and answers for today's Strands.
If you're looking for more puzzles, Mashable's got games now! Check out our games hub for Mahjong, Sudoku, free crossword, and more.
Not the day you're after? Here's the solution to yesterday's Connections.
If you're reading this, you're looking for a little help playing Strands, the New York Times' elevated word-search game.
Strands requires the player to perform a twist on the classic word search. Words can be made from linked letters (up, down, left, right, or diagonal), and words can also change direction, resulting in quirky shapes and patterns. Every single letter in the grid will be part of an answer. There's always a theme linking every solution, along with the "spangram," a special word or phrase that sums up that day's theme and spans the entire grid horizontally or vertically.
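To make that movement rule concrete, here's a minimal sketch of how a legal Strands-style chain could be checked (a toy check we wrote for illustration, not the NYT's code): each step moves to one of the eight neighboring squares, no square is reused, and a horizontal spangram has to touch both the left and right edges of the grid. The grid width in the example is an assumption.

```python
def is_valid_path(path):
    """Check that a sequence of (row, col) squares forms a legal chain:
    each step moves to one of the 8 neighboring squares and no square repeats."""
    if len(path) != len(set(path)):
        return False  # a square can only be used once per word
    for (r1, c1), (r2, c2) in zip(path, path[1:]):
        if max(abs(r1 - r2), abs(c1 - c2)) != 1:
            return False  # not adjacent horizontally, vertically, or diagonally
    return True


def spans_horizontally(path, width):
    """A horizontal spangram must touch both the left and right edges of the grid."""
    cols = {c for _, c in path}
    return 0 in cols and (width - 1) in cols


# Example: a 5-square path that snakes and changes direction, as Strands words can.
word_path = [(0, 0), (1, 1), (1, 2), (0, 3), (1, 4)]
print(is_valid_path(word_path))          # True
print(spans_horizontally(word_path, 6))  # False: it never reaches the last column
```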
By providing an opaque hint and not providing the word list, Strands creates a brain-teasing game that takes a little longer to play than the NYT's other games, like Wordle and Connections.
If you're feeling stuck or just don't have 10 or more minutes to figure out today's puzzle, we've got all the NYT Strands hints for today's puzzle you need to progress at your preferred pace.
NYT Strands hint for today’s theme: Three's a crowd
The words are number-related.
Today’s NYT Strands theme plainly explained
These words are ways to describe two of something.
NYT Strands spangram hint: Is it vertical or horizontal?
Today's NYT Strands spangram is horizontal.
NYT Strands spangram answer today
Today's spangram is Double Trouble.
NYT Strands word list for May 21
Couple
Twosome
Double Trouble
Partners
Twins
Pair
Match
Looking for other daily online games? Mashable's Games page has more hints, and if you're looking for more puzzles, Mashable's got games now!
Check out our games hub for Mahjong, Sudoku, free crossword, and more.
Not the day you're after? Here's the solution to yesterday's Strands.
Oh hey there! If you're here, it must be time for Wordle. As always, we're serving up our daily hints and tips to help you figure out today's answer.
If you just want to be told today's word, you can jump to the bottom of this article for today's Wordle solution. But if you'd rather solve it yourself, keep reading for some clues, tips, and strategies to assist you.
Where did Wordle come from?
Originally created by engineer Josh Wardle as a gift for his partner, Wordle rapidly spread to become an international phenomenon, with thousands of people around the globe playing every day. Alternate Wordle versions created by fans also sprang up, including battle royale Squabble, music identification game Heardle, and variations like Dordle and Quordle that make you guess multiple words at once.
Wordle eventually became so popular that it was purchased by the New York Times, and TikTok creators even livestream themselves playing.
What's the best Wordle starting word?
The best Wordle starting word is the one that speaks to you. But if you prefer to be strategic in your approach, we have a few ideas to help you pick a word that might help you find the solution faster. One tip is to select a word that includes at least two different vowels, plus some common consonants like S, T, R, or N.
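If you're curious how that advice plays out, here's a rough sketch of a toy scorer (our own illustration, not anything from the NYT) that ranks candidate openers by how many distinct vowels and common consonants they cover. The candidate list and the "common consonant" set are our own choices.

```python
VOWELS = set("AEIOU")
COMMON_CONSONANTS = set("STRNL")  # a reasonable, but arbitrary, set of frequent consonants

def opener_score(word):
    """Score a starting word by distinct vowels plus distinct common consonants covered."""
    letters = set(word.upper())
    return len(letters & VOWELS) + len(letters & COMMON_CONSONANTS)

candidates = ["ADIEU", "SLATE", "CRANE", "ROAST", "MAMBO"]
for word in sorted(candidates, key=opener_score, reverse=True):
    print(word, opener_score(word))
```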
What happened to the Wordle archive?
The entire archive of past Wordle puzzles was originally available for anyone to enjoy whenever they felt like it, but it was later taken down, with the website's creator stating it was done at the request of the New York Times. However, the New York Times then rolled out its own Wordle Archive, available only to NYT Games subscribers.
Is Wordle getting harder?
It might feel like Wordle is getting harder, but it actually isn't any more difficult than when it first began. You can turn on Wordle's Hard Mode if you're after more of a challenge, though.
Here's a subtle hint for today's Wordle answer:
To startle.
Does today's Wordle answer have a double letter?
The letter A appears twice.
Today's Wordle is a 5-letter word that starts with...
Today's Wordle starts with the letter A.
The Wordle answer today is...
Get your last guesses in now, because it's your final chance to solve today's Wordle before we reveal the solution.
Drumroll please!
The solution to today's Wordle is...
ALARM.
Don't feel down if you didn't manage to guess it this time. There will be a new Wordle for you to stretch your brain with tomorrow, and we'll be back again to guide you with more helpful hints.
Are you also playing NYT Strands? See hints and answers for today's Strands.
Reporting by Chance Townsend, Caitlin Welsh, Sam Haysom, Amanda Yeo, Shannon Connellan, Cecily Mauran, Mike Pearl, and Adam Rosenberg contributed to this article.
If you're looking for more puzzles, Mashable's got games now! Check out our games hub for Mahjong, Sudoku, free crossword, and more.
Not the day you're after? Here's the solution to yesterday's Wordle.
If you like playing daily word games like Wordle, then Hurdle is a great game to add to your routine.
There are five rounds to the game. The first round sees you trying to guess the word, with correct, misplaced, and incorrect letters indicated in each guess. If you guess the correct answer, it'll take you to the next hurdle, providing the answer to the last hurdle as your first guess. This can give you several clues or none, depending on the words. For the final hurdle, every correct answer from previous hurdles is shown, with correct and misplaced letters clearly marked.
An important note is that the number of times a letter is highlighted from previous guesses does not necessarily indicate the number of times that letter appears in the final hurdle.
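To illustrate why repeated highlights can be misleading, here's a minimal sketch of Wordle-style marking in Python (our own approximation, not Hurdle's actual code). The duplicate-letter rule is the key part: a letter only earns a "misplaced" mark if an unmatched copy of it remains in the answer.

```python
from collections import Counter

def score_guess(guess, answer):
    """Return Wordle-style feedback: 'correct', 'misplaced', or 'absent' per letter."""
    guess, answer = guess.upper(), answer.upper()
    feedback = ["absent"] * len(guess)
    remaining = Counter()

    # First pass: mark exact matches and count the leftover answer letters.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            feedback[i] = "correct"
        else:
            remaining[a] += 1

    # Second pass: a letter is 'misplaced' only if an unmatched copy remains.
    for i, g in enumerate(guess):
        if feedback[i] == "absent" and remaining[g] > 0:
            feedback[i] = "misplaced"
            remaining[g] -= 1

    return feedback

# 'E' appears once in the answer, so only one of the E's in the guess gets highlighted.
print(score_guess("EERIE", "OLDEN"))
```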
If you find yourself stuck at any step of today's Hurdle, don't worry! We have you covered.
Hurdle Word 1 hint
Referencing the past.
Hurdle Word 1 answer
OLDEN
Hurdle Word 2 hint
When animals reproduce.
Hurdle Word 2 answer
BREED
Hurdle Word 3 hint
What bees do.
Hurdle Word 3 answer
STING
Hurdle Word 4 hint
A soft boy.
Hurdle Word 4 answer
SISSY
Final Hurdle hint
From Wales.
Hurdle Word 5 answer
WELSH
If you're looking for more puzzles, Mashable's got games now! Check out our games hub for Mahjong, Sudoku, free crossword, and more.
This year, Google I/O 2025 had one focus: Artificial intelligence.
We've already covered all of the biggest news to come out of the annual developers conference: a new AI video generation tool called Flow. A $250 AI Ultra subscription plan. Tons of new changes to Gemini. A virtual shopping try-on feature. And critically, the launch of the search tool AI Mode to all users in the United States.
Yet over nearly two hours of Google leaders talking about AI, one word we didn't hear was "hallucination".
Hallucinations remain one of the most stubborn and concerning problems with AI models. The term refers to invented facts and inaccuracies that large language models "hallucinate" in their replies. And according to the big AI brands' own metrics, hallucinations are getting worse — with some models hallucinating more than 40 percent of the time.
But if you were watching Google I/O 2025, you wouldn't know this problem existed. You'd think models like Gemini never hallucinate; you would certainly be surprised to see the warning appended to every Google AI Overview. ("AI responses may include mistakes".)
The closest Google came to acknowledging the hallucination problem came during a segment of the presentation on AI Mode and Gemini's Deep Search capabilities. The model would check its own work before delivering an answer, we were told — but without more detail on this process, it sounds more like the blind leading the blind than genuine fact-checking.
For AI skeptics, the degree of confidence Silicon Valley has in these tools seems divorced from actual results. Real users notice when AI tools fail at simple tasks like counting, spellchecking, or answering questions like "Will water freeze at 27 degrees Fahrenheit?"
Google was eager to remind viewers that its newest AI model, Gemini 2.5 Pro, sits atop many AI leaderboards. But when it comes to truthfulness and the ability to answer simple questions, AI chatbots are graded on a curve.
Gemini 2.5 Pro is Google's most intelligent AI model (according to Google), yet it scores just 52.9 percent on the SimpleQA benchmarking test. According to an OpenAI research paper, the SimpleQA test is "a benchmark that evaluates the ability of language models to answer short, fact-seeking questions." (Emphasis ours.)
A Google representative declined to discuss the SimpleQA benchmark, or hallucinations in general — but did point us to Google's official Explainer on AI Mode and AI Overviews. Here's what it has to say:
[AI Mode] uses a large language model to help answer queries and it is possible that, in rare cases, it may sometimes confidently present information that is inaccurate, which is commonly known as 'hallucination.' As with AI Overviews, in some cases this experiment may misinterpret web content or miss context, as can happen with any automated system in Search...
We’re also using novel approaches with the model’s reasoning capabilities to improve factuality. For example, in collaboration with Google DeepMind research teams, we use agentic reinforcement learning (RL) in our custom training to reward the model to generate statements it knows are more likely to be accurate (not hallucinated) and also backed up by inputs.
Is Google wrong to be optimistic? Hallucinations may yet prove to be a solvable problem, after all. But it seems increasingly clear from the research that hallucinations from LLMs are not a solvable problem right now.
That hasn't stopped companies like Google and OpenAI from sprinting ahead into the era of AI Search — and that's likely to be an error-filled era, unless we're the ones hallucinating.
From the opening AI-influenced intro video set to "You Get What You Give" by New Radicals to CEO Sundar Pichai's sign-off, Google I/O 2025 was packed with news and updates for the tech giant and its products. And when we say packed, we mean it, as this year's Google I/O clocked in at nearly two hours long.
During that time, Google shared some big wins for its AI products, such as Gemini topping various categories on the LMArena leaderboard. Another example that Google seemed really proud of was the fact that Gemini completed Pokémon Blue a few weeks ago.
But, we know what you're really here for: Product updates and new product announcements.
Aside from a few braggadocious moments, Google spent most of those 117 minutes talking about what's coming out next. Google I/O mixes consumer-facing product announcements with more developer-oriented ones, from the latest Gmail updates to Google's powerful new chip, Ironwood, coming to Google Cloud customers later this year.
We're going to break down what product updates and announcements you need to know from the full two-hour event, so you can walk away with all the takeaways without spending the same time it takes to watch a major motion picture to learn about them.
Before we dive in though, here's the most shocking news out of Google I/O: The subscription pricing that Google has for its Google AI Ultra plan. While Google provides a base subscription at $19.99 per month, the Ultra plan comes in at a whopping $249.99 per month for its entire suite of products with the highest rate limits available.
Google Search AI Mode
Google tucked away what will easily be its most visible feature way too far back into the event, but we'll surface it to the top.
At Google I/O, Google announced that the new AI Mode feature for Google Search is launching today for everyone in the United States. Basically, it lets users run Google searches with longer, more complex queries. Using a "query fan-out technique," AI Mode will be able to break a search into multiple parts, process each part of the query, then pull all the information together to present to the user. Google says AI Mode "checks its work" too, but it's unclear at this time exactly what that means.
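Google hasn't published the details, but the general fan-out pattern is easy to picture. Here's a minimal, hypothetical sketch of it in Python; the sub-query splitter and search call are placeholders we made up for illustration, not Google's API.

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_subqueries(query):
    """Placeholder: a real system would use an LLM to break the query into parts."""
    return [part.strip() for part in query.split(" and ")]

def run_search(subquery):
    """Placeholder: stands in for an ordinary search backend call."""
    return f"results for '{subquery}'"

def fan_out(query):
    """Fan out a complex query: split it, search each part in parallel, then merge."""
    subqueries = split_into_subqueries(query)
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(run_search, subqueries))
    # A real system would hand these results to a model to synthesize one answer.
    return dict(zip(subqueries, results))

print(fan_out("foldable phones under $1,000 and how their cameras compare"))
```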
Google announces AI Mode in Google Search. Credit: Google
AI Mode is available now. Later in the summer, Google will launch Personal Context in AI Mode, which will make suggestions based on a user's past searches and other contextual information about the user from other Google products like Gmail.
In addition, other new features will soon come to AI Mode, such as Deep Search, which can dive deeper into queries by searching through multiple websites, and data visualization features, which can take the search results and present them in a visual graph when applicable.
According to Google, its AI overviews in search are viewed by 1.5 billion users every month, so AI Mode clearly has the largest potential user base out of all of Google's announcements today.
AI Shopping
Out of all the announcements at the event, these AI shopping features seemed to spark the biggest reaction from Google I/O live attendees.
Connected to AI Mode, Google showed off its Shopping Graph, which includes more than 50 billion products globally. Users can simply describe the type of product they're looking for (say, a specific type of couch), and Google will present options that match that description.
Google AI Shopping. Credit: Google
Google also had a significant presentation in which a presenter uploaded a photo of herself so that AI could create a visual of what she'd look like in a dress. This virtual try-on feature will be available in Google Labs, and it's the IRL version of Cher's Clueless closet.
The presenter was then able to use an AI shopping agent to keep tabs on the item's availability and track its price. When the price dropped, the user received a notification of the pricing change.
Google said users will be able to try on different looks via AI in Google Labs starting today.
Android XR
Google's long-awaited post-Google Glass AR/VR plans were finally presented at Google I/O. The company also unveiled a number of wearable products utilizing its AR/VR operating system, Android XR.
One important part of the Android XR announcement is that Google seems to understand the different use cases for an immersive headset and an on-the-go pair of smartglasses, and has built Android XR to accommodate both.
While Samsung has previously teased its Project Moohan XR headset, Google I/O marked the first time that Google revealed the product, which is being built in partnership with the mobile giant and chipmaker Qualcomm. Google shared that the Project Moohan headset should be available later this year.
Project Moohan. Credit: Google
In addition to the XR headset, Google announced Glasses with Android XR, smartglasses that incorporate a camera, speakers, and an in-lens display, and that connect with a user's smartphone. Unlike Google Glass, these smartglasses will sport more fashionable looks thanks to partnerships with Gentle Monster and Warby Parker.
Google shared that developers will be able to start developing for Glasses starting next year, so it's likely that a release date for the smartglasses will follow after that.
Gemini
Easily the star of Google I/O 2025 was the company's AI model, Gemini. Google announced an updated Gemini 2.5 Pro, which it says is its most powerful model yet. The company showed Gemini 2.5 Pro being used to turn sketches into full applications in a demo. Along with that, Google introduced Gemini 2.5 Flash, a more affordable version of the powerful Pro model. The latter will be released in early June, with the former coming out soon after. Google also revealed Gemini 2.5 Pro Deep Think for complex math and coding, which will only be available to "trusted testers" at first.
Speaking of coding, Google shared its asynchronous coding agent Jules, which is currently in public beta. Developers will be able to use Jules to tackle codebase tasks and modify files.
Jules coding agent. Credit: Google
Developers will also have access to a new Native Audio Output text-to-speech model which can replicate the same voice in different languages.
The Gemini app will soon see a new Agent Mode, bringing users an AI agent that can research and complete tasks based on a user's prompts.
Gemini will also be deeply integrated into Google products like Workspace with Personalized Smart Replies. Gemini will use personal context from documents, emails, and more across a user's Google apps to match their tone, voice, and style and generate automatic replies. Workspace users will find the feature available in Gmail this summer.
Other features announced for Gemini include Deep Research, which lets users upload their own files to guide the AI agent when asking questions, and Gemini in Chrome, an AI Assistant that answers queries using the context on the web page that a user is on. The latter feature is rolling out this week for Gemini subscribers in the U.S.
Google intends to bring Gemini to all of its devices, including smartwatches, smart cars, and smart TVs.
Generative AI updates
Gemini's AI assistant capabilities and language model updates were only a small piece of Google's broader AI puzzle. The company had a slew of generative AI announcements to make too.
Google announced Imagen 4, its latest image generation model. According to Google, Imagen 4 provides richer details and better visuals. In addition, Imagen 4 is apparently much better at generating text and typography in its graphics. This is an area in which AI models are notoriously bad, so Imagen 4 appears to be a big step forward.
Flow AI video tool. Credit: Google
A new video generation model, Veo 3, was also unveiled alongside a video generation tool called Flow. Google claims Veo 3 has a stronger understanding of physics when generating scenes and can also create accompanying sound effects, background noise, and dialogue.
Both Veo 3 and Flow are available today alongside a new generative music model called Lyria 2.
Google I/O also saw the debut of Gemini Canvas, which Google describes as a co-creation platform.
Project Starline aka Google Beam
Another big announcement out of Google I/O: Project Starline is no more.
Google's immersive communication project will now be known as Google Beam, an AI-first communication platform.
As part of Google Beam, Google announced Google Meet translations, which basically provide real-time speech translation during meetings on the platform. AI will be able to match a speaker's voice and tone, so it sounds like the translation is coming directly from them. Google Meet translations are available in English and Spanish starting today, with more languages on the way in the coming weeks.
Google Meet translations. Credit: Google
Google also had another work-in-progress project to tease under Google Beam: a 3D conferencing platform that uses multiple cameras to capture a user from different angles in order to render the individual on a 3D light-field display.
Project Astra
While Project Starline may have undergone a name change, it appears Project Astra is still kicking around at Google, at least for now.
Project Astra is Google's real-world universal AI assistant and Google had plenty to announce as part of it.
Gemini Live is a new AI assistant feature that can interact with a user's surroundings via their mobile device's camera and audio input. Users can ask Gemini Live questions about what they're capturing on camera and the AI assistant will be able to answer queries based on those visuals. According to Google, Gemini Live is rolling out today to Gemini users.
Gemini Live. Credit: Google
It appears Google has plans to implement Project Astra's live AI capabilities into Google Search's AI Mode as a Google Lens visual search enhancement.
Google also highlighted some of its hopes for Gemini Live, such as being able to help as an accessibility tool for those with disabilities.
Project Mariner
Another of Google's AI projects is Project Mariner, an AI agent that can interact with the web in order to complete tasks for the user.
While Project Mariner was previously announced late last year, Google had some updates such as a multi-tasking feature which would allow an AI agent to work on up to 10 different tasks simultaneously. Another new feature is Teach and Repeat, which would provide the AI agent with the ability to learn from previously completed tasks in order to complete similar ones without the need for the same detailed direction in the future.
Google announced plans to bring these agentic AI capabilities to Chrome, Google Search via AI Mode, and the Gemini app.
It's hard to believe, but we're about to experience the tenth year of Google Pixel annual upgrades.
What started as an uncertain experiment a decade ago is now a solid and dependable smartphone lineup from Google, year in and year out. 2025 looks to be no different, according to various rumors and leaks. The Google Pixel 10 lineup is likely a few months away from the spotlight, but that doesn't mean we can't get prepared in the meantime.
Here is everything we know about the Google Pixel 10 right now.
Google Pixel 10 rumors: Everything we know
There isn't a lot of concrete info yet about the Google Pixel 10 lineup, but we have enough to get a general idea of what to expect.
Google Pixel 10 models and release date
For starters, don't expect to hear about any Pixel 10 phones until around August. That's when Google traditionally announces each year's lineup.
As for how many Google Pixel 10 phones there will be in August, we can look to last year's launch to make a prediction. In 2024, Google launched the base Pixel 9, the Pixel 9 Pro and Pro XL, and the Pixel 9 Pro Fold all at the same time. So far, there are no indications that Google plans to do anything differently this year. Until that changes, it's probably safe to assume this year's lineup will look the same, except with "10" in each phone's name instead of "9."
I should also note that there may be a Pixel 10a at some point in the future, but if tradition holds, that won't be until 2026. The Pixel 9a arrived roughly six months after the Pixel 9.
Google Pixel 10 specs
For the past few years, Google has been using its own custom Tensor chipset to power Pixel phones. It would be a huge surprise if anything other than a "Tensor G5" chipset was powering the Pixel 10 family. Android Authority has a detailed technical report on Tensor G5, but in summary, the main thing to know is that it likely won't be a huge jump over Tensor G4. Google hasn't really been setting benchmark scores on fire with the past few Pixel phones, but the Tensor chips have always provided smooth performance and good battery life, so maybe that doesn't matter.
Of course, Tensor G5 will also enable plenty of wacky new AI features, but we don't know about any of those for certain yet.
Google Pixel 10 price
Again, we won't know for sure how much any of these phones will cost until Google tells us. That said, Android Headlines has done some reporting about this, and the answers are pretty interesting.
According to that report, the Pixel 10 and Pixel 10 Pro will retain the same price points as the Pixel 9 and Pixel 9 Pro, or $799 and $999, respectively. Unfortunately, if you want a Pixel 10 Pro XL (which typically comes with a bigger display and battery), you'll apparently have to pay $1,200, up $100 from last year.
Fascinatingly, it sounds like the Pixel 10 Pro Fold will actually get a price drop from last year's Pixel 9 Pro Fold. The 2024 model started at $1,799, while this year's phone should be $1,599. Again, none of this is confirmed and is all subject to change, but for now, these are the prices we expect to see in August.
Google Pixel 10 cameras
The camera setup on the new Pixel 9a smartphone from Google. Credit: Alex Perry / Mashable
Lastly, cameras are obviously a huge part of the appeal of smartphones, and Pixel phones have generally performed well in that respect. It sounds like Google is making a few interesting changes this year, if leaks are to be believed.
Namely, it looks like the base Pixel 10 could get a third camera lens on the back, after years of Google only putting a wide and ultra-wide lens back there. This is according to renders obtained by Android Headlines earlier this year. A third telephoto lens could give the base Pixel 10 a huge leg up on zoomed in photos against similarly priced competition.
That might come at a cost, though. According to Android Authority, the Pixel 10's rear camera array is a downgrade in terms of megapixel count from the Pixel 9. The new phone will have a 48MP wide lens and a 13MP ultra-wide lens, down from 50MP and 48MP, respectively, on the Pixel 9. Those are actually the same camera specs as the recent mid-range Pixel 9a. It seems that the inclusion of a third telephoto lens necessitated these compromises in some way.
It seems like the 10 Pro and Pro XL models will stay the same as the 9 Pro models, at least. According to Android Authority, they'll have the same 50MP/48MP/48MP array as last year. Lastly, the Pixel 10 Pro Fold is allegedly receiving a slight bump in its main rear sensor up from 48MP to 50MP, but the rest of the array is the same: a 10.5MP ultra-wide lens to go with a 10.8MP telephoto lens.
A ton of new AI features
We don't have any specific insight here, but considering that Google I/O 2025 focused entirely on Gemini, artificial intelligence in general, and the upcoming Android XR augmented reality platform, we can confidently say Pixel 10 will launch with a full suite of AI tools. That includes the standard AI features you get with Google products like Gmail, Docs, and Gemini, as well as tools specifically for Pixel hardware. If we had to guess, our money would be on new AI image editing and video editing tools, as that's been a big focus for Google recently. Ditto real-time language translation.
The crypto industry is celebrating this week as a controversial stablecoin bill dubbed the GENIUS Act advanced to debate in front of the full U.S. Senate.
Earlier this month, an Abu Dhabi investment firm announced that it would be making a $2 billion investment in the cryptocurrency exchange Binance using a brand new stablecoin called USD1.
Unlike risky cryptocurrencies with prices that fluctuate wildly and constantly, stablecoins are basically crypto tokens pegged to a more stable asset, like a fiat currency such as the U.S. dollar. Crypto companies behind stablecoins typically profit from the interest earned on the reserves backing the coins, rather than from speculation. According to Fortune, the crypto firm behind USD1, World Liberty Financial, could make around $80 million annually off of a deal like the Abu Dhabi one.
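For rough context (this is our own back-of-the-envelope math, not a figure from the report): $80 million a year on a $2 billion deposit works out to roughly a 4 percent annual yield, which is in the neighborhood of what short-term U.S. Treasuries, a typical reserve asset for stablecoin issuers, have been paying.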
What makes this notable? For one, World Liberty Financial is majority owned by President Donald Trump and his family. Meanwhile, a first-of-its-kind stablecoin bill has just advanced in the Senate. And although some Democrats had hoped the bill would address this type of crypto profiteering from the Trump family, the bill is moving forward regardless.
The GENIUS Act advances in Senate
On Monday, in a 66-32 vote, the U.S. Senate advanced the GENIUS Act, a bill that sets up the first regulatory framework for stablecoins. In total, 16 Democrats joined the Republican majority to advance the bill. Two Republicans joined the opposing Democrats who voted against it.
Just two weeks ago, the stablecoin bill was shot down by every Democratic senator, which stalled the legislation from moving forward. However, an amendment was added to the bill in an effort to appease some Democrats.
That amendment included new consumer protections and regulatory limits on companies that issue stablecoins. In addition, the amendment also extended ethics standards to special government employees, which would cover Elon Musk and Trump's AI and crypto czar David Sacks, as long as they hold their special government roles.
However, notably absent from the bill is any language that could definitively end President Trump's own crypto and stablecoin dealings.
What are critics of the GENIUS Act saying?
The bill does include language that would "prohibit any member of Congress or senior executive branch official from issuing a payment stablecoin product during their time in public service.” However, Democrats opposing the bill say that the language doesn't go far enough to address the Trump family's stablecoin endeavors.
Speaking from the Senate floor, Massachusetts Senator Elizabeth Warren said, “It doesn’t have to be this way. A bill that meaningfully strengthens oversight of the stablecoin market is worth enacting. A bill that turbocharges the stablecoin market, while facilitating the President’s corruption and undermining national security, financial stability, and consumer protection is worse than no bill at all.”
In addition to the new stablecoin investment, Trump's memecoin $TRUMP has netted his crypto companies hundreds of millions of dollars in fees. Trump has also offered special access to investors in his memecoin. Memecoins are different from stablecoins and would not be covered under this bill.
While some Democrats will be offering additional legislation to cover Trump's stablecoin project, it's unlikely to pass the Republican-controlled Senate.
When I was a child, computers were a fixture in my home, from the giant Atari on which I learned my ABCs, to the Commodore Amiga that my dad used for his videography business, to the PC towers that facilitated my first forays onto the internet. But tech was still a niche hobby back then. Even in college in the late 1990s and early 2000s, many of my friends got by just fine without computers.
For people in college now—namely, my students—things are decidedly different. Gadgets are everywhere, and are increasingly designed to insert themselves into every aspect of our consciousness, colonizing every spare moment of our time and attention. Gen Z and Gen Alpha have never known a world without mini-computers within arm’s reach. They learned to relate to the world through gadgets, to turn to them for everything from entertainment to education to escape. And when the COVID-19 pandemic disrupted their lives, it took away even more of their access to the offline world, making tech feel paradoxically both like a lifeline and a prison.
It's easy to call young people “screenagers” and blame them for being glued to their devices. But I know better. My students feel conflicted; they know they’re hooked, and they worry for their younger siblings who seem even more in the grip of all-consuming tech.
Several years ago, it occurred to me that I could do something to help. I began requiring students to put away all devices, including laptops and tablets, in my classes. It was an experiment both for them and for me: What happens when we remove the barrier tech has put between us and other people, between us and our own thoughts? What does that teach us about how to handle the explosion of hype around generative AI?
How I went from gadget geek to tech skeptic
My own journey with tech predates our always-on devices, way back to that old Atari. I had always been a little obsessed with gadgets, and when I bought my first iPhone in 2008, it was almost a religious experience.
My wife and I were living in New York City, and my entire family drove down from Boston to witness my initiation. Like pilgrims, we journeyed together to the flagship Apple Store on Fifth Avenue. We all stood in reverence at the foot of the spiral staircase, beneath the illuminated glass cube, as I was welcomed into the cult of Apple.
From then on, almost without fail, I’ve upgraded my phone annually, a September ritual as cyclical for me as going back to school. And it wasn't just the iPhone; I had the first or second iteration of the iPad, AirPods, and the Apple Watch, too. Back then it felt like Steve Jobs might announce something that would reshape the world every time he stepped on stage.
But in the 2010s, something started to change. Underwhelming new tech releases grew increasingly common, and the constant hype around them began to feel empty and manipulative. As both a college professor and a parent, I began to see the benefits of our always-connected devices becoming overshadowed by the negatives. The young people in my life are obsessed with their gadgets, legitimately afraid they’ll be disconnected from society if they aren’t extremely online, and they hate it. Many worry as much as their parents do about their phone use.
So, even before the hype that greeted the AI revolution of the last few years, I’d begun to look a lot more skeptically at claims that tech was changing our lives, and that more apps, devices, or wearables were automatically better.
What happens when we turn off the tech?
One day, near the end of the spring semester in 2019, I looked out at my class to see rows of students focused intently… on their laptop screens. They presumably had their devices out to take notes, but I wasn’t lecturing. I was trying to lead them into a discussion. This moment for me is trapped in time: It was the moment I decided I had to take drastic measures to recapture my students’ attention.
The following fall, my syllabus included a new section, which has remained in place since. I call it my in-class technology use policy and it begins, “This class is a laptop/mobile phone/tablet/headphone/AirPods-free zone. Bring a notebook and pens to each class.” I explain my reasoning and, like a good academic, cite my sources. I provide exceptions for emergencies, explaining that if a student has to take an urgent call, they can quietly slip out of the classroom to do so without judgment or penalty.
That first fall, I was nervous. Would they go along with it? Would my classes, previously well-loved, suddenly struggle to fill? To my great relief there was no significant pushback, no mass exodus. Going tech-free is still a shock, to be sure. At the start of each semester, an hour and fifteen minutes without a phone seems impossible for many students. But in time, most find it to be a relief. It gives them permission to take a break from the requirement to be always connected, always reachable, always on. Hopefully, it also creates space for deep and sustained thought.
I begin most classes by distributing an article to read—often a recently-published opinion piece—printed on paper. I encourage students to read it with pen in hand, marking it up as they go. As they read quietly, I look around the room at a group of so-called screenagers concentrating, without a device in sight. When they finish reading, they open their notebooks and write a response, by hand. In those first few weeks, I often see students massaging their palms, sore from lack of practice. After they write for five minutes or so, I open a discussion on what we just read and, distraction-free, the students engage.
In those discussions, I love that my students are actually paying attention to one another when they speak. Not everyone of course; some look sleepy and bored, but even that is better than distracted. I call this productive boredom: Without a phone or laptop to divert them, there is little left to do other than sit with their thoughts. What a gift. I ask them, “When was the last time your only task was to think?”
Lessons for the AI invasion
This experiment with a device-free classroom has also shaped my response to the AI revolution (I sometimes think of it more as an AI invasion) that has swept higher education since the debut of ChatGPT in 2022. Like smartphones before them, AI tools are wrapped in revolutionary rhetoric, trying to convince all of us that we’ll be left behind if we don’t drop our old habits overnight and jump on the bandwagon.
I’m not a luddite: I continue to be as curious about new technologies as ever. As soon as it came out, I peppered ChatGPT with questions to see if it could imitate my writing style. (It kind of can!) And I know there’s no going back; whether we like it or not, AI will be a significant presence in our lives, and I see it as my job to teach students how to use it responsibly. In my long journey with tech, I’ve learned that we can incorporate devices into our work without surrendering to marketing hype and manufactured FOMO.
As a writing professor, my job is to convince students that, as William Zinsser wrote, “writing is thinking on paper.” The process of writing — not the final product — is what sharpens our logical reasoning and self-expression. For students who don’t use AI in smart ways, the result is essays that are all product, no process — and no process means no real learning.
In my classes, students glimpse a time before they were born, when fewer distractions inhibited learning, when sitting with one’s thoughts—and, yes, being bored—could be productive and creative. I’m reminded, too, of why I love teaching, for the magic that happens when 20 people sit together in a room attending to one another and talking about ideas.
When we leave the classroom, we’ll go back to our devices, and even to our new AI tools. But hopefully the time away from them reminds us we have the power to keep tech in its place—and gives us a taste of what only human minds can do.
In the spring of 1995 I was too old for summer camp yet too young to be (legally) employed at a crappy seasonal job. I knew I would likely be spending those hot months hanging out at my aunt and uncle's pool with my nose buried in horror novels, mostly of the R.L. Stine and Dean Koontz oeuvre.
At this point in my adolescence, I didn’t fit the mold of the rest of my peers. While they were shooting hoops at the park or riding their bikes throughout the neighborhood, I was absorbing the editorial content and schedules in TV Guide, noting the listings of slasher movies I could record on the family VCR.
And while my classmates obsessed over the teen shenanigans of Beverly Hills, 90210, I found more compelling entertainment in what I thought was more mature fare like David E. Kelley’s Picket Fences and cable reruns of Knots Landing. I also knew I would be spending my summer speculating about the fate of nearly a dozen characters on Melrose Place, the Fox primetime soap that would go on to define ‘90s television for me.
Anatomy of an explosion
On May 22, 1995, Melrose Place ended its third season with one of the biggest cliffhangers of the decade – and one of the most memorable in TV history. It was a delicious convergence of storylines that cemented the show’s legendary status in pop culture.
After two seasons of being betrayed, bothered, and bitchslapped, Dr. Kimberly Shaw had plenty of reasons to hate everyone who resided at the titular apartment complex. Played by Marcia Cross nine years before she moved to Wisteria Lane on Desperate Housewives, Kimberly hated Michael (Thomas Calabro) for driving drunk and getting her into the car accident that ruined her life. She hated Matt (Doug Savant) for conspiring with Michael to hide damning evidence against him, and for literally snatching her wig in front of her colleagues. She hated Sydney, Michael's former sister-in-law, played by Laura Leighton, for making a move on him. She hated Jane (Josie Bissett) after the plan to frame her for Michael's hit-and-run backfired. She hated Amanda (Heather Locklear) for meddling with Michael and stealing Peter's (Jack Wagner) attention away from her. And she hated Jo (Daphne Zuniga) because she gave birth to the baby Kimberly could never have, which subsequently led to Kimberly kidnapping and breastfeeding said infant like Rebecca De Mornay in The Hand That Rocks the Cradle.
Everyone else in the building was just collateral damage.
So what's a gal to do with all that rage while — oh, yeah — being haunted by visions of her mother's rapist whom she killed in self-defense as a little girl? Answer: Strategically plant four firebombs (yes, real bombs) around the building and detonate them, of course!
A delayed explosion
Kimberly’s bombs didn't go off until the fourth season premiere in September, a decision made by the network in response to the sensitivity surrounding the real-life April 19, 1995 bombing of a federal building in Oklahoma City. But what went down 30 years ago was enough to captivate my 15-year-old self.
After waiting four long months, millions of fans and I finally got to watch Kimberly press the button and obliterate half the building, the blast flinging her into the pool like a ragdoll. One life was claimed (Morgan Brittany's Mackenzie Hart), Alison (Courtney Thorne-Smith) was temporarily blinded, and everyone else walked away with minor cuts and bruises, leaving their photogenic features intact.
As for the characters who missed the big bang, there was still plenty of death to deal with. Jake (Grant Show) killed his evil brother Jess (guest star Dan Cortese), pushing him off a building in self-defense, and Matt was framed for the murder of his boyfriend’s estranged wife.
Also killed (arguably): the writers’ ability to concoct more compelling stories throughout the show’s highly demanding 32-episode order per season. Melrose may have jumped the shark with this ratings stunt, but it made for OMG-worthy TV that would have made TikTok gag, had it existed at the time.
An alternate explosion
Melrose’s whopping 32-episode order for each season was a result of “double-ups,” in which the cast and crew shot two episodes simultaneously across multiple soundstages. On the January 5 episode of the Melrose Place recap podcast Still The Place, writer-producer Chip Hayes explained to podcast co-hosts (and Melrose alums) Laura Leighton, Courtney Thorne-Smith, and Daphne Zuniga that double-ups eventually “saved a lot of money because you were still paying rentals on the stages for the same amount of time… so I could split that out between all the episodes, and suddenly I’m banking some money.”
Therefore, by the time writers were plotting the end of Season 3, Melrose had a budget surplus that could afford them a big, stunt-filled finale. In fact, the original plan for the explosive episode featured Kimberly flying a plane into the apartment complex (six years before the tragic events of 9/11). Hayes said, “When [series creator] Darren Star calls me up and says, ‘I wanna fly an airplane into the building and blow it up,’ I’m not going, ‘Yeah, right.’ I said, ‘We can blow it up though,’ and that’s because that money came from my little slush fund from double-ups.”
Taking a plane out of the equation, production built a replica of half of the building in the parking lot of Santa Clarita Studios, where the show regularly filmed. It proved to be their most challenging shoot. “We had done some small explosions on stage with windows blowing out, and we had air cannons blowing Heather [Locklear] off the balcony,” Hayes said. “And then we went outside and blew it up and burned it down… It cut together great.”
After the explosion
Kimberly detonating those bombs in the apartment complex was the moment the show fully embraced its campy brilliance, solidifying itself as pop culture gold in my eyes. At the time, no other drama on TV came close to its over-the-top mayhem, yet the show was never quite seen the same again. The explosion ignited a whirlwind of outrageous storylines in Season 4 that blew up the traditional soap opera mold. Some viewers jumped ship as things got crazier, but I was hooked. I leaned into the madness and stayed with the show all the way through to its final episode in 1999.
As a young TV viewer growing up in New Rochelle, New York, in the ‘90s, Los Angeles was to me the fascinating crown jewel of the West Coast, a glimmering city full of beautiful people and fabulous places where beautiful people mingled with each other and, yes, slept with other beautiful people. This impression was mostly informed by a steady and possibly unhealthy diet of other Aaron Spelling dramas throughout my formative years (Models Inc., Malibu Shores, Pacific Palisades, Titans). But none had affected me as much as Melrose Place; I still remain wary of bomb-happy doctors in wigs concealing nasty head wounds.
Now that I proudly call Los Angeles my home, I can't help but feel a tickle of nostalgia whenever I drive by the real Melrose Place, a two-block strip in West Hollywood that's home to pricey boutiques and coffee shops. Also thrilling is knowing my own residence is a mere 20-minute walk from the Spanish-tiled complex that stood for the Melrose apartments in the show’s exterior shots.
I like to think that 1995 me would be in awe of 2025 me, especially after surviving 23 years of everything this city has thrown at me – personally and professionally. I’m an L.A. veteran now, and I honestly can't imagine living anywhere else. My narrative has had its fair share of twists and turns, tragedies and celebrations. Characters have come in and out of my life; some have been special guest stars while others have gone on to become series regulars. And there will always be a "new season" on the horizon filled with new and exciting developments for me and everyone I care about. One that will probably make us look back at this current storyline and ask, "What were the writers thinking?"
All seven seasons of Melrose Place are currently streaming on Paramount+.
Google is trying out smart glasses again.
At its Google I/O keynote address on Tuesday, Google showed off a product that was referred to simply as "Android XR glasses" in both the presentation and the accompanying blog post. No price or release date was given, so between all of that and the fact that the product doesn't seem to have a real name yet, this could be quite a ways off from being on store shelves.
If you think these glasses are just a ripoff of Meta's Ray-Ban smart glasses, think again. While Meta's specs are really just a camera you wear on your face with zero augmented reality features of any kind, Google's glasses have a full AR UI built into the lenses, which you can use to search for information about things you're looking at or navigate with Google Maps while you're out and about. Google also mentioned that you can use these glasses to message your friends and make appointments. However, it's worth noting that Google described the in-lens UI as "optional" in the blog post. Of course, it's all powered by Google's Gemini AI tech.
The glasses also have cameras, microphones, and speakers built in. The one and only live demo Google attempted during the I/O keynote was for a conversational language translation tool that would theoretically enable two people speaking different languages to talk to each other in person. However, "theoretically" is doing a lot of work there, as the demo failed after a couple of sentences, leading Google to hit the ejector seat before it could really get going.
During this segment, Google also announced that Samsung's Project Moohan XR headset will be the first consumer device to run on the Android XR operating system. Moohan will launch later this year, but other than that, Google didn't say a lot about it. On top of that, Google announced that the XR glasses will be manufactured in collaboration with eyewear brands like Warby Parker and Gentle Monster, so there could be a bit more variety in how they look than with Meta's Ray-Ban line.
Readers with functional memories might remember Google Glass, a similar product Google released to nearly universal derision in 2013. Glass very quickly transitioned from a consumer-focused product to an enterprise device within a couple of years of release, before Google stopped selling it altogether in 2023. It didn't work then, but maybe infusing the Android XR glasses with Gemini AI features will give them a leg up over Glass.
It should also be noted that one of Google's biggest competitors is reportedly interested in developing smart glasses, but today's news would put that company well behind. Apple has been rumored to be developing AR glasses for a while now, but a report earlier this year indicated the project was dead. A different report just last month suggested Apple is still working on smart glasses, but they would be more akin to the Meta device than to Google's new glasses. That would give Google a serious advantage in the market, to say the least.
A Kansas mom is suing several porn sites for violating the state's new age-verification law.
Age-verification laws vary, but they typically require some type of proof of age to enter an explicit site — such as a government ID or facial recognition scan — beyond a "yes or no" pop-up, which relied on the honor system. Since 2022, nearly a third of U.S. states have enacted age-verification requirements for porn sites.
Kansas's law requires any site with over 25 percent of its content deemed "harmful to minors" (as defined in Kansas, nudity and other sexual content) to age-verify site visitors with a commercially available database or "any other commercially reasonable method of age and identity verification." The phrase "harmful to minors" comes from the 1968 Supreme Court case Ginsberg v. New York, which concluded that content that isn't obscene (and is therefore protected by the First Amendment) can still be "harmful to minors."
If someone can access an explicit site in Kansas without age verification, the law states that they can report it to the attorney general, who may seek a monetary penalty from the commercial entity (the website). A parent or guardian can bring a private action against the commercial entity, as well.
By requiring a more rigorous age-verification system, lawmakers and those advocating for these laws are hoping to stop minors from viewing online porn. But a recent study on age verification suggests that it doesn't work for its intended purpose. One reason is that some sites aren't complying, which is the case here.
According to several complaints filed on May 12 against adult sites Chaturbate.com, Jerkmate.com, Techpump (owner of Superporn.com), and Titan Websites (owner of Hentaicity.com), the mother found that her 14-year-old child, identified as "Q.R.," had accessed all of these sites after Kansas enacted its age-verification law in July 2024.
"On August 12, 2024, Q.R. found Jane Doe’s [friend of mother's] old laptop in her closet. She had stored the device there a couple of years ago after purchasing a new laptop and had since forgotten about it. Unfortunately for Q.R., it was still in working condition," the complaints read.
"Q.R., using his mother’s old laptop, had unfettered access to the internet and began searching for hardcore pornography."
Each of the four complaints details the number of instances and the dates on which the teen accessed the site. They go on to say that the 14-year-old has suffered "pain, suffering, disability, disfigurement, and mental anguish; psychological injury; past and future loss of enjoyment and pleasure of living; and past and future expenses of necessary medical care and treatment."
Anti-porn group National Center on Sexual Exploitation (NCOSE) joined the lawsuits representing the plaintiff. NCOSE advocates for age-verification laws and calls them "vital" to protecting children.
"Kansas law requires pornography companies to implement reasonable age-verification methods, and the companies named in these lawsuits failed to do so, resulting in Q.R.'s access to material that is harmful to minors," NCOSE senior vice president and director of its law center, Dani Pinter, said in the group's press release.
The press release states that porn is harmful to children, but the preliminary study out of NYU mentioned above suggests that age verification doesn't stop minors from accessing porn. In addition to a lack of compliance, minors may also use VPNs to appear to be in a location outside the jurisdiction. Free speech advocates Mashable has previously spoken to warn that age verification can also degrade internet security and privacy. And according to the sexual freedom nonprofit the Woodhull Freedom Foundation, minors aren't accessing porn at unprecedented levels anyway.
These new laws are also supported by conservatives seeking to ban porn altogether. Project 2025, the far-right policy blueprint for Trump's second term, calls for a ban on pornography and the imprisonment of its creators. One of the writers of Project 2025, Russell Vought, was caught on a secret recording last year calling age verification the "backdoor" banning of porn. (Vought is now director of the Office of Management and Budget.)
Unlike sites such as Chaturbate, Pornhub has blocked access from Kansas and other states, saying that age-verification laws are a burden. In the recording, Vought said that when these laws pass, "The porn company then says, 'We're not going to do business in your state.' Which of course is entirely what we were after, right?"
This slew of complaints follows a January lawsuit Kansas Attorney General Kris Kobach filed against an owner of 13 porn sites, claiming it also violated the state's age-verification law.
The same month, the Supreme Court heard Free Speech Coalition v. Paxton, a case about the constitutionality of Texas's age-verification law. The decision will likely come next month.
Google's latest shopping feature makes Cher Horowitz’s computerized closet a reality. The new virtual try-on tool within its "AI Mode" search option lets users see how outfits look on photos of themselves.
Announced during the opening keynote at Google I/O 2025 on Tuesday, the tool uses a new custom image-generation model to place clothing pictured in online product listings onto a full-length shot provided by the user. Per a company blog post, the model "understands the human body and nuances of clothing — like how different materials fold, stretch and drape on different bodies." According to Google, it will also be able to accommodate different poses.
Google introduced a similar virtual try-on tool for Search back in 2023, but it relied on pre-picked photos of various models.
How to use Google's "AI Mode" to try on clothes virtually
U.S. users who have joined Google's Search Labs testing platform will be able to use the try-on tool starting today. Those who have opted in will see a "try it on" icon overlaid on Google product listings for shirts, pants, skirts, and dresses. After you tap the icon, Google will have you upload a well-lit, full-body picture of yourself. (A how-to page says the tool works best if you're wearing fitted clothing in the picture.) Google's AI will then use you as a virtual mannequin of sorts.
Google says it will roll out additional AI shopping features in the coming months, including a personalized "Shopping Graph" of product inspiration and an agentic checkout feature that can help users track deals within their budget.
Google has injected AI features into practically all of its products, now including Chrome.
At Google I/O, the tech giant's annual event, Google announced that Gemini is coming to Chrome as it transitions into the generative AI era, antitrust issues be damned. Gemini's integration with the browser means users can ask questions about information on sites, or even navigate to those sites, while browsing the web.
Gemini on Chrome will be available to Chrome users on Windows and macOS, but only for paying subscribers to Google AI Pro and AI Ultra plans, which cost $20 and $250 a month, respectively.
Meanwhile, Google is in the remedies phase of its antitrust case, which the U.S. Department of Justice is prosecuting. Google was ruled an illegal monopolist in search, and the fate of Chrome is yet to be decided; one potential remedy would force Google to sell the browser. If that happens, OpenAI has said it would be interested in buying it, which adds a whole new AI-powered layer to Chrome's future.
Google announced Gemini integration with Chrome at I/O, along with other Gemini updates, including a new AI filmmaking tool called Flow and Gemini Live, which is free for Android and iOS users.
Google just rolled out a product that might make coding a lot easier.
Google introduced Jules, its AI coding tool, in Google Labs in December. Today, Jules is available without a waitlist to everyone, everywhere the Gemini model is available.
"Just submit a task, and Jules takes care of the rest — fixing bugs, making updates. It integrates with GitHub and works on its own," Tulsee Doshi, the senior director and product lead for Gemini Models, said at Google I/O 2025. "Jules can tackle complex tasks in large codebases that used to take hours, like updating an older version of Node.js. It can plan the steps, modify files, and more in minutes."
According to a Google blog post, Jules is an "asynchronous, agentic coding assistant that integrates directly with your existing repositories. It clones your codebase into a secure Google Cloud virtual machine (VM), understands the full context of your project, and performs tasks."
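To make that "updating an older version of Node.js" task concrete, here is a hypothetical before-and-after sketch in TypeScript. It is not actual Jules output, just the kind of mechanical modernization such a task tends to involve: swapping Node's long-deprecated url.parse() for the WHATWG URL API.

```typescript
// Hypothetical illustration of a "modernize old Node.js code" edit.
// This is not Jules output; it only sketches the kind of change described above.

// Before: the legacy url.parse() API, deprecated in current Node.js releases.
import { parse } from "url";

export function getPathnameLegacy(rawUrl: string): string | null {
  return parse(rawUrl).pathname;
}

// After: the WHATWG URL API that modern Node.js recommends instead.
export function getPathname(rawUrl: string): string {
  // A base URL is needed when rawUrl may be a relative path such as "/search?q=ai".
  return new URL(rawUrl, "http://localhost").pathname;
}
```

Multiply an edit like this across a large codebase and the appeal of an agent that can plan the steps, modify the files, and hand the result back through GitHub becomes obvious.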
How to sign up for Jules right now
To try Jules out for yourself, you can sign up at jules.google. Click "Try Jules" in the top right corner to create your own account.
Google isn't the first tech giant to build an AI coding tool. Coding is one of the more impressive capabilities of AI so far, as rival tools from the likes of Anthropic have shown.
At Google I/O, the company unveiled a major push to infuse AI deeper into Gmail. The tech giant is promising users that the change will translate to less time spent writing emails and more time doing literally anything else. With the help of Gemini, Google’s flagship AI model, Gmail is moving from a helpful assistant to a full-on writing partner, scheduler, and inbox manager.
Here are all the new Gemini-powered features coming this summer to Gmail.
Smart replies
Google’s first big update is personalized smart replies. Unlike the canned one-liners we’re used to (“Sounds good,” “Thanks!”), this new system draws context from your inbox and Google Drive. It adapts to your tone — whether you’re casual or formal — and pulls in relevant details, so replies feel more natural.
In a demo, a dog groomer named Stephanie replies to customers without typing a word. Gemini generates answers in her voice, grabs appointment details from past messages, and inserts pricing from her Drive files. It’s fast, frictionless, and a little uncanny.
Decluttering
Next comes inbox cleanup, the kind of task we all put off. Gemini can now archive and delete emails on command. Ask it to clear out last month’s confirmations, and your inbox refreshes itself — no clicks required.
It’s undeniably useful, especially for anyone buried in years of digital clutter.
Meeting scheduling
Depending on who you ask, native appointment booking in Gmail might be the most practical new feature announced. No more bouncing between email threads and Calendar tabs — now, when Gemini detects you're trying to set up a meeting, it’ll prompt you to insert availability directly into your reply.
This feature could be handy when coordinating with people outside your organization. Recipients can book directly via your shared booking page.