Google’s latest attempt to integrate generative AI into our living spaces, Gemini for Home, is generating headlines, though perhaps not the kind Google intended. Rolling out as part of Google’s paid Home subscription, the feature promises smart daily summaries and conversational insights drawn from your Nest camera footage. However, early reports from users suggest the system frequently misidentifies common household events, even reporting deer or nonexistent people, raising doubts about its accuracy.
The new feature uses the Gemini AI model to process video clips, generating summaries and answering questions through an “Ask Home” chatbot. According to reports, it handles basic tasks like creating automations well. However, its video identification is proving unreliable, sometimes leading to genuinely unsettling notifications.
Google Gemini for Home misidentifies dogs as deer, accuracy in question
As noted by Ars Technica, a recurring issue involves the AI reporting fictional events. One user was alerted that, “Unexpectedly, a deer briefly entered the family room.” In reality, the camera was simply looking at a dog. This isn’t a one-off mistake; the system frequently confuses dogs and shadows with “deer,” “cats,” or even “a person” roaming around an empty room.

Source: Ryan Whitwam (Ars Technica)
The problem is particularly jarring when Gemini mislabels security-critical events. An alert that “A person was seen in the family room” can cause genuine alarm, only for the user to check the feed and find absolutely nothing. After a few false positives, users quickly learn to distrust the system. These failures defeat the entire purpose of a security monitoring system.
Google attributes these errors to the nature of large language models. The company explains that Gemini can make “inferential mistakes” when it lacks sufficient visual detail or basic common sense. Curiously, the AI is reportedly excellent at recognizing car models and logos, yet it struggles with the simple, everyday context that a human observer would never miss.
The question of launch readiness
For a security monitoring system to be genuinely useful, these inferential errors need to become far less frequent. Google says it is “investing heavily in improving accurate identification” and encourages users to correct the model through feedback. For now, though, the core issue remains.

Source: Ryan Whitwam (Ars Technica)
It is surprising that Google chose to launch a premium feature that requires this much hand-holding to work correctly out of the box. Gemini for Home is a new product, and it will likely improve significantly as the company gathers more data and refines the model. However, releasing a security feature that effectively cries wolf about intruders risks eroding user trust from day one and makes it difficult to justify the $20-per-month Advanced subscription. Users may find the $10-per-month tier, which offers less video history but skips the unreliable AI features, a much smarter bet for now.