Cases of Google accounts deleted due to CSAM false positives skyrocket on Reddit
The number of users reporting permanent closure of their Google accounts for alleged violations related to child sexual abuse material (CSAM) has risen noticeably. A quick search on Reddit turns up dozens of recent cases where people claim to have completely lost access to Gmail, Drive, YouTube and other Google services after syncing family photos, medical images or even artificial intelligence datasets.
While it is impossible to verify every individual testimonial on Reddit, and they should be treated with caution, the growing volume of reports is alarming. Most of these cases seem to arise after bulk syncs that include downloaded memes, personal files in which the account owner’s children appear, or completely legal content that Google’s system misinterprets. Presumably, these are false positives from Microsoft’s PhotoDNA algorithm, which Google uses to detect this type of material.
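Hash-based detectors of this kind work by computing a perceptual fingerprint of each image that survives resizing and recompression, then comparing it against a database of hashes of known material; images that are visually close to a hash in the database can fall within the match threshold even when they are entirely innocent. PhotoDNA itself is proprietary, so as a rough illustration only, here is a minimal average-hash sketch in pure Python that shows how two unrelated images can collide:

```python
# Minimal average-hash sketch. This is NOT PhotoDNA (whose algorithm is
# proprietary and far more sophisticated); it only illustrates why
# perceptual matching can flag innocent images.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string
    with 1 where a pixel is brighter than the image mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# A "known" image and a slightly brightened copy of it.
original = [[10, 200], [220, 30]]
brightened = [[14, 205], [224, 35]]
# An unrelated image that happens to share the same light/dark layout.
unrelated = [[0, 255], [255, 0]]

h0 = average_hash(original)
print(hamming(h0, average_hash(brightened)))  # 0: correctly matched
print(hamming(h0, average_hash(unrelated)))   # 0: a false match
```

The brightened copy hashes identically to the original, which is the intended behavior; but so does the unrelated image, because the hash only captures coarse brightness structure. Real systems use much larger hashes and tuned thresholds, yet the same trade-off between robustness and false positives remains.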
The problem is not new. In 2022, The New York Times documented two verified cases of parents whose accounts were permanently closed after they photographed their young children’s genitals on medical advice. In San Francisco, a father named Mark photographed his son’s swollen genital area at the request of his pediatrician’s nurse ahead of a video consultation during the pandemic. Two days later, his account was permanently deactivated. He lost emails, contacts, documentation from his son’s early life, and his Google Fi phone number, which blocked him from accessing other digital accounts that relied on security codes sent to that number.
In Houston, another father lost his account after photographing his son at the direction of a pediatrician to diagnose an infection. The images were automatically synced to Google Photos and sent to his wife via Google Messages. The couple was in the process of purchasing a home when the account was deactivated, further complicating the situation. In both cases, police investigated and determined that no crime had been committed, but Google kept the accounts closed despite appeals accompanied by police reports certifying the users’ innocence.
A recent case on Reddit involves a developer who lost his account for using an AI training dataset to build Punge, an on-device NSFW content detection app. The flagged image turned out to be simply a woman’s leg. Another user reported losing access after using Google’s Nano Banana image generator to recreate a viral TikTok trend in which people hug younger versions of themselves. None of these cases involve actual CSAM.
The cloud is convenient, but never forget that it is someone else’s computer
Beyond the veracity of each individual case reported on Reddit, the situation prompts an important reflection on our dependence on cloud services. Google, in its official statement, says that “human reviewers also play a critical role in confirming matches and content discovered through AI.” However, the documented cases suggest that these controls fail more often than is acceptable.
The recommendation is clear: keep local backups of everything you consider important in Google Drive, Photos or any other cloud service. The cloud is convenient and useful, but it will always be someone else’s computer, subject to their algorithms, policies and errors. No user should lose years of emails, family photos, work documents or access to critical services because of a false positive with no real possibility of appeal. Taking control of your own data through local storage and regular backups is the only way to ensure that an algorithmic error doesn’t erase your digital life overnight.
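As a concrete starting point, the idea above can be sketched in a few lines of Python: mirror a local export (for Google data, the export itself would come from Google Takeout) onto a second drive, skipping files that are already backed up. The paths are placeholders, and dedicated tools like rsync or rclone do this far more robustly; this is only a minimal illustration of the principle:

```python
# Minimal incremental-backup sketch. Paths below are hypothetical examples;
# real backups should use a dedicated tool (rsync, rclone, etc.).
import hashlib
import shutil
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def backup(source: Path, dest: Path) -> list:
    """Copy files from source to dest, skipping unchanged ones.
    Returns the relative paths of the files that were copied."""
    copied = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source)
        target = dest / rel
        if target.exists() and file_digest(target) == file_digest(src):
            continue  # identical copy already backed up
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, target)  # copy2 preserves timestamps
        copied.append(str(rel))
    return copied

# Hypothetical usage: point these at a Takeout export and an external drive.
# backup(Path("~/takeout-export").expanduser(), Path("/mnt/backup/takeout"))
```

Run on a schedule, a script like this (or, better, an established sync tool) keeps an independent copy of your data that no remote algorithm can take away.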
The Apple paradox: sued for the opposite problem

The situation at Google stands in ironic contrast to the lawsuit West Virginia just filed against Apple. While Google faces criticism for scanning too much and generating false positives that destroy innocent users’ accounts, Apple is being sued for not scanning enough, allegedly allowing iCloud to become a safe haven for illegal material.
Apple abandoned development of its CSAM detection system in 2022 after strong backlash from privacy advocates, who argued that any backdoor, however well-intentioned, could be abused. It now faces legal consequences for that decision. Between the two extremes, the technical and ethical dilemma remains unresolved: is there a solution that effectively balances child protection with privacy, without destroying the digital lives of innocent users in the process?
