Stay at Home Dad’s “Life Ruined” After Google Flags Photos of His Sick Son as “Child Abuse”

The New York Times reported on the incident involving a man in his 40s who was reportedly investigated after he had taken a series of pictures of his son’s genitals in February 2021 in order to keep track of the progression of a serious-looking infection the child had developed. File photo: TY Lim, Shutterstock, licensed.

SAN FRANCISCO, CA – A San Francisco man claims that his life was “ruined” after Google’s AI flagged a series of photographs that he had taken to document his toddler son’s illness for his doctor during the COVID-19 pandemic – when remote contact with medical professionals was commonplace – as child sexual abuse, with a subsequent investigation following after the internet giant reported him to authorities.

The New York Times reported on the incident involving a man in his 40s, identified only as “Mark,” who was reportedly investigated by police after he had taken a series of pictures of his son’s genitals in February 2021 in order to keep track of the progression of a serious-looking infection the child had developed.

In the midst of the pandemic, virtual doctor’s consultations often supplemented in-person office visits for safety reasons, and at the recommendation of his doctor’s nurse, Mark obliged and uploaded a series of photos of his son’s infection to share with his doctor via Google’s cloud service. The doctor remotely diagnosed the infection based on the photos and prescribed antibiotics that cleared up the condition; at that point, Mark thought the matter was over and done with.

However, due to the nature of the pictures – and the fact that Mark’s hand could be seen in one of them – the artificial intelligence governing Google’s cloud flagged the content and disabled Mark’s account for “harmful content” that could be related to “a severe violation of Google’s policies and might be illegal.”

Google’s AI is programmed to scan for content that could be considered child sexual abuse material (CSAM); in the instance that such potential content is allegedly detected, a human content moderator would then review the offending material in question, determine if it violates the law, and then – as required – refer the matter to the authorities.

“I knew that these companies were watching and that privacy is not what we would hope it to be,” Mark said. “But I haven’t done anything wrong.”

Mark, who has worked in the Information Technology industry, suddenly found himself locked out of all of his Google-related services – including his emails, contacts, documents, and phone – bringing his professional life to a screeching halt. He attempted in vain to appeal the decision, but Google wouldn’t budge. Shortly afterwards, the San Francisco Police Department began an investigation into him.

After several months, Mark was cleared of all wrongdoing, but by that point law enforcement had combed through his internet searches, location history, messages, and every photo and video he had digitally stored on Google’s cloud. Despite being exonerated, Google not only refused to reinstate Mark’s account, but informed him that it would be permanently deleted; Mark considered legal action, but decided the price tag was not worth it.

Mark’s photos were not considered illegal or exploitative by the police; nonetheless, Google’s AI – which Jon Callas of the Electronic Frontier Foundation said is “intrusive” and lacks the refinement to discern CSAM from harmless content – not only put a massive dent in Mark’s professional life, but nearly cost him custody of his son.

“A family photo album on someone’s personal device should be a private sphere,” Callas said. “This is precisely the nightmare that we are all concerned about. They’re going to scan my family album, and then I’m going to get into trouble.”
