Babysitters Beware: AI Technology May Soon Be Lurking on Your Social Media

We’ve heard that artificial intelligence — human-like intelligence exhibited by machines — is getting into just about everything. The latest AI target? Babysitters.
Predictim, an AI service, offers background checks and personality assessments to families looking for just the right babysitter. The system scans publicly available online data about an individual, often from social media platforms, to assess risk factors such as “bullying” and a “bad attitude.” That’s right, this AI bot is checking out your Instagram and Facebook feeds for photos and posts that indicate whether you are fit to look after a family’s kids. Facebook, Instagram and Twitter recently blocked Predictim for violating the tech firms’ rules on data harvesting and user privacy, but the company has vowed to continue its service.
How Clean Is Your Digital Footprint?
Can sophisticated computer technology really understand your online presence in the spirit in which you posted it? Teens are wondering. “It’s intrusive in a way because they look into your life,” says Delaney Finn, a 16-year-old from Michigan, U.S., who has babysat for five different families in the last two years. Though Finn says she uses her social media conservatively and is careful about what she says online, she worries that technologies like Predictim “can’t discern different things that an actual person could” if they were to be used more regularly to assess babysitters. “They’re not accounting for the mistakes and the humanistic aspect of jobs, and I feel like that’s something that is lost when you have AI do the work.”
According to a recent Washington Post article, parents can purchase a Predictim scan starting at $24.99. With the consent of the scanned individual, the system uses his or her name, email address, and access to social media accounts to craft an assessment of the individual’s entire digital history. Predictim executives told the Washington Post that the company uses “language-processing algorithms and an image-recognition software known as ‘computer vision’” to evaluate candidates. The company’s technology generates a “risk score” based on social media activity that ranks a person’s attitude and their likelihood of engaging in online harassment or abusing drugs.
For teenage babysitters, this new technology is yet another reason to mind your digital footprint. While services like Predictim can benefit parents who want to go deeper than an in-person interview and a quick Google search on a potential babysitter’s history, some teens wonder whether AI belongs in babysitter hiring decisions at all, and whether it can make accurate assessments about who they are as people. In the Washington Post article, Jessie Battaglia, a mom using the Predictim service to find a babysitter for her 1-year-old, said that she believes “social media shows a person’s character.”
While Finn has never experienced a client asking to run a scan on her background or social media profiles, she says she has a close friend who also babysits and was asked for her social media handles during an interview. The friend obliged, and the family didn’t find anything amiss.
“In some ways, the AI technology is valid and understandable, as parents are worried,” Finn adds. “But with testing personality based off of something on social media, it should be more interpretive than something that is summed up in a composite score like an SAT or ACT.”
It’s All About the Chemistry
Alexandra Clugh, 16, is another teen babysitter from Michigan, and she feels that AI would be a net negative in the job-search process. Clugh regularly babysits three special-needs kids between the ages of 8 and 12, and has been babysitting for her younger brother and cousins since she was 12.
Clugh says that she is fine with some of her babysitting clients following her on social media because she doesn’t have “anything to hide.” However, she finds the prospect of AI technology like Predictim being used for job interviews “kind of scary, considering all the things it can find, but also the things it can’t tell because it’s not an actual person.”
“If the parent is worried about letting someone into their home and giving care to their child, background checks really can help and at times might be necessary depending on the situation,” Clugh says. “But the personality analysis is what I’m not a fan of.”
Clugh maintains that babysitting is often about the in-person chemistry between the babysitter, the child, and the parent, which she believes cannot be reliably tested through an online personality scan or risk-factor assessment. She got to know the families she babysits for through a summer program where she interacted with their children every day. The parents saw the relationship she built with their kids and, ultimately, hired her.
If asked to consent to an AI scan like Predictim in the future, Clugh says that she would most likely decline and take her babysitting job search elsewhere. “I don’t want AI to determine what my ability level [is], when I myself have a résumé that I’d say is kind of impressive,” she notes. “If that’s not enough — if my past experience isn’t enough to show that I’m willing to work and that I will work well with children — then I’ll just move on to a different family.”
How do you feel about Predictim’s AI technology? Do you think it’s too invasive or that it provides a valuable service to parents?
Have you ever been asked by a potential babysitting client to submit to a Predictim scan? Share your story in the commenting section of this article.
Do you agree that “social media shows a person’s character”? Why or why not?
I think this is one of the most fascinating uses of AI in everyday life that I have come across. It almost seems too good to be true. It’s hard for me to wrap my head around the idea that intangible, non-human software can assess someone based on their digital footprint. I agree that a program like Predictim can be helpful in identifying hateful comments made by a prospective babysitter, and a simple background check can help parents spot obvious red flags. However, I don’t think this logic translates seamlessly into real life. Someone might present themselves as an angel online, while their true colors only come out in person. It’s hard for AI software to pick up on that nuance, since it lacks the mindset of a human. Therefore, I maintain that AI’s capabilities are worth using, but the results should be taken with a grain of salt. No online risk detector can replace an in-person interview, especially when you are choosing the right babysitter for your kids.
Additionally, it’s important to note that Predictim has other limitations. People with little or no online presence will be overlooked; even a qualified babysitter might be unfairly passed over in favor of someone with a stronger profile. AI is also prone to mistakes, especially when it tries to interpret social media content. Online platforms are confusing enough as it is, full of slang and satire, and AI could easily misread a post without knowing the context or the intentions of the person who uploaded it.