By Bartlett D. Cleland
Both state and federal legislators are actively pushing legislation designed to “protect our children online.” Of course, there is already a federal Children’s Online Privacy Protection Act that regulates the online collection of personal information from children under 13 years of age so strictly that social media sites largely disallow those under 13 in the first place.
So the people we’re really talking about are teens, and the legislators who want government to step in and limit their time online, or cut off their access to social media entirely, by requiring social media sites to implement age verification.
But these proposals go well beyond a self-declaration of age like one might find on a liquor company’s website. Rather, the legislation would impose legal liability and large fines on companies that do not block teens from opening a social media account, or from being online at all, unless they can prove parental consent.
Often the sponsors assert that “we” should know if “kids” are online. Fair enough: the adults enabling that access should certainly know whether their teens are online and what they are doing. But rarely are proponents of age verification forthright about what requiring age verification, and knowing who is online, actually involves.
There are moral, ethical, constitutional and even patriotic arguments against these age verification schemes, but the operational challenges alone are worth noting. Because you cannot verify age without verifying identity, the first task a company faces is determining that an account is being set up for a teen.
Data and proof will have to be collected to verify the identity of the teen who wants the account and of the adult who is required to approve it. Evidence will be needed to document the relationship between the teen and the adult (parent, guardian, teacher, etc.), along with proof that the adult has the authority to approve the teen’s account. Then consider the complications of divorce and custody, or teens in foster care. The documentation required will be extensive and intimate.
What about adults who simply want to open their own account? They will have to prove their identity to prove they are adults. No more anonymity. Think that’s a great policy goal? Perhaps for those who aren’t victims of stalking or an abusive marriage, or people trying to start over while reconnecting with supportive friends.
How will a company avoid liability if the data is correct but the submitter is gaming the system? Perhaps a biometric identifier will be needed, such as an eye scan or a fingerprint on the device, so that the person claiming to be an adult can be verified as a specific adult.
Of course, all the collected data will have to be retained at least until a statute of limitations runs out, just to protect the company from legal liability. Do we really want all that information about adults and their teens stored in a giant database?
These proposals seem non-serious; perhaps the legislators and organizations peddling such intrusive government across the country hope people just won’t think through the details. The shift toward heavy regulation and liability might lead some to wonder whether the real motive is to drive up costs for social media companies so that weaponized government can put them out of business.
Protecting kids is a worthy goal for us all, but there are ways to pursue it that don’t require sacrificing our liberties and violating our privacy, a harm in itself, including for our teens.
This TechByte was written by Bartlett D. Cleland, research fellow with the Institute for Policy Innovation.