Gene Policinski, inside the First Amendment
Gene Policinski is chief operating officer of the Newseum Institute and senior vice president of the Institute’s First Amendment Center. He can be reached at email@example.com.
As we hurtle through the innovative and endlessly updated second decade of the 21st century, the prospects seem brighter and better than ever that our new web and social media tools will help us better communicate and more effectively confront serious challenges like terrorism.
But then, there are the reminders that the Algorithmic Age is still in its infancy and that all the programming in the virtual world sometimes falls short of good old human brainpower. And therein are the early warning signs that tech companies need to take free expression rights into consideration in the inevitable — and perhaps even desirable — tilt toward artificial intelligence over human “editors” controlling the flow of information.
Why not just use people instead of machines to oversee our posts, tweets, website content and such? ISIS is a good example of why not. The terror group is locked in a running battle with social media sites as it works to promote itself to the current and next generations of young people.
Hundreds of thousands, perhaps millions of bits of propaganda have been tossed into the internet information flow of billions of images, messages, rants and raves. Recruiting videos, images of beheadings, even a slick feature film threatening Twitter CEO Jack Dorsey and Facebook founder Mark Zuckerberg, are among the social media posts by ISIS and its offshoots.
The response to the persistent and global electronic tactics by these inhumane criminals requires constant sifting through the billions of messages, posts, sites and images that make up the World Wide Web — and that requires algorithmic surrogates to constantly prowl the internet.
Earlier this year, Twitter announced it had eliminated more than 125,000 accounts linked to ISIS. Facebook has deleted posts and blocked accounts. Google and its subsidiary YouTube have aggressively moved to block content submitted by the extremists. Hence, the video threat days later from ISIS aimed at Dorsey and Zuckerberg.
But with the good comes the bad — or at least actions that are not in keeping with the web’s promise of free expression for all. Machines and methods are only as good as the people who create and instruct them, and technology alone does not guarantee freedom.
For example, you might have seen the brief international flap over an automated decision by Facebook to ban a Pulitzer Prize-winning photo of a young girl, naked and facing the camera, running down a road. The image — posted by several Norwegians — was removed because it violated the social media behemoth’s rules on nudity and child pornography.
If you viewed the photo through the lens of a mechanical eye, case closed. Full-frontal nudity, perhaps even child porn. Check. Delete.
Except that the image was photographer Nick Ut’s Pulitzer Prize-winning 1972 photo of nine-year-old Phan Thi Kim Phuc, screaming as she ran, burned in a napalm attack by South Vietnamese forces.
As Facebook COO Sheryl Sandberg admitted in a Sept. 10 letter to Norway’s prime minister about Facebook restoring the photo on its pages: “We don’t always get it right.”
Sandberg explained that the photo was restored because of its “global and historical importance,” even though on the surface, the photo conflicted with “global community standards.” Sandberg added that “screening millions of posts on a case-by-case basis every week is challenging. Nonetheless, we intend to do better.”
Well, that’s good — but not a guarantee.
Facebook and the U.S.-based social media community are not bound by the First Amendment. As private companies, they have the right to make their own decisions on overall standards. In any case, the amendment applies only in the United States, home to just a fraction of the global community now engaged in instant interaction. And the insistence by Google, Facebook, Twitter and others that they are merely “technology” companies would seem to argue that content considerations are not their domain.
Still, it’s incumbent on the titans of social media to “do better” on considering and defending free expression. The tremendous impact on our lives elevates them to “quasi-government” status, where core freedoms must be protected. A report by the Pew Research Center and the Knight Foundation found that Facebook and Twitter are now seen as a prime news provider by 63 percent of their audiences.
Real governments are now turning to social media companies to help combat terrorism. But there are concerns that blocking tactics will have negative impacts. Eliminating images posted by terrorists might also eliminate the true shock and horror the civilized world might need to experience to fully appreciate the depravity of its enemies. Attempts to remove all ISIS recruiting videos might leave opponents unable to discuss fully or oppose effectively this misbegotten use of the web. And such tactics could even hamper the boots-on-the-ground work of anti-terrorist forces by pushing would-be ISIS advocates off common screens and into less-traceable means and methods.
Human editors have always had to strike a balance between reporting the news we need to know and being manipulated by media-savvy groups with non-news motives, political or social. But that balance historically tilted toward more “news” and information rather than less of both.
As social media operations increasingly deploy cyber editors to make those same decisions, users in their “communities” ought to insist that somewhere in those zillion bits of code and autonomous commands is at least the electronic spirit of the 45 words of the First Amendment.