Peter Julian, the New Democrat member of Parliament for New Westminster, recently tabled a bill targeting the algorithms he suspects encourage online extremism. B’nai Brith’s annual audit of antisemitic incidents, released recently, found that three-quarters of last year’s incidents took place online. And, as the Centre for Israel and Jewish Affairs notes in its Not On My Feed campaign, “Online hate leads to real-world violence.”
Few people would disagree that online hate and incitement are problems. How we confront them – that’s where we get into the weeds.
It is possible to control what people read on the internet – countries like China and Iran have demonstrated that, in anti-democratic and oppressive ways. Democracies like Canada should not join the high-tech book-burning that is internet censorship by government. Governments and regulatory bodies do have a role, however: setting parameters for acceptable online behaviour, and enforcing them to the extent possible, is a task authorities must take on. Staffing limitations are obviously a challenge, but a few precedent-setting cases could send a message to others.
Social media behemoths like Facebook and Twitter should take action where they can to delete the most dangerous incitement. These corporations have proven themselves unwilling or unable to do so. Governments need to incentivize vigilance by making inaction financially unsustainable. In Germany and France, for example, social media platforms have 24 hours to take down hate speech or risk fines. Likewise, internet service providers (ISPs) have a responsibility to monitor the independent sites hosted on their networks, the places where hate groups recruit and train.
Interventions like these are important, but of limited impact. For example, ISPs operate in every corner of the globe, and every country has different rules around online content. Even Canada and the United States – countries perhaps as comparable as any two on earth – have dramatically different ideas about limitations on freedom of expression.
Attacking online hate and incitement is a perpetual game of Whac-A-Mole. However, just because a task is difficult does not mean we should shy away from it. On the contrary. We must do more of what is difficult.
We are a mere two generations into a connected civilization. We are still babes in the online woods. Yet, in many ways, we behave as though we are in the world we once knew.
We are no longer in a world of three TV networks and two daily papers. We are on a planet of nearly eight billion people – and anyone with an internet connection can reach audiences exponentially larger than those of the most powerful voices of a century ago.
It is simply not possible to effectively police online content – though we are correct to monitor and identify the worst of the worst.
Democratic societies that seek social peace and coexistence must strive toward two things. First, we need to empower individuals and organizations to counter untruths with truths. We must make it as easy to access the facts as it is to stumble upon misinformation and disinformation.
Google News, for one, has added a fact-checking section to its search results pages. The site Snopes.com provides a compendious analysis of online truths and fictions (although it has had its own veracity issues, including a plagiarism controversy in 2021). On issues of antisemitism, a veritable constellation of organizations exists to identify and correct misinformation, including HonestReporting, the Committee for Accuracy in Middle East Reporting and Analysis (CAMERA) and the Middle East Media Research Institute (MEMRI).
But we also need to attack this problem at the other end, on the “consumer” side. We must do a better job of educating and equipping people in democratic societies to critically discern fact from fiction, news from commentary, legitimate criticism from unfounded bias and hate. Parents and others are already concerned that education systems are not effectively teaching what are collectively, if imprecisely, called “the basics.” Anyone asking teachers to also become instructors in the complexities of media bias and online incitement will run up against existing calls for more life-skills training, more “three Rs,” more economic literacy, more mathematics and science, more physical fitness, and so on. There is only so much that can be fit into a six-hour school day.
We live in a time and place where one of the most watched “news” networks routinely feeds falsehoods to viewers, even when the cost of doing business is a legal settlement of $787 million. Those lies led to an insurrection that tested the strength of American democracy more than perhaps anything since that country’s Civil War. That was an early warning signal for every democracy about the price of disinformation. We cannot simply hope this problem goes away, because what is likely to go away in such a scenario are our most cherished societal values.
We must do more of everything we are already doing. We must confront and contest the lies and hatred online (and in other media). We must allocate our philanthropic funds to organizations that counter lies and incitement. We must include, everywhere we can – in formal and informal educational settings – lessons on distinguishing facts from falsehoods.
In an online world where conflict and hatred get algorithmic kicks to the front of the line, we must teach the young (and the older and less tech-savvy) to value that which unifies and enriches. In the simplest formulation, we need to remind our children, our grandchildren and ourselves of that old truism: don’t believe everything you hear or read.