Google’s Unintentional Radicalization

Emily Hering
4 min read · Dec 27, 2020
Photo of a tilted screen displaying Google's home page and its search bar.
Photo by Christian Wiediger on Unsplash

In recent discussions of how the internet, particularly search engines, can fuel misinformation, a controversial issue has been whether tech monoliths like Google contribute to the ever-growing divides between populations along political, racial, religious, and social lines. Some argue that search engines like Google create an echo chamber-like environment in which information is not neutral but shaped by how the user searches, effects that can be seen in products like Google News and Google Search. While search engines have rapidly revolutionized how people obtain information, they hinder democracy and deepen the divide between populations by narrowing what information is deemed worth knowing.

Google, to many people, is an enigmatic being that seemingly knows all, allowing a user to find anything they want, whenever they want, in just a few clicks and keyboard taps. As a user types into Google’s search bar, they are presented with suggested endings to their query, or auto-suggestions. For example, when a user begins typing “is,” they are offered queries like “Is Google a search engine,” “Is google a verb,” and “Is virgin media down,” reflecting what other users frequently search for. How Google’s algorithm actually works, however, is a mystery, if not nonsensical, to many, even its own engineers; the only way to study it is to deconstruct its output. As Google’s vice president of news, Richard Gingras, acknowledges, “It’s a live and vibrant corpus that changes every day.”

Google’s auto-suggestions may help many users find what they are looking for faster, but they can also surface morally questionable inquiries. In a 2018 Wired article, Issie Lapowsky typed “Islamists are” into Google, and the auto-suggestions included “Islamists are not our friends,” “Islamists are coming,” “Islamists are evil,” and many others, highlighting the xenophobic nature of its auto-generated search terms. I typed the same starting words, “Islamists are,” into Google to see if anything had changed since 2018; the top result was “Islamists are gaslighting us,” with the same 2018 suggestion, “Islamists are coming,” just underneath.

Screenshot of Google’s search bar showing the autocomplete results for “Islamists are.”

Out of curiosity, I sought out Google’s autocomplete policies to see what actions, if any, have been taken to weed out harmful or abusive suggested searches, and found: “Our systems generate these predictions based on a variety of factors, including how often others have searched for a term.” While Google is not entirely absolved of its misleading search suggestions and their results, the algorithm it employs is not sentient. It cannot have thoughts or make moral decisions about what people are searching. It calculates its results from what its users have been searching for, compiling a list of the most popular queries, and in doing so reinforces, and even suggests, the racist and xenophobic sentiments its users type in.
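To make that mechanic concrete, here is a toy sketch of frequency-based autocomplete. Everything in it is hypothetical, assuming only what Google’s policy page states: that suggestions are ranked in part by how often others have searched a term. Nothing here is Google’s actual system.

```python
from collections import Counter

class ToyAutocomplete:
    """A minimal frequency-based suggester: an illustration of the
    general technique, not Google's actual implementation."""

    def __init__(self):
        self.query_counts = Counter()

    def record_search(self, query: str):
        # Every submitted search bumps that query's popularity.
        self.query_counts[query.lower()] += 1

    def suggest(self, prefix: str, k: int = 4):
        # Rank matching queries purely by how often they were searched.
        # Nothing in this loop evaluates whether a suggestion is harmful.
        prefix = prefix.lower()
        matches = [(q, n) for q, n in self.query_counts.items()
                   if q.startswith(prefix)]
        matches.sort(key=lambda pair: pair[1], reverse=True)
        return [q for q, _ in matches[:k]]

ac = ToyAutocomplete()
for q in ["is google a search engine"] * 5 + ["is google a verb"] * 3:
    ac.record_search(q)
print(ac.suggest("is"))
# ['is google a search engine', 'is google a verb']
```

Because popularity is the only signal in this sketch, a surge of users typing a hateful phrase would push that phrase to the top of the suggestions; keeping it out would require a separate, deliberately designed filtering layer.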

As I suggested earlier, some argue that search engines like Google can help create and reinforce an echo chamber-like environment for their users, showing them only what is pertinent to them and nothing more. While scanning a newspaper like The New York Times or the Mercury News, readers are bound to find articles that do not interest them, but with Google News, all of the news presented to a user is relevant to them. This phenomenon aligns with the concept of the “Daily Me,” the growing news personalization that websites like Google are helping to create. The phrase was coined by Nicholas Negroponte, co-founder of the Massachusetts Institute of Technology’s Media Lab. Google’s aggregated news service quickly falls into this hyper-personalized phenomenon, showing a highly curated selection of news articles from around the web.

This can easily fall into the same trap as Google’s autocomplete: the results become a direct reflection of what a user is searching for. I typically search for things relating to Animal Crossing, 90 Day Fiancé, and Taylor Swift, so the majority of the articles on my Google News homepage reflect those interests. The same happens to a user who might be a right-wing extremist: they could search for conspiracy theories, and their Google News would fill with posts from Alex Jones, right-wing media outlets, and the like. This passive filtering-out of whatever does not interest a user is only reinforced by the rise of social media. The hyper-personalization of news and media prophesied by Negroponte’s “Daily Me” is becoming a reality on Google News. By tracking what users search for and surfacing only the articles and websites deemed relevant, Google creates an echo chamber for each user, squelching diversity of media sources and viewpoints.
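A similarly simple sketch shows how personalization narrows a feed. The scoring below is a hypothetical “Daily Me” model, assuming only that a feed ranks articles by their overlap with a user’s past search terms; it is not Google News’s algorithm. Two users with different histories end up with no articles in common.

```python
def personalized_feed(search_history, articles, k=3):
    """Rank articles by overlap with a user's past search terms.
    A hypothetical 'Daily Me' model, not Google News's algorithm."""
    interests = set(word for query in search_history
                    for word in query.lower().split())

    def score(article):
        # Count how many of the article's topic tags match past searches.
        return len(interests & set(article["topics"]))

    # Articles that match nothing in the history sink to the bottom.
    return sorted(articles, key=score, reverse=True)[:k]

articles = [
    {"title": "Animal Crossing winter update", "topics": ["animal", "crossing", "games"]},
    {"title": "Taylor Swift releases evermore", "topics": ["taylor", "swift", "music"]},
    {"title": "Election conspiracy theory spreads", "topics": ["conspiracy", "election"]},
    {"title": "Local school board vote", "topics": ["election", "local"]},
]

me = ["animal crossing island ideas", "taylor swift evermore"]
extremist = ["election conspiracy proof", "conspiracy theories 2020"]

print([a["title"] for a in personalized_feed(me, articles, k=2)])
print([a["title"] for a in personalized_feed(extremist, articles, k=2)])
```

Each feed is a closed loop over that user’s own history; nothing in the ranking ever introduces a source or viewpoint the user has not already sought out.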

Google’s search engine, even as it has replaced the libraries and databases of yesteryear, has both revolutionized and radicalized what people search for on a daily basis. Rather than the unbiased, all-knowing oracle users imagine it to be, the media monolith maintains inequalities like racism and xenophobia in its own algorithms, unintentionally condoning the unjust phrases and terms users search for. While Google’s search engine and other products may have revolutionized how users find information, its algorithm hinders access to truly unbiased information, widening the divide between its users and the population at large.


Emily Hering

Media Studies student at the University of San Francisco.