
Decision-Making: Re-evaluating Your Inner Algorithm

If you are active on social media, you may have noticed that your home screen displays content that aligns with your unique preferences. From one user to another, the content displayed and promoted can be vastly different. Rather than marketing to a broad demographic, social media platforms use technology to create a tailored experience for each user based on their likes and dislikes. Each app’s algorithm collects data on your usage – noting patterns of behavior based on your interests, who you follow, and the type of content you interact with – and then uses this information to decide what to show you.
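To make that concrete, here is a minimal sketch of how such a preference-learning loop might work. Everything in it is an illustrative assumption rather than any platform’s actual system: interactions bump a per-topic weight, and candidate posts are ranked by those weights.

```python
from collections import defaultdict

class FeedRanker:
    """Hypothetical, highly simplified feed ranker -- real platforms
    are far more complex and proprietary."""

    def __init__(self):
        # One weight per topic, learned from the user's interactions.
        self.topic_weights = defaultdict(float)

    def record_interaction(self, topic, strength=1.0):
        """Note a like, follow, or click on content about `topic`."""
        self.topic_weights[topic] += strength

    def rank(self, posts):
        """Order candidate posts by how much the user has previously
        engaged with their topics."""
        return sorted(posts,
                      key=lambda p: self.topic_weights[p["topic"]],
                      reverse=True)

ranker = FeedRanker()
ranker.record_interaction("cooking")
ranker.record_interaction("cooking")
ranker.record_interaction("politics")

feed = ranker.rank([{"id": 1, "topic": "politics"},
                    {"id": 2, "topic": "cooking"},
                    {"id": 3, "topic": "sports"}])
print([p["id"] for p in feed])  # cooking ranks first: [2, 1, 3]
```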

However, these algorithms are not always accurate. Using data to create shortcuts and make assumptions opens the door to mistakes. For example, in 2016, Microsoft launched a Twitter account under the name “Tay,” designed to use data collected from other Twitter users to “learn” how to speak conversationally. The project went off course less than 24 hours after launching, when Twitter trolls flooded the bot’s timeline with racist, misogynistic, and anti-Semitic tweets. Tay unknowingly absorbed this material and was soon posting offensive messages from its own account, and the project had to be scrapped.

Much like apps such as Facebook, Amazon, and Instagram, we use the information we collect from our world and our experiences to make decisions. And much like these apps, the way we make decisions can be riddled with errors. We rely on strategies called heuristics: mental shortcuts that help us make quick judgments, especially when we have incomplete information. However, these shortcuts are highly susceptible to bias, shaped by our own experiences rather than grounded in fact and logic. One well-documented example is the availability heuristic: we tend to overestimate the likelihood of dramatic events, such as terrorism and murder, because when they happen, they dominate news coverage, while we underestimate the impact of more common threats, such as stroke or diabetes, because they are less vivid and dramatic. Thus, a single, emotionally charged event can continue to sway our views and opinions, even after we are presented with a wealth of mundane data pointing to the opposite conclusion.
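A few lines of code can show how a judgment weighted by media coverage drifts away from one weighted by actual frequency. The numbers below are purely hypothetical placeholders, not real statistics; only the shape of the gap matters.

```python
# Purely illustrative numbers -- not real statistics.
# Each risk: (annual deaths per 100k, relative media coverage).
risks = {
    "terrorism": (0.01, 50.0),   # rare but heavily covered
    "homicide":  (6.0,  30.0),
    "stroke":    (45.0,  2.0),   # common but rarely front-page news
    "diabetes":  (25.0,  1.0),
}

def share(values):
    """Normalize a list of values into proportions of the total."""
    total = sum(values)
    return [v / total for v in values]

actual = share([freq for freq, _ in risks.values()])
perceived = share([coverage for _, coverage in risks.values()])

for name, a, p in zip(risks, actual, perceived):
    print(f"{name:9s}  actual share {a:5.1%}   media-weighted share {p:5.1%}")
```

Run it and terrorism jumps from a fraction of a percent of actual risk to a majority of the “available” impression, while stroke and diabetes all but disappear, which is exactly the distortion the heuristic produces.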

To avoid mistakes like these, the algorithms that control your online content are constantly collecting new information and factoring it into their calculations, learning about who you are every minute of the day so that the content they choose aligns as closely as possible with your true interests.
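As a rough sketch of what “constantly adjusting” can mean, here is one assumed design, not any specific platform’s method: each new interaction fades the old weights slightly, so recent behavior counts for more and stale preferences decay.

```python
from collections import defaultdict

class AdaptivePreferences:
    """Hypothetical sketch: topic weights that favor recent behavior."""

    DECAY = 0.99  # assumed: fraction of each weight kept per new interaction

    def __init__(self):
        self.weights = defaultdict(float)

    def update(self, topic, strength=1.0):
        # Fade every existing weight a little, then boost the fresh signal,
        # so the profile tracks who you are now, not who you used to be.
        for t in self.weights:
            self.weights[t] *= self.DECAY
        self.weights[topic] += strength

prefs = AdaptivePreferences()
for _ in range(100):
    prefs.update("cooking")        # an old, heavy habit...
for _ in range(20):
    prefs.update("woodworking")    # ...and a newer interest catching up
print(sorted(prefs.weights.items(), key=lambda kv: -kv[1]))
```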

So I challenge you to ask yourself: when was the last time you re-evaluated your inner algorithm? Think about your longest-held assumptions.

What new information might you incorporate to challenge those assumptions?

From where are you collecting your data?

Do you have the complete picture?

What are the consequences of making an incorrect assumption?

Although the goal of these algorithms is to help computers learn from humans, there are also lessons humans can take from the way computers operate. Too often, we make choices simply because that is the way it has “always been done.” We find comfort in our long-held ideals and beliefs and fail to incorporate new information into our decision-making process. Only by taking a step back and recognizing that, just like computers, our decision-making process may be faulty can we begin to correct our mistakes.

Author

  • Kelsey is a Consultant at CMA, where she has been serving clients since 2018. Kelsey received her master’s degree and PhD in Industrial-Organizational Psychology from the University of Oklahoma, along with a minor in Quantitative Psychology.
