The Ethics of Choice Architecture in 2019: Issues and Solutions
Every day humanity generates 2.5 quintillion bytes of data. It’s hard to fathom how any one person could manually organize this vast collection in search of valuable information. To solve this issue, IT specialists invented handy helpers, such as recommender engines. They help consumers choose what to watch, read, and buy; however, many question the ethics behind this choice architecture, as it came to be known. Can this technology be intentionally misused?
How Recommender Algorithms Work
To understand the possible issues this technology may create, let’s have a quick look at how it works.
There are two commonly used types of recommender systems: content-based and collaborative filtering. The former relies on explicit content attributes a user chooses (genre, year, type, etc.), while the latter draws on a user’s more indirect interactions (ratings, purchases, friends’ interests). Both require building a user profile, and the more data a profile contains, the more tailored the recommendations become.
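To make the collaborative filtering idea concrete, here is a minimal sketch of a user-based approach over a small explicit-ratings matrix. The data, function names, and weighting scheme are illustrative assumptions, not a description of any particular production system.

```python
import numpy as np

# Illustrative user-item rating matrix (rows: users, columns: items);
# 0 means "not rated". All values here are made up.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two users' rating vectors."""
    norm = np.linalg.norm(a) * np.linalg.norm(b)
    return (a @ b) / norm if norm else 0.0

def recommend(user_idx, ratings, top_n=2):
    """Score unrated items by similarity-weighted ratings of other users."""
    target = ratings[user_idx]
    sims = np.array([
        cosine_similarity(target, other) if i != user_idx else 0.0
        for i, other in enumerate(ratings)
    ])
    scores = sims @ ratings / (sims.sum() or 1.0)  # weighted average rating
    scores[target > 0] = -np.inf                   # skip already-rated items
    ranked = np.argsort(scores)[::-1]
    return [i for i in ranked if np.isfinite(scores[i])][:top_n]

print(recommend(0, ratings))  # items user 0 hasn't rated, best match first
```

A content-based engine would instead compare item attributes (genre, year, type) against the profile. Both variants become more accurate, and more data-hungry, as the profile grows, which is precisely where the ethical questions begin.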
The key point to keep in mind about recommender systems is that they are AI applications built on mathematical logic; they do not account for human psychology.
Although such systems can filter out content undesirable for various age and social groups, they are not completely reliable in ethical terms. They have been criticized over data gathering and privacy, and over irrelevant or inappropriate recommendations, from extremist content to marketing ploys. All this creates challenges that need to be overcome.
User Privacy at Stake
Thorny issues arise right from the point of creating a user profile. To receive effective recommendations, a user has to provide detailed personal information online, which raises concerns about privacy and profile protection.
Recommender systems often gather data through cookies or form filling. Although users’ permission is obtained, the ethics of such consent is debatable. Sites notify their users about cookies and about data collection and storage policies, but users rarely read the terms and conditions in full because of their complexity and length. As a result, they struggle to understand exactly which information is being collected, for what purpose, and how the process can be stopped.
To address this, companies need to examine closely how they present data gathering to their customers, how they make data protection information more accessible, what benefits end users receive, and how users can have their data deleted.
While it’s generally recognized that such data can help users navigate a site with ease and make purchases based on their preferences, it can also be exploited by scammers for far less helpful purposes. Therein lies the second challenge for companies: protecting their users from data exploitation.
The Facebook scandal, which came to light in 2018 when it emerged that the data of 87 million users had been shared with Cambridge Analytica, a consulting firm that assisted Donald Trump’s 2016 presidential campaign, highlights this issue. Some experts claim the data may have influenced the election results.
Nearly every mobile application now requests access to certain personal data in order to operate. Users need to feel assured that data protection is a priority, and companies need to demonstrate this when gathering confidential information. That means taking security seriously and employing specialists to ensure information is properly encrypted and protected in line with regional regulations.
Recommendation Glitch
Recommender engines rely on complex algorithms, and their output is hard to predict even for data specialists because of the sheer number of possible outcomes. This is why the quality of the information analyzed, and of the resulting recommendations, should be closely moderated by human supervisors.
Recommender systems can’t stay ‘neutral’ in their recommendations: they have to rank items and surface the best match. This can lead a system to recommend controversial content, such as inappropriate or extremist YouTube videos. New York Times journalist Kevin Roose recently addressed the issue of YouTube radicalization, arguing that the platform’s algorithms may be partly responsible for steering young people toward far-right extremism.
Another side effect of recommender engines is the personalization of news content. If it’s applied too heavily, the phenomenon of ‘filter bubbles’ emerges: the system wraps each user in an informational bubble, offering only ‘relevant’ content and producing one-sided news and political propaganda. For example, people who support the Democratic Party will receive more suggestions matching their preferences and fewer diverging recommendations, reinforcing bias.
Finally, recommender engines do not weigh the accuracy of information. If you go looking for “proof” that the Earth is flat, you’ll find dozens of articles and videos supporting the idea. The internet is full of conspiracy theories and false claims, and recommender systems are not designed to rule them out; moderation and validation still demand a human touch.
While they cannot be responsible for the entirety of the internet, companies employing recommender systems can take charge of developing their internal systems appropriately. This may mean banning certain words or phrases in search (a minimal sketch of such a filter follows below), giving users unbiased choice and the ability to manage search through filters, or even removing recommendations altogether in favor of a default search mode, a kind of ‘clean slate’.
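As one illustration of the word-banning idea above, here is a minimal sketch of a post-filter that drops candidate recommendations containing blocked terms. The term list, item structure, and function name are hypothetical placeholders, not any platform’s actual moderation pipeline.

```python
# Hypothetical blocklist; a real deployment would maintain and review
# this list with human moderators.
BLOCKED_TERMS = {"blocked-term-1", "blocked-term-2"}

def moderate(recommendations):
    """Keep only items whose titles contain none of the blocked terms."""
    return [
        item for item in recommendations
        if not any(term in item["title"].lower() for term in BLOCKED_TERMS)
    ]

feed = [{"title": "Cooking Basics"}, {"title": "blocked-term-1 rally"}]
print(moderate(feed))  # [{'title': 'Cooking Basics'}]
```

Keyword filters are blunt instruments, of course, which is why the human moderation discussed earlier still matters.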
Marketing Choice Manipulations
Some businesses see recommender engines as a winning method of promoting their products and generating revenue.
However, recommender systems do not always work in the customer’s favor. For example, if you choose a high-priced item, an Apple Mac for instance, you may be steered toward other above-mid-range items in the future. This practice is known as price steering, a close relative of price discrimination, and it presents a real ethical problem for customers.
For the business, this is a disadvantage too: customers are never shown the full range of products, so the selection available to them is limited.
Possible Solutions to Ethical Issues
So, what can businesses do to enhance users’ trust and avoid any type of discrimination? Let’s take a brief look.
- Businesses should enhance user data protection. This may involve stricter regulation and data storage policies, making users more aware of how their information is used.
- Businesses should supervise AI-powered recommendations. This covers excluding potentially discriminatory or controversial results, advancing auto-moderation, and enhancing AI software to make better decisions based on available data.
- Businesses should make recommendations as transparent as possible for users. For example, Amazon offers simple explanations of why it recommends an item, such as ‘customers who bought this item also bought...’ (a minimal sketch of this co-occurrence approach appears after this list).
- Businesses should make consent clearer. Users should be able to confirm their participation in various kinds of online experiments in advance and have the possibility of opting out anytime they want. This also applies to data gathering and processing.
- Businesses should provide the option of non-algorithmic choice. Users should be able to change the search and matching settings in their profiles and adjust the system to their needs, for example, by indicating if and when the recommender system is not working for them.
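As referenced in the transparency point above, ‘customers who bought this item also bought...’ suggestions can be explained plainly because they can rest on simple co-occurrence counts. Here is a minimal sketch under that assumption; the basket data and names are invented for illustration, not Amazon’s actual system.

```python
from collections import Counter
from itertools import combinations

# Illustrative purchase histories; product names are made up.
baskets = [
    {"laptop", "mouse", "laptop sleeve"},
    {"laptop", "mouse"},
    {"mouse", "mousepad"},
    {"laptop", "laptop sleeve", "usb hub"},
]

# Count how often each pair of items appears in the same basket.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1

def also_bought(item, top_n=3):
    """Items most often purchased together with `item`, best first."""
    paired = Counter()
    for (a, b), n in co_counts.items():
        if a == item:
            paired[b] += n
        elif b == item:
            paired[a] += n
    return [other for other, _ in paired.most_common(top_n)]

print(also_bought("laptop"))  # e.g. ['laptop sleeve', 'mouse', 'usb hub']
```

Because the logic is just a count, the explanation shown to users is honest and easy to verify, which is exactly the kind of transparency this list calls for.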
Employing Recommender Systems: The Final Word
For companies seeking to employ recommender systems, whether for an online apparel store or a VoD service suggesting the next big blockbuster, it is vital to understand how these systems work, how to make them meet end users’ needs, and how to address the ethical concerns they raise.