New research commissioned by the Irish Council for Civil Liberties (ICCL) and Uplift found that almost three-quarters (74%) of people in Ireland believe Big Tech algorithms that select and insert content into users’ feeds should face stricter regulation. Additionally, over four-fifths (82%) support forcing social media companies to stop collecting specific data on users’ sexual desires, political and religious views, health conditions, or ethnicity and using that data to recommend videos. The research was conducted by Ireland Thinks using a representative sample of 1,270 people of varying ages, incomes, education levels, and regions across Ireland.
These findings come after Ireland’s new online regulator, Coimisiún na Meán, released draft rules stating that social media video platforms like YouTube, Facebook, and TikTok should turn off by default any Big Tech algorithms, also known as recommender systems, that are based on intimate user profiling. Advocates argue these algorithms push harmful content at teenagers, drive online addiction, and fill personalized feeds with disinformation and hate, solely for profit.
According to experts, Big Tech’s algorithmic “recommender systems” select emotive and extreme content and show it to the users the system estimates are most likely to be outraged. Those users then spend more time on the platform, which lets the company make more money showing them advertisements. Here is an opinion piece we found of interest relating to Ireland’s proposed regulation of Big Tech algorithms.
The EU should support Ireland’s bold move to regulate Big Tech
In an opinion piece “The EU should support Ireland’s bold move to regulate Big Tech” for The Hill, Zephyr Teachout and Roger McNamee, opinion contributors, examine a powerful proposal to address the problems of algorithmic amplification. Ireland set up Coimisiún na Meán, a new enforcer, this year to set rules for digital platforms. It has proposed a simple, easily enforceable rule that could change the game: All recommender systems based on intimately profiling people should be turned off by default.
In practice, that means that the platforms cannot automatically run their algorithms that use information about a person’s political views, sex life, health or ethnicity. A person will be able to switch an algorithm on, but those toxic algorithms will no longer be on by default. Users will still have access to algorithmic amplification, but they will have to opt in to get it.
The authors believe the proposal comes in response to the Dublin riots a month earlier, the worst civil unrest Ireland has seen in decades. The violence erupted in response to unfounded far-right online conspiracy theories about threats to children’s safety. As with the January 6th insurrection in the United States, the disturbances appear to have stemmed directly from misinformation spread across social media platforms like TikTok, YouTube, and Instagram. There is an irony in the fact that some of the tech companies that own these networks reportedly maintain European headquarters in Dublin largely for tax benefits, while their platforms fueled chaos in the same city.
The authors believe that we cannot trust technology companies to regulate themselves in the public interest. Nor do they want the government in the business of sorting through what is and is not harmful if amplified. The Irish model, they suggest, offers a way forward: content-neutral rules that give users control of one critical aspect of their online experience. Read more on The Hill.
Disclosure: Fatty Fish is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
The Fatty Fish Editorial Team includes a diverse group of industry analysts, researchers, and advisors who spend most of their days diving into the most important topics impacting the future of the technology sector. Our team focuses on the potential impact of tech-related IP policy, legislation, regulation, and litigation, along with critical global and geostrategic trends — and delivers content that makes it easier for journalists, lobbyists, and policy makers to understand these issues.
- The Fatty Fish Editorial Team, January 19, 2024