In many ways, 2016 was a political turning point. That year the wave of populism achieved two significant victories: the Brexit vote and Donald Trump’s victory in the US presidential election. Panic spread through the socio-political establishment of liberal democracy. Answers, explanations, and culprits were sought for what was seen as the twilight of the democratic political order and the beginning of the post-truth era. Eyes turned to social media platforms, primarily Facebook, then Twitter and YouTube, which were accused of enabling the mass distribution of false information that undermines the integrity of the public sphere, and with it the democratic process. These opportunities were exploited by populist and authoritarian-minded politicians, who purposefully created and spread false and manipulative content. After 2016 there was a sharp rise in awareness of the social, political, and cultural power of social media, which over the previous decade had gradually developed from internet niches into unavoidable communication and economic infrastructures. Platforms can no longer be understood as neutral technological instruments, but as specifically configured technological systems that determine the daily information offerings of billions of users, and behind which lie the interests of some of the world’s richest companies.
By platforms I mainly refer to a few large platforms oriented towards the public broadcasting of thematically diverse information: Facebook, Instagram, X (formerly Twitter), and YouTube. By false information I mean both disinformation (deliberately false) and misinformation (unknowingly false). Platforms hold the legal status of intermediaries and deny any editorial role, which means that de iure they are not responsible for and do not influence the content they host. Fierce criticism of their permissive attitude towards false information has resulted in attempts at regulation. The first approach is self-regulation, in which the platforms themselves design and implement methods intended to suppress false information and to inform users about its dangers and how to recognize it. Some countries (e.g. France and Germany) have opted for direct regulation, attempting to establish limited legal responsibility of platforms for the content they publish by introducing an obligation to remove certain illegal or fraudulent content, with possible penalties if this obligation is not fulfilled. So far, the consistency with which these laws are enforced varies considerably, both between and within countries. The United States, on the other hand, where all the major platforms are headquartered, has no regulation against false information and generally lags behind the rest of the world in the field of platform regulation (although there have been attempts to pass federal laws, one currently underway).
A watershed moment came in 2018, when the European Union took several important steps in regulating internet communication. In a communiqué, the European Commission recognized the spread of false information via online platforms as one of the key challenges facing modern democracies. In the same year the Commission, in cooperation with representatives of the platforms and the online advertising sector, adopted the Code of Practice on Disinformation, a key document defining self-regulatory standards. The Code was accepted by all major platforms, which thereby promised to act against false information through several general measures: removing and demonetizing accounts that spread disinformation, increasing the transparency of advertising, giving greater visibility to credible sources, cooperating with researchers by providing them access to platform data, and regularly reporting to the EC on the measures taken. Additional measures were agreed in 2020 during the COVID-19 pandemic, when the platforms met with the World Health Organization and agreed on an approach to fighting the "infodemic". For the first time, platforms began to independently determine what counts as a credible source, a move that once again attracted criticism and warnings that the self-regulatory approach gives platforms unjustified power over the flow of information.
The shortcomings of this model soon became clear. To remedy them, the Code of Practice on Disinformation was updated in 2022 with the aim of being integrated into the Digital Services Act (DSA), a new EU regulation that becomes fully applicable in 2024. Along with strengthening previous obligations, the regulatory novelties include greater transparency of recommender systems, cooperation between platforms and fact-checkers, a more robust system for monitoring the implementation of the Code and the DSA, and, in case of non-compliance, the possibility of a fine of up to 6% of a platform’s global annual turnover. The system of self-regulation has thus evolved into a system of co-regulation: under the new Code and the DSA, a whole range of stakeholders, such as state authorities, civil society associations, research institutions, and media organizations, participate in implementation and supervision. But there is also an element of direct regulation: although the Code remains nominally voluntary, with the DSA and the possibility of fines it effectively becomes mandatory.
The reason for the intensification of EU regulation is the poor performance of the self-regulation model. The agreed measures against false information were often not implemented consistently: removal or flagging of content was missing or came too late, user reports of questionable content were left unprocessed, content that did not actually violate the rules was removed, platforms submitted scant reports on the measures they had implemented, and so on. To be fair, directly suppressing false information is a technically and organizationally demanding job that is impossible to carry out consistently given the enormous daily traffic of the platforms. Human moderators can cover only a fraction of the content, and automated detection is prone to errors.
However, there is another important reason for the shortcomings of measures against false information: it is simply not in the platforms’ interest to implement them. Their business models depend on maximizing the flow of information exchange and consumption, regardless of whether that information is true. The point is to offer as much content as possible, delivered precisely to users, and precisely the content that algorithmic recommenders judge the user is most likely to consume. Users are encouraged to consume as much content as possible and to participate in the platform’s activities, which is accompanied by the collection of large amounts of data on user behavior and characteristics. The more a user participates, the more is learned about their preferences. This data is fed back into the algorithmic system, which then delivers desired content with even greater precision. User data is collected not only for delivering content, however, but also for delivering advertisements: the more data, the more accurate the advertising. This is the essence of the platforms’ business model, earning them the vast majority of their income from advertising; Meta, the company that owns Facebook, Instagram, and WhatsApp, makes as much as 97% of its revenue this way. Truth is completely irrelevant to this cycle of information; what matters above all is its greatest possible intensity. Since profit-oriented platforms are completely dependent on this cycle, it is understandable that they have no interest in implementing a system that would hinder it in any way (and false information has repeatedly proven to be very attractive content). Accordingly, platforms regularly resist and lobby against any regulation, while agreeing to self-regulation to head off potential future obligations and restrictions. Under the pretext of defending free speech, they claim that they do not want to act as arbiters of truth.
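The structure of this feedback loop can be stated in a few lines of code. What follows is a minimal, purely illustrative sketch, not any platform’s actual system: the item names, the numbers, and the epsilon-greedy scoring rule are all hypothetical, and real recommenders are vastly more complex. Its only purpose is to show that when the objective is predicted engagement, truth has no slot in the update rule.

```python
import random

# Hypothetical content catalog; the value is each item's hidden "clickiness".
# Note that nothing anywhere encodes whether an item is true.
catalog = {"factual_report": 0.3, "meme": 0.6, "outrage_bait": 0.9}

# The platform's learned engagement estimates, initially uninformed.
estimate = {item: 0.5 for item in catalog}

def recommend():
    """Mostly exploit the item with the highest estimated engagement,
    occasionally explore (a simple epsilon-greedy policy)."""
    if random.random() < 0.1:
        return random.choice(list(catalog))
    return max(estimate, key=estimate.get)

for _ in range(10_000):
    item = recommend()
    clicked = random.random() < catalog[item]            # simulated user reaction
    estimate[item] += 0.05 * (clicked - estimate[item])  # feedback update
    # The same interaction log would also feed ad targeting:
    # more engagement means more data means more precise advertising.

print(estimate)  # the most engaging item ends up recommended most often
```

Run long enough, the loop converges on whatever maximizes clicks; adding a veracity check would only lower the very objective the system is built to maximize, which is the structural conflict of interest described above.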
Regardless of claims that the problem can be solved by self-regulation, periodic explosions of false information, manipulation, and general communication chaos prove that the platforms are either not up to the task or not interested in it, or both. The latest episode of informational collapse occurred in October of this year, with Hamas’ attack on Israel and the ensuing military retaliation. People flocked to social media to follow live updates on the conflict. Instead, they were met with a flood of contradictory, dubious, and flat-out incorrect information. Part of this was, of course, information warfare by both sides, but the overall chaos cannot be explained without the structural properties of the platforms, which enable and encourage the rapid spread of new information, false or true. News commentary quickly revived the discourse on the demise of social media. What once offered a connection to global communication flows, delivering the latest information from all corners of the world, is now barely usable. The problem is not just a poor supply of useful information but a general decline in service quality: the content has degraded into misinformation, internet scams, and hidden advertising, while the user experience suffers from confusing interfaces, aggressive advertising, the locking of once-free features behind subscriptions, and so on.
Although the discourse on the decline of social networks is by no means new, nor is the conflict between Israel and Hamas the first war to cause confusion on the platforms, the current situation resonates differently once certain contextual factors are taken into account. The first is economic conditions and their consequences. Platform companies experienced enormous growth during the COVID-19 pandemic, when many communication and social activities moved onto the platforms. As the pandemic subsided and human activity returned to the physical world, the platforms’ economic growth plummeted. In addition, this year’s inflation and the consequent rise in interest rates cut off the flow of cheap capital on which the previous expansion depended. As expected, downsizing followed. On the surface of the platforms, this means channeling users towards profitable activities (to the very noticeable detriment of the user experience); internally, the companies reorganized, which included sweeping layoffs. In that wave of layoffs, the departments in charge of safety, ethics, and the suppression of false information were decimated at almost all platform companies. With reduced capacities, platforms are partly returning to their earlier permissive approach to false information. X (formerly Twitter) has gone so far as to withdraw from the EU Code, and its new owner, Elon Musk, is actively involved in spreading false information himself. While X’s situation can be explained by Musk’s eccentric and personalized approach to platform governance, Facebook and YouTube are also loosening the reins: Facebook has decided to allow political ads claiming that the 2020 US presidential election was rigged, and YouTube has announced it will no longer remove such claims.
In the future, we can expect open conflict between platforms and regulators. It is, after all, an unusual step to relax measures against false information just as the most comprehensive regulatory framework to date, with the heaviest fines, comes into force. The European Commission wasted no time in using a new tool provided by the DSA: Meta, X, and TikTok received formal requests for information from the Commission, requiring them to explain what measures, consistent with the DSA, they are taking to suppress illegal content and false information surrounding the conflict between Israel and Hamas. The Commission and other regulators have also repeatedly expressed concern that the platforms are not taking their responsibilities seriously in the context of the upcoming elections. Indeed, 2024 is truly a 'super-election' year, in which more than 30 countries, including the USA, India, Mexico, the United Kingdom, and Croatia, will hold some kind of general election, totaling more than 2.5 billion voters. Elections have proven to be the most fertile ground for disinformation, which has become a common tool for gaining campaign advantage. False information on social networks is considered a first-class political problem, not only among politicians but also among voters themselves. Next year, like 2016, could prove politically decisive. It remains to be seen how hard regulators, armed with new legal and supervisory tools, will press platforms to act against false information; how far the platforms will back down in the face of pressure and what measures they will take; and whether we will see a series of informational disasters in the run-up to the elections and whether, and how much, they will affect the results themselves. These and many other questions remain open for now, and the answers to them will determine the future configuration of power in the politics of truth.
But another open question is worth mentioning: what will be the impact of generative artificial intelligence (AI)? Models for generating images, sound, and text have been available to the public for over a year and are still being developed, as are tools for training 'homebrew' models. Although existential fears of a techno-apocalypse should certainly be tempered, the question of social, economic, and cultural consequences remains. Not much can be inferred from the broader patterns of public discourse about artificial intelligence; there, AI is regularly understood as a major game-changer for human civilization, although the path to that outcome is rarely clearly explained. Part of this is certainly commercial hype, an attempt to present AI as powerful and inevitable, and therefore a desirable product and investment. It is difficult to predict which practices of AI use will ultimately take hold and what their consequences will be, but one thing is certain: AI is already capable of automatically producing huge amounts of textual, visual, and audio information that has no basis in the real world, and it cannot by itself judge the truth of any of it.
In other words, AI is the perfect tool for producing false information, and it is certain that it will be used for this purpose. How, of course, remains an open question; malicious use is currently, like legitimate use, in an experimental stage. Some AI-generated disinformation has appeared in the context of the Israel-Hamas conflict, but only marginally. Perhaps, though, its main use will not be the production and distribution of specific false content, i.e. the pushing of a particular narrative. It is more likely to serve as a 'flooding the zone' technique, in which the communication field is saturated with such an overwhelming amount of disparate information that it becomes impossible to form any judgment about current events. The use of AI for publishing could be similar: if the cost of producing content falls to almost nothing, at least compared to human production, the internet could become flooded with low-quality generated content that, again, has no basis in reality (because by definition it cannot), but is oriented exclusively towards satisfying the incentives and signals of the advertising system, i.e. towards maximizing the monetization of cheaply produced content. In other words, the signal-to-noise ratio in the internet’s communication system could shift decisively in favor of noise.