December 8, 2022
The first phase of the internet brought us information and services. The second aimed to connect us through social media. TikTok is the forerunner of a new phase of social media, one that, if not regulated, will pose a fundamentally different threat to societies and democracies than the social media companies we know today.
With TikTok, there is little pretence at being a “social” media platform that connects you with friends and family. Instead, an endless stream of videos from people you don’t know is algorithmically selected to maximise view time and engagement. Short, memetic, addictive.
TikTok is different from its peers in four key ways. It is:
In essence, these four characteristics give TikTok the potential to be used as an extraordinary behavioural modification tool: one with a billion users worldwide, growing faster than any other social network, and under the influence of the CCP. And it is only six years old.
Much of the current debate about regulating the internet, now happening in capitals around the world, was conceived before the TikTok era and is hence ill-equipped to deal with the threats that TikTok and its successors and imitators pose.
But by understanding a few simple principles we can create a new paradigm for regulation that will go some way to reducing the harms of behavioural modification engines such as TikTok, while protecting some of the key pillars of democracy and free speech.
The range of internet services we use is vast. Yet the conversations around privacy, competition between companies, disinformation and online safety share, at their heart, essential principles that we would do well to keep in mind as we design the next generation of internet regulation:
These three principles taken together form the basis of a regulatory framework that will enable the benefits of the internet to be shared more equally, and the harms ameliorated.
Simplifying hugely, there are three stakeholders in the story.
So we have three fields of action: privacy, data access or disclosure, and regulation. Data flows between the three stakeholder groups, and the key principle for a large part of future tech regulation should be to change the nature and direction of these flows.
Firstly, privacy. Some of the current harms caused by tech companies’ products and services are underpinned by wholesale invasions of user privacy. The move from privacy-invading to privacy-protecting design in the products and services of tech companies is the bedrock of a more sustainable information ecosystem. While regulations to this end have been passed in some parts of the world (the EU’s GDPR, for example), a lack of enforcement capability has meant that the benefits to citizens have yet to fully materialise.
Secondly, data access. The most advanced proposals for opening up the black box of the tech companies, so that researchers can evaluate their products and services, are again in the EU’s Code of Practice on Disinformation and its interaction with the DSA. Yet even here the tone of the conversation is still “data access by request.” Researchers must specify the data they wish to receive from each tech company; the research proposals and the researchers themselves must be vetted by an intermediary body; and only if both are deemed suitable is the request passed to the tech company, which must then fulfil it. While this is an improvement on the current system, under which tech companies are under no obligation to share data, the proposals do not go far enough. Researchers do not know what they do not know: without knowing what data is available, it is impossible to specify every data set that will be required to fulfil a specific research brief.
TikTok is a textbook case. So little is known about how TikTok works, and so little data comes out of the company, that it is hard for researchers even to know what data to request.
The paradigm needs to shift from this “access by request” to “disclosure by default.” In this paradigm, much of the data tech companies hold on the products and services they produce is shared by default, with appropriate safeguards for privacy. Access to this much wider pool of data would allow researchers to properly evaluate the potential harms caused.
Such data access is also crucial to the third field of the new paradigm: regulation of the tech industry. New regulations such as those in the EU’s DSA and Code of Practice on Disinformation require independent third parties to audit whether the regulated tech companies have indeed met their obligations under the DSA rules. If the data is not available and tech companies are able to “mark their own homework,” as happened under the first iteration of the Code of Practice on Disinformation in 2018, no real progress on reducing societal harms will be made.
Those third parties that also assess content, such as GDI, the Journalism Trust Initiative, NewsGuard and the Media Ownership Monitor, can provide data signals that could form part of the “quality signal” used in search, recommender and news feed algorithms, among others. This would avoid the potential conflicts of interest, real or perceived, inherent in relying on the tech platforms themselves to judge whether the content from which they seek to make money is harmful.
So a new paradigm is required, one that moves societies, and the tech companies that intermediate between citizens and the world, from the current model of “privacy violations and data access by request” to one of “privacy by design and data disclosure by default.”
Forty years ago, seat belts were optional extras in car design, and there was no consistent legal requirement to wear them across the world. Today cars cannot be manufactured or sold without them, and the majority of the world’s 193 countries mandate their use.
Similarly, privacy regulations requiring “privacy by default” in the design of products and services, coupled with simple customer consent tools, must become a global minimum standard.
The presumption of data disclosure by default, with appropriate safeguards for privacy, must also become a global minimum standard, just as the ingredients in food and the financial data of publicly traded companies must be disclosed.
And expert third parties must be brought into determinations about news, recommender and search algorithms to avoid the potential conflicts of interest inherent in tech companies having sole say in deciding which content to promote. Third parties’ expertise should also be sought by the regulators charged with assessing tech companies’ adherence to the new regulatory settlements around the world.
These measures will not solve all of the issues with the internet, and they will not stop the development of new potentially dangerous and highly addictive tools. But they will represent a step in the right direction while also protecting some fundamental pillars of democracies — free speech, free markets, transparency and a level playing field.