€2.5 million project examines use of AI and platform providers in the datafied welfare state


A Goldsmiths academic has been awarded a €2.5 million grant to investigate the implementation of artificial intelligence (AI) and data-driven technologies by public bodies across welfare states in Europe.


Professor Lina Dencik, University Research Leader in AI Justice based in the Department of Media, Communications and Cultural Studies, was awarded the funding through the European Research Council's (ERC) Advanced Grant competition, as part of the Horizon Europe programme.

The cutting-edge project is entitled State-tech relations in the datafied welfare state: Examining the computational transformation of the European social model, using the acronym STATETECH. 

It will examine the providers and uses of data-driven technologies by public bodies across Europe, from predictive benefit fraud systems in social security to safeguarding apps in schools to algorithmic diagnostics.

Looking at questions of governance, infrastructure, economic models and regulation, the project asks what the provision of such technologies means for how the welfare state operates and functions.

The ERC’s Advanced Grant competition is one of the most prestigious and competitive funding schemes in the EU. It gives senior researchers the opportunity to pursue ambitious, curiosity-driven projects that could lead to major scientific breakthroughs.

This round of funding is worth €721 million in total, with the awards shared between 281 top academics. This competition attracted 2,534 proposals, which were reviewed by panels of internationally renowned researchers.


Q and A with Professor Lina Dencik

What led you to research AI and justice issues? What stimulated your interest in this area, and how unique is this field of research?

My interests in researching the relationship between data-driven technologies such as AI and social justice stem from a long-standing concern with the way non-elites are able to influence and engage with decisions that govern their lives.

The focus on data came from early research I did on the uses of digital technologies for the purposes of surveillance, particularly with regard to the so-called “Snowden leaks” first published in 2013. In doing that research, I became interested in how we understand the nature and implications of these technologies as they become more prevalent in society, and what the extensive collection and use of data as a core aspect of digital technologies mean for ‘ordinary’ people’s lives.

In particular, I sought to advance a research framework that broadened the parameters for how we understand the societal implications of such processes, beyond the focus on efficiency and individual privacy, which continues to dominate public and policy debates, and instead situate data, and more recent iterations like AI, in relation to social justice.

Since co-founding the Data Justice Lab back in 2016, we have seen this orientation grow substantially as a research area, with extensive research now documenting the multifaceted ways in which datafication is bound up with life opportunities and human flourishing. 

What is datafication and why does it have implications for social justice? 

Datafication refers to the trend of turning more and more of social life, behaviour and activities into data points that can be collected and analysed by computational means, with a view to making predictions or recommendations.

Increasingly we see the reliance on such processes across key areas of society, including for decisions that have a significant impact on people’s lives, such as health, employment, policing, migration and welfare. Beyond questions of privacy infringements and the protection of personal data, the use of data-driven technologies has been shown to lead to potential harms that include discriminatory outcomes, often relating to existing forms of inequality, exploitation and stigmatisation, exclusion and denial of access to essential services, and limited possibilities for redress. 

More generally, concerns have been raised about the lack of transparency and accountability that often surrounds the use of data-driven technologies, and the “violence” of reducing and abstracting identities and lived experiences into quantifiable data and profiling tools. In this sense, datafication implicates many core social justice concerns.

Governments globally have adopted and incorporated many technological and data advances into the delivery of public services. Are there inherent risks to doing this?

The integration of data-driven technologies into the delivery of public services has been a prominent approach in countries across the globe, and is now being accelerated with the advancement of AI strategies as a central geopolitical concern, including the UK’s recent AI Opportunities Action Plan which was launched in January 2025.

There is a notable ambition to establish countries as “leaders” in what some describe as an “AI race”, which includes transformations in the public sector and across the economy to become AI-driven. Yet what these transformations actually look like in practice, and how they are going to be implemented, remain under-researched. This is important in part because of the social justice concerns I have already outlined, but also because it potentially has significant consequences for how the state operates and its capacity to carry out core functions and meet obligations towards citizens. This is an area I will be investigating in my ERC project.

In what ways are state-tech relations transforming the welfare state?

The relationship between the state and technology providers has garnered increased attention, perhaps most prominently with the inauguration of President Donald Trump in the United States, which shone light on the central role of Silicon Valley in politics today.

In Europe, we are seeing growing concern with these developments, particularly in the EU, with more and more emphasis on the need to minimise dependency on US-based technology companies and to enhance what is referred to as ‘digital sovereignty’ within Europe. In the UK, the debate has been somewhat different, with the previous and current governments actively seeking to court large technology providers like Palantir and Amazon to have a greater presence in the UK, including within the public sector.

For a notion such as the welfare state, these relationships are potentially very transformative. In part, they are so because there is a substantial body of research that suggests that integrating digital platforms and AI into the delivery of public services leads to a whole new operational logic for welfare states, a transformation in governmentality, that shapes decision-making and how citizens experience service delivery. But there are also some distinctive features in how platforms and AI are produced and maintained that arguably create new forms of dependencies that ‘lock in’ governments into services over which they have little control.

For example, there is growing emphasis on the infrastructure that underpins these technologies and the ‘compute’ power they require to operate as they do. This has raised concerns not only about the environmental consequences of such infrastructure, but also about its social and democratic consequences, as the provision of such technology becomes centralised around a few large actors who are able to set the standards and rules for what technological futures are possible.

We saw this very starkly during the Covid-19 pandemic, for example, where Apple and Google were effectively able to set the standards for contact-tracing apps across major governments by virtue of their control of dominant operating systems. Although this was during a crisis, it is also something that we need to consider for the more mundane functions of the welfare state and what such dependencies on commercial actors mean for the capacity of the welfare state to regulate and protect citizens against risks associated with a market society.

For my ERC project, I am developing a new concept I refer to as the “tenant-state”, which seeks to assess this particular context of the welfare state. It draws on current discussions of the significance of ‘rent’ in the economic models favoured by platforms, and (re)positions the welfare state as a ‘client’ or ‘tenant’, as it is often referred to in contracts with technology providers, in contemporary state-tech relations.

What would need to be different for a “more just and sustainable computational infrastructure” to be in place?

This is probably one of the most pressing questions in my area of research and one for which there is no easy answer. I am hoping that with my research, and looking into the current conditions of state-tech relations, I can shed light on some of the key areas that need to be addressed, and what might be needed in order to imagine, let alone advance, alternatives.

For another project I am currently leading, funded by the ESRC, I am looking into the activities pursued by ‘Big Tech’ in relation to AI and sustainability. This includes key questions of how large technology companies are able to influence national strategies and public debate on AI through activities such as lobbying, public relations, revolving doors and government contracts.

I think that by providing evidence of these activities, it might be easier to challenge and address the current terms of our technological futures, and to further discussions on alternatives. There is an array of suggestions for such alternatives that we can already draw on, ranging from efforts to advance public interest-oriented technologies, ensuring communities or citizens are at the centre of technological design, mobilising around different governance structures and economic frameworks such as data commons or platform cooperatives, to re-thinking supply chains and production models that might also entail forms of de-computing or de-growth.

More generally, I think it requires a destabilisation of Big Tech dominance and a scaling back of their influence and power, including at an infrastructural level, in order to nurture initiatives that challenge many of the assumptions and premises currently informing AI strategies across the world. Doing so needs substantial political mobilisation and pressure to change the direction we are in at the moment.

What outcomes do you hope to achieve at the end of the ERC-funded research? 

From a research perspective, I hope the ERC project will allow for a more comprehensive understanding of what is happening in welfare states across Europe in a context of data-driven innovation. More concretely, I hope the project will help inform new theories of the state that can allow us to assess the historical significance of current state-tech relations and their implications for the welfare state. I am particularly interested in getting a better understanding of how the historical commitments to a “social dimension” within Europe, which has always occupied an awkward and often marginalised position, might be realised in our current context and what might be needed to enhance functions of the welfare state in light of their growing reliance on computational infrastructures.