Thursday, 25 May 2017

Second Demos Published: Visual Geolocations by Colombo, Ciuccarelli and Mauri

Demos are curated essays in which authors reflect on projects related to the concerns of Big Data & Society. The section is led by co-editors Paolo Ciuccarelli and Richard Rogers. The second Demo has recently been published: Visual Geolocations: Repurposing online data to design alternative views, by Gabriele Colombo, Paolo Ciuccarelli and Michele Mauri. Their essay explores how researchers, designers, and artists have taken up heterogeneous, visual, and location-based data. They examine how different visual manipulations and techniques have been deployed to repurpose data, add meaning, and generate new perspectives. Three approaches are described: the design of interfaces for exploring satellite footage in novel ways, the analysis of urban aesthetics through the visual manipulation of collections of user-generated content, and the enrichment of geo-based datasets through the selection and rearrangement of web imagery.

Wednesday, 24 May 2017

Book Launch, Digital Sociology: The Reinvention of Social Research by Noortje Marres

Madras HQ Bibliotheque, CODE/CITY, Manu Luksch © 2017
Digital Sociology* was launched on 9 May 2017 at Central Saint Martins, University of the Arts London. The launch included a panel discussion chaired by Lucy Kimbell (University of the Arts London) with contributions from the author, Noortje Marres (University of Warwick; BD&S Editorial Board member), and four panellists. In advance of the panel, the Journal also filmed Hannah Knox (UCL; BD&S Editorial Board member) interviewing Marres about some key arguments in her book. The video is available here.

Noortje Marres began by noting that her final manuscript was submitted shortly after the Brexit vote and the Trump election, events that have sparked considerable debate on the role of social media, on data analysis tools to detect fake news, and on new forms of blocking manipulative content. What is the relation of these two events to her book? She argued that these are piecemeal technical solutions that do not go to the heart of the problem. As much research has shown, the communities amongst which fake news circulates are separate from the platforms and their mitigating technical services. What technical solutions do not address is that sharing is a logic that underpins digital platforms and from which their social value is derived. It is this value and logic that digital sociology attends to: how knowledge generation is a social process rather than narrowly behaviourist or configured by individual platforms. Marres' understanding of social logics is at the heart of her book, and each panellist took this up in different ways.

Les Back (Goldsmiths) began with a quote from Marres' book, that 'digital sociology is ultimately a form of awareness, nothing more, nothing less' (44): digital sociology is not fringe but key to understanding social life, and we are already inside the thing we are trying to understand. To exemplify this point he noted how the monitoring of social media is part of the very techniques of digital border surveillance and the traceability of migrants. Beyond police and border guards, the movement and management of migrants are implicated in the digital in numerous and troubling ways.

Amanda Windle (University of the Arts London) offered a feminist critique by referencing a quote inspired by Donna Haraway's work, that 'we must stay with the trouble' (37). Marres offered this in relation to the troubling question of framing the digital as either an object or an instrument of inquiry. Windle added to the 'troubles' raised in Marres' book by posing a number of questions: Whose digital sociology? Are research subjects active or passive participants? What are the situated practices that make up digital life? Which bodies are potentially silenced?

Mike Savage (LSE) reflected on the reference in Marres' book to the 2007 article he co-wrote with Roger Burrows on the 'coming crisis of empirical sociology' and how Marres effectively critiques the presumed binary they set up between 'new' and 'old' methods. Taking the example of inequalities research, he argued that rather than debating for or against new methods, the challenge is how to tell stories persuasively through aesthetic devices and visualisations, what he names the 'symphonic aesthetic'.

Hannah Knox (UCL) took up another claim in Marres' book: that while people have always been active in world making, digital devices blur the boundaries between methods and the tools people use in their everyday lives. She argued that scientists need to become sociologists to understand data and interpret visualisations. Through this provocation she made an appeal for interdisciplinarity and questioned whether we are all becoming digital sociologists or whether new forms of expertise are emerging in the interstices of existing disciplines.

In these and other ways, all the presentations attested to the importance and wide applicability of Marres' book for sociological studies of digital worlds. An audio transcript of the panel can be found here.

*Marres, Noortje.  2017. Digital Sociology: The Reinvention of Social Research. Cambridge: Polity Press.

Tuesday, 16 May 2017

The Cloud, the Crowd, and the City: How New Data Practices Reconfigure Urban Governance

Phil Ashton, Rachel Weber and Matthew Zook

The urban archetype of the flâneur, so central to the concept of modernity, can now experience the city in ways unimaginable one hundred years ago. Strolling around Paris, the contemporary flâneur might stop to post pictures of her discoveries on Instagram, simultaneously identifying points of interest to the rest of her social network and broadcasting her location (perhaps unknowingly). The café she visits might be in the middle of a fundraising campaign through a crowdfunding site such as Kickstarter, and she might be invited to tweet to her followers in exchange for a discount on her pain au chocolat. As she ambles about Paris, the route of her stroll is captured by movement sensors positioned on top of street lights, and this data – aggregated with that of thousands of other pedestrians – could be used by the City of Paris to sync up transit schedules. And if those schedules were not convenient, she might tap Uber to whisk her home to her threadbare pension booked on AirBnB.

This vignette attests to the transformation of the urban experience through technology-enabled platforms that allow for the quick mobilization and exchange of information, public services, surplus capacity, entrepreneurial energy, and money. However, these changes have implicated more than just consumers, as multiple technologies have been taken up in urban governance processes through platforms variously labeled as Big Data, crowd sourcing, or the sharing economy. These systems combine inexpensive data collection and cloud-based storage, distributed social networks, geotagged locational sensing, mobile access (often through “app” platforms), and new collaborative entrepreneurship models to radically alter how the needs of urban residents are identified and how services are delivered and consumed in so-called “smart cities” (Townsend 2013).

In the rhetoric used by their boosters, the vision and practice of these technologies “disrupts” existing markets by harnessing the power of “the crowd” – a process fully evident in sectors such as taxis (Uber/Lyft), hotels (AirBnB), and finance (peer-to-peer lending). However, the notion of disruption has also targeted government bureaucracies and public services, with new initiatives seeking to insert crowd mechanisms or characteristics – at once self-organizing and collectively rational (Brabham 2008) – into public policy. These mechanisms envision reconfiguring the traditional relationship of public powers with planning and governance by vesting data collection and problem-solving in crowd-like institutional arrangements that are partially or wholly outside the purview of government agencies. While scholars are used to talking about “governance beyond-the-state” (Swyngedouw 2005) in terms of privatization and a growing scope for civil society organizations, technological intermediation potentially changes the scale and techniques of governance as well as its relationship to sovereign authority.

For instance, civic crowdfunding models have emerged as new means of organizing public service provision and funding community economic development by embracing both market-like bidding mechanisms and social-network technologies to distribute responsibility for planning and financing socially-desirable investments to laypeople (Brickstarter 2012; Correia de Freitas and Amado 2013; Langley and Leyshon 2016). Other practices are even more radical in their scope. Toronto’s Urban Repair Squad – an offshoot of the aptly named Critical Mass bike happenings – urges residents to take transportation planning into their own hands and paint their own bike lanes. Their motto: “They say city is broke. We fix. No charge.” (All that is missing is the snarky “you’re welcome” at the end.)

Combined, these emerging platforms and practices are challenging the tactics, capabilities, and authorizations employed to define and govern urban problems. This special theme of Big Data & Society picks up these issues, interrogating the emergence of digital platforms and smart city initiatives that rely on both the crowd and the cloud (new on-demand, internet-based technologies that store and process data) to generate and fold Big Data into urban governance. The papers contained herein were presented as part of a one-day symposium held at the University of Illinois at Chicago (UIC) in April 2015 and sponsored by UIC’s Department of Urban Planning and Policy. Setting aside the tired narratives of individual genius and unstoppable technological progress, workshop participants sought to understand why these practices and platforms have recently gained popularity and what their implementation might mean for cities. Papers addressed numerous questions: How have institutional supports and political-economic contexts facilitated the ascendance of “crowd” and “cloud” models within different spheres of urban governance?  How do their advocates position them relative to imaginaries of state or market failure/dysfunction? What kinds of assumptions and expectations are embedded in the design and operation of these platforms and practices? What kinds of institutional reconfigurations have been spurred by the push to adopt smart city initiatives? How is information collected through these initiatives being used to advance particular policy agendas? Who is likely to benefit from them?

The four articles in this special theme take different slices of these questions. Robert Lake's analysis reviews the ontology and politics of Big Data practices, beginning with the recognition that issues of definition and politics are fundamental to data collection in cities. From this foundation, he focuses his paper on the concern that Big Data suffers not only from the politicization of practice but from its foundational ontological premise of “hyper-individualism” – i.e., treating persons, events and phenomena within a city as independent units unconnected to each other or to any larger context. Similarly, John West's research focuses on the abstracting logics of Big Data in the case of a large public school in the Bronx, showing how Big Data systems, implemented with the laudable goal of increasing transparency, instead resulted in what he terms new “opacities.” West argues that by opening new scales of analysis for comparison and benchmarking – the teacher, the classroom, the school – this Big Data exercise transferred knowledge and power from classrooms and principals to central city administrators, facilitating systemic reorganization to the detriment of the quality of this particular high school.

Taylor Shelton’s article draws on the concept of “performativity” to argue that the sources of Big Data are changing the way decision makers conceptualize the city, resulting in changes to the types of policies and interventions that are planned. He critiques the “new urban science” that seeks to borrow methods from the natural sciences and apply them to urban geography and planning. Such borrowing presumes that quantitative analysis is the only correct approach, resulting in an ontological definition of the city reduced to whatever is most easily counted and valorizing technical expertise, while issues of injustice or local concern are rendered less important. Matthew Zook first reviews the genealogy of key ideas within smart city governance and their earlier antecedents, generated by motivations for social justice and progressive socio-economic reform that differ quite markedly from the goals emerging from today’s technology and neoliberal rhetoric. While recognizing the promise of Big Data for urban governance, he also cautions that “metrics don’t simply measure; in the process of deciding what is important and possible to measure, these data are simultaneously defining what cities are” (p. 15).

As a collection, these papers offer insights into how future research into smart city initiatives might examine the nexus of Big Data and urban governance. Their contributions can be read as both methodological and political. By combining close attention to the work of socio-technical systems of measurement with institutional ethnographies or studies of policy-making controversies, the papers show how data is enmeshed in the dynamics of austerity, privatization, or neoliberal urbanism more generally. Here, smart city initiatives might be read as institutional practices of control, rooted in attempts to produce an actionable future out of a chaotic and ever-changing present. Whereas this necessarily highlights how data systems strip urban problems out of their context to make them actionable for policymakers – a point reinforced by all the papers – it also shows Big Data's highly-productive role in animating the thick relational entities known as institutions. Whether we're looking at the apparatuses of urban security or the role of data analytics in restructuring public school systems, the hyper-individualism of measurement (as noted in Lake's paper) is but one moment in a rich process of institutional transformation.

Monday, 27 March 2017

Introducing Veillance and Transparency: A Critical Examination of Mutual Watching in the Post-Snowden, Big Data Era

by Vian Bakir, Martina Feilzer and Andrew McStay

This Special Theme examines veillance (mutual watching) and transparency in the context of big data in a post-Snowden period. We propose that today we live in a techno-cultural condition of increased and normalised transparency through various veillant forces. We interrogate the technical, social, economic, political, legal, ethical and cultural implications of this situation.

Veillance is Steve Mann’s (2013) term for processes of mutual watching and monitoring by surveillant organizations and sousveillant individuals. The latter refers both to the capacity for people to monitor from a position of minimal power and to monitoring by those who are participating in the activity being watched or sensed (from life-logging to using mobile phone cameras to monitor police at demonstrations).

The past decade has seen an intensification of veillant forces from all quarters (state, commercial, civil society, citizens), raising questions of whether resistance is possible or desirable. For instance, Edward Snowden’s whistle-blowing in June 2013 exposed governments’ secret mass surveillance, storage and real-time analysis of ordinary citizens’ digital communications (content and metadata). Increasingly, too, commercial organisations apply big data analytics and machine learning to understand people in ever more intimate ways and to target marketing communications.

Accepting the inevitability of surveillance, and the rapid growth of sousveillance, Mann and Ferenbok (2013) envisage a state of equiveillance, where there is equality between surveillant and sousveillant forces, leading to a transparent society.

Yet transparency does not always take an equiveillant form. Liberal transparency, for instance, is historically an Enlightenment norm that opens up the workings of power for public inspection, an exemplar being journalism acting as the Fourth Estate. Radical transparency opens up not just public processes but also the private lives of citizens for inspection. Where radical transparency is enacted without citizens’ knowledge or consent, we enter the situation of forced transparency, in which resistance to surveillance is tantamount to guilt and choice, control and autonomy are denied (McStay 2014). An exemplar of forced transparency is intelligence agencies’ bulk data collection, which pre-Snowden ensued without citizens’ knowledge or consent; post-Snowden, the now familiar meme used to justify forced transparency is ‘nothing to hide, nothing to fear’.

While a balanced condition of mutual watching may be unrealisable in practice, this Special Theme critically examines a range of veillant forces, resistances and tensions, seeking to understand these operations across three key debates in the context of big data post-Snowden.

The first set of debates queries how useful theories of veillance and transparency are in explaining practices of mutual watching in the post-Snowden, big data era. Dan McQuillan’s analysis of algorithmic paranoia dismisses Mann’s concept of veillance as the wrong sort of metaphor for the forms of seeing introduced by big data algorithms. Critiquing the idea that equiveillance captures our contemporary condition of mutual watching, Clare Birchall advances the notion of shareveillance in her discussion of subjectivity, open data (that governments willingly share with citizens) and closed data (such as that collected by intelligence agencies). Focusing on the pre-crime assemblage, Peter Mantello advances the notion of ikeaveillance: ‘a do-it-yourself, voluntary opt-in approach to algorithmic governance’. Piro Rexhepi focuses on peripheral political spaces to query the ability of sousveillance to destabilise and disrupt what she terms sur/violence (such as drone strikes killing people via metadata identification).

The second set of debates concerns norms, ethics, regulation, resistance and social change around veillance and transparency. Anthony Mills and Katherine Sarikakis examine journalists’ experiences with surveillance in non-Western and Western countries, finding that investigative journalists have been intimidated through surveillance but fight back through often-fraught cooperation with hacktivists and through self-directed protection of communications and sources. Lina Dencik, Arne Hintz and Jonathan Cable examine British social justice activists’ resistance to state surveillance, arguing that this should be connected to broader social justice agendas. Focusing on advertising and the rise of empathic media (namely, technologies that track bodies and react to emotions and intentions), Andrew McStay advances and problematizes the notion of emotiveillance: the use of biometrically sensitive technologies to infer people’s emotions. Focusing on regulations and rights, Yvonne McDermott-Rees observes that implementation of the EU-created right to data protection faces challenges in an era of ubiquitous veillance practices and big data.

Our third set of debates centres on whether post-Snowden veillance and transparency discourses and practices adequately educate and engage people about abstract, secretive surveillance practices, or about the possibilities and pitfalls of sousveillance. We present innovative engagement tools and interactive art, including Evan Light’s Snowden Archive-in-a-Box; Derek Curry and Jennifer Gradecki’s Crowd-Sourced Intelligence Agency; and Benjamin Grosser’s Tracing You. Yuwei Lin reflects on her experiences of teaching privacy and surveillance to media arts practice university students in the UK. Ben Brucato considers efforts by journalists and activists to construct databases that document and measure killings by US police, examining how these exemplify the new transparency. Steve Mann highlights the need for bottom-up transparency in computer engineering, arguing that scientists have the right and responsibility to understand the instruments they use to make their discoveries: he posits that veillance is important not just in human-human interaction (such as people watching other people) but also in terms of Human-Computer Interaction.

Through these three debates, this Special Theme shows that the veillance field is multi-perspectival, and characterised by tension. We argue that to understand contemporary data transparency, modern watching, sensing and data analytics, we need to examine all the various forms of veillance (not just surveillance). While it remains to be seen whether we will ever see Mann’s equiveillance in practice, we call for continued critical, technical, legal, political, educational and artistic intervention into the veillance field.

Thursday, 2 February 2017

Introduction to the special issue on Environmental Data

By Jennifer Gabrys

Big Data research often focuses on particular datasets and types of data. From analysing data from the Twitter ‘Firehose’ to scraping data from websites, the practices of Big Data typically engage with social media, extended databases, and any number of data infrastructures about individual online activity. Environmental data, on the other hand, presents a somewhat different set of dynamics for considering how long-standing and already sizeable datasets are now becoming even bigger and more pervasive.

Environmental data is generated through a wide range of technologies and practices, from satellites to sensors and from sustainability reporting to eco tweets. The special issue, ‘Practicing, Materializing and Contesting Environmental Data’, addresses the specific ways in which environmental data is now amassing in multiple ways, while also discussing the implications of these new forms of data for addressing environmental problems.

Contributions to the special issue include articles on the situated and material engagements with environmental data. Emma Garnett analyses air quality modelling practices and the specific ways in which data is stabilised through affective attachments to data. Yanni Loukissas draws attention to the overlooked place attachments that characterise datasets at the Arnold Arboretum. Ingmar Lippert suggests that data gathered for corporate sustainability reporting can often counteract the very objectives of achieving sustainability that this data is meant to enable. And Tahani Nadim also takes a critical view when discussing how earth observation satellites work through a seemingly comprehensive, yet distanced, view of environments and environmental problems.

Additional contributions to the special issue especially focus on the ways in which contestations expressed through environmental data can generate new political possibilities. Brooke Singer accounts for her data-related environmental art practices, and the ways in which different forms of political engagement emerge through these practices. Kim Fortun et al. consider the role that critical data designers play in shaping environmental data and data repositories, and the strategies these designers adopt in order to facilitate public encounters with government datasets. And Jennifer Gabrys et al. discuss how air pollution data gathered through citizen sensing practices and technologies can shift the forms of evidence that are accounted for when dealing with the effects of the fracking industry. Data stories, as Gabrys et al. suggest, can present a way not just to contest official accounts made with environmental data, but also to figure new data worlds.

This special issue makes the case for attending to these multiple forms of environmental data within wider discussions of Big Data. Many of these practices shift the usual subjects and relations that might characterise Big Data, while also demonstrating different material arrangements of data. At the same time, the contestations that unfold with and through environmental data can reveal the particular contours of environmental problems while also suggesting new forms of engagement and political possibility.

The ‘Environmental Data’ special issue, including full text of papers in the issue, can be accessed at

Sunday, 29 January 2017

BD&S Editor Evelyn Ruppert speaks at the World Economic Forum in Davos, Switzerland

BD&S Editor Evelyn Ruppert recently spoke at two events at the World Economic Forum (WEF) in Davos, Switzerland. In her talk, ‘Enabling digitally inclusive societies’, she drew on her research on citizen rights and data to discuss how the internet impacts social cohesion - an increasingly pertinent theme for the WEF, which this year put digital technology and its impact on economies and societies worldwide at the heart of its programme. Her talk was part of an Ideas Lab session, ‘The Science of Social Cohesion’, organized by the European Research Council (ERC). Evelyn joined eight other ERC grantees as part of a delegation to the WEF led by ERC President Prof. Jean-Pierre Bourguignon.

Referring to her ERC project ARITHMUS, she argued that fostering citizen engagement in how the internet works, and rights to the data that it generates, are key to making digital societies inclusive rather than divisive and controlling. While expanding access to the internet is usually regarded as an answer to ending the digital divide, she argued it is also necessary to provide openings for people to be not merely users and consumers of the internet, but digital citizens with the power to shape what it should be.

At another invited session, Evelyn joined a panel of business leaders and human rights lawyers to discuss the timely question, ‘What if Privacy Becomes a Luxury Good?’ Organised as a partnership between the WEF and TIME Magazine, the session involved a discussion of the implications of the ‘Fourth Industrial Revolution’ for societies. The panel addressed how digital devices monitor and compile personal data and the uneven consequences this has for privacy. The session was live streamed and can be viewed here.

Monday, 14 November 2016

Introducing the Critical Data Studies Special Theme

by Andrew Iliadis (University of Ontario Institute of Technology) and Federica Russo (University of Amsterdam)

Big Data science, along with its methodologies and practices, has reshaped the landscape of the natural and social sciences. Much has been written about the benefits of Big Data’s contributions to advancing research, training, and encouraging engagement at the intersection of computation and society. Much less has been said about the existing and potential harms caused by Big Data. As the product of multiple sites of work, layered analytic techniques, experimental practices, and various competing discourses, Big Data must remain open to cultural, ethical, and critical challenges.

The Big Data & Society Critical Data Studies (CDS) special theme brings together established and emerging CDS researchers who seek a critical engagement with Big Data in various contexts, including food and agriculture, policing and governance, finance, environmental regulation, philosophy, statistics, epidemiology, and geography. Each of the articles focuses on what Rob Kitchin has called “data assemblages”—apparatuses that contribute to or generate Big Data science, including systems of thought, forms of knowledge, finance, political economy, governmentalities and legalities, materialities and infrastructures, practices, organizations and institutions, subjectivities and communities, places, and the marketplace where data are constituted.

This project grew out of the Society for the Philosophy of Information’s Seventh Workshop, “Conceptual Challenges of Data in Science and Technology” (2015, University College London).