Release of A Field Guide to “Fake News” and Other Information Disorders (Final Version)

Today sees the launch of A Field Guide to “Fake News” and Other Information Disorders, a new free and open-access resource to help students, journalists and researchers investigate misleading content, memes, trolling and other phenomena associated with recent debates around “fake news”.

The field guide responds to an increasing demand for understanding the interplay between digital platforms, misleading information, propaganda and viral content practices, and their influence on politics and public life in democratic societies.

It contains methods and recipes for tracing trolling practices, the publics and modes of circulation of viral news and memes online, and the commercial underpinnings of this content. The guide aims to be an accessible learning resource for digitally savvy students, journalists and researchers interested in this topic.

Slides from Talk on Actor-Network Theory, Digital Methods and Data Journalism at Ghent University

Yesterday I gave a talk at the Center for Journalism Studies at Ghent University about how Actor-Network Theory (ANT) and digital methods can be used to study and inform data journalism.

I will be using these approaches to study data journalism in my joint PhD with the University of Groningen and Ghent University. I will also be exploring the opportunities that these techniques afford for informing data journalism practices in my fellowship at the Tow Center for Digital Journalism at Columbia University. The Tow project, ‘Controversy Mapping for Journalism’, aims to convene pioneering Science and Technology Studies and digital methods researchers at Sciences Po and the University of Amsterdam with leading journalism scholars, information designers and computer scientists at Columbia University, in order to explore how emerging digital traces, tools and methods can be utilised to transform the coverage of complex issues.

Below are the slides from this talk.

“Stop searching, Start Questioning!”: The Society of the Query, Amsterdam, Nov. 2009

The Society of the Query conference was held in Amsterdam on 13–14 November 2009. It was organized by the Institute of Network Cultures, led by Geert Lovink. The conference aimed to generate reflection on the role of the search engine in our society and culture: what happens to our knowledge and culture when they are stored on online platforms and accessed through search engines? The dominant role of one particular search engine, Google, was one of the main themes of the conference, along with potential alternatives to web search, interface design, and Internet and search engine art.

One may be skeptical of the potential of such humanities approaches to influence the course of technological development. However, theory, critical thinking and art play a significant role in that they generate a cultural flow which could alter the course of technological development and potentially steer it in a different direction.

The posts in this section are articles which I contributed to The Society of the Query blog.

Matteo Pasquinelli: Are We Renting our Collective Intelligence to Google?

Matteo Pasquinelli’s presentation this morning at the Society of the Query was based on his paper, Google’s PageRank Algorithm: A Diagram of Cognitive Capitalism and the Rentier of the Common Intellect. The paper can be downloaded from his website.

The essay and presentation of the Italian media theorist and critic focused on an alternative direction for research in the field of critical Internet/Google studies. He proposed a shift of focus from Google’s power and monopoly, and the associated Foucauldian critique developed within fields such as surveillance studies, to the “political economy of the PageRank algorithm.” According to Pasquinelli, the PageRank algorithm is the basis of Google’s power and an emblematic and effective diagram of cognitive capitalism.


Google’s PageRank algorithm determines the value of a website according to the number of inlinks received by a webpage. The algorithm was inspired by the citation system of academic publishing, in which the value of a publication is determined by the number of citations its articles receive. Pasquinelli takes this algorithm as a starting point in order to introduce into critical studies the notion of “network surplus-value,” a notion inspired by Guattari’s notion of “machinic surplus value.”
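The core idea described above, that a page’s value derives from the value of the pages linking to it, can be illustrated with a minimal power-iteration sketch. The graph, node names and damping factor below are illustrative assumptions, not details from Pasquinelli’s paper or Google’s actual implementation:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iteratively compute PageRank scores.

    `links` maps each page to the list of pages it links to.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform score
    for _ in range(iterations):
        # every page keeps a small baseline score (the "random jump")
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # dangling page: distribute its rank evenly to all pages
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # a link passes on an equal share of the page's rank
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy web: C receives links from both A and B, so it ends up ranked highest.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
scores = pagerank(web)
```

The sketch makes the “network value” point concrete: each link is an act of evaluation that the algorithm aggregates into a ranking.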

The Google PageRank diagram is the most effective diagram of the cognitive economy because it makes visible precisely the aspect characteristic of that economy, namely network value. Network value complements the more established notions of commodity use value and exchange value: it refers to the circulation value of a commodity. The pollination metaphor used by the first speaker, Yann Moulier Boutang, is useful in understanding network value. Each one of us, as “click workers,” contributes to the production and accumulation of network value, which is then embedded in lucrative activities such as Google’s advertising model.

While the knowledge economy places particular emphasis on intellectual property, the notion of cognitive rent to which Matteo Pasquinelli draws attention becomes useful here. Google as “rentier of the common intellect” refers to the way in which content produced with the free labour of individuals browsing the internet is indexed by Google and used in profit-generating activities. From this perspective Pasquinelli challenges Lessig’s notion of “free culture”: Google offers a platform and certain services for free, but each one of us contributes to the Google business when performing a search, as this data is fed into the page-ranking algorithm.

The use of the notion of common intellect or collective intelligence in this context is, however, debatable, as the discussion session which followed the presentation showed, because only a relatively limited segment of individuals – the users who contribute content to the web – have their linking activity fed into the PageRank algorithm. The prominence of the PageRank algorithm as a generator of network value has also been questioned, as the algorithm is not the only ranking instrument. As a posting on Henk van Ess’ website shows, human evaluators also participate in page ranking.

What is there to be done about Google’s accumulation of value by means of exploitation of the common intellect? Or, to use Pasquinelli’s metaphor, are there alternatives to Google’s parasitizing of the collective production of knowledge? How can this value be re-appropriated? As the speaker suggested, perhaps through voluntary handmade indexing of the web? Or an open PageRank algorithm? Or perhaps a trust rank? These questions remain open.

Photos by Anne Helmond.

Teresa Numerico on Cybernetics, Search Engines and Resistance

Teresa Numerico is a lecturer at the University of Rome, where she teaches history and philosophy of computer science and epistemology of new media. Her presentation brought a historical and philosophy-of-science perspective to the themes of this conference: web search, search engines and the society of the query. She attempted to see today’s search engines through the lens of cybernetics. According to her, digital technologies today intertwine the cybernetic concepts of communication and control. Just as cybernetics had to deal with communication and control, so search engines today mediate between cooperation and monopoly.

But how, more precisely, is the cybernetic approach embedded in search engines? According to Teresa Numerico, search engines have a lot in common with the cybernetic approach to machines and to creating a cognitive framework: they are black boxes, in that the ranking process is not transparent; the search function offers output almost automatically in response to external input; and the ranking algorithm hypothesizes self-organization within the network.

By offering a strong cognitive framework, search engines are doing the work of the archive, hence her call for an “archaeology of techno-knowledge of search.” Her notion is influenced by Foucault’s Archaeology of Knowledge. According to Foucault, “The archive is the first law of what can be said. […] But the archive is also that which determines that all these things said do not accumulate endlessly in an amorphous mass […]; but they are grouped together in distinct figures composed together in accordance with specific regularities.” (Foucault, 1969/1989: 145–148).

Her main questions in relation to this direction of research into search engines were: Who controls the archive and its meanings, given that we have no control over the meaning that comes out of this work? Who is defining the web society’s archive? And, ultimately, what is there to be done? According to Teresa Numerico, the only possible reaction is resistance. She concluded her presentation with a practical list of suggestions for actions of resistance which any of us can take: be creative rather than communicative, in order to elude the control component of communication as well as of archiving and searching; minimize the number of online tracks you leave; switch off internet devices every now and then; vary your sources of knowledge by consulting different search engines; and maintain a cross-media orientation in order to verify the trust and authority of one source against others.


Photos by Anne Helmond.

The Ippolita Collective: Stop Questioning and Start Building!

The Ippolita Collective brought a humorous and refreshing change of perspective to the search for solutions to one of the issues addressed by the second session of the Society of the Query conference, namely Digital Civil Rights. They proposed changing the “what” style of questioning associated with positions of domination, as in “what is to be done?”, into a “how” style of approaching issues, in order to avoid surrendering to fear, paranoia or the desire to control and protect every aspect of your interactions with technology. If you ask yourself the “what” questions you may end up in paranoid positions such as Luddism or technocracy; if you have the “how” attitude, you are a curious individual with a desire to learn and understand, to share and exchange knowledge with others. You may even be some sort of hacker.


The “how” attitude, an attitude which will bring you to media literacy, is, as the Ippolita Collective explains, a convivial model. As opposed to the industrial model of productivity, the convivial model implies maintaining autonomy, creativity and personal freedom in interactions with individuals or technology. How would one build this model of conviviality? The answer, according to the artistic and research group, is to build convivial tools! A convivial tool is not something that you can purchase but something that you have to build yourself so that it matches your own needs. It is something that you enjoy creating, like making your own wiki.

Can the convivial attitude be applied to our Google/digital rights/privacy issues? The Ippolita Collective has already done so, and the result is a tool named SCookies, which you can download for free here. The application takes its slogan, “Share your Cookies!”, literally and mixes your cookies with those of other individuals who have installed it, in order to alter your profile and render it unreliable. While it may not be the solution, the SCookies application is emblematic of a style, an attitude of approaching an issue such as digital civil rights.

The Ippolita Collective has recently finished a book on Google, The Dark Side of Google, which you can download for free from their website.


Photos by Anne Helmond.

Florian Cramer on “Why Semantic Search is Flawed”

Florian Cramer, head of the Networked Media Master at the Piet Zwart Institute in Rotterdam, ended the last session of The Society of the Query conference. The Alternative Search 2 session presented a few of the latest web technologies as potential directions for the web and search engine design in the near future: RDFa, which would enable the shift to what Steven Pemberton called Web 3.0, and semantic search, as implemented in the Europeana project.

Florian Cramer concluded this series of presentations with a critical and somewhat pessimistic evaluation of the current state of the web and the idea of a semantic web and semantic search, as one of its potential futures. His three main arguments revolved around: “why search is not just web search (and not just Google),” “why semantic search is flawed,” and “why the world wide web is broken.”

The first point expressed his frustration with the narrow understanding of the notions of query and search engine on which the conference focused. As he explained, wikis and social networking sites also include search engine functionality.

As far as semantic search is concerned, Cramer usefully pointed out the difference between folksonomies, the currently used form of semantic tagging, and the universal semantic tagging which a semantic web would require. While folksonomies are “unsystematic, ad-hoc, user-generated and site-specific tagging systems” (Cramer, 2007), like the tagging system of Flickr for example, the semantic web would require a structured, universal tagging and classification system applying to the entire web. Cramer is skeptical of the possibility of creating this unified, ‘objective’ meta-tagging system because classifications, or taxonomies, are not arbitrary but expressions of ideologies, which calls for a discussion of the politics of meta-tagging. Meta-tagging may have its advantages, such as arguably empowering web users and weakening the position of large web services corporations (while still leaving search engines necessary to aggregate data), but it also has several potential weaknesses. The semantic web model must be based on trust in order to prevent some predictable problems, such as massive spamming.

In the concluding section, Cramer expressed his concern that the Internet as a medium for publication and information storage is not sustainable, and argued for redundancy in web archiving. However, this desire for permanence raises questions about the nature of the medium itself.

Photos by Anne Helmond.