Podcasts

Here are the podcasts of the Swiss STS Meeting 2014 (keynote speakers’ lectures and roundtables). Enjoy!

Welcome & introduction:
Émilie Bovet and Nicholas Stücklin (Co-Presidents of STS-CH, University of Lausanne)
Thursday 20 February 2014, 9:00-9:30, Amphimax 414

Keynote speakers’ lectures

Anne Beaulieu (University of Groningen)
Big Data as Knowledge Production
Friday 21 February 2014, 14:15-15:15, Amphimax 414


Introduction
Lecture
Questions

Description furnished by the speaker: «In this talk, I will consider big data as a form of knowledge production that has developed in relation to the changes we have observed in the past decades in terms of growth, accountability, network effects and technology. From this analysis, the need to understand and coordinate kinds of formalisation and the focus on pattern detection as an epistemic strategy emerge as key features of big data as a form of knowledge production. This framing of big data, not only as a new ‘object’ for science, but also as a set of practices, technologies and institutional arrangements, enables us to design research programmes (such as EnergySense) that go beyond the one-size-fits-all approach of many funding schemes and centres – while mobilizing the promissory potential of Big Data.»

The slides and text of Anne Beaulieu’s lecture are available on her personal website.

Chair: Vincent Pidoux (University of Lausanne)

Rebecca Lemov (Harvard University)
Dreams as the Stuff of Data: A Turning Point in Mid-Twentieth-Century Big Social Science
Saturday 22 February 2014, 09:30-10:30, Amphimax 414


Introduction
Lecture
Questions

Description furnished by the speaker: «Contrary to popular depictions, big data is salient not only because it is big in terms of scale of the information compiled and available to be used, but also – and perhaps especially – because it is able to penetrate into the realm of the subjective. Big data promises access to ever-more-intimate parts of human experience. Its users can manipulate it (or so the promise goes) to describe or predict who will fall in love with whom, who will vote for whom, the preference for certain comestibles or sensations, the likelihood one may die tomorrow, or the way in which spiritual enlightenment works. This paper frames the large-scale search for ever-more-personal data historically by examining a sort of “data ruin,” an ambitious but forgotten archive from the mid-twentieth century American psycho-anthropological and social sciences that was an attempt to capture the un-capturable: more and more elusive forms of data residing at the very edge of visibility. The paper examines the network of techniques and technologies, the background of methodological zeal, and some of the scholarly institutions and research exponents that combined to make possible this unusual data clearinghouse. I will touch on the technique of the “human document,” a brief history of the microcard, and the targeting of dreams from non-literate people in large amounts, all of which suggest an ongoing operationalization of subjectivity itself.»

Chair: Jelena Martinovic (University of Lausanne)

Sabina Leonelli (University of Exeter)
What Difference Does Quantity Make? On the Epistemology of Big Data in Biology
Thursday 20 February 2014, 14:15-15:15, Amphimax 414


Introduction
Lecture
Questions

Description furnished by the speaker: «This paper focuses on the epistemological significance of big data within biology: is big data science a whole new way of doing research? Or, in other words: what difference does data quantity make to knowledge production strategies and their outputs? I argue that the novelty of big data science does not lie in the sheer quantity of data involved, though this certainly makes a difference to research methods and results. Rather, the novelty of big data science lies in (1) the prominence and status acquired by data as scientific commodity and recognised output; and (2) the methods, infrastructures, technologies and skills developed to handle (format, disseminate, retrieve, model and interpret) data. These developments generate the impression that data-intensive research is a new mode of doing science, with its own epistemology and norms. I claim that in order to understand and critically discuss this claim, we need to analyse the ways in which data are actually disseminated and used to generate knowledge, and use such empirical study to question what counts as data in the first place. Accordingly, the bulk of this paper reviews the development of sophisticated ways to disseminate, integrate and re-use data acquired on model organisms over the last three decades of work in experimental biology. I focus on online databases as a key infrastructure set up to organise and interpret such data; and on the diversity of expertise, resources and conceptual scaffolding that such databases draw upon in order to function well, including the ‘Open Data’ movement which is currently playing an important role in articulating the incentives for sharing scientific data in the first place.
This case study illuminates some of the conditions under which the evidential value of data posted online is assessed and interpreted by researchers wishing to use those data to foster discovery, which in turn informs a philosophical analysis of what counts as data in the first place, and how data relate to knowledge production. In my conclusions, I reflect on the difference that data quantity is making in contemporary biological research, the methodological and epistemic challenges of identifying and analysing data given these developments, and the opportunities and worries associated with big data discourse and methods.»

Chair: Alain Kaufmann (Science-Society Interface, University of Lausanne)

Bruno J. Strasser (University of Geneva – Yale University)
The “Data Deluge”: The Production of Scientific Knowledge in the 21st Century
Thursday 20 February 2014, 09:30-10:30, Amphimax 414


Introduction
Lecture
Questions

Description furnished by the speaker: «The notions of “data deluge” and “big data” have taken a firm hold in current discourses about science and society. They serve to define a new era in the history of science where data is more abundant than ever before and knowledge is directly derived from data. But as several scholars have pointed out, there have been many precedents to the current “data deluge”. And each era has devised its own material and social technologies to store, organize, and make sense of overwhelming amounts of data. By opening a dialogue about the past, present, and future of data, one can better distinguish what is new and what is not in today’s “data deluge”. Looking at the long tradition of collecting, comparing, classifying, and computing data, it becomes possible to reassess our current models of what knowledge is, how it is produced, to whom it belongs and who should get credit for producing it. Instead of debating superficial claims about the revolutionary nature of “big data”, one can get a deeper understanding of present debates about access, ownership, and authorship in science.»

Chair: Émilie Bovet (University of Lausanne)

Aaro Tupasela (University of Helsinki)
Preserving National Treasures: Productivity and Waste in Biobanking
Friday 21 February 2014, 09:30-10:30, Amphimax 414


Introduction
Lecture
Questions

Description furnished by the speaker: «Within European innovation policy rhetoric, the terms efficiency, productivity and innovation rank high on the list of catchwords of the day. Within biobanking, donation and altruism have gained a similar status as catchwords within which collection activities have traditionally been framed. Yet within the more commercially and industry-dominated environment, the relationship between collecting and producing has become, at times, tenuous and strained. Calculations and expectations of commercial productivity based on publicly collected and funded collections and research have not always been good bedfellows. In my talk I will trace some of the different discursive contours around which biobanking activities have been framed, in an attempt to shift discussions surrounding the collection and use of tissue sample collections into a more constructive and productive setting.»

Chair: Marc Audétat (Science-Society Interface, University of Lausanne)

Roundtables

Roundtable with a panel of Big Data managers in the field of life sciences
Thursday 20 February 2014, 17:30-18:45, Amphimax 414

Speakers:
Thomas Heinis
Post-doctoral researcher, Data-Intensive Applications and Systems Laboratory – EPFL, Lausanne

Vincent Mooser
Professor, Head of the Lausanne Institutional Biobank – CHUV, Lausanne

Patrice Poiraud
Smarter Analytics & Big Data Initiative Leader – IBM France

Chair: Gabriel Dorthe

Roundtable with a panel of Big Data managers in the field of social sciences and humanities

Friday 21 February 2014, 17:30-18:45, Amphimax 414

Speakers:
Jean-Henry Morin
Associate Professor, Institute of Services Science, HEC – UNIGE, Geneva
President of ThinkServices – Think Tank on Services Science and Innovation, Geneva

Boi Faltings
Director of the Social Media Lab – EPFL, Lausanne
Professor, Faculty of Information and Communication Sciences – EPFL, Lausanne
Founder and director of the Artificial Intelligence Laboratory – EPFL, Lausanne

Stéphane Grumbach
Senior Scientist – French Institute for Research in Computer Science and Automation (INRIA)
Adjunct Director – IXXI, Rhône-Alpes Complex Systems Institute

Chair: Olivier Glassey