Artificial intelligence: what is at stake for the university?

I was recently amused by a seemingly light-hearted news item: a French doctoral student has created a web page on which you can enter a word and find out whether it leans politically left or right. It's fun, especially since you can ask a question that is absolutely vital for any Swiss person: is fondue a left-wing or a right-wing dish?

To answer this question, the doctoral student explains that he uses a machine learning algorithm. His site has been a huge success, even though he readily admits its absurdity and limitations when a journalist points out that it labels fondue left-wing and raclette right-wing... All this makes you smile, of course, but it is far from insignificant, because artificial intelligence (AI) algorithms are now absolutely everywhere. They influence our searches on the Internet, shape our travel choices and our interests, and not a day goes by without them being mentioned in the media. Henry Kissinger, Eric Schmidt and Daniel Huttenlocher, the authors of The Age of AI, describe artificial intelligence as one of the greatest transitions for humanity since the Enlightenment or the invention of printing. In their latest book, they show us how AI is changing the way we approach the economy, security, knowledge, medicine, (geo)politics, law and even our relationships with each other. In doing so, they urge us to assess its limits.
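For readers curious how such a word classifier might work under the hood, here is a minimal, hypothetical sketch (the doctoral student's actual method is not described in the article): it compares a word's embedding with the average embedding of hand-picked "left" and "right" seed words. The vectors and seed lists below are invented for illustration; a real system would load embeddings trained on a political corpus such as parliamentary debates.

```python
# Hypothetical sketch: classify a word as "left" or "right" by comparing
# its embedding to the average embedding of labelled seed words.
import numpy as np

# Toy 3-D embeddings, invented for illustration only.
EMBEDDINGS = {
    "solidarity": np.array([0.9, 0.1, 0.2]),
    "union":      np.array([0.8, 0.2, 0.1]),
    "market":     np.array([0.1, 0.9, 0.3]),
    "tradition":  np.array([0.2, 0.8, 0.2]),
    "fondue":     np.array([0.6, 0.4, 0.5]),
}
LEFT_SEEDS, RIGHT_SEEDS = ["solidarity", "union"], ["market", "tradition"]

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def classify(word):
    v = EMBEDDINGS[word]
    left = np.mean([cosine(v, EMBEDDINGS[w]) for w in LEFT_SEEDS])
    right = np.mean([cosine(v, EMBEDDINGS[w]) for w in RIGHT_SEEDS])
    return ("left" if left > right else "right"), round(left - right, 3)

print(classify("fondue"))  # -> ('left', 0.133) with these toy vectors
```

With real embeddings the verdict would simply reflect which words co-occur with "fondue" in the training corpus, which is exactly why the doctoral student calls the result absurd.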

This advice also applies to UNIL, because AI is revolutionising education - and just about every scientific field. In our plan of intent, we state that we want to understand the opportunities and risks of digital technology and to pursue its development in a reflective and critical manner. But how do we do this in practice? Among the myriad experts in the field on campus, I would like to give the floor to five UNIL professors who use AI as a method or as an object of research, and who can help us weigh the advantages and pitfalls of this now unavoidable tool.

Boris Beaude, Associate Professor in Digital Cultures, Societies and Humanities in the Faculty of Social and Political Sciences, is interested in the epistemological, methodological, social and political challenges of the digital world, the digital traceability of social practices and the potential for exploiting massive data sets in the social sciences. In his view, mobilising digital methods is a crucial challenge for the humanities and social sciences, one that requires adequate human resources. Yet these disciplines seem reluctant to delegate problem solving to an algorithm. Deep learning in particular has a bad press: there are fears that it will soon replace humans, leading to significant job losses, and its relative opacity is rather unwelcome in the academic world. However, AI sometimes proves essential for analysing huge masses of data, such as the content of millions of Wikipedia articles in Boris Beaude's study of collective attention. For him, the humanities and social sciences must agree to embrace these technologies, but on three conditions: maintain permanent reflexivity about the methods, keep in mind the biases inherent in this type of approach, and, in parallel, rethink society so as to guarantee the most equitable redistribution of productivity and employability.
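To give a concrete (and deliberately generic) idea of what "delegating analysis to an algorithm" can look like at corpus scale - this is not Prof. Beaude's actual pipeline - one common approach is to embed each article as a TF-IDF vector and let a clustering algorithm group them, then inspect the clusters reflexively rather than take them at face value:

```python
# Hedged sketch: organise a text corpus with TF-IDF features + k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

articles = [
    "fondue is a swiss dish of melted cheese",
    "raclette is melted cheese scraped onto potatoes",
    "elections decide the composition of parliament",
    "the parliament votes on federal laws",
]  # tiny stand-ins for millions of Wikipedia articles

X = TfidfVectorizer(stop_words="english").fit_transform(articles)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g. [0 0 1 1]: food articles vs politics articles
```

The reflexivity Beaude calls for applies precisely here: the clusters depend entirely on the features and the number of clusters the researcher chose, so they must be interpreted, not merely reported.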

Fairness is a difficult concept to reconcile with the impact of AI on competition between companies. Roxana Mihet, a tenure-track assistant professor specialising in finance (HEC), analyses the expansion of big data technologies and their impact on the economy. Her analysis of thousands of companies active in the field of conversational AI suggests that the use of artificial intelligence rarely leads to a redistribution of wealth. The competitiveness of a company depends on two factors: its ability to collect data and its ability to exploit it. Large companies such as Amazon or Galaxus have the resources to transform big data into useful insights that allow them to anticipate market needs and propose "tailor-made" offers to customers. SMEs, on the other hand, lack the resources to compete. The new data protection law, which will limit the period for which data can be stored and sold to third parties, will not change the situation. It will be particularly detrimental to young organisations, which will not have had enough time to collect sufficient usable information. Of course, it is essential to invest without delay in these technologies of the future, even in fields that have so far shown little interest in them (in agriculture, for example, where algorithms make it possible to better predict the ideal timing for planting and harvesting, leading to a potentially higher yield). But let's not fool ourselves: an extra 15% profit for a farmer and for the GAFAs are not equivalent... In economics, the degree of sophistication of AI-type technologies undoubtedly generates benefits, but these are heterogeneous and favour the richest.
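To make that asymmetry concrete, a back-of-the-envelope calculation (the baseline profit figures below are invented for illustration) shows how the same relative gain translates into wildly different absolute gains:

```python
# Illustrative arithmetic only; both baseline profits are invented numbers.
farm_profit = 80_000                # hypothetical small farm, CHF per year
big_tech_profit = 50_000_000_000    # hypothetical big-tech firm, CHF per year

gain = 0.15  # the same 15% relative gain from adopting AI
print(f"Farm gains:     {farm_profit * gain:,.0f} CHF")      # 12,000
print(f"Big tech gains: {big_tech_profit * gain:,.0f} CHF")  # 7,500,000,000
```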

This observation has an unexpected echo in climatology. Tom Beucler, tenure-track assistant professor of Geoenvironmental Data Science in the Faculty of Geosciences and Environment, is investigating how deep learning can help atmospheric science. His work focuses on two main areas: the development of weather and climate prediction models, and the use of physical laws to make existing algorithms more consistent. Surprisingly, ethics is a central concern for scientists in his discipline (see related articles below). Machine learning needs a lot of data to learn, and this data is generated by sensors, which, unsurprisingly, are more widespread in affluent countries. As a result, tropical cyclone predictions are much more reliable for the United States than for the North Indian Ocean. Similarly, risk assessments, when quantified in monetary terms, are clearly underestimated in disadvantaged regions: the destruction of a villa with a swimming pool in Florida can be valued more highly than a life lost to a cyclone in Bangladesh. Researchers must therefore adopt a clear-sighted and responsible attitude.
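As a rough illustration of the second research area, "using physical laws to make algorithms more consistent" can mean penalising the training loss whenever a network's predictions violate a known conservation law. The sketch below is a generic toy version, not Prof. Beucler's actual models: the "budget" constraint and the column layout are invented for the example.

```python
# Hedged sketch of a physics-constrained loss: alongside the usual data
# misfit, penalise violations of a toy conservation constraint (here,
# three predicted fluxes should sum to the predicted heating rate).
import torch

def physics_constrained_loss(pred, target, alpha=0.1):
    # Standard data term: distance between predictions and observations.
    data_term = torch.nn.functional.mse_loss(pred, target)
    # Toy physics term: columns 0..2 should sum to column 3; any residual
    # means the predicted budget does not close.
    residual = pred[:, 0] + pred[:, 1] + pred[:, 2] - pred[:, 3]
    physics_term = (residual ** 2).mean()
    return data_term + alpha * physics_term

pred = torch.tensor([[1.0, 2.0, 3.0, 6.0],   # closes the budget exactly
                     [1.0, 1.0, 1.0, 2.0]])  # violates it by 1
target = torch.zeros_like(pred)
print(physics_constrained_loss(pred, target))  # -> tensor(7.1750)
```

The design choice is the weight alpha: too small and the constraint is ignored, too large and the fit to observations degrades; some approaches instead build the constraint directly into the network architecture so it holds exactly.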

The same conclusion applies to health. Raphaël Gottardo, Professor in the Faculty of Biology and Medicine, specialist in digital immunology and head of the Centre for Biomedical Data Science at the CHUV, confirms that big data in the medical field is not spared from bias. Predictive algorithms are often racially biased because of deficient training data. For example, populations of African descent in the United States are less studied, which can put them at a disadvantage when it comes to treatment. Human biology also faces specific problems with the general explosion of data and the growing need for computational power and multidimensional analysis: such algorithms are difficult to train and complex to interpret. Even if a model can predict whether a patient will respond positively or negatively to a treatment, it will be difficult to explain its reasoning and extract the relevant variables. And while AI is becoming more and more accessible to everyone (offering people the opportunity to calculate their risk of developing cancer online, for example), it is less and less easy to see what lies behind results that look true - but may not be. A good example is the resounding failure of Galactica, the AI software launched by Meta and trained on 48 million scientific texts, which generated authoritative-sounding but questionable scientific papers. We have no choice: digitalisation is accelerating and we need to invest in digital technology, but we also need to train specialists to read the data produced intelligently - and to protect it.
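One simple, generic way to surface the kind of bias described here is to evaluate a model's performance separately for each demographic group rather than in aggregate. A minimal sketch, with invented labels and predictions:

```python
# Minimal bias audit: per-group accuracy instead of overall accuracy.
# All data below is invented for illustration.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])      # actual outcomes
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])      # model predictions
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"group {g}: accuracy {acc:.2f}")  # A: 0.75, B: 0.50
```

An aggregate accuracy of 0.62 would hide the gap between the two groups; disaggregating is the first step before asking whether the under-performing group was simply under-represented in the training data.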

The invasion of privacy is one of the many research topics of Rebekah Overdorf, tenure-track assistant professor in the Faculty of Law, Criminal Sciences and Public Administration. Her work explores the manipulation of opinion on social networks, the de-anonymisation of online contributions through the analysis of writing style, and the effects of machine learning on data security and privacy. She also teaches the ethical deployment of AI in digital forensics. In her view, the use of artificial intelligence in areas such as profiling or facial recognition is very dangerous, as studies show that the software discriminates against racialised women in particular, with potentially disastrous judicial consequences. The algorithms used to grant or refuse bank loans can also be highly biased. Can we fight back against this AI interference in our lives, and if so, how? That is the aim of POTs (protective optimization technologies). Take Waze, the famous real-time navigation assistant: its route calculations have flooded certain localities with traffic that their infrastructure was never designed to absorb. POTs allowed them to return to normal life by evaluating how many speed limits, speed bumps and junctions would have to be introduced for their streets to drop out of the preferred alternative routes... For Rebekah Overdorf, the global optimisation practised by AI creates local problems, and we must sometimes pause it to rebalance the powers involved.
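The logic behind the Waze example can be sketched in a few lines: a router picks whichever route is fastest, so a town can estimate how much delay (speed bumps, lower limits, extra junctions) it must add before its residential shortcut stops being the fastest option. All figures below are invented for illustration; real POT analyses model the router far more carefully.

```python
# Hedged sketch of the POT idea: how many speed bumps until the
# residential shortcut is no longer the router's fastest option?
MAIN_ROAD_MIN = 12.0    # travel time via the arterial road (minutes)
SHORTCUT_MIN = 10.0     # current travel time via the residential street
BUMP_DELAY_MIN = 0.4    # assumed delay added per speed bump (minutes)

bumps = 0
while SHORTCUT_MIN + bumps * BUMP_DELAY_MIN <= MAIN_ROAD_MIN:
    bumps += 1
print(f"{bumps} speed bumps make the shortcut slower than the main road")
# -> 6 speed bumps (10.0 + 6 * 0.4 = 12.4 > 12.0)
```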

As you will have understood from reading these lines, the conclusion is as obvious as it is unanimous: the digital revolution waits for no one, and it is essential that our seven faculties continue to engage with the issue with excellence. However, this transformation must be carried out consciously, respecting the values of equality, diversity and inclusion promoted by UNIL, and limiting its impact on the environment. Only by combining technological excellence with respect for others can we truly advance science - and society.
