Cambridge University Science Magazine
One of the most fascinating aspects of the growing field of Artificial Intelligence (AI) is trying to extract interpretable meaning from its output. A novel approach to this question is being developed by the AI and Arts group at the Alan Turing Institute (https://www.turing.ac.uk/research/interest-groups/ai-arts). With around 70 members drawn from academia, cultural heritage institutions, the creative industries, and policy making, the AI and Arts group conducts research and engages broadly through its website, webinars, publications, events, and conferences. BlueSci met with Lukas Noehrer, one of the group's organisers, to gather insights into this innovative and multidisciplinary research project.

BlueSci: Could you tell us who you are and what your role is in the group?

Lukas Noehrer: I’m a PhD researcher and research assistant at the University of Manchester. The AI and Arts group was formed in 2019 as one of the Turing Institute’s interest groups. I joined the group as a representative for Manchester, and I recently became one of the group’s organisers.

BS: You use algorithms to interact with art material. Could you tell us how machine learning works?

LN: We use different methods. We are a very science-driven group, but we also draw a lot of input from arts practices and the creative industries. Being aware of the interplay between these various research fields helps us answer each other's questions. We use a very broad set of machine learning (ML) algorithms because of the variety of applications we work with: these range from supervised, semi-supervised, and unsupervised models to deep learning approaches. ML is a way of teaching computers to learn from data, typically by finding patterns or distinct groupings that would be difficult for a human to pick out by hand. The many approaches used in ML broadly fall into two categories: supervised and unsupervised. In supervised learning, labelled datasets (datasets where you know the outcome, e.g., house prices) are used to teach the computer to classify examples or predict outcomes accurately. In unsupervised learning, by contrast, you do not know the outcome, so you teach the computer to identify groups in the dataset, otherwise known as clustering. ML has found applications across a wide range of sectors and in everyday life, from healthcare to suggesting movies.
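To make that distinction concrete, here is a minimal sketch in Python using scikit-learn; the toy house data and the model choices are illustrative assumptions, not the group's own code:

    # Minimal sketch of supervised vs unsupervised learning (illustrative only).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.cluster import KMeans

    # Toy dataset: each row is a house described by [floor area in m^2, number of rooms].
    X = np.array([[50, 2], [80, 3], [120, 4], [200, 6], [60, 2], [150, 5]])
    prices = np.array([150_000, 220_000, 310_000, 500_000, 170_000, 390_000])

    # Supervised learning: the outcome (price) is known, so the model learns to predict it.
    regressor = LinearRegression().fit(X, prices)
    print("Predicted price for a 100 m^2, 3-room house:",
          regressor.predict([[100, 3]])[0])

    # Unsupervised learning: no outcome labels; the model groups similar houses (clustering).
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print("Cluster assignment per house:", clusters)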

BS: Your use of machine learning to work with the arts is multifaceted, and one of its goals is directed towards cultural heritage. Could you give us an example of the work you are developing in this field?

LN: Yes. For example, in the group, we use semantic image segmentation to observe castle walls systematically and track how their vegetation changes over time. Image segmentation in this case allows us to label what is a window, what is the wall, and what other objects, such as vegetation, are present on the wall, and then to follow how these change over time. This is very useful in supporting the conservation and restoration of those castles, and it also saves time and resources compared to traditional methods of observation.
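As a simplified sketch of the idea (the class labels, survey dates, and numbers below are hypothetical, not the project's actual pipeline), once a segmentation model has assigned a class to every pixel of a wall photograph, tracking change can come down to comparing class proportions between surveys:

    import numpy as np

    # Hypothetical class codes for a segmented photograph of a castle wall.
    WALL, WINDOW, VEGETATION = 0, 1, 2

    def class_fractions(mask):
        """Return the fraction of pixels assigned to each class in a segmentation mask."""
        counts = np.bincount(mask.ravel(), minlength=3)
        return counts / mask.size

    # Stand-in masks for two surveys of the same wall (in practice these would come
    # from a semantic segmentation model applied to dated photographs).
    rng = np.random.default_rng(0)
    mask_2015 = rng.choice([WALL, WINDOW, VEGETATION], size=(100, 100), p=[0.85, 0.10, 0.05])
    mask_2022 = rng.choice([WALL, WINDOW, VEGETATION], size=(100, 100), p=[0.75, 0.10, 0.15])

    change = class_fractions(mask_2022) - class_fractions(mask_2015)
    print(f"Change in vegetation cover between surveys: {change[VEGETATION]:+.1%}")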

BS: One important notion in machine learning is the concept of explainability. Could you define this term and tell us to what extent the arts help in achieving it?

LN: Explainability in AI is the process of making sense of the machine’s output from a human perspective. Artistic practice offers different methods and tools for investigating datasets and human life; it gives a broader view of how things work. For instance, museum collection managers or curators, who have been working with art data day to day for years, bring critical insights when interpreting post hoc why certain algorithms behave the way they do. Explainability also intersects with ethics, which is a huge topic and issue in the field. It’s really important for people who work in the AI world to always question the data they are using, and this is what we do in the group. Museum collection datasets, for instance, have been accumulated over centuries, and there are bias issues stemming from, for example, their colonial past or the misrepresentation of gender. A good example is collections that were established by a primarily white bourgeoisie, who mainly collected white male artists.
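One generic post hoc technique, offered here purely as an illustration rather than as the group's own method, is permutation importance: scramble one input feature at a time and measure how much the model's accuracy drops, which reveals which features it actually relies on. The synthetic "collection record" features below are invented for the example:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    # Synthetic records with three numeric features (hypothetical names below).
    X = rng.normal(size=(300, 3))
    # The label depends only on the first two features; the third is pure noise.
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    for name, importance in zip(["year acquired", "artist birth year", "work size"],
                                result.importances_mean):
        print(f"{name}: importance {importance:.3f}")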

BS: Another of your group’s projects is in the use of AI to support, enhance, simulate or replicate creativity. How would you define machine creativity?

LN: Machine creativity is a very narrow understanding of creativity. ‘Autonomous creator’ doesn’t mean we put the machine at the level of the artist. Machines are autonomous in the sense that their outputs are not completely human-controlled, meaning that the instructions to produce a result are not all hard-coded or pre-determined. These are deep learning approaches where we attribute some kind of creativity to the machine when we can’t completely explain what the output will be, or when a very specific algorithm performs very well at a human-like task, e.g., composing a piece of music or painting an image in a certain style. A trending example is the application of Generative Adversarial Networks (GANs) to multimedia data, an ML approach that tries to generate output so similar to its training data that humans can’t tell whether it is original or machine-made.
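A toy sketch of that adversarial idea, assuming PyTorch (the one-dimensional data and tiny networks are arbitrary choices for illustration): a generator learns to produce samples that a discriminator can no longer tell apart from the training data.

    import torch
    import torch.nn as nn

    generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0   # "training data": a Gaussian centred on 3
        fake = generator(torch.randn(64, 8))    # generated samples from random noise

        # Discriminator: label real samples 1 and generated samples 0.
        d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
                 loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: try to make the discriminator label its samples as real.
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    print("Mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())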

BS: Could you give an example of a partnership between an artist and a machine to produce a creative outcome?

LN: An interesting example would be a digital art exhibition called The New Real, which took place at the Edinburgh International Festival (https://efi.ed.ac.uk/events/the-new-real/).

BS: What is the long-term goal of the AI and Arts group?

LN: The whole Turing Institute is going through a re-visioning, re-missioning phase and recently received a budget of £10 million from the Engineering and Physical Sciences Research Council, on behalf of UKRI, to support this project. Within this scheme, the goal of our group is to develop a collaborative and fruitful community that respects all the various disciplines and fields that feed equally into the AI and Arts remit.

Pauline Kerekes is a postdoctoral researcher in the Department of Physiology, Development and Neuroscience. Lukas Noehrer is a third-year PhD student in museology and computer science at the University of Manchester.