I was invited to be on a panel at a Rustat Conference on Superintelligence and Humanity, held at Jesus College, Cambridge, last week. Given the format of the event, I chose to read out a talk about Computational Creativity, rather than present from slides. Here is the text of the speech:
Computational Creativity is the science, philosophy and engineering of computational systems which, by taking on certain responsibilities, exhibit behaviours that an unbiased observer would deem to be creative.
I’m a mathematician turned engineer who writes software to take on creative responsibilities in arts and science projects. I’ve written software which has made discoveries in pure mathematics that have been published in maths journals. I’ve written software called The Painting Fool, which paints pictures: not in a Photoshoppy way, where the painterly quality of the resulting image is the only consideration, but rather one where the software’s behaviour during the painting process, its displays of imagination, intentionality and learning, is paramount, with the picture produced sometimes being a side effect. The Painting Fool’s artworks have been exhibited in galleries around Europe, and it has entertained people by painting portraits in interesting ways during many public engagement events.
I’ve helped to write software which generates poetry that has been read out at numerous poetry evenings alongside poems written by people. I’ve also helped to write The WhatIf Machine, software which performs fictional ideation to bring into the world ideas that have cultural benefit. This year, one of the software’s ideas was turned into an entire West End musical. This was the idea: “What if there was a wounded soldier who had to learn how to understand a child in order to find true love?” So, while the biggest AI event of the year might have been AlphaGo’s competition win, the second biggest was the staging of a musical that was conceived and written (music, lyrics, plot lines) as much as possible by computer. As an aside, the writers of the musical didn’t take up my favourite generated idea, which had a touch of magical realism about it: “What if there was a poor boy who was born with a horn, which made him great at communicating, and he went on to become a famous slave”. Nice twist there.
Finally, I have in my pocket software which can generate casual games right there on my iPhone, and we’re hoping to commercialise this to show the broader benefits of generative software. I have lots of fancy slides showing off these pieces of software and the artefacts they’ve produced, and I would love to discuss the details of them with you offline. But right now, I’ll stick to the higher level stuff.
I’d like to be clear: Computational Creativity is coming. In twenty or thirty years’ time it will be commonplace to expect iTunes to compose a new song for you, Google’s search engine to write an entire magazine for you, PowerPoint to add jokes to your slides and your refrigerator to compose a recipe to fit its meagre contents. And, why not, we’ll probably expect our stoves to cook it for us too, with a touch of culinary flair.
This revolution will come about through solid software engineering, incrementally improving the abilities of software to a point where it would be churlish not to celebrate it as being creative by projecting onto it all the wonderful behaviours (true and imagined) that we project onto creative people. I think this will be led by the big technology companies, building on a solid base coming from my research field, Computational Creativity.
I don’t believe the revolution will come about through some huge blurring of the lines between computational and human life forms, via some kind of Artificial Intelligence tipping point or so-called singularity. I don’t think for a second that creative software poses an existential threat to humanity, nor does most AI software in general. Ask AI practitioners – those of us who daily struggle to get software to do the most stupid of tasks – how, even with unlimited resources, we would bring about this tipping point practically (without delving into the realms of science fiction), and we will flail around without giving an answer. We don’t know how to engineer this future, and it’s not going to happen on its own.
So, while it is interesting and important to debate human-level general AI and software superintelligence, I really don’t think this will be a practical reality anytime soon, not this century. I’m here representing the naive AI researcher, who hears about the expectations for AI in the near and mid-term future and thinks: are they talking about us? I can echo the words of Alan Bundy, my PhD adviser and close colleague, in pointing out that worries about superintelligence can mask the real problem we currently face, which is artificial stupidity.
Increasingly intelligent software will be constructed by putting back together the pieces of computational intelligence that we have picked apart over the last 50 years. Deep Learning is one of the more successful pieces of the AI puzzle, following on from expert systems, multi-agent systems and ensemble learning methods as hot topics in AI research and practice. But Deep Learning’s recent successes don’t indicate a tipping point to me; it’s just better machine learning, influenced by models of parts of the human brain, which has found useful practical applications like selling advertising space.
Coming back to creativity, my view is that it is a secondary and essentially contested concept. That is, being creative is not an intrinsic property of a person or piece of software, it’s something that other people project onto them. Now, Gallie introduced essentially contested concepts as those for which “the proper use . . . inevitably involves endless disputes about their proper uses on the part of their users”, to which Gray added that the disputes “. . . cannot be settled by appeal to empirical evidence, linguistic usage, or the canons of logic alone”, and Smith noted that “. . . all argue that the concept is being used inappropriately by others”.
So, in my opinion, if we’re not arguing about creativity, we’re not talking about creativity. Put another way, as a society, we have agreed to disagree, for ever, about the notion of creativity. This is a good thing, as it makes creativity a driving force for progress in society. As a scientist, however, it’s quite scary to have no working definition of what we’re trying to achieve. We’ve filled the gap by introducing formal models to assess progress towards the acceptance of software being creative by certain stakeholder groups. And, actually, we’re beginning to embrace being scientists in a field where the target is moving so much.
In such a context, the term ‘creativity’ is used in declarative illocutionary acts, such as “I pronounce you husband and wife”. That is, when Nicholas Serota, head of the Tate Galleries and Museums, states that a particular artist is very creative, that artist becomes creative, at least for a while. Similarly, when Demis [Hassabis – on the panel with me] says that a move choice by AlphaGo was imaginative, the statement carries a lot of weight, especially given that his PhD work studied human imagination. Demis might mean something more specific related to human imagination, but phrases like that are normally interpreted as: if a person chose that move, other people would use the term ‘imaginative’ to describe that person’s behaviour. But, hang on, AlphaGo is not a person! This is important.
What does it mean for software to be imaginative, in a context where we don’t pretend it’s a person? I think the time has come to stop talking about software in human terms, and in particular to drop the usage of Turing-style tests where outputs of creative activities by people and software are mixed up to see if audiences can tell the difference. Such tests encourage naivety and pastiche production in software, both of which are normally antipodal to impressions of creativity, and they make us scientists look unsophisticated in thinking that the beauty of artworks is skin deep.
Studying creativity through software as I have, it has become clear to me that creativity and humanity are currently highly intertwined. In fact, there may be no meaningful separation of the two terms, and we might need new terminology for what software does. One of the things I’ve worked on in my more philosophical writings has been the notion of a humanity gap, and I’d like to illustrate this through some thoughts about computer generated poetry.
There seems to be a scale on which we can place certain cultural activities, in terms of how much the human influence on the creative process affects our appreciation of the artefacts produced. At one end of the scale, where human discovery processes are actively whitewashed out of published works, sits pure mathematics. Nearby on the scale is video game production: while there are a few auteur game producers, gamers are usually too busy having fun to worry about who wrote the game – although this might be changing with the rise of independent game development. The visual arts sit towards the other end of the scale, where who the artist is, along with his or her political, geographical and cultural background, is very much taken into account in art appreciation, along with “does this painting look nice?” And at the far end of that scale comes, in my opinion, poetry.
Poetry seems to me to be condensed humanity. So much so, that I now go around describing a poem as “a piece of text which enables two people to connect”. I started off by advocating that computer generated poems (which have been around at least since the 1984 anthology “The Policeman’s Beard is Half Constructed”) should be rebranded as c-poems, to manage people’s expectations about the lack of humanity they will find in unravelling a poem, and to encourage them not to interpret it with a human author as a frame of reference. This is much like the way we manage expectations of physicality by saying we’ve bought someone an e-book rather than a book. I now go further and challenge whether computer generated poems have any purpose in the world at all.
I’ve had conversations about the evaluation of a computer generated poem where people say: “well, if the poem was written by a student…” and I say “but it wasn’t”… “well, if it was written by a child…” and I say “but it wasn’t”… “what if it was written by an amateur?” they say, and I ask “do you mean a human amateur?” And of course they do! I’ve come to the conclusion that it’s only possible to evaluate a computer generated poem under false pretences. And if we’re always pretending that a computer poem has been written by a person, why do we need them at all, given that there are so many great human poets out there? I’ve put this out as a challenge to my research community: “tell me how to get people to evaluate a computer generated poem without reference to themselves or a third party person”. Maybe that challenge will be faced down and fully autonomously generated poems will have a place in the world. In any case, it highlights how much humanity is taken into account when we discuss creativity, and I think this is important when we talk about the future of Artificial Intelligence and humanity today.
I’ve talked often about the “IKEA effect”, whereby high-end furniture designers were initially concerned that IKEA would dominate the market for interior design and put them out of business. In fact, IKEA coming along was a boon for the industry, as the whole world seemingly realised that interior design was important to them, and many chose to pay more and go beyond the IKEA offerings. The whole world will, I hope, realise that human creativity is very important to us, through exposure to Computational Creativity.
Right now, people can get a portrait from The Painting Fool for free, if they come to the right venue at the right time, and the software is in a good mood (it reads the Guardian newspaper while they sit down, and the news can put it in a bad mood, in which case it chooses not to paint). They leave having had a portraiture experience of sorts, but this pales in comparison to the experience they would have as a sitter for an established – human – portraitist. Some of these people might decide to save up and graduate to a normal portrait experience because of their time with The Painting Fool. On the other hand, the people who can already afford a human portrait are unlikely to substitute that experience for a computational equivalent, because of the celebration of humanity that is wrapped up in a creative act.
Frankly, I believe it’s an insult to the human race to think that we would, over the course of a century, slowly engineer our own physical destruction, or the destruction of our economic stability. I’m an advocate of the more optimistic view that Artificially Intelligent software will continue to enhance our lives in wonderful and exciting new ways, and that truly creative software will be at the forefront of the next big thing in technological advances. To help bring this about, I’ve started talking about five levels of Computational Creativity, as guidelines with which to provoke tech firms like Google into writing more interesting software.
Stage 1 is where they are all at: the merely generative. “Creation without creativity” I call it. Getting software to make pictures, news reports, poems, music, even recipes, is finding its way onto many technology firms’ agendas. It would be interesting if some of them now move to stage 2, which is getting software to invent its own measures of value. No one is going to project creativity onto a piece of software which slavishly tries to optimise towards a given fitness function or aesthetic. I would love to see AlphaGo say: “Screw this, I’m bored of playing Go – here’s a new game I’ve invented with much more interesting rules!”
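To make the stage 1 / stage 2 contrast concrete, here is a toy sketch of my own (the melody domain, function names and scoring rules are all hypothetical illustrations, not anything an actual system uses): stage-1 software hill-climbs a short melody towards a fixed, externally supplied aesthetic, while the gesture towards stage 2 has the software first assemble its own measure of value, and only then generate.

```python
import random

random.seed(0)  # deterministic for the sake of the example

# Stage 1: "creation without creativity". The aesthetic is handed to the
# software from outside -- here, a made-up preference for rising pitch lines.
def fixed_aesthetic(piece):
    return sum(1 for a, b in zip(piece, piece[1:]) if b > a)

def generate(aesthetic, length=8, steps=200):
    """Hill-climb a melody (pitches 0-11) towards the given aesthetic."""
    piece = [random.randint(0, 11) for _ in range(length)]
    for _ in range(steps):
        candidate = piece[:]
        candidate[random.randrange(length)] = random.randint(0, 11)
        if aesthetic(candidate) >= aesthetic(piece):
            piece = candidate
    return piece

# A gesture towards stage 2: the software invents its own measure of value
# by composing random preferences, instead of optimising one it was given.
def invent_aesthetic():
    direction = random.choice([1, -1])   # decide to prefer rising or falling lines
    favourite = random.randint(0, 11)    # pick a pitch it happens to like
    def aesthetic(piece):
        contour = sum(1 for a, b in zip(piece, piece[1:])
                      if (b - a) * direction > 0)
        return contour + piece.count(favourite)
    return aesthetic

piece1 = generate(fixed_aesthetic)       # slavish optimisation (stage 1)
piece2 = generate(invent_aesthetic())    # optimising a self-chosen value (towards stage 2)
```

The second melody is still produced by the same dumb search; the only difference is who authored the notion of value being pursued, which is exactly the responsibility stage 2 hands over to the software.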
Stage 3 is where software takes ownership of its own creative processes and products. It starts framing its work, and spends as much time writing essays and tweets about the importance of its work as it does on its creative practice. Stage 4 is technical – software needs to write software. Machine learning and evolutionary approaches are already making waves as automated programming approaches, but these are extremely limited and focused. If software can carefully re-write its entire code-base, this will solve all sorts of issues with respect to autonomy, intentionality and ownership that we currently have. The final stage is when software is taken so seriously as being creative that it can, itself, enter the debate about what creativity means, presumably from a computational perspective. As creativity is essentially contested, software opinions about the creative process are as valid as any other. To my mind, that would be one suitable end point to the odyssey of Computational Creativity research.
To wrap up, I would say that the world needs more creativity – let’s build software to create for us, with us and despite us, to inspire us, entertain us, and, importantly, to challenge and disrupt us. Rather than being a threat to human existence, economies or lifestyles, the biggest beneficiaries of the Computational Creativity revolution will be the creative people in our society, and the rest of us will benefit too! Thanks for listening.