Featured

Creativity in Artificial Intelligence

Introduction

Creativity is something human, isn’t it? I mean, when you think of creativity your mind floats to some amazing piece of music, either with interesting beats you’d never have thought of or inspired combinations of instruments that just “work” together. Or maybe you think about art, in all its varieties, and its ability to convey original ideas in unique ways; maybe it’s the particular style of the artist, or the scene or idea they are depicting, that draws you in and makes you contemplate their motivations or thought processes. But there’s also poetry, and writing in general, which can convey something deep about the human condition. These most literally tell a story, although arguably all of the aforementioned art forms do, centred on an idea or collection of ideas – surely this requires imagination and creativity? All of these things are products of the human mind and its seemingly infinite capacity to explore ideas based on the way it processes and learns from the world around it. Could an AI ever recreate any of these? Surely not – all these ideas are incredibly complicated, and there is no “algorithm” for a successful piece of art or music or book; something about them relies on human connection and understanding. But if we strip away the fact that the author was a human, the art itself is still compelling (whether or not that is attributable to the artist actually being human is another debate) for a multitude of reasons. You can’t really explain what these reasons are – it’s something indescribable. People react differently; indeed, art in all of its forms is subjective. But the fact that it spawned from individual thought, or a moment of creativity, makes it special, and maybe that’s how we convince ourselves that an AI could never really do something creative. Like I said earlier, creativity is something human, isn’t it?

Project Magenta / Work by Google

Well, Google’s Project Magenta addresses this very question. The first tangible product of their work can be seen in this video: it learned to play the piano from just a few notes, created a beat and picked suitable instruments to ensemble together to create its first composition. Primitive? Yes. Creative? I’d argue also yes, though I can also see why this early-stage piece is not worthy of such an accolade just yet. It captures the right things, though. This AI learned from the world around it (albeit the only world it “knew” was musical instruments, how they sound and how they are put together – humans have a much wider range of stimuli to base art upon) and put these sounds together in a unique way to create something. I would advise having a play around with some of the demos on Project Magenta – they’re really interesting and a showcase of how far AI has come! I could talk more about this, but I want to move on to something else in this blog.

There’s also a nice article on a Deep Neural Network that was able to mimic the style of artists. A picture of their work can be seen below. These art pieces could also be considered creative, but I think the fact that they so closely mimic other work means they are anything but. Still, though, they in a sense “understand the style” of an artist and are able to recreate pictures “in their image”. It’s certainly very interesting, but I would argue that this doesn’t have that moment of inspiration or uniqueness that would qualify these pieces to be truly creative.

(Source: arXiv, “A Neural Algorithm of Artistic Style”, Gatys et al.)

AlphaGo

I want to use this blog post to talk about one moment in particular that piqued my interest in Artificial Intelligence and Machine Learning, since it’s a truly compelling story: the story of DeepMind’s AlphaGo. There is a rich story I want to convey here that directly relates to this blog post, but first I want to explain a little context…

Go is a game that was invented in China more than 2,500 years ago and is believed to be the oldest board game continuously played to the present day. There are tens of millions of players worldwide, and the game is very popular in East Asia. In many cultures, being good at Go is considered a mark of vast intelligence; in antiquity, Go was considered one of the four essential arts of the cultured aristocratic Chinese scholar. It remains incredibly popular today, with top players treated like celebrities in their respective countries. The rules are very simple, but the game is complex. The objective is to capture the most territory on the board: you and your opponent take turns placing black and white “stones” on the points of the board. Once placed on the board, stones may not be moved, but a stone is removed from the board when “captured” (completely surrounded by stones of the opposite colour). The game proceeds until neither player wishes to make another move; it has no set ending conditions beyond this. When a game concludes, the territory is counted along with captured stones and komi (points added to the score of the player with the white stones as compensation for playing second, normally either 6.5 or 7.5 depending on the rule-set being used) to determine the winner. Players may also choose to resign.

This sounds like a very simple game, and it is! But mastering it has long been the ultimate goal for artificially intelligent systems, purely because of the number of possible moves at any given point. There are more possible games of Go than there are atoms in our universe – a truly extraordinary amount (the number of legal board positions in Go has been estimated at around 2 × 10^170). Many expected we were decades away from this dream.
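To make the counting rule concrete, here’s a minimal sketch in Python. The territory and capture numbers passed in are made up for illustration; only the komi handling follows the scoring rule described above:

```python
def go_winner(black_territory, black_captures,
              white_territory, white_captures, komi=6.5):
    """Compare final scores under territory counting.

    White receives komi as compensation for playing second;
    the half-point means a draw is impossible.
    """
    black_score = black_territory + black_captures
    white_score = white_territory + white_captures + komi
    return "Black" if black_score > white_score else "White"
```

So a 6-point lead on the board isn’t always enough for Black: `go_winner(50, 0, 44, 0)` gives `"White"`, because 50 loses to 44 + 6.5.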
It was not the first time that AI systems had mastered games; previous major successes included systems like IBM’s Deep Blue, which famously beat Garry Kasparov at Chess in 1997 – a remarkable achievement, and the first time a computer had beaten a reigning world champion in a match. Go, however, is much, much harder. Deep Blue was the product of several experts devising clever algorithms to evaluate all the plausible “good” chess moves at a given position (“brute-force search”) and decide which was best. You can’t do this with Go; there are just too many possibilities.

(Figures: AlphaGo’s policy and value networks, and its Monte Carlo Tree Search.)
DeepMind revolutionised the way we thought about playing games by applying Deep Neural Networks to Reinforcement Learning. I don’t want to go too deeply into how it works, because that is a topic for a blog post in itself, but it roughly combines three main ideas: the policy network, the value network and the Monte Carlo Tree Search (images above). Given a board position, the policy network provides guidance about which action to choose – its output is a probability for each possible legal move, where higher probabilities correspond to moves with a higher chance of leading to a win. The value network provides an estimate of the value of the current state of the game, i.e. the probability that I will ultimately win, given the position I am in now. The tree search explores a subset of the possible game continuations from the current state; it typically looks around 50 moves into the future, but if it is having a hard time deciding it may look up to 200 moves ahead. But how does it understand the concept of value, get these win probabilities, and know which games to explore? This is the power of Machine Learning. AlphaGo watched thousands and thousands of strong amateur human games to learn initial estimates of these values. It took this as a basis and then began to play itself millions of times, gradually updating itself and pitting new versions of itself against older versions (if the newer version won most of the time, it would assume that version was “better” at playing Go, and update itself accordingly). It gradually began to learn which moves led to success and which were doomed to fail. It began to mimic the “human” intuition that all good Go players have. Often, when asked why they played a particular move, a Go player will simply answer “it felt right”. For humans, “it felt right” is grounded in having previously played thousands of games and a sense of what tends to work out well – it was this concept that the machine was trying to emulate mathematically.
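To give a feel for how a policy prior and a value estimate can steer a look-ahead search, here’s a deliberately tiny toy in Python. None of this is AlphaGo’s actual implementation – the “game” here is just moving along a number line towards a made-up target – but the shape of the idea is the same: the policy weights which continuations get explored, and values estimated at the leaves are backed up to score each candidate move.

```python
import math

TARGET = 10  # invented "winning position" for this toy game


def policy(state):
    """Toy policy network: assign a higher prior probability to
    moves that head towards the target."""
    moves = [-1, +1]
    scores = [math.exp(-abs((state + m) - TARGET)) for m in moves]
    total = sum(scores)
    return {m: s / total for m, s in zip(moves, scores)}


def value(state):
    """Toy value network: estimated 'win probability' of a state,
    peaking at 1.0 on the target itself."""
    return math.exp(-abs(state - TARGET))


def search(state, depth=3):
    """Look a few moves ahead, weighting continuations by the policy
    prior and backing up the leaf value estimates."""
    if depth == 0:
        return value(state)
    priors = policy(state)
    return sum(p * search(state + m, depth - 1) for m, p in priors.items())


def best_move(state):
    """Pick the move whose prior-weighted looked-ahead value is highest."""
    priors = policy(state)
    return max(priors, key=lambda m: priors[m] * search(state + m))
```

From state 0 the search prefers `+1` (towards the target); from state 20 it prefers `-1`. The real system replaces these hand-written functions with trained deep networks and a far more sophisticated Monte Carlo Tree Search, but the division of labour is the same.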

Now that you know what Go is, and how DeepMind tried to attack the problem, you’re either sat there going “wow, this is all really complicated” or “wow, that’s actually really smart”. Both are good responses, but I should note at this stage that this had never been done before – the ideas were brand new, and no one really expected they would work at this level. DeepMind knew that their algorithm worked against good amateurs, as it had played people online before, and they knew it could beat the European Go Champion (Fan Hui, who later became an advisor for the Go project at DeepMind after losing to the system 5–0), but they wanted to challenge the best of the best. Lee Sedol was widely regarded as the best Go player in the world at the time; arguably, he still is. To date, he has 49 gold medals, including 18 global gold medals, so he’s pretty good at Go. He is ranked 9d, the highest ranking a Go player can receive (by comparison, Fan Hui was 2d). As a result, no one thought DeepMind would win. Everyone expected Lee Sedol to win 5–0, or at least 4–1. Maybe it’s something to do with human intellectual arrogance (“how can we possibly be outsmarted by a machine?!”) – and yet that’s not what happened…

AlphaGo vs Lee Sedol

The match was streamed worldwide, with millions tuning in. It was played at the Four Seasons Hotel in Seoul over the course of a week. The winner was set to receive $1 million, but the match wasn’t about the money – it was man vs machine: who would win in the battle of intellect?

AlphaGo went on to win Game 1. Lee appeared to be in control throughout much of the match, but AlphaGo gained the advantage in the final 20 minutes and Lee resigned. Lee stated afterwards that he had made a critical error at the beginning of the match; he said that the computer’s strategy in the early part of the game was “excellent” and that the AI had made one unusual move that no human Go player would have made. This was an initial shock, especially since most people had tipped him to win 5–0, but Lee was just testing the water. Now he knew what he was up against, and most people at this stage still did not think the AI seriously stood a chance of winning the best of five. He went home to analyse his game and came back the next day.

In Game 2, something amazing happened. The players began placing stones on the board. As you can imagine, a generic strategy is to place your stones in quite open, corner-ish areas at the beginning of the game to establish some territory. This happened initially, but on Move 37 AlphaGo did something rather surprising: it played a very central move. Many commentators, themselves 9d professionals, “thought it was a mistake” and labelled it “a very strange move”. Lee Sedol found the move perplexing, taking 15 minutes to decide on a response – far longer than he usually takes – including a smoking break where he paced up and down to try and clear his head. This move, although quite early, led to AlphaGo taking control, and Lee Sedol eventually resigned. While commentators called later AlphaGo moves “brilliant”, Move 37 was described as “creative” and “unique” by Michael Redmond (a 9d professional Go player). Afterwards, in the post-game press conference, Lee Sedol was in shock. “Yesterday, I was surprised,” he said through an interpreter, referring to the previous day’s loss. “But today I am speechless. If you look at the way the game was played, I admit, it was a very clear loss on my part. From the very beginning of the game, there was not a moment in time when I felt that I was leading.” Maybe machines have the capacity for creativity, from a human point of view, after all?

After the second game, there were still strong doubts among players about whether AlphaGo was truly a strong player in the sense that a human might be. The third game was described as removing that doubt, with analysts commenting that “AlphaGo won so convincingly as to remove all doubt about its strength from the minds of experienced players. In fact, it played so well that it was almost scary…”. AlphaGo won this game when Lee Sedol resigned at move 176.

AlphaGo had now won three games out of five, meaning it had won the series. DeepMind had already said it would donate the $1 million to a variety of charities (including UNICEF). There was a sombre atmosphere in the hotel now, though. Human intellect had been defeated, in a sense. A game celebrated for its complexity, a game that rewarded creativity and understanding, had been “understood” by a machine so well that it was able to beat one of the smartest minds in the business.

Lee Sedol returned for the fourth match with nothing to play for in terms of the contest, but everything to play for in terms of pride. He had often said that he was carrying a huge burden, in the sense that he was “representing humanity”, and in post-match conferences he felt he was “letting humanity down” with his play. While it was an amazing achievement for DeepMind to have an algorithm that could perform so well at this game, it became incredibly difficult to watch: a man who had dedicated much of his life to this game was being dismantled by a machine that had only learned about Go a few months earlier. In the fourth match, however, Lee Sedol adopted a different, more aggressive style of play. He began to attack AlphaGo’s regions of the board rather than build up territory himself, and while AlphaGo was initially able to respond, a moment of brilliance happened at Move 78. After some thought, Lee played a particularly inspired white stone to develop a so-called “wedge play”, and AlphaGo responded poorly on Move 79. A few moves later, AlphaGo’s prediction of its own chances of winning had fallen off a cliff. It began to spiral, desperately trying moves that might work while knowing its chances of coming back were astronomically low, and Lee Sedol began to take control. Eventually, AlphaGo resigned, and Lee Sedol won the game. Among Go players, the move was dubbed “God’s Touch”, with commentators describing it as brilliant: “It took me by surprise. I’m sure that it would take most opponents by surprise. I think it took AlphaGo by surprise.”

In the post-match press conference, Lee looked much happier to have gained a small victory for the side of human intellect. In that game, towards the end, AlphaGo looked amateurish, floundering with moves that no professional would make in a desperate attempt to regain control. AlphaGo went on to win the fifth game, however, and the match ended 4–1 to AlphaGo.

What we learned

We learned that a machine was capable of mastering a very complex game decades earlier than expected, using new mathematical and computational ideas – but something particularly interesting came out of Moves 37 and 78. Fan Hui, who was at the event, had thought deeply about his own game since losing to AlphaGo months earlier. He played against AlphaGo many times afterwards, and himself described Move 37 as “beautiful”. David Silver, the lead researcher on the AlphaGo project, gave us insight into how the machine viewed the move. Based on all the information from the games it had played against itself and studied from humans, AlphaGo had calculated that there was a one-in-ten-thousand chance that a human would make that move. But when it drew on all the knowledge it had accumulated by playing itself so many times – and looked ahead into the future of the game – it decided to make the move anyway. And the move was genius. It rattled Lee Sedol and ultimately helped AlphaGo win the game. This itself could be seen as a beautiful moment of creativity: deciding to play a move that humans are extremely unlikely to make because it believes it will work (though, of course, it can’t know for certain). That’s what moments of creativity reduce to in the game of Go, isn’t it?

But what about Move 78? Why did AlphaGo start to crumble after “God’s Touch”? Was it really that good a move? Well, here’s some beautiful symmetry: Demis Hassabis, who oversees the DeepMind lab and was very much the face of AlphaGo during the match, told reporters that AlphaGo was unprepared for Lee Sedol’s Move 78 because it didn’t think a human would ever play it. Drawing on its months and months of training, it decided there was a one-in-ten-thousand chance of that happening – the exact same probability. It was a move so rare that AlphaGo was simply taken aback, unsure how to proceed, and it ultimately lost the game. It was Move 78 that allowed Lee Sedol to regain control of the match – a move that many people and professionals didn’t expect.

What I take from this is that creativity, in all its forms, can often make us reflect on our own world view. You look at a piece of art, or listen to some music, and people often describe it as “speaking to them”. Something creative can make you look at things differently, with a new appreciation. That’s why I think we have already seen creativity in AI. Fan Hui’s world view was altered as a result of losing to AlphaGo, and he became a better Go player, competitor and person because of it (quoting his own, deeply philosophical, words). We can learn from AI and use it as a powerful tool to solve complex problems. It forces us, too, to think about things differently – but it still has a long way to go to match the power of the human brain. AlphaGo learned creativity over millions of Go games, but Lee Sedol was able to learn from AlphaGo in just one move. He has said himself that he meticulously analyses all of his games, and no doubt he spent hours deliberating over these, but the human brain is truly remarkable in the sense that it can learn so much from such a small sample size. He was able to recreate a moment of genius (Move 37) in Move 78 after seeing it only once, and I find that somewhat beautiful.

Lee Sedol and Fan Hui both went on to win many matches, no doubt having learned from AlphaGo. In writing this (long!) blog post, I found the Wikipedia pages that document the match extremely useful, but I was inspired by the AlphaGo documentary, which can be found on Netflix or Prime Video (and presumably other streaming services). It also has a website: https://www.alphagomovie.com/. I highly recommend giving it a watch if you’re interested, as it tells this story very well.


Decision Trees – the first classifier?

Introduction

So, I have an internship at a company in Cambridge that I secured prior to my Masters at UCL, and it has taught me a lot of things. Don’t get me wrong, I’ve learned a lot about pragmatism and throwing together ad-hoc solutions that usually involve tweaking the Python code until it is no longer erroneous and subsequently trying to decode whether it’s actually doing what you think it is. But I’ve also learned a lot about Machine Learning here, as it’s given me time to explore all sorts of amazing algorithms: from Convolutional Neural Networks to regular Neural Networks, statistical machine learning techniques like Principal Component Analysis (PCA), Logistic Regression and Support Vector Machines (SVMs), as well as the humble Decision Tree and its extension into Random Forests. I want to cover all of these in time, but for now I want to focus on Decision Trees.

I should probably write something on “Why Machine Learning?” first though, right? I mean, it’s all the hype and it’s achieving some amazing results. I think that blog post will require some thought, though, as it’s hard to concisely explain why an entire discipline is worthwhile, and collating and selecting some of its most remarkable results will prove challenging! This is the problem with just starting a blog: you have so many swirling ideas that it becomes hard to figure out a road map for them. The plan, currently (and it may change!), is to motivate some of the key ML techniques at a very basic level. I’ll try to keep the maths to a minimum. Then, I will revisit these topics and add example code in Python to demonstrate what these things are doing with some actual data in a Jupyter Notebook.

Decision Trees

Let’s keep things simple and imagine we have two categories, and we want to distinguish which one a given input belongs to. An input will have several attributes that contribute towards the decision as to which category it is, and these are called, well, “features” (often collected into a “feature vector”, as they form a vector of inputs) or “predictors” in Machine Learning, since they help to predict the output. To make this concrete, let’s say you are deciding whether or not to take a new job. What factors are important to you? Well, let’s say you’re looking for a place that offers you a good salary, that’s quite close to you and that caters for your caffeine fix. Then “Salary”, “Distance from work” and “Does it offer free coffee?” become the features/predictors of your input, which is a given job. You may be lucky enough to have several jobs to choose from, and for each of these input jobs you create a “feature vector” with the respective values of each of the predictors {“Salary”, “Distance from work”, “Does it offer free coffee?”}. Your decision tree might look something like this:

(Figure: an example decision tree for the job-offer problem.)

A Decision Tree Classifier is a simple and widely used classification technique which bases the final verdict on a series of cascading questions. Given a set of features, you start at the top of a tree and get asked several True/False questions about the features of an input. The tree is followed down until the last node (the leaf) which classifies it.
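As a sketch of this cascading-question idea, here’s the job example as a hand-built tree in Python. The structure and thresholds below are made up for illustration (a learned tree would choose them from data, as discussed later):

```python
# Each internal node asks one True/False question about the features;
# each leaf is a final verdict. The thresholds here are invented.
tree = {
    "question": lambda job: job["salary"] >= 30000,
    "yes": {
        "question": lambda job: job["distance_km"] <= 10,
        "yes": {
            "question": lambda job: job["free_coffee"],
            "yes": "accept",
            "no": "reject",
        },
        "no": "reject",
    },
    "no": "reject",
}


def classify(node, job):
    """Follow the True/False questions down to a leaf."""
    if isinstance(node, str):  # reached a leaf: the classification
        return node
    branch = "yes" if node["question"](job) else "no"
    return classify(node[branch], job)
```

For example, `classify(tree, {"salary": 35000, "distance_km": 5, "free_coffee": True})` follows three “yes” branches and returns `"accept"`.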

Seems simple, right? It’s a common thing humans do in decision making, which leads me to believe that it was one of the first classifiers – or at least the most “human” – out of all the Machine Learning algorithms.

But what if our example is more complicated and we have many more features? Well, we can use the same algorithm. But what if we don’t necessarily want all the features to be used (e.g. we only want to use the first 5 features)? This is controlled by the “depth” of our tree. In our example the depth was 3, as the tree had three layers. But, possibly the most important question: how do we “choose” the most important features?

The way Decision Trees work is a type of Machine Learning called Supervised Learning. We give it loads of examples and it begins to learn structure in the data. In our example, we would give it lots of jobs that we would either accept or reject, as well as their corresponding “feature vectors”. Ideally we want an algorithm that, upon looking at these examples, can perfectly classify any new job as one we would accept or reject given past data. So, given all this data – all the features of the jobs and whether or not we accepted or rejected them – how do we decide which are the most important?

Well, in the example above we picked the features and the thresholds ourselves, but maybe it isn’t that simple. In real life, we often know lots of things that we think will be useful in predicting the classification, but it’s hard to decide ourselves what the order and the thresholds should be – not to mention how long it would take if we had hundreds of features to choose from! We want a machine to learn this for itself: to learn the best order of questions to ask and the thresholds to set. To do this, we need to turn to a bit of maths.

I won’t explain them in much depth in this blog, as I just want to cover the general ideas behind Decision Trees, but the most common metrics for splitting decision trees are minimising the Gini Impurity (used by CART decision trees) or maximising the Information Gain (used by ID3 and C4.5). The first is a measure of the following idea: given the subset of inputs at a node, suppose we picked one at random and randomly labelled it according to the class proportions (in our example, accept or reject the job) – how often would we be wrong with this random labelling? The second is based on the ideas of information and entropy; informally, it translates the question “how much more information does splitting on this feature give me?” into mathematical terms.
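For the curious, both criteria are only a few lines of Python. These are the standard textbook formulas applied to a list of class labels at a node:

```python
import math


def gini_impurity(labels):
    """Chance of mislabelling a randomly chosen item if we labelled it
    randomly according to the class proportions at this node."""
    n = len(labels)
    proportions = [labels.count(c) / n for c in set(labels)]
    return 1.0 - sum(p * p for p in proportions)


def entropy(labels):
    """Shannon entropy (in bits) of the class distribution at a node."""
    n = len(labels)
    proportions = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in proportions)


def information_gain(parent, children):
    """Entropy reduction from splitting `parent` into `children` subsets."""
    n = len(parent)
    return entropy(parent) - sum(len(c) / n * entropy(c) for c in children)
```

A 50/50 mix of “accept” and “reject” gives the worst-case Gini impurity of 0.5 and an entropy of 1 bit, while a pure node scores 0 on both; a split that separates the classes perfectly achieves the full information gain of 1 bit.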

Decision Trees – Pros and Cons

Pros

  • Very understandable in terms of an algorithm – “human interpretable” and simple to understand (hopefully!)
  • Quite quick to train these models
  • Have shown to have decent performance

Cons

  • They tend to “overfit the data”. This means that they can classify the data well, but can’t generalise to new examples. This is because the deeper a tree grows, the more “fine tuned” it gets and it isn’t really learning anything new but just “fitting to random noise” in the data
  • Performance is often better with other ML algorithms, such as Neural Networks and Random Forests (which we move onto next…)

Random Forests

A Random Forest is now easy to explain. We build up a “forest” out of many decision trees (normally hundreds of decision trees) and merge them together to get a more accurate and stable prediction. It is a so-called “ensemble” method, which uses “bagging”. All this means (roughly) is that we are using multiple models (in an “ensemble”) that we are averaging over.

A Random Forest adds additional randomness to the model while growing the trees. Instead of searching for the most important feature while splitting a node, it searches for the best feature among a random subset of features. This results in a wide diversity that generally results in a better model.
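The voting machinery is simple enough to sketch. In this toy version each “tree” is deliberately trivial – it just memorises the majority label of its bootstrap sample – whereas a real Random Forest would grow a proper decision tree on each sample (with the random feature subsets described above). The bagging-and-vote structure, though, is the same:

```python
import random
from collections import Counter


def bootstrap_sample(labels, rng):
    """Draw a sample of the same size, with replacement ("bagging")."""
    return [rng.choice(labels) for _ in labels]


def train_forest(labels, n_trees=100, seed=0):
    """Each 'tree' here just memorises the majority label of its own
    bootstrap sample; a real forest would fit a decision tree instead."""
    rng = random.Random(seed)
    return [Counter(bootstrap_sample(labels, rng)).most_common(1)[0][0]
            for _ in range(n_trees)]


def predict(forest):
    """Majority vote across all the trees in the ensemble."""
    return Counter(forest).most_common(1)[0][0]
```

Because each tree only sees a noisy resample of the data, individual trees disagree, but the averaged vote is far more stable – that’s the core intuition behind why bagging reduces overfitting.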

Random Forests have been shown to perform much better than a single decision tree, as they are far less prone to overfitting the data, making them more robust.

Conclusion

I hope this quick, informal overview of Decision Trees helped you to understand what they were and (a bit of) how they work. Also, I hope the idea behind a Random Forest is now also clear. I wanted to keep it quite light for a first post, and I think this is a good place to start since, in my opinion, it’s a good example of how Machine Learning algorithms are modelled on humans (and human behaviour) – we will see this more when looking at Neural Networks in particular.

I will try to update this blog when I remember. I like the idea of informally explaining the ideas behind a lot of the concepts to help build intuition for the algorithm at hand, but I know it is useful to eventually learn how to code such a thing. Luckily, Python’s machine learning libraries make it incredibly quick and easy to build such an algorithm, and I will definitely write a blog post linking to some Jupyter Notebooks in the future. I think, though, that I want to keep this blog specifically as a place to explore the ideas behind Machine Learning and AI. It’s important to have a summary of the key algorithms and ideas (and I’m happy to link to some suitably documented code that actually implements them in a Jupyter Notebook online – https://mybinder.org/ looks great for this!), but it would also be interesting to explore some of the ethical and moral questions surrounding AI, and these might be suitably intertwined with more technical blog posts to keep the variety.

Machine Learning is certainly interesting, and at the moment we’re kind of in a middle place. It evolved from these purely mathematical “statistical learning” techniques, but I think eventually some ideas from Machine Learning will help agents truly learn from the environment around them. In the future, but certainly within our lifetimes, we will see machines get much more intelligent using a combination of these techniques, many of which I am excited to blog about – but you have to start somewhere right?

There are many questions we need to think about when it comes to AI. It is great that, as a field, it is starting to come to the forefront – and rightly so! It has the power to be truly transformative, and harnessing data is the next big milestone for humans to overcome. These techniques have the potential to be used for bad, but also for overwhelming good. These algorithms have been shown to achieve superhuman performance on some tasks, and can be used to solve a lot of problems. I wanted to start with Random Forests for a reason: they surprised me. I never thought something so simple could have such amazing performance, and yet I’ve seen it myself at work. I hope you now understand the basic ideas behind them. I look forward to posting more blogs describing the key ideas behind algorithms, as well as ML and AI in general!

The Beginning

This is my first post on a journey towards discovering more about technology and Artificial Intelligence. I will focus first on Machine Learning, a subset of AI, and build from there.

I hope you find these posts interesting. They will primarily be on small projects I am working on or specifics that I have learned. Hopefully, we can both learn something.
