VIEWS 2018 - Robotic Editions
The First Sparks of Artificial Creativity
Get the secure, blockchain-verified certificate for your robotic edition here, and read on to discover more about this collaboration between myself and 3D, aka Robert Del Naja of Massive Attack.
A New Kind of Artistic Reproduction...
I have been working on and off for the past several months with Bristol-based artist 3D. We have been experimenting with applying GANs, CNNs, and many of my own AI algorithms to his artwork. I have long been working on encapsulating my own artistic process in code. 3D and I are now exploring whether we can capture parts of his artistic process as well.
It all started simply enough with looking at the patterns behind his images. We began creating mash-ups by using CNNs and Style Transfer to combine the textures and colors of his paintings with one another. It was interesting to see what worked and what didn't, and to figure out which parts of each painting's imagery became dominant as they were combined.
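For readers curious how a mash-up like this can be wired up, here is a minimal sketch using a publicly available pre-trained stylization network from TensorFlow Hub rather than our actual pipeline; the file names are placeholders standing in for two of 3D's paintings.

```python
# A minimal mash-up sketch, assuming a pre-trained arbitrary style-transfer
# model from TensorFlow Hub (Magenta's). "painting_a.jpg" and "painting_b.jpg"
# are placeholders for two of 3D's paintings.
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path, max_dim=512):
    """Read an image and return a float32 batch tensor scaled into [0, 1]."""
    img = tf.io.decode_image(tf.io.read_file(path), channels=3, expand_animations=False)
    img = tf.image.convert_image_dtype(img, tf.float32)
    scale = max_dim / tf.cast(tf.reduce_max(tf.shape(img)[:2]), tf.float32)
    new_size = tf.cast(tf.cast(tf.shape(img)[:2], tf.float32) * scale, tf.int32)
    return tf.image.resize(img, new_size)[tf.newaxis, :]   # add batch dimension

# Pre-trained fast style-transfer network
stylize = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")

content = load_image("painting_a.jpg")   # imagery to keep
style = load_image("painting_b.jpg")     # texture and color to borrow
mashup = stylize(tf.constant(content), tf.constant(style))[0]   # stylized image, shape (1, H, W, 3)
tf.keras.utils.save_img("mashup.png", mashup[0].numpy())
```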
As cool as these looked, we were both left underwhelmed by the symbolic and emotional aspects of the mash-ups. We felt the art needed to be more meaningful. All that was really being combined was color and texture, not symbolism or context. So we thought about it some more, and 3D came up with the idea of trying to use the CNNs to paint portraits of historical figures who made significant contributions to printmaking. A couple of people came to mind as we bounced ideas back and forth before 3D suggested Martin Luther. At first I thought he was talking about Martin Luther King Jr., which left me confused. But once I realized he was talking about the founder of Lutheranism and author of The 95 Theses, it made more sense. I'm not sure if 3D realized I was confused, but I think I played it off well and he didn't suspect anything. We tried applying CNNs to Martin Luther's famous portrait and got the following results.
The results were nothing all that great, but I made a couple of paintings from them to test things. I also tried to have my robots paint a couple of other new media figures, like Mark Zuckerberg.
Things still were not gelling though. Good paintings, but nothing great. Then 3D and I decided to try some different approaches.
I showed him some GANs where I was working on making my robots imagine faces. I showed him how a really neat part of the GAN occurred right at the beginning, when the faces emerge from nothing. I also showed him a 5x5 grid of faces that I have come to recognize as a common visualization when implementing GANs in tutorials. We got to talking about how, as a polyptych, it recalled a common Warhol trope, except that something was different. Warhol was all about mass-produced art and how cool repeated images looked next to one another. But these images were even cooler, because they were a new kind of mass production. They were mass-produced imagery made by neural networks, where each image was unique.
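The 5x5 grid itself is easy to reproduce: sample 25 latent vectors and tile the generator's outputs. This is only a sketch; `generator` and `latent_dim` are assumptions standing in for whatever trained face GAN is at hand, not our actual models.

```python
# A sketch of the 5x5 grid visualization, assuming an already-trained
# DCGAN-style face generator (`generator` and `latent_dim` are hypothetical).
import numpy as np
import matplotlib.pyplot as plt

def show_face_grid(generator, latent_dim=100, rows=5, cols=5, seed=None):
    """Sample rows*cols latent vectors and tile the generated faces."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((rows * cols, latent_dim)).astype("float32")
    faces = generator.predict(z)          # expected shape: (25, H, W, channels)
    faces = (faces + 1.0) / 2.0           # assumes a tanh output in [-1, 1]
    fig, axes = plt.subplots(rows, cols, figsize=(6, 6))
    for ax, face in zip(axes.flat, faces):
        ax.imshow(face.squeeze(), cmap="gray")
        ax.axis("off")
    fig.tight_layout()
    plt.show()
```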
I started having my GANs generate tens of thousands of faces. But I didn't want the faces in too much detail. I liked how they looked before they resolved into clear images. It reminded me of how my own imagination works when I try to picture things: foggy and nondescript.
From there I tested several of 3D's paintings to see which would best render the imagined faces. I did this with straightforward style transfer as well as something I am experimenting with called contextual style transfer, where I break the image up into contexts and apply the styles contextually. Many combinations and simulations were tried.
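As a rough illustration of that testing step (not the contextual variant, which is still experimental), here is a sketch that renders the same GAN face in several candidate styles so they can be compared side by side. The pre-trained TensorFlow Hub model and the file names are stand-ins, not the tools we actually used.

```python
# Render one nondescript GAN face in several candidate styles for comparison.
# Uses the same pre-trained TF Hub stylization model as the earlier sketch;
# the file names are placeholders, and the contextual variant is not shown.
import tensorflow as tf
import tensorflow_hub as hub

stylize = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")

def to_batch(path, size=(512, 512)):
    img = tf.io.decode_image(tf.io.read_file(path), channels=3, expand_animations=False)
    img = tf.image.convert_image_dtype(img, tf.float32)
    return tf.image.resize(img, size)[tf.newaxis, :]

gan_face = to_batch("gan_face.png")                     # placeholder: an unresolved GAN face
for name in ["beirut", "painting_b", "painting_c"]:     # placeholders for 3D's paintings
    candidate = stylize(tf.constant(gan_face), tf.constant(to_batch(f"{name}.jpg")))[0]
    tf.keras.utils.save_img(f"face_in_{name}.png", candidate[0].numpy())
```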
3D's Beirut (Column 2) was the most interesting, so I chose that one and put it and the GANs into the process that I have been developing over the past fifteen years. A simplified outline of the artificially creative process it became can be seen in the graphic below.
I began by having the GAN imagine faces. Then I ran the Viola-Jones face detection algorithm on the GAN's images until it detected a face. At that point, right when the general outlines of faces emerged, I stopped the GAN. Then I applied CNN Style Transfer to the nondescript faces to render them in the style of 3D's Beirut. Then my robots started painting. The brushstroke geometry was drawn from my historic database, which contains the strokes from thousands of paintings, including Picassos, Van Goghs, and my own work. Feedback loops refined the image as the robot tried to paint the faces on 11"x14" canvases. All told, dozens of AI algorithms, multiple deep learning neural networks, and feedback loops at all levels started painting original face after face after face.
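A hedged sketch of the face-detection gate in that pipeline is below. OpenCV ships the classic Viola-Jones Haar cascade, so the detection step can be reproduced directly; `generator_checkpoints` and `sample_face` are placeholders for the GAN side of the process, and the style transfer and robotic painting steps are only noted in comments.

```python
# A simplified sketch of the face-detection gate, assuming `generator_checkpoints`
# and `sample_face` stand in for the GAN side of the pipeline. OpenCV's bundled
# Haar cascade implements the Viola-Jones detector mentioned above.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def contains_face(image_rgb):
    """Return True if Viola-Jones finds at least one face in the image."""
    gray = cv2.cvtColor(image_rgb, cv2.COLOR_RGB2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

def first_emerging_face(generator_checkpoints, sample_face):
    """Walk through successive GAN checkpoints and return the first sample that
    registers as a face -- the moment the general outline of a face emerges."""
    for checkpoint in generator_checkpoints:
        candidate = sample_face(checkpoint)   # uint8 RGB array sampled from the GAN
        if contains_face(candidate):
            return candidate                  # stop the GAN here
    return None

# Downstream (not shown): CNN style transfer with Beirut as the style image,
# then brushstroke planning and the robot's feedback-loop painting.
```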
Thirty-two original faces later, it arrived at the following polyptych, which I am calling The First Sparks of Artificial Creativity. The series itself is something I have begun to refer to as Emerging Faces. I have already made an additional eighteen based on a style transfer from one of my own paintings, and I am working on approaches to make many more.
For Views 2018, I continued this experiment by creating 128 Robotic Editions from 16 of the faces in the original. Each was painted by one of my robots on a 16"x20" piece of Stonehenge paper using the same brushstroke algorithms that created the original. Even though these were replicas, each edition, as could be expected, came out unique due to variations in brushstroke planning and paint thickness. Below are all 128 editions in the series.
As can be seen, deep learning has made an entirely new kind of artistic reproduction possible. These are unique prints. It will be interesting to see how fellow artists take to and begin to adapt to the new possibilities presented by AI. 3D and I have already thought of dozens of variations ourselves, including swapping out Beirut for another one of his paintings, or training the face-generating GANs on faces found in his art. Beyond that, this can be done with any artist's work. There are really unlimited possibilities.
It has been a fascinating collaboration to date. I am looking forward to working with 3D to further develop many of the ideas we have discussed. Though this may have seemed like a lot, it only goes into his art on a very shallow level. We are always talking and wondering about how deep we can actually go.
Pindar
Thanks to Fantom & Sons and 3rd Space for support on this ongoing project.