Runway ML - Machine Learning for Creators

From MDD wiki

Maker-in-residence report

Author Gabriela Onu & Kent de Bruin
Date 03.12.2019


The MakersLab is a special place within the Master Digital Design. Here you get the chance to tinker with technologies and new ideas, all without the boundaries of clients, projects, and deadlines.

From the beginning we were very interested in using a new technology. But what? After going through the different options we decided to make a shortlist.

We are very much interested in new digital technologies, design, and understanding tech as a designer. Our shortlist was:

  • VR and AR as an emerging technology
  • Understanding AI and Machine Learning
  • Creating music through sensors with Arduino

After some ideation we decided that AI and Machine Learning was the most interesting topic. It gives room to tinker with something that already is, and will increasingly be, important in modern society.

AI for creatives


Artificial intelligence is the science of getting machines to learn, think, act, and perform tasks in ways that are normally associated with human intelligence.

Machine learning is the process by which machines train and learn from datasets of examples.

“We shape our tools and thereafter our tools shape us” — Marshall McLuhan

RunwayML Untitled.png

Explainable AI

Explainable AI is not an algorithm that explains itself; it is something achieved by product designers. A model should be transparent enough that the explanations people need become part of the design process.

To reach that, we need to understand the model better. This can be achieved by making the model visual: the layer added on top of the code becomes accessible and interesting to non-technical people.

In short, AI means training a computer to perform a certain task by learning from examples rather than from explicit instructions.

Visual Machine Learning Models

A tool for creators

Machine learning could be a fantastic tool for creators, but the problem is that integrating AI is a challenge if you can’t code.

So what exactly can we do with AI and machine learning? We, as designers, are certainly not skilled enough to build machine learning models ourselves, right?

Well think again.

The rise of no-code tools makes it possible to create technology without knowing the technical parts. This is how Bret Victor, in his talk "Inventing on Principle", envisioned it all along.

In this exploratory research we ask: what is possible with AI for designers? What can we do with AI if we don't understand code?

Available tools

After some research we found a couple of tools that use a visual approach to machine learning. The two that were most suited for this project were:

  • Dataiku - Visual Machine Learning And Modeling
  • Runway ML - Machine Learning for Creators

After watching some tutorials of both we decided to go with Runway ML. Although Runway ML is still in beta, the promise is huge: its makers claim the software lets you apply machine learning to many practical situations without needing to code, offering easy access to machine learning for everyone.

Visual Machine Learning and Modeling in Dataiku

Runway ML | Machine learning for creators.

Runway ML possibilities

What are its possibilities? Artists don't have to learn GitHub; they can play with pre-made models in Runway ML, covering anything from color adaptation and image recognition to style transfer and text editing. The range of possibilities is what makes it possible for us, as non-coders (but creatives), to do cool stuff.

Runway works fairly simply: you take an input, run it through one of the machine learning models, and get an output.
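For those who do want to peek under the hood: Runway also exposes each running model over a local HTTP endpoint, so the same input → model → output flow can be scripted. A minimal sketch in Python, assuming a model listening on localhost port 8000 with a /query route that accepts a base64-encoded image under the key "image" (the port and the input/output keys vary per model, so check the model's Network tab in Runway before relying on them):

```python
import base64
import json

RUNWAY_URL = "http://localhost:8000/query"  # default local endpoint; varies per model

def build_query(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Build the JSON body for a Runway ML local model query.

    The image is wrapped in a data URI, which is how Runway's HTTP
    interface typically expects binary inputs. The key name ("image")
    is an assumption and depends on the specific model.
    """
    encoded = base64.b64encode(image_bytes).decode("ascii")
    data_uri = "data:{};base64,{}".format(mime, encoded)
    return json.dumps({"image": data_uri})

# Example: a tiny fake byte string stands in for a real JPEG, just to
# show the payload shape. With a model running, you would POST this
# body to RUNWAY_URL and decode the image in the JSON response.
payload = build_query(b"\xff\xd8\xff")
print(payload[:50])
```

The point of the sketch is the shape of the exchange, not the exact keys: every Runway model follows the same pattern of "encode input, POST to the local endpoint, decode output", which is what makes the input → model → output mental model hold up even when you do drop down to code.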

RunwayML Untitled 1.png

Now that we understand how the Runway model works, we can play with the input and the different available models. The possibilities are practically endless, because each model adapts to the content you put into it: the same model can generate countless different outputs.

Ideation and research

To understand the program a bit better we played with different models.

This is Gabriele Ferri in a Picasso style:

Screen Shot 2019-12-03 at 17.47.16.png

Screen Shot 2019-12-03 at 18.02.24.png

This is a live picture in fast style transferred to a cubist style painting.

Screen Shot 2019-12-02 at 17.55.19.png

With this model you paint something, and let the AI model auto generate an old landscape painting:


Screen Shot 2019-12-03 at 18.07.59.png

This one combines two images into one picture; the model is called SPACE-FACE.




After playing around with various models to lay over the input we decided on the adaptive style transfer model. This model repaints images in styles of famous painters such as Van Gogh and Picasso.


We decided to test whether people would recognize the difference between a real painting and an AI-generated painting.


We applied the Adaptive-Style-Transfer model to different types of images, trying to come as close as possible to the original Van Gogh or Picasso styles. We chose the ones that we considered interesting and thought were the trickiest to tell apart.

While generating new paintings we found some pretty amazing results, such as a wireframe with a Picasso overlay.

Wireframe to Picasso style:

RunwayML WireframeToPicasso.png

Amateur painting to Picasso style:

RunwayML AmateurPaintingToPicasso.png

An original Van Gogh painting run through the Van Gogh style and then through the Picasso style.



To test whether people would see the difference between a real and an AI-generated painting, we set up a test with six images: three real, three fake.

RunwayML Test RealOrFake.png


We tested with 16 participants, and the majority chose the original paintings: 68% of guesses were correct and 32% were wrong. Some participants said they could tell the paintings apart by analyzing the lines and the blurriness of the images. Some recognized the famous paintings.
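The report gives the result as percentages; assuming each of the 16 participants judged all six images (the report does not state this explicitly, so treat the counts as a back-of-the-envelope estimate), the percentages translate into roughly these numbers of individual guesses:

```python
participants = 16
images = 6
total_guesses = participants * images  # 96 judgements in total, if everyone saw all six

correct_share = 0.68
correct = round(total_guesses * correct_share)  # roughly 65 correct guesses
wrong = total_guesses - correct                 # roughly 31 wrong guesses
print(total_guesses, correct, wrong)
```

In other words, even under generous assumptions about the setup, around a third of all individual judgements mistook a generated painting for a real one (or vice versa), which is the figure the reflection below builds on.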

RunwayML TestResult RealOrFake.png


Reflecting on our test

The test results were a bit surprising to us, because we thought that some of the newly generated paintings looked really authentic. Even though you can spot some defining differences by carefully looking at the images, almost a third of our participants guessed wrongly. We suspect that under other conditions, with images of equal quality and size, the results might have been somewhat different.

The workings of the lab

Reflecting on the workings of the lab

Although we didn't use any hardware in the outcome of this project, it was really cool to work from the lab for a week. Because Runway ML uses a lot of GPU power, we could only run the model generations on the lab computer.

The environment of the lab really helps you get into the right mindset to test new technologies. It's a real maker space.

Runway ML

Reflecting on how our design practice could benefit from the experience with this material/tool

We learned how to use Runway ML, which will be beneficial for understanding AI. In our minds AI was super abstract, but this platform showed us the possibilities of AI for creatives. We have also seen other types of models that we can use in future projects.

AI and the future

AI still has a long road to go. Although the promises are high, there are still limitations to what it can do. What is the role of art in the future if computers are able to auto-generate so much of it?

Generative art is endless, but what a model learns is based on a pre-existing dataset. This dataset is formed from human-generated art, so the model is limited to the styles it has been shown.