Computer-generated artistic text and images tend to resemble FrankenArt and FrankenFiction. Using machine learning, designers can build software programs that generate text and images of various kinds. The results vary from the sublime to the ridiculous to the positively weird. Text includes paragraphs written in the style of specific authors, snippets of “philosophical” sayings, or even scripts. Images include “paintings” in the style of famous artists, generated from a photo, or – vice versa – realistic photos of landscapes generated from rough drawings. Anything is possible these days. And in addition to text and images, an interactive Google Doodle this week allowed users to generate snippets of music that sound like something Johann Sebastian Bach would’ve composed. Some classicists in the music world were highly offended. However, these kinds of apps are here to stay. Here are a few of the more entertaining ones – Inspirobot, Literai, Botnik, DeepArt, and Google.
Inspirobot – Philosophy
Inspirobot is just that – an AI program (a robot) that generates random, often very weird and wrong inspirational sayings, quotes, idioms, etc. As the website says: “I am an artificial intelligence dedicated to generating unlimited amounts of unique inspirational quotes for endless enrichment of pointless human existence.”
Like typical machine learning programs, “…Inspirobot is a ‘generative’ system. It has learned the basic rules behind motivational quotes and how to ‘fill in the blanks’ — and, since the materials for filling in the blanks is unreasonably large, it will rarely repeat itself (and almost never do so exactly). On the other hand, by examining enough Inspirobot phrases, we can recognize the general structure that Inspirobot learned by its regularities and repetitions — both exact and generalized — and then tease out how it was taught.” (Black Box Files: How Does Inspirobot Do Its Thing? (Generative Systems), on Steemit.com, rtrvd. 2019-03-23)
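Inspirobot’s actual grammar and vocabulary are not public, but the “fill in the blanks” idea the quote describes can be sketched in a few lines of Python. The templates and word lists below are invented purely for illustration; the real system’s materials are, as noted, “unreasonably large”.

```python
import random

# Toy "fill in the blanks" generator in the spirit of the description above.
# Templates and word lists are invented for illustration only.
TEMPLATES = [
    "Before you can {verb} {noun}, you must become {noun2}.",
    "{noun} is just {noun2} that nobody can {verb}.",
    "Never {verb} what {noun} cannot {verb2}.",
]
WORDS = {
    "verb": ["embrace", "question", "forget", "monetize"],
    "verb2": ["destroy", "understand", "forgive"],
    "noun": ["the universe", "your fear", "success", "chaos"],
    "noun2": ["a stranger", "the void", "pure energy"],
}

def generate_quote(rng=random):
    """Pick a random template and fill each {slot} with a random word."""
    template = rng.choice(TEMPLATES)
    filled = {slot: rng.choice(options) for slot, options in WORDS.items()}
    return template.format(**filled)

print(generate_quote())
```

With so few templates and words, this toy repeats itself quickly – which is exactly why, as the quoted analysis notes, a large pool of fill-in material is what keeps a real generative system from sounding repetitive.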
Literai and Botnik – Fiction
“Ron didn’t even upset her little ingredients on the toilet, and a group of third-year girls last year. Highly bushy and then burst away from them quickly.”
– Harry Potter and the Cream Cake Of Dumbledore –
This Harry Potter book you’ve never heard of, “…along with The Adventures of Cyborg Holmes, South Park: Deeper & Harder, and Return of the Computer Jedi were all written by a downloadable neural network model called Literai. While the fiction doesn’t make a whole lot of sense, it provides a glimpse of another major direction in which artificial intelligence is headed. Admittedly, it also provides some great (read: ludicrous) entertainment along the way.” (Stephen Altrogge, Harry Potter books written by Artificial Intelligence are terrible, but they are important, November 29, 2016, rtrvd. 2019-03-23)
“He [Ron] saw Harry and immediately began to eat Hermione’s family. Ron’s Ron shirt was just as bad as Ron himself. ‘If you two can’t clump happily, I’m going to get aggressive,’ confessed the reasonable Hermione.”
– Harry Potter and the Portrait of What Looked Like a Large Pile of Ash –
Botnik, though it has produced TV scripts for Scrubs and Seinfeld, also produced a brief FrankenFiction chapter of something that looks like it came from a Harry Potter book. It’s called Harry Potter and the Portrait of What Looked Like a Large Pile of Ash, and you can read it all here. It’s a bit mad, but in a weird way, also…plausible?
“Botnik describes itself as ‘a human-machine entertainment studio and writing community’, with members including former Clickhole head writer Jamie Brew, and former New Yorker cartoon editor Bob Mankoff. The predictive text keyboard is its first writing tool – it works, Botnik explains, by analysing a body of text ‘to find combinations of words likely to follow each other’ based on the grammar and vocabulary used.” (Alison Flood, Bot tries to write Harry Potter book – and fails in magic ways, The Guardian UK, Dec. 13, 2017)
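The mechanism The Guardian describes – analysing a body of text “to find combinations of words likely to follow each other” – is, at its simplest, a word-pair (bigram) model. Here is a minimal sketch of that idea; the corpus is a made-up stand-in, not Botnik’s actual data or code.

```python
from collections import defaultdict, Counter

# Minimal sketch of a predictive-text keyboard of the kind Botnik describes:
# count which words follow which in a source text, then suggest the most
# likely next words. The corpus is a stand-in for illustration.
corpus = (
    "harry looked at ron and ron looked at hermione "
    "and hermione looked at the castle"
)

def build_model(text):
    """Map each word to a Counter of the words observed right after it."""
    model = defaultdict(Counter)
    words = text.split()
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def suggest(model, word, k=3):
    """Return up to k most likely next words after `word`."""
    return [w for w, _ in model[word].most_common(k)]

model = build_model(corpus)
print(suggest(model, "looked"))  # prints ['at']
```

A human picks from the suggestions at each step, which is why Botnik’s output reads as collaborative FrankenFiction rather than pure machine writing.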
DeepArt – Paintings
The mission of the creators of DeepArt is
“… to provide a novel artistic painting tool that allows everyone to create and share artistic pictures with just a few clicks. We are five researchers working at the interface of neuroscience and artificial intelligence, based at the University of Tübingen (Germany), École polytechnique fédérale de Lausanne (Switzerland) and Université catholique de Louvain (Belgium).”
You upload a photo that you took, choose a painting style – say, Pointillism or Pop Art – and the result is a photo that looks like it was painted in the style you chose. (Three examples of landscape photos uploaded by me, below.)
The point of these programs that generate “art” is, as DeepArt’s founders say, to help people get creative and share what they have made. The programs are not intended to replicate what the masters have done – no machine learning program ever can. (Refer to the explanation of how Inspirobot’s generative system works, above.)
“Machine learning is the process of teaching a computer to come up with its own answers by showing it a lot of examples, instead of giving it a set of rules to follow as is done in traditional computer programming”, according to Google.
A machine learning program works by analyzing the input data, finding patterns in it, and generating output according to those patterns. The more data it receives, the more material it has to “learn” from (which is why the interfaces invite users to “share” their creations). But once the fount of data about the original artworks is exhausted, all that remains is data about the copycat art of the imitators, which is then added to the database from which the computer “learns”.
Inspiration, creativity, context, emotion and unique personal experience – all of which shape what an artist produces – cannot be replicated by a machine, and therefore a machine, like an animal, cannot produce true art. That being said, Google’s Bach Doodle was a fun way to get closer to the art of the maestro.
Bach Google Doodle – Music
On March 21, 2019, to celebrate the life and works of Johann Sebastian Bach, Google released a fun Doodle that let people key in a few notes on-screen, which its AI program would then convert into something sounding like a Bach composition. First you put down a few notes for a melody, and then the program harmonizes it by adding Bach-style alto, tenor and bass lines, producing four-part chords. And voilà! – something that sounds like…something Bach-ish!
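To make concrete what “harmonizing” means here, the toy sketch below adds three lower voices to a melody at fixed consonant intervals. This is a deliberately naive rule-based illustration – the real Doodle uses Coconet, a neural network trained on Bach’s chorales, and nothing this crude; the melody and interval choices are my own for demonstration.

```python
# Toy illustration of four-part harmonization: for each melody note
# (a MIDI pitch number), add three lower voices at fixed intervals.
# The real Doodle's Coconet model works nothing like this.
MELODY = [72, 74, 76, 77, 76, 74, 72]  # C5 D5 E5 F5 E5 D5 C5 as MIDI numbers

def harmonize(melody):
    """Return four voices: the melody (soprano) plus alto, tenor and
    bass lines a major third, perfect fifth and octave below it."""
    return {
        "soprano": list(melody),
        "alto":  [note - 4 for note in melody],   # major third below
        "tenor": [note - 7 for note in melody],   # perfect fifth below
        "bass":  [note - 12 for note in melody],  # octave below
    }

for voice, notes in harmonize(MELODY).items():
    print(voice, notes)
```

Fixed parallel intervals like these are exactly what Bach avoided, which hints at why a learned model of his actual voice-leading is needed to sound convincing.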
Just a wee taste of how wonderful it is
For someone like me, who knows a handful of languages but cannot read or write musical notation, and who plays the piano by ear, this was an absolute wonder – a wow! moment that allowed me, musical dumbo that I am, to “make” something that sounded remarkably like classical music and, what’s more, keep it, in the form of a teeny MIDI file. And what’s even more, I could see which notes did what. At first I did not realize that you could put four notes into – what was it? – a “measure” on a “staff”, the thingie with lines. Then I figured out what the 4/4 at the beginning meant. It was a huge learning curve for me, and I played with it for most of the day.
Eventually, because I liked the sound of it, I made something with my various snippets of “Bach Chorale” in GarageBand on my Mac. Below is the result of my further fiddling. Hey, it’s not much, but it’s the first piece of music I have ever created and recorded! The first, ever!!
Soundfile: “Funky Bach 2” created by M. Bijman, using Google’s Bach Doodle and GarageBand.
It led me to understand another reason for these kinds of apps, like Literai and the Google Doodle: I got a flash of insight into how complicated, ingenious, effortful and wonderful real musical composition is. (And how genius Hip-Hop is! Breaks, stringing up loops and layering sound!! Yes, like Andy Cooper and The Allergies!) I realized that music is a language I would need to learn, and that it would perhaps be possible to learn it without being mathematically inclined. This app, like the others, proves that making art is something everyone can aspire to.
Like a poem that doesn’t quite do what you want it to do, and doesn’t sound quite right, your machine-produced music, images and narrative passages are just the starting point of something bigger, better and more beautiful.
How did Google do it?
This was the first-ever AI-powered Doodle, made in partnership with the Google Magenta and Google PAIR teams. The first step in developing the Doodle was to create a machine learning model to power it. The model was developed by Magenta Team AI Resident Anna Huang, who created Coconet: a versatile model that can be used in a wide range of musical tasks – such as harmonizing melodies or composing from scratch.
As the developers explain, Coconet was trained on 306 of Bach’s chorale harmonizations. Bach’s chorales always have four voices, each carrying its own melodic line while together creating a rich harmonic progression. This concise structure made them good training data for a machine learning model. Next, the Google PAIR team used TensorFlow.js to allow the machine learning to run entirely within the web browser, instead of on racks of servers, as machine learning traditionally does.