My Partner, Software: A Lecture on Generative, Automative Design

<!DOCTYPE html>


<title> My Partner, Software. A Lecture on Generative and Automative Design </title>

<p> Note to self: keep this map in mind; your goal is to show the coded world behind-the-scenes </p>

<img src="generate automate mindmap">

The digitalized version of the Generate, Automate mindmap.



<h2> Dear Andrew and Dene, </h2>


How are you doing? I hope my hologram finds you well. I apologise for the occasional glitch.

I sent this digital representation of myself to better discuss and explain what I learned in the lecture “Generate Automate” last week. So much easier than email; it’s amazing what technology can do, don’t you think?

First off, I thought it was very interesting and extremely thought-provoking, especially regarding the changing role of the designer. Like you said, Andrew, you went to school for graphic design and designed for print-based media, and now we go to school for the exact same subject, but more often than not we design for digital formats. But I’ll be exploring that topic later.

Feel free to click on the links and videos and to make counter-arguments, I’m really interested to see where this debate goes.

Now to start. In a bit of a poetic line: long day, lots to say, way too little time. But I’ll try not to get too philosophical. <b> Disclaimer though, no promises there. </b>

The lecture Generate, Automate, given by JP and Nela, was about algorithmic creativity and the changing role of the designer in the modern world. We discussed how the artist is changing from the original creator of all artwork to a curator who selects parts from computer-generated output and arranges them into a result, as well as discussing objective artwork, like Karl Gerstner’s Carro 64 and its unlimited expressions (shown below).


<img src="Carro 64">

Carro 64, an unlimited generative design piece, displayed in Gerstner’s book, Designing Programmes.


We also considered the combination of man and machine to get a designed result, which is what the majority of this post will be about.

But before we get into that, here’s a bit of history. Generative design was jumpstarted in the post-war era in Europe, when the role of design changed from decorative to hyper-functional. That is to say, because of the destruction caused by the war, design now had a task to rebuild the world, not to embellish it. This gave rise to algorithmic design: clean, mathematical, and simple.

The king of this was Swiss design, a movement which focused very heavily on functionalism; basically, solving a problem.


<img src="Brockmann Beethoven">

Beethoven by Josef Müller-Brockmann, a Swiss Style designer


A key figure in this was Karl Gerstner, who I mentioned above, and who created work which, like the above example, had infinite variations depending on how you chose to solve the design puzzle.

This movement and its results then collided with computers, coding, and machine-legible languages like that developed by Ada Lovelace, a brilliant mathematician in the 1800s, to combine into the works of generative art we know today. An example of this is Eye 94 magazine, which Paul McNeil and Hamish Muir generated with a set of rules in a computer-aided design formula that released thousands of different variations, based on colour, simplicity, and algorithms.


<img src="Eye 94">

Selected covers of Eye 94
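In a bit of code-speak, a rule system like this can be sketched as a small combinatorial programme: fix a few palettes of allowed choices, and every combination is one valid cover. The rules and numbers below are invented for illustration, not McNeil and Muir’s actual system:

```python
import itertools

# Hypothetical rule set, loosely in the spirit of the Eye 94 covers:
# each rule is a small palette of allowed choices, and every
# combination of choices is one valid cover.
RULES = {
    "background": ["cyan", "magenta", "yellow", "black"],
    "grid": [2, 3, 4, 6],          # columns in the layout grid
    "rotation": [0, 15, 30, 45],   # degrees applied to the masthead
}

def generate_covers(rules):
    """Enumerate every cover the rule set permits."""
    keys = sorted(rules)
    for combo in itertools.product(*(rules[k] for k in keys)):
        yield dict(zip(keys, combo))

covers = list(generate_covers(RULES))
print(len(covers))  # 4 * 4 * 4 = 64 distinct covers from three simple rules
```

Three small rules already give 64 covers; add a few more and you are quickly into the thousands of variations the lecture mentioned.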


So there you have it, a general history of computer based design. Now here we have the core question of this exploration.


<h2> In generative or instruction-based design, is the artist the designer or the algorithm? </h2>


The question may seem silly, but just hear me out, because this gets a lot more interesting. In the last century, design has taken a significant turn for the digital, and has also mutated into several machine-driven experimental results.

A key example of this is Pendulum Music. First listen to this, then read on:


<a href=;>


That was an experience, wasn’t it? Now, you might be wondering what on Earth a score for this piece might look like. Take a guess. Ready?


<a href=;>


While Steve Reich clearly gave the following set of instructions for the piece (right), he did not write a traditional score, like Beethoven’s Für Elise (left).


<img src="Für Elise">
<img src=“Pendulum Music”>


On further observation, while Reich’s instructions do dictate how to create the piece of music, they give no specification as to the location, specific duration, angle at which you must raise the microphone, and so on. This, unlike the comparatively restrictive Für Elise, gives the performers a creative freedom to produce wildly different results (for the sake of argument, we can say that Beethoven’s Für Elise is nearly always played identically). So now the argument becomes: who is the artist? Is it Reich for writing the instructions? How about the performers? The people who set up the room the way they thought it would work best? The audience? Physics? The very audio machines themselves for interacting to produce sound in the space between electricity and air?

This is the tricky bit about generative and automative design. We as humans, as artists, and as thinkers have gotten a promotion: we now take a managerial role and generate the means by which we can create, reaping the rewards and calling them ours. So basically, we create a set of instructions, an algorithm, to have other things do things for us, and edit out anything we don’t want.
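To make that concrete, here is a tiny sketch of the difference between a fixed score and an instruction piece. Everything in it, function names, parameter ranges, the lot, is hypothetical:

```python
import random

# A traditional score fixes every parameter; an instruction piece
# (in the spirit of Pendulum Music) fixes only the procedure and
# leaves the rest to the performers.

def perform_score():
    # A fixed score: every performance is note-for-note the same.
    return ["E5", "D#5", "E5", "D#5", "E5"]  # opening notes of Für Elise

def perform_instructions(seed):
    # The instructions: "suspend some microphones, release them,
    # let the feedback pulse until they come to rest." Unspecified
    # details (mic count, angle, duration) vary per performance.
    rng = random.Random(seed)
    mics = rng.randint(2, 4)
    return [{"mic": m,
             "angle": rng.uniform(10, 60),
             "duration_s": rng.uniform(300, 900)}
            for m in range(mics)]

print(perform_score())        # identical on every call
print(perform_instructions(1))  # one realisation of the instructions
print(perform_instructions(2))  # a different realisation
```

The seed stands in for everything the instructions leave open: the same instructions, realised differently every time.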

And this is where the question from the very beginning of this section comes in. Is the algorithm an artist? Or is the artist the person who wrote the algorithm in the first place?

Now, I fully expect an uproar. Of course the artist is the creator! I mean, even on a much older basis, if you see this painting by Raphael, you don’t think, “Oh, that is work done by the unnamed assistants and apprentices in the workshop”; you attribute it to the master.


<img src=“Raphael”>

Disputation of the Most Holy Sacrament by Raphael


I mean, he might not have painted a lot of it at all, but he built the initial skeleton of the art and curated its appearance. Besides, even if it is curated, the artist decides what will and won’t be seen, how it’s seen, and pretty much everything else about its presentation, and that takes creativity.

<b>And now honestly, use your head, without the creator, the algorithm, and therefore any art, would not even exist. </b>

Fine, I see your point. But for the sake of playing devil’s advocate, I’ll ask a few more things. In order of your counter:




  • Yes, they don’t, but should they? If the apprentices are not seen as the creators, are they at least collaborators? And isn’t that the same thing as a person and a programme working together?



  • I concede. True, curation does take a lot of creativity and also an artistic sense, but what about, say, books? We can all agree that the author writes; even if they get the idea from somewhere else, they are the sole creative driving force, and that manuscript is theirs. But what about the editor? The editor refines, they improve, they take out plot points and suggest new ones to their creator, the artist. Is the editor not the creator of the final product? Or at least the co-creator? Are you as the editor the creator of what the algorithm has produced? Especially if it has just created an output based off of an input it did not think of itself?



  • But what if it did?




For this third point, I’m going to take things a little further. First, let’s repeat that.

<b> And now honestly, use your head, without the creator, the algorithm, and therefore any art, would not even exist. </b>

But would it?

The reason I ask is because it does. Now, we can all agree that we, as humans, as artists, and as thinkers, learn, distinguish, and improve over time. Indeed, all people do; not only is it necessary for our survival, but it is also necessary so that our creativity does not stagnate. And because code does not think, we can safely say that it does not learn or grow in the classical, non-linear way we do. I mean, it’d be useful, sure, but does it?

I think Greenfield put this best in his book Radical Technologies: The Design of Everyday Life. He states:

“…like any of us an algorithm will ideally be equipped with the ability to learn from its experiences, generalise from what it’s encountered, and develop adaptive strategies in response. Over time, it will learn to recognise what distinguishes a good performance from an unacceptable one, and how to improve the odds of success next time out. It will refine its ability to detect what is salient in any given situation, and act on that insight. This process is called “machine learning.””

<i>Radical Technologies: The Design of Everyday Life, page 213</i>

And it is. As mentioned in the beginning, machine languages and machine learning are growing, improving, and absolutely real. Based on a complex machine network, a technology known as “perceptrons”, according to Greenfield, is able to essentially build neural pathways in a machine. This system allows machines, with selective human input, to strengthen certain connections and diminish others until their jobs seem obvious, exactly the same way we do when we see a maths problem or learn a new way to create art. Basically, we train machines to think for themselves. So yes, the artwork can exist without explicit human input.
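Since I brought up perceptrons, here is a minimal sketch of one actually learning; in this toy case it is trained to recognise logical AND from examples alone. The setup and numbers are my own illustrative choices, not Greenfield’s:

```python
# A minimal perceptron: connections (weights) are strengthened or
# diminished in response to an error signal, until the right answer
# "seems obvious" to the machine.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn two weights and a bias from (inputs, target) pairs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Strengthen or weaken each connection by the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Teach it logical AND purely from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Nobody hard-codes the AND rule anywhere; the machine arrives at it by nudging its own connections, which is the whole point.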

A key example of this is something known as Neural Style Transfer. Neural Style Transfer, or NST, is essentially a process by which a machine learns the style of an image – a particularly popular example is Van Gogh’s Starry Night – and applies the key points of that style to another image (Desai 2017).


<img src="NST Image Wave">

<img src=“NST Image Stars”>

Stylized portraits created using neural style transfer with The Great Wave off Kanagawa by Hokusai and Van Gogh’s Starry Night respectively.


So in essence, that algorithm then creates an artwork based on a given input through what it has learned to visualise in an image. 
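For the curious, the core trick behind NST can be hinted at in a few lines: style is commonly captured as a Gram matrix of correlations between feature channels, independent of where things sit in the image (Desai 2017 walks through the full pipeline). The tiny “feature maps” below are made-up stand-ins for what a real convolutional network would extract:

```python
# "Style" as feature-channel correlations: two images share a style
# when their Gram matrices are close, regardless of content layout.

def gram_matrix(features):
    """features: list of channels, each a flat list of activations."""
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in features]
            for fi in features]

def style_loss(f_style, f_generated):
    """Mean squared difference between the two Gram matrices."""
    g1, g2 = gram_matrix(f_style), gram_matrix(f_generated)
    n = len(g1) * len(g1[0])
    return sum((a - b) ** 2
               for r1, r2 in zip(g1, g2)
               for a, b in zip(r1, r2)) / n

swirls = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]  # pretend "Starry Night" features
match  = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
other  = [[0.0, 2.0, 0.0], [3.0, 0.0, 3.0]]

print(style_loss(swirls, match))      # 0.0: identical style
print(style_loss(swirls, other) > 0)  # True: different correlations
```

A real NST system then nudges the generated image, step by step, until this style loss (plus a content loss) gets small, which is exactly the kind of self-directed refinement Greenfield describes.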

As a final note to that devil’s advocate topic, however, it is arguable that our creativity would not exist without input either, whether that be books or movies or classes or interests or teachers. In that way, you are an algorithm too.

But that aside, what happens when the entire debate around authorship is removed through machine learning?


Markmaker™. That’s what happens.


Markmaker™ is an online platform that uses machine learning to calculate likes and dislikes, and automatically designs from a database dependent on these inputs. So say you are a small business that cannot afford a professional designer. You go on this site, type in your company name, and as you scroll down the line of Markmaker suggestions, you “heart” the ones you like and scroll past those you do not. The machine learns from your input and creates new logo suggestions which appeal to you, and, once you find one you like, you can customise it with the pencil tool.


<img src=“Markmaker Logo”>

An example logo which Markmaker™ created with my name. This is a logo I would seriously consider for myself, even as a design student.


Notice how I said you and the machine. There is no one else in the process. And you sure as hell didn’t code what you wanted it to spit out at you. 
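If you want to picture what the machine is doing with those hearts, a toy version of the loop might look like the sketch below. Markmaker’s real system is surely far more sophisticated, and every feature name and number here is invented:

```python
# A toy preference learner: each logo is a bag of features, and a
# score per feature is nudged up when you heart a logo and down
# when you scroll past it.

def update(prefs, logo_features, hearted, step=1.0):
    """Adjust the learned score of each feature in a shown logo."""
    for f in logo_features:
        prefs[f] = prefs.get(f, 0.0) + (step if hearted else -step)

def suggest(prefs, candidates):
    """Rank candidate logos by total learned preference score."""
    return max(candidates,
               key=lambda c: sum(prefs.get(f, 0.0) for f in c[1]))

prefs = {}
update(prefs, ["serif", "monochrome"], hearted=True)
update(prefs, ["script", "pastel"], hearted=False)
update(prefs, ["serif", "pastel"], hearted=True)

candidates = [("Logo A", ["script", "pastel"]),
              ("Logo B", ["serif", "monochrome"])]
print(suggest(prefs, candidates)[0])  # Logo B
```

After three reactions the machine already prefers serif, monochrome designs for you, with no designer anywhere in the loop.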

This is now one of two possible outcomes that occur when an algorithm and a human work together: collaboration and automation.

In software such as Illustrator, man and machine work collaboratively to achieve effects which may be difficult, impossible, or downright slow by hand. In many cases (as I can very much attest to with Illustrator), this is a highly beneficial relationship; the programme gives the artist tools with which to work and increases ease of use with preferences, presets, and other useful features.

In regards to Markmaker™, however, this becomes an automated process. The machine creates designs without any explicit input from a designer, let alone thumbnails, roughs, marker comps, or any kind of visible process whatsoever. In doing so, the website cuts weeks out of the design process, making it easier – and cheaper – for a company or individual to work one-on-one with a piece of software and bypass the hassle of emails, meetings, outsourcing, and creative minds which can speak back, unlike a machine which does exactly what it is told. The only input still entered into the machine by designers is the set of components with which it builds: colour codes, fonts, and symbols.

<b> So in effect, we as designers are being automated, changed in a world where humans take a managerial approach to creation. We are the builders of things that will replace our role as we know it. </b>

This is nothing new. The world, job markets, people: we have all been changing for as long as change has been occurring. Better technology, lack of need, changing production methods, the lot. We all still survive. We find new niches and new ways to grow and change with the shifts. Now the question is, how will we change with this one?

<b> And does this shift mean that creativity and vision, both uniquely human, will become automated with us? </b>





Desai, S. (2017) ‘Neural Artistic Style Transfer: A Comprehensive Look’, Medium, 14 September. Available at: (Accessed: 6 March 2019).

Gerstner, K. (2007) Designing Programmes. Lars Müller Publishers.

Greenfield, A. (2017) ‘Machine learning: The algorithmic production of knowledge’, in Radical Technologies: The Design of Everyday Life. London/New York: Verso, pp. 211-216.

Joshi, A. (2018). style_transfer_1. [image] Available at: [Accessed 6 Mar. 2019].

Joshi, A. (2018). style_transfer_2. [image] Available at: [Accessed 6 Mar. 2019].

Muir, H. and McNeil, P. (2017). Eye 94 Covers. [Digital].

Müller-Brockmann, J. (1955). Beethoven Poster.

Raphael (1510). Disputation of the Most Holy Sacrament. [Fresco] Vatican City: Stanza della Segnatura.

Reich, S. (1973). Pendulum Music Score.

van Beethoven, L. (1810). Für Elise Score.

2 thoughts on “My Partner, Software: A Lecture on Generative, Automative Design”

  1. Absolutely fabulous post Karoline, I would like to share with the rest of the group, it’s experimental, shows deep engagement with the subject, the mind map is fantastic, this is exactly what I want students to be doing with their blogs, synthesising the learning in the session. Well done, a real credit to your dedication and approach to learning.


  2. Wonderful – the entire blog reflects on and uses ideas in a way that personalises and applies lectures and secondary sources to a range of examples. Remind me to chat about Harvard. Very impressive.

