For the new study, the laboratory created a modified version of a system known as a Generative Adversarial Network (GAN), in which two deep neural networks are trained to replicate a range of existing painting styles, such as Baroque, Pointillism, Color Field, Rococo, Fauvism, and Abstract Expressionism. One network generates images based on what it has learned, and the other network judges the resulting works.
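As a rough illustration of that two-network setup (not the paper's architecture: the fully connected layers, flattened 64×64 images, and optimizer settings below are assumptions for the sake of a short example), a standard GAN training step can be sketched like this:

```python
# Minimal GAN training sketch. real_images is assumed to be a (batch, img_dim)
# tensor of flattened RGB images scaled to [-1, 1].
import torch
import torch.nn as nn

latent_dim = 100
img_dim = 64 * 64 * 3  # flattened 64x64 RGB image (illustrative size)

# Generator: maps random noise to an image.
G = nn.Sequential(
    nn.Linear(latent_dim, 512), nn.ReLU(),
    nn.Linear(512, img_dim), nn.Tanh(),
)

# Discriminator: judges whether an image is real (from the dataset) or generated.
D = nn.Sequential(
    nn.Linear(img_dim, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real paintings from generated ones.
    noise = torch.randn(batch, latent_dim)
    fake_images = G(noise).detach()
    loss_D = bce(D(real_images), real_labels) + bce(D(fake_images), fake_labels)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # 2) Train the generator to produce images the discriminator accepts as real.
    noise = torch.randn(batch, latent_dim)
    loss_G = bce(D(G(noise)), real_labels)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```

In the standard setup the two losses pull against each other: the generator improves only by producing images the discriminator can no longer reject.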

The new, modified version, the Creative Adversarial Network (CAN), is designed to generate work that does not fit the known artistic styles, thus “maximizing deviation from established styles and minimizing deviation from art distribution,” according to the paper. For training, the researchers used 81,449 paintings by 1,119 artists from the publicly available WikiArt data set.

We propose a new system for generating art. The system generates art by looking at art and learning about style, and becomes creative by increasing the arousal potential of the generated art through deviation from the learned styles. We build on Generative Adversarial Networks (GAN), which have shown the ability to learn to generate novel images simulating a given distribution. We argue that such networks are limited in their ability to generate creative products in their original design. We propose modifications to the GAN objective to make it capable of generating creative art by maximizing deviation from established styles and minimizing deviation from art distribution. We conducted experiments to compare the response of human subjects to the generated art with their response to art created by artists. The results show that human subjects could not distinguish art generated by the proposed system from art created by contemporary artists and shown in top art fairs. Human subjects even rated the generated images higher on various scales.

  • This paper is an extended version of a paper published at the Eighth International Conference on Computational Creativity (ICCC), held in Atlanta, GA, June 20–22, 2017.
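To make the objective quoted above more concrete, the sketch below shows one way such a loss could be expressed: a critic with a real/fake head and a style-classification head, and a generator pushed both to be accepted as art and to leave the style classifier maximally uncertain. The two-head critic, the uniform-target KL term, the weighting factor, and all layer sizes are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of a CAN-style generator objective:
# (a) look like art to the critic, (b) resist assignment to any learned style.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_styles = 25      # e.g. Baroque, Pointillism, Color Field, ... (assumed count)
latent_dim = 100
img_dim = 64 * 64 * 3

G = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                  nn.Linear(512, img_dim), nn.Tanh())

class Critic(nn.Module):
    """Shared trunk with two heads: a real/fake score and style logits."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(img_dim, 512), nn.LeakyReLU(0.2))
        self.real_head = nn.Linear(512, 1)             # is this art from the dataset?
        self.style_head = nn.Linear(512, num_styles)   # which learned style is it?
    def forward(self, x):
        h = self.trunk(x)
        return self.real_head(h), self.style_head(h)

D = Critic()

def generator_loss(z, ambiguity_weight=1.0):
    fake = G(z)
    real_logit, style_logits = D(fake)

    # (a) Minimize deviation from the art distribution: fool the real/fake head.
    adv = F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit))

    # (b) Maximize deviation from established styles: push the style posterior
    # toward the uniform distribution, so no known style explains the image well.
    log_probs = F.log_softmax(style_logits, dim=1)
    uniform = torch.full_like(log_probs, 1.0 / num_styles)
    style_ambiguity = F.kl_div(log_probs, uniform, reduction="batchmean")

    return adv + ambiguity_weight * style_ambiguity
```

In the paper's setup the critic is also trained to classify the styles of the real paintings, which is what gives the ambiguity term its force; that discriminator-side step is omitted from this sketch.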