Defining play for artificial intelligence

If humans and animals are both able to play and be considered creative, then could we apply both concepts to artificial intelligence as well? Both concepts are largely circumstantial: one knows them when one sees them (Henricks, 2008) more easily than one can logically derive them (Graham, 2010). By better framing them in a non-human context, we could develop new methods and algorithms to derive them computationally. Enabling an AI to derive meaning from human communication or natural events as well as an average human can would support the theory of strong AI (Brill, Florian, Henderson, & Mangu, 1998).

That is a long-term goal. For this project, my team developed Historie, a weak AI that uses natural language processing to tap into the collective consciousness of a place’s visitors and residents to tell stories. In this paper, I describe how this intelligent agent of constructive linguistic play could meet the criteria of some conceptions of play and creativity, and how it could be further developed to critique those concepts from a non-human perspective (Brill, Florian, Henderson, & Mangu, 1998).

Background

Creativity is a hallmark of human intelligence, yet even a cursory literature review shows that we have not defined the characteristic suitably for application to artificial intelligence (AI). Creative AI could have huge benefits for humanity. From a utilitarian perspective, an AI-enabled robot could perform more than just rote mechanical tasks. More abstractly, applying creativity to an inorganic agent could lead to a shift in our metaphysics. Other humanistic qualities, like emotional and social awareness, have been applied to AI, as in the robots Leonardo (Thomaz, Berlin, & Breazeal, 2005) and Simon (Lee, Chao, Bobick, & Thomaz, 2012), yet creativity has remained elusive. Perhaps this is because such attempts have been in the same vein as Alan Turing’s conception of a “computing machine” that can duplicate any symbolic manipulation conceivable by a person (Turing, 1936). However, such a machine could arguably be simulating, rather than duplicating, a human’s efforts, isomorphic in only some respects (Horst, 2011). A human and a machine may both have some capacity to create and play, but they are likely too different to compare as closely as they have been in the literature reviewed. In humans and some animals, though, the capacity to be creative is often conflated with the capacity to play. Perhaps by demonstrating that an AI can be playful, we could determine sufficient criteria for AI creativity.

The most common criterion for creativity is the novel application of a concept to an appropriate end (Erden, 2010). Something at play does the same but either has no end or is an end in itself. For example, Dadaism reflects artists at play, where their autotelic creativity is expressed as art that is itself meaningless. A shareable piece of Dadaist art is the state of things after play: a retrospective artifact. Animals have used similar forms of play to learn to survive. Tiger cubs emulate their mother’s hunting techniques by playfully pouncing on each other, but they do not actually try to kill their siblings. The cubs learn what happens when they conform exactly to their mother’s efficient model of hunting, and they are aware of the consequences such hunting would have for their siblings. Therefore, some animals decide to self-handicap for the sake of continued play (Gray, 2011). As they learn the proper motions, they apply them out of context, perhaps even in new ways, in play with each other, demonstrating a common criterion of creativity. Play could thus be considered undirected creativity, bounded by what the agent is capable of and what is possible in its environment.

AI can also be given a model of action by teaching it rules and contingency detection (Lee et al., 2012). By allowing it to apply that knowledge to some non-deterministic or chaotic input without giving it a goal, would the AI be at play? If it has memory and access to a constant source of suggestions—like those tiger cubs getting better at pinning their siblings after watching their mother catch her prey every day—it would effectively learn without necessarily conforming to some determined model. If this is indeed AI at play, then is the process creative, resulting in an artifact like a Dadaist artwork?

Historie creates an original artifact by amalgamating people’s feelings and thoughts submitted to it. A similar work is “Before I die…” by Candy Chang, which builds a tapestry of people’s life aspirations. In both cases, the artwork gains body and meaning from people’s contributions, though what Chang created is more of a canvas or container. Chang’s only inputs to the artwork are the rules (to finish the sentence “Before I die…” or something similar) and the space (a chalkboard printed with “Before I die…” and lines on which to write the responses). The piece transforms as many inputs converge on one final artifact: a filled-out board of emotion captured over time. A similar project, “We Feel Fine” by Sep Kamvar and Jonathan Harris, takes advantage of digital media, in contrast to Chang’s board and chalk. It is a program that mines blogs, microblogs, and social networking sites for phrases that include “I feel” or “I am feeling” and stores them for users’ manipulation. Users can play with how these phrases are visualized and generate their own derivative artifacts. This uses many inputs to create many possible individual outputs, depending on what the user wants out of the artifact.

Both of these cases preserve the entire original input in the whole of the final artifact. Historie, on the other hand, muddles people’s individual contributions to make an original output, though it takes them in all the same. The original formulation of the many inputs, as individual submissions, is not preserved in the final artifact of the system; rather, their words and structure are played with to yield an original, creative output. The final artifact could not be created the same way with a different input, so all inputs are equally necessary to yield the final effect, similar to a derivative artifact of “We Feel Fine” or a finished board of “Before I die…”

Method

The way we created Historie allows certain degrees of freedom. An agent of creativity and play must have some degree of free will, or other metaphysical freedom (Caillois, 2001), in order not to be strictly formulaic or a means to an end. Although these conditions seem stacked against an argument for creativity in strong AI, they simply mean that an AI must at some point use more than its internal programming. Otherwise, any end is deterministic, and the process is not creative at all but is simply going through the motions. The key to this is user input as variations on a theme, like a well-maintained exquisite corpse. That theme, for example, could be a location like a park, where we shot our prototype video.

Let’s say we want to generate a story about today at Copenhagen’s Botanical Garden. In our demonstration video, we set up Historie as a cube-shaped device that would attract attention, provide instructions on how to use it, and physically react to people’s submissions. To set up an instance of Historie, we maintainers would make a phrasal template, similar to the word game Mad Libs and the earlier Revelations of My Friends (Collins, 2007). It could be arranged like this:

My name is Historie@Park, the conscious entity that lives in this park. Today, I wanted to [Why are you in this park today?], but the weather was just too [How do you feel about the weather today?]. So I decided to [What is the person nearest to you doing?] with [Who are you with today?] instead. All things considered, today was [How do you feel today?].
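A template like the one above can be represented programmatically. The sketch below is illustrative only (the placeholder keys, `PROMPTS` mapping, and `render` function are hypothetical names, not Historie’s actual implementation): each bracketed question becomes a named slot, and one generated answer per slot is inserted into the static text.

```python
# Minimal sketch of a phrasal template: static text with named slots,
# each slot tied to the question Historie poses to passers-by.
TEMPLATE = (
    "My name is Historie@Park, the conscious entity that lives in this park. "
    "Today, I wanted to {why}, but the weather was just too {weather}. "
    "So I decided to {nearby} with {company} instead. "
    "All things considered, today was {mood}."
)

# The question the device would pose for each slot.
PROMPTS = {
    "why": "Why are you in this park today?",
    "weather": "How do you feel about the weather today?",
    "nearby": "What is the person nearest to you doing?",
    "company": "Who are you with today?",
    "mood": "How do you feel today?",
}

def render(template, answers):
    """Insert one generated answer per slot into the template."""
    return template.format(**answers)

# Example answers standing in for sentences generated from responses.
story = render(TEMPLATE, {
    "why": "read in the sun",
    "weather": "grey",
    "nearby": "fly a kite",
    "company": "my dog",
    "mood": "calm",
})
print(story)
```

In the running system, each slot’s answer would come not from a single person but from a sentence generated over everyone’s stored responses to that slot’s question.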

The device would pose each of those questions (e.g., [How do you feel today?]) to passers-by, who are unaware of the whole template, and store their responses. In the demonstration video, we show someone responding with their mobile phone and the cube lighting up when it receives the response. Historie would add all of these responses to one database per question. A random initial choice of words would be followed by responders’ words chosen by an n-gram Markov model, using words as units (Shannon, 1951). The model’s order should be low enough to achieve a desirable degree of originality and fun but high enough to maintain grammar, ensuring that what Historie generates is cogent; even so, this method did sometimes yield non sequiturs or outright grammatical errors. Every time a response is logged, a sentence would be regenerated and inserted into the phrasal template to yield a new, original story.
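The generation step can be sketched as follows. This is a simplified order-1 word model with a toy set of responses, not Historie’s exact implementation: each word maps to the words responders used after it, and a walk from a seed word strings responders’ words together until no continuation exists.

```python
import random
from collections import defaultdict

def train_bigram_model(responses):
    """Map each word to the words observed to follow it across all responses."""
    model = defaultdict(list)
    for response in responses:
        words = response.split()
        for current, following in zip(words, words[1:]):
            model[current].append(following)
    return model

def generate(model, seed_word, max_words=12, rng=random):
    """Walk the chain from a seed word; stop when no continuation exists."""
    words = [seed_word]
    while len(words) < max_words:
        followers = model.get(words[-1])
        if not followers:  # no observed continuation: fall silent
            break
        words.append(rng.choice(followers))
    return " ".join(words)

# Toy responses standing in for one question's database.
responses = [
    "walk my dog around the lake",
    "walk in the sun with friends",
    "read in the shade",
]
model = train_bigram_model(responses)
print(generate(model, "walk"))
```

Because "the" was followed by "lake", "sun", and "shade" in different responses, the walk can splice submissions together into a sentence no single person wrote, which is the muddling described above.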

Discussion

The grammar acts as basic rules, and the ever-variable word choice and structure, coupled with a small programmed random variation, provide degrees of freedom. By giving Historie these rules and freedom, along with an ideal sentence model (people’s responses), we are effectively letting the machine play to generate a creative story that captures the zeitgeist of a place in time. This idea could be applied anywhere, or without a fixed location, with different templates or perhaps none at all. For example, if participants are asked to write 300 words about what they did that day, our algorithm could take those responses as a single corpus and generate a 300-word story of everyone’s day.

Considering this is a largely linguistic, digital medium for play, it may be worth comparing it to a massively multiplayer online role-playing game (MMORPG), but inverted. That is, the AI in a digital “world” role-plays as a human in our real world, just as a real-world human would play a fantasy character in the digital world of a typical MMORPG. For example, the AI could be said to “pretend” to be human by using words and grammar in a plausibly ordinary way. It is also functionless in that the AI has no actual intention, at any point in sentence generation, of producing a particular sentence; it simply plays by the rules in the environment it is given until it is time to stop. With this framing, many of Graham’s criteria of play in virtual worlds (Graham, 2010) can be met, with the notable exception of play being a voluntary activity. Historie is programmed to do these things and is always triggered by someone responding to its questions. It does not have a choice of whether to react to these responses; however, it does have one programmed condition that decides whether to provide a response. An n-gram model relies on a given gram’s occurrences and neighbors in a corpus to determine any preceding or following grams. That is, given the bigram “will you,” the algorithm would search for occurrences of the gram and perhaps offer “please go” or “help me” as grams that follow it. It may also provide nothing if “will you” always occurs at the end of a sentence, or if other factors lead to no other grams being offered. Cases like this act as a sort of threshold for letting Historie decide whether to respond. While this is not quite voluntary, as it is based on only one relatively simple rule, it is a topic for further discussion of how voluntary actions qualify as play.
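The “will you” example can be made concrete. In this sketch (a toy corpus and hypothetical function names, not Historie’s code), the model keys each bigram to the bigrams observed after it; when a bigram only ever ends a sentence, the lookup returns nothing, and Historie stays silent:

```python
from collections import defaultdict

def train(sentences, n=2):
    """Map each n-gram (tuple of words) to the n-grams observed after it."""
    model = defaultdict(list)
    for sentence in sentences:
        words = sentence.split()
        for i in range(len(words) - 2 * n + 1):
            key = tuple(words[i:i + n])
            model[key].append(tuple(words[i + n:i + 2 * n]))
    return model

corpus = [
    "will you please go home",
    "will you help me today",
    "I hope you will come",  # "will come" only occurs at sentence end
]
model = train(corpus, n=2)

print(model[("will", "you")])            # bigrams seen to follow "will you"
print(model.get(("will", "come"), []))   # empty: no continuation, no response
```

The empty case is the threshold discussed above: the single rule by which Historie “decides” whether to respond at all.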
A digital agent of creativity and play is itself a notion that could prove rather innovative for defining play, though it is not too far a leap if playfulness itself is possible in digital media.

Another important feature of play is fun: emotional or physical satisfaction or excitement experienced while at play. Any agent of play would therefore need senses and the ability to interpret them as fun. For the sake of insight into how an AI might play, and to better determine the hazily defined terms “play” and “playfulness,” let us apply this, too, to an AI. To do this, we could consider the computational theory of mind (CTM) as a bridge for the concepts of play to cross from a strictly human to a strictly AI context. In CTM, especially as reviewed by Horst, the human mind is compared quite literally to a digital computer. Since play often entails a desire to continue it, and even getting lost in the moment, it can be assumed that the subject at play is aware of their state and believes it to be fun. CTM treats such beliefs as functional relationships between mental representations, such as happiness and bouncy castles. Put simply, a person concurrently experiencing those two representations may relate them with the content of “having fun” or being playful. Again, CTM does not necessarily use Turing’s direct correlation of human and machine computation; however, it lends credence to the idea that strictly symbolic representations of worldly concepts could be related similarly in both human minds and AI. The computational linguistics used in Historie’s programming touch on rules and conclusions similar to those Chomsky and Fodor reached about children’s language learning (Horst, 2011).

Conclusion

As the computational theory of mind is further tested on both human and AI subjects, we may find that processes considered intrinsically human, such as play and creativity, are not necessarily unique to humans. Should this be the case, computational models of creativity could be more rigorously tested, and the ephemerality of play better seized and observed to quash its “silliness,” as Sutton-Smith put it (Sutton-Smith, 2009). Understanding play from a more computational, cognitive perspective could help delineate a strong AI from the strictly human notion through which play is most often described. Our project Historie is a shallow foray into what is classically considered play, challenging both what is capable of playing and how a process can be considered creative.

Works Cited

Brill, E., Florian, R., Henderson, J. C., & Mangu, L. (1998). Beyond n-grams: Can linguistic sophistication improve language modeling? In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics-Volume 1 (pp. 186–190). Association for Computational Linguistics.

Caillois, R. (2001). Man, play, and games. University of Illinois Press.

Chang, C. (2013). Before I Die. St. Martin’s Griffin.

Collins, P. (2007, February 24). “Revelations” About a Precursor to “Mad Libs.” Retrieved from http://www.npr.org/templates/story/story.php?storyId=7584979

Erden, Y. J. (2010). Could a Created Being Ever Be Creative? Some Philosophical Remarks on Creativity and AI Development. Minds and Machines: Journal for Artificial Intelligence, Philosophy, and Cognitive Science, 20(3), 349–351.

Graham, K. L. (2010). Virtual Playgrounds?, 109–116.

Gray, P. (2011). The Decline of Play and the Rise of Psychopathology in Children and Adolescents. American Journal of Play, 3(4), 456.

Henricks, T. (2008). The Nature of Play: An Overview. American Journal of Play, 161.

Horst, S. (2011). The Computational Theory of Mind. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2011.). Retrieved from http://plato.stanford.edu/archives/spr2011/entries/computational-mind/

Kamvar, S. D., & Harris, J. (2011). We feel fine and searching the emotional web. In Proceedings of the fourth ACM international conference on Web search and data mining (pp. 117–126). ACM.

Lee, J., Chao, C., Bobick, A. F., & Thomaz, A. L. (2012). Multi-cue contingency detection. International Journal of Social Robotics, 4(2), 147–161.

Shannon, C. E. (1951). Prediction and entropy of printed English. Bell System Technical Journal, 30(1), 50–64.

Sutton-Smith, B. (2009). The Ambiguity of Play. Harvard University Press.

Thomaz, A. L., Berlin, M., & Breazeal, C. (2005). An embodied computational model of social referencing. In Robot and Human Interactive Communication, 2005. ROMAN 2005. IEEE International Workshop on (pp. 591–598). IEEE.

Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42(1), 230–265.