Paper Summaries

This year, we’ve brought back more EXAG Paper Summaries – short descriptions written about each paper – so you can find out what papers might interest you or what talks you might want to tune into later this week!

Click on the links below to see what’s in this year’s workshop! (The list is in presentation order.)

We’ll also be uploading full PDFs for each paper shortly!

Day One

Day Two

~*~

Marahel: A Language for Constructive Level Generation

Level generation is an ongoing problem in games AI research, exacerbated by the fact that it is a highly domain-specific one – levels designed for a given game are not necessarily compatible with other games. Enter Marahel: a little language for describing level generators for 2D spaces. This paper proposes Marahel, describes its current syntax, and shows off some examples of the language in action.

In Marahel, generators are constructed from five different parts: Metadata, Entities, Neighborhoods, Regions, and Explorers; the first four set up information for the Explorers, the actual generation agents, to use in their generation process. After providing detailed system notes, the authors offer some examples, as well as performing an expressive range analysis of a few different generators. In general, it’s a neat approach to making level generation more universal – by providing a language that can describe different generators.
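Marahel itself is a declarative description language, so its real syntax looks nothing like the sketch below – but to get a feel for what an “explorer” does, here’s a rough, hypothetical Python analogue of a single generation agent stamping entities as it walks a grid (the entity names and random-walk movement are our own illustrative assumptions, not Marahel’s actual semantics):

```python
# A minimal, hypothetical sketch of the "explorer" idea behind constructive
# generators: an agent walks a 2D grid and stamps entities as it goes.
# (Illustrative Python, not Marahel's actual syntax.)
import random

def explore(width, height, steps, entity="floor"):
    # Start with a grid of solid walls.
    grid = [["wall"] * width for _ in range(height)]
    x, y = width // 2, height // 2
    for _ in range(steps):
        grid[y][x] = entity  # stamp an entity at the explorer's position
        # Move to a random 4-neighbor, staying inside the map.
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = max(0, min(width - 1, x + dx))
        y = max(0, min(height - 1, y + dy))
    return grid

if __name__ == "__main__":
    for row in explore(20, 10, steps=120):
        print("".join("." if c == "floor" else "#" for c in row))
```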

Back To Top

~*~

Leveraging Multi-Layer Level Representations For Puzzle-Platformer Level Generation

There’s an old joke in computer science that if you can’t solve a problem, just add another layer of abstraction. It’s honestly not that funny unless you’re a computer scientist. And even then. But in every old joke there’s always a grain of truth, and an extra layer of information or ideas can sometimes crack a problem wide open. In this paper, the authors look at level generation and see if adding extra layers of information to an AI’s training data helps it understand the nuances of level design better.

To use a crude metaphor, suppose I wanted you to design me a new house, and I showed you photos of houses I liked, but you had absolutely no idea how to build a house. Maybe you paint a big cardboard box to look like one of the houses I showed you. But now suppose I give you extra layers of information for each of these houses: I show you blueprints, architectural drawings, maps of the plumbing and electrical wiring. You’d still probably build a really bad house, but you’d have a much better idea of what goes into one!

By providing extra layers of level information for each example level – like the paths a player takes or the zones that different puzzles control – alongside the raw level data, the idea is that machine learning systems will learn faster and better, and be able to produce more complex and interesting levels. The paper explains it much better, and with fewer construction-based metaphors – be sure to check it out for more information!
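To make the idea a bit more concrete, here’s a tiny sketch (our own illustration, not the paper’s actual representation) of what stacking annotation layers on top of the raw tiles might look like as a multi-channel training example:

```python
# Stack the raw tile map with extra annotation layers (e.g., player path,
# puzzle zones) into one multi-channel training example. The layer names and
# 0/1 encodings here are illustrative assumptions.
import numpy as np

tiles = np.array([[0, 0, 1, 1],       # raw geometry (0 = empty, 1 = solid)
                  [0, 0, 0, 1],
                  [1, 0, 0, 0]])
path = np.array([[0, 1, 0, 0],        # cells the example player path crosses
                 [0, 1, 1, 0],
                 [0, 0, 1, 1]])
puzzle_zone = np.array([[0, 0, 0, 1],  # cells controlled by a particular puzzle
                        [0, 0, 0, 1],
                        [0, 0, 0, 0]])

# Shape (height, width, channels): the learner sees geometry *and* annotations.
example = np.stack([tiles, path, puzzle_zone], axis=-1)
print(example.shape)  # (3, 4, 3)
```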

Back To Top

~*~

Towards General RPG Playing

Can an AI play a game? Well, it depends on the game. Game-playing agents for a wide variety of genres, from platformers (e.g., Mario Bros.) to real-time strategy games (e.g., Starcraft), have been enabled by techniques and advances in AI ranging from simple heuristics to machine learning. But some kinds of games still elude us even now, in the ongoing quest for the crystals of light that is general (video)game playing.

This paper introduces general RPG playing and the challenge of building agents that can play console role-playing games. RPGs (think Dragon Warrior or Final Fantasy here) are particularly complicated, as they consist of many different mechanical systems that require cultural knowledge, numerical reasoning, and the ability to determine the content and context of a random text box on the screen (among many other things). In this paper, the authors break down RPG playing into a number of subproblems. They describe their initial progress on some of these: automatic mapping, overlay detection, and (menu) text recognition. As for next steps – there are quite a few sidequests (er, subproblems) remaining, such as learning combat systems or using human coaching to train agents. It looks like this exciting quest for RPG playing has only just begun!

Back To Top

~*~

A Vision for Continuous Automated Game Design

ANGELINA is a system that’s well known within the EXAG community. That’s what you get when its creator is one of the founders of your workshop, I guess. Regardless, ANGELINA has always been an ambitious project — the ultimate goal has always been to build a system capable of designing games that humans will be able to play — but this iteration takes a slightly different angle on the problem: specifically, that human designers get too many ideas, pick up one, scrap two others, merge two half-baked ideas into one that’s three-quarters baked, and so on.

In this paper, the authors define the notion of continuous design – the act of flowing freely between different design states, using a continuously growing bank of knowledge and ideas to inform the current state of the design. This new version of ANGELINA hopes to operate in the space between micro, code-level tweaks and high-level means of piecing rules together. The paper then details a few areas of work for ANGELINA to focus on: designing levels, building mechanical patterns, and innovating on individual mechanics.

Back To Top

~*~

A Generative Framework of Generativity

What is generativity? Depending on whom you ask, it might mean different approaches or different artifacts. What’s certain, though, is that generative methods have been applied to all kinds of content – music, architecture, games, simulation, crafts (to name but a few). Traditionally, taxonomic efforts (particularly in the games research community) have focused primarily on specific classes of generative algorithms and/or on the domain-specific artifacts of generation. This makes it difficult to generalize these frameworks across different domains or to understand how techniques from one domain might apply to another.

This paper describes a framework for generative methods that emphasizes the methods of generativity themselves and their applications across research fields. Here, each generative method is treated as a transformation on specified inputs that returns a kind of output – but is relatively agnostic to the artifacts being generated. (Did you know, for example, that tile-based approaches have been used not just for games, but also for music generation?) The authors outline a variety of generative methods, from random selection to machine learning approaches, first focusing on the construction of the content. Next, they describe different techniques for optimizing the generated content, such as heuristic-based search techniques, constraint solvers, or user interaction. Throughout, the framework is illustrated with examples of real systems, and the paper concludes with examples from games, art installations, and more, showing how the framework can generalize across different domains.
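As a toy illustration of that artifact-agnostic framing (our own sketch, not code from the paper), the same “random selection” method can be written once and pointed at completely different domains:

```python
# A generative method as a transformation from typed inputs to an output
# artifact, agnostic to what that artifact is (tiles, notes, polygons...).
import random
from typing import Callable, Sequence, TypeVar

T = TypeVar("T")
GenerativeMethod = Callable[[Sequence[T]], T]

# The same "random selection" method works for any artifact type.
pick: GenerativeMethod = lambda options: random.choice(list(options))

print(pick(["grass", "water", "sand"]))   # a tile for a level
print(pick(["C4", "E4", "G4"]))           # a note for a melody
```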

Back To Top

~*~

Effects of Self-knowledge: Once Bitten Twice Shy

When it comes to the various use cases for AI in games, pathfinding would seem to be among the ones with the least expressive potential. Whereas application areas like dialogue generation and social simulation seem inherently expressive, pathfinding serves a more practical purpose: get characters from point to point by the path that is optimal in terms of both travel time and computational cost. Here at EXAG, we don’t tend to see much work in this area, because it’s not really an experimental one at this point (there are tried and true techniques) and it doesn’t (initially) seem to have much expressive potential. In this paper, however, Vadim Bulitko cleverly hacks a conventional technical approach to turn character pathfinding into an expressive mechanism.

“Consider for instance, a character who entered a castle, got lost in it and thus took a long time to get out,” Bulitko writes. “It would be natural for a human to regret getting lost and therefore hesitate to enter the castle in the future despite the fact that going through the castle may be the shortest way.” This is the premise for the paper’s “once bitten, twice shy” approach to pathfinding. By hacking the conventional formulation of real-time heuristic search to include this kind of self-knowledge, NPCs can be made to pathfind in ways that are less rational, but more believable. Moreover, the way that NPCs move about the gameworld can actually express subtle information about who that NPC is. As such, this project represents an interesting contribution to background believability, the application area of game AI that pertains to improving the believability and expressivity of NPCs appearing in the backgrounds of gameworlds. Check out the paper for details about the technique and some of Bulitko’s interesting experimental findings!
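To give a rough sense of the flavor – this is our own toy sketch, not Bulitko’s actual formulation – imagine an LRTA*-style agent whose learned heuristic values get exaggerated by a “shyness” weight when it picks its next move, so regions where it previously wasted time look even less appealing later:

```python
# Toy sketch only: one real-time heuristic search step with a "shyness"
# weight w. h maps states to learned heuristic values; w = 1.0 would recover
# plain LRTA*-style behaviour, while w > 1.0 makes the agent avoid places it
# has already learned to be costly.
def lrta_step(state, neighbors, h, cost, w=2.0):
    """neighbors(state) -> iterable of states; cost(a, b) -> edge cost."""
    # Choose the neighbor that looks cheapest, exaggerating learned costs by w.
    best = min(neighbors(state),
               key=lambda s: cost(state, s) + w * h.get(s, 0.0))
    # Standard learning rule: raise h(state) toward the (unweighted) best estimate.
    h[state] = max(h.get(state, 0.0), cost(state, best) + h.get(best, 0.0))
    return best
```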

Back To Top

~*~

Poetic sound similarity vectors using phonetic features

With the introduction of techniques like word2vec and GloVe, there’s been a burst of excitement in recent years about vector-space models of semantics. In such a model, texts (individual words or phrases, or whole documents) are represented as vectors in a semantic space, where texts with related meanings have similar vectors. Such a vector-space representation of meaning enables a series of cool tricks from vector algebra. The classical application of these techniques is to automatically determine how related text documents are, but recent work has demonstrated other striking affordances. For instance, the meanings of texts can be manipulated using arithmetic operations such as addition and subtraction. Here, a classic result is the query “king – man + woman = ____”, for which a word2vec model famously returned ‘queen’.
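If you want to try the classic analogy query yourself, gensim makes it a one-liner, assuming you have a pretrained word2vec model on disk (the file name below is just a placeholder):

```python
# Reproduce the classic "king - man + woman = ?" analogy with gensim.
from gensim.models import KeyedVectors

# Path to a pretrained model is a placeholder; substitute your own.
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# The top hit is famously "queen".
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```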

Inspired by these recent advances in vectorial semantics, in this paper Allison Parrish introduces a technique for vectorial phonetics. While techniques like word2vec allow one to automatically characterize and manipulate the meaning of text, Parrish’s method makes this possible for the sound of text. The paper is rife with cool examples of what the technique can afford, including phonetic analogy (‘light’ is to ‘slide’ as ‘lack’ is to ‘slag’), sound “tinting” (adding a spiky-sounding “kiki” filter to a text, or a round-sounding “bouba” filter), and random walks through sound space (moving from one line of poetry in a corpus to its most similar-sounding counterpart, and so on). The applications for computer poetry and other areas of expressive text generation are clear and considerable. Check out the paper to find out more about how the method works and what all it can do!

Back To Top

~*~

Deep Learning for Speech Accent Detection in Videogames

What if an AI system could automatically detect what accent a person had? That might be a bit interesting, but most people can place most accents most of the time.

But what about taking this idea to games? What accents are used in games? How are those accents used?

In daily life, accents serve as a marker for where a person grew up and learned language. But we typically infer much more than just a person’s home town from an accent – we often make assumptions about that person’s social, economic, and ethnic background as well. In games this goes further: characters in games like Dragon Age: Inquisition are designed to represent social groups through the use of their accents. Villains often have British accents, while heroes speak standard American English. These choices are used to encourage players to adopt a worldview about characters based on their accent, potentially creating a narrative for players – like that British people are evil and Americans good.

In this work, the authors demonstrate an initial effort to train an AI system to detect which accents are being used by characters in games, with the long-term goal of building a corpus of information about how accents are used in games. With this knowledge, we can begin to understand and question how we use accents in games, which could lead to a more balanced use of accents in the future. Check out the paper to learn more!

Back To Top

~*~

Dynamic Epistemic Logic in Game Design

Branches of logic are kind of like rabbits – leave a few of them alone for long enough with some researchers to nibble on, and eventually you’ll have more than you know what to do with. However, unlike rabbits, branches of logic have obvious applications in AI research, since they provide models and techniques for reasoning formally about a world. In this paper, the authors propose applying Dynamic Epistemic Logic (or DEL) to a number of different areas of games, providing a list of different spaces where this formalism could be useful.

DEL is a branch of logic that combines dynamic operators (the ability to reason about actions with non-deterministic outcomes) and epistemic operators (the ability to reason about theory of mind) with classical logic statements, providing powerful tools for AI systems to leverage.
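To get a flavor of the notation (a generic DEL-style example of ours, not drawn from the paper), here is roughly what mixing the two kinds of operators looks like:

```latex
% Epistemic operator: agent a knows the chest is trapped.
K_a \, \mathit{trapped}

% Dynamic operator: after the (possibly non-deterministic) "open" action,
% player p knows whether or not the chest was trapped.
[\mathit{open}]\,\big(K_p\,\mathit{trapped} \lor K_p\,\lnot \mathit{trapped}\big)
```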

Back To Top

~*~

“Press Space to Fire”: Automatic Video Game Tutorial Generation

When we think about using AI to generate bits of games, or maybe even entire games, there are things our minds immediately jump to – like generating level designs, or maybe coming up with a funny name. But there’s so much that goes into making a game, and a lot of these things are very rarely looked at by AI researchers. One of those is generating tutorials – every good game has to be understandable, whether it walks the player through step by step, or designs clever ways for the player to teach themselves. This paper looks at how we can use AI to analyse a game and then figure out what to tell the player about – and, as you may have guessed, it turns out to be a really tricky problem.

One of the ways this paper proposes thinking about the problem is getting the system to explore the game’s rule space, and look at the rules that cause winning, and the rules that cause losing. These rules – the ones tied directly to failure or success – might be great starting points to teaching the very basics to players. Avoid alien bullets. Shoot bullets at aliens. From there, an AI system might be able to build bigger steps to teach players about different strategies, or hint at things without giving the whole game away. It’s a really exciting AI problem, and one we’re only just beginning to look at – check out this paper if you want to see some intriguing first steps towards solving it!
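As a very rough sketch of that first step (our own illustration, not the paper’s algorithm), one could knock out each rule in turn, simulate a batch of playthroughs, and flag the rules without which the game can no longer be won or lost:

```python
# Illustrative sketch: find rules tied directly to winning or losing by
# removing each one and simulating. The simulate() callable is assumed to be
# provided by the game framework being analysed.
def find_critical_rules(rules, simulate, n_playouts=100):
    """simulate(active_rules, n) -> (wins, losses) counts over n playouts."""
    critical = []
    for rule in rules:
        without = [r for r in rules if r is not rule]
        wins, losses = simulate(without, n_playouts)
        # If the game can no longer be won (or lost) without this rule,
        # it is probably worth teaching the player about it first.
        if wins == 0 or losses == 0:
            critical.append(rule)
    return critical
```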

Back To Top

~*~

Generominos: Ideation Cards for Interactive Generativity

What kind of design tools are available to help design generative systems? Can this ideation process be treated as a game? What about a generative game about generating generative systems? (Okay, maybe that’s a bit much.) But here’s a playful solution!

This paper presents Generominos, a series of design cards for modeling and prototyping generative systems. Each card contains a series of input and output datatypes – from people to sensors to voxels to vectors. When laid down with matching inputs and outputs, these cards form playful dataflow sequences that allow users to visualize generative systems! And if you’re wondering just how fun Generominos can be, take a peek at the paper! The authors describe example use cases for the cards – studying systems and inspiring interactive art installations, to name a few – as well as a preliminary exploration with students in a design class, who found Generominos to be both understandable and enjoyable!

Back To Top

~*~

Designing Stronger AI Personalities

As part of EXAG’s commitment to fostering the exchange of ideas between industry and academic games folks, this year’s workshop features a first series of invited industry case studies. These will be published just like the other papers—and will also be presented as talks at the workshop—but they’re invited contributions from distinguished non-academic games practitioners. In this invited industry case study, Tanya X. Short introduces eight mechanisms that designers can utilize to better harness procedural character personalities in games. Tanya is co-founder and captain of Kitfox Games, a Montreal-based independent game studio, and a veteran developer and designer known for her expertise in procedural generation and systems-driven game design; her credits include Age of Conan, Shattered Planet, Moon Hunters, and Shrouded Isle.

As Tanya explains in this paper, there is an emerging pattern in game design that utilizes character personality as a central gameplay system. Games in this area leverage new technologies to produce reactive NPCs with procedural (and often generated) personalities. As we know well here at EXAG, new experimental game AI techniques often raise new design challenges at the level of gameplay, and this paper provides a wealth of design knowledge that will be of great interest to indie developers and academics. Using a number of examples spanning her experiences as both developer and player, Tanya provides an actionable recipe for building better character personality systems. Check it out and soak up the hard-earned design knowledge!

Back To Top

~*~

Secret Identities in Dwarf Fortress

Behold Tarn Adams’ thrilling return to academia! In this invited industry case study, the Dwarf Fortress creator (and one-time mathematics PhD/postdoc) discusses recent extensions to the game’s systems for character deception. A noted opus in the history of videogames, Dwarf Fortress is a roguelike game set in procedurally generated fantasy universes. It has been shown at the Museum of Modern Art and has been featured in The New York Times, The New Yorker, Wired, and many other press publications. Currently, Tarn and his brother, Zach Adams, are roughly midway through its famous 30-year development cycle. Here at EXAG, we are proud to present the first academic paper on Dwarf Fortress that has been written by the creator himself.

As Tarn explains in this paper, an upcoming update centered around artifacts—and what characters know about them—has had the fun consequence of necessitating that a certain class of non-player characters cultivate secret identities. While most civilizations in the game participate in trade, migration, and other mechanisms for information propagation, goblins do not. As such, members of that civilization must go undercover, by adopting secret identities, to acquire information about artifacts (which is becoming one of the game’s critical resources). The notion of procedural espionage, it turns out, has been on the Adams brothers’ minds ever since they watched the 1979 television miniseries Tinker Tailor Soldier Spy as children. In this paper, Tarn outlines their new approach to secret identities, discusses some of the technical and design challenges that have emerged, and lays out plans for the future. This industry case study will be of special interest to Dwarf Fortress fans in particular, but anyone who’s into experimental game AI will find a discussion of an intriguing system at the cutting edge of character deception. Read it!

Back To Top

~*~

Answer Set Programming in Proofdoku

There’s an old story about how a small group of undergraduates were given what sounded like a simple task for a summer – the task we now call “the entire field of computer vision.” In this paper, the authors present some of the challenges they faced in making what would initially sound like a straightforward variation on Sudoku. In the game, players don’t specify the value of a particular cell, but instead choose the cells that offer proof that the cell’s value is correct. To do this, the authors leverage the power of Answer Set Programming (ASP), which is powerful enough to handle some of the problems that arise in the design, but not in a straightforward way.
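To make the “proof” idea concrete – this is our own toy illustration, not Proofdoku’s actual logic – a set of filled-in cells proves a target cell’s value if eliminating their digits leaves exactly one candidate:

```python
# Toy sketch of a Sudoku "proof" check: do the chosen cells pin down the
# target cell's value? board maps (row, col) -> digit for filled cells.
def proves(target, value, proof_cells, board):
    candidates = set(range(1, 10))
    for (r, c) in proof_cells:
        same_row = r == target[0]
        same_col = c == target[1]
        same_box = (r // 3, c // 3) == (target[0] // 3, target[1] // 3)
        # A proof cell only eliminates a digit if it shares a unit with the target.
        if same_row or same_col or same_box:
            candidates.discard(board[(r, c)])
    return candidates == {value}
```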

Additionally, the authors discuss some of the technical challenges of designing a game for this strange new world of “cell phones with bad batteries,” “reception vanishing,” and “sometimes solving these puzzles can take a very long time even on the cloud, so what’s the best way to cache solutions?” All in all, even if some of the ASP goes over your head, this paper is worth a read for the wisdom it provides about marrying game design to AI.

Back To Top

~*~

Towards Positively Surprising Non-Player Characters In Video Games

What would a surprising NPC behaviour look like? If you’re playing Spelunky, ‘not stealing my shotgun and getting me killed’ would probably be nice. But for the real, true meaning of the word, NPCs in games don’t really intentionally surprise us very often. When they do, it’s usually because someone has coded some very special behaviours in, or something’s gone wrong and it’s ended up on a YouTube glitch compilation. In this paper, the authors ask whether we could build systems that seek out surprising behaviour, and automatically curate unusual NPCs to be added to a game.

The example given uses sheep grazing in a field, with an evolutionary system governing how they respond to other sheep, food, wolves, and the general space around them. A neural network then watches many simulations of these sheep and tries to identify which ones had been set to evolve wildly and unpredictably, and which ones were evolving more calmly. The authors note that this isn’t necessarily ‘interestingness’ and that being interesting or surprising may be a particularly difficult challenge. But the basic idea of evolving new behaviours and training a curator to pick the ones that deviate or surprise you the most is definitely a cool approach – and surprising NPC behaviour in general feels like a timely problem to be working towards solving.

Back To Top

~*~

Social Simulation for Social Justice

There is an emerging mode of scholarly practice that views computational media through the lens of social justice and practices social justice through the medium of computational media. In this area, practitioners build systems that explore, through computation and gameplay, central issues in social justice. Here, examples include Fox Harrell’s Advanced Identity Research project, which explores the computational modeling of identity issues, and Vi Hart and Nicky Case’s Parable of the Polygons, which explains segregation by means of a playable system. Earlier this year, a new academic workshop on Computational Creativity and Social Justice held its first meeting, and it looks to become an ongoing series that may serve as a home for this kind of work. In this spirit, we here at EXAG are happy to provide a venue for Melanie Dickinson, Noah Wardrip-Fruin, and Michael Mateas’s paper on a system that explores issues of social justice through the medium of social simulation.

Inspired by the feminist credo “the personal is political,” Dickinson and her collaborators are interested in exploring personal–political phenomena through the medium of social simulation. This approach has two kinds of benefits, the authors argue: first, “writing social justice theories in code forces us to understand them in a different way,” and, second, “interacting with computational models of them affords a different kind of audience understanding and engagement than otherwise possible, due to the unique affordances of simulation and computational media.” In this paper, the authors introduce an ongoing project that leverages the social-simulation framework Ensemble—a descendant of the ‘social physics’ engine used in Prom Week—to critically model a specific personal–political domain: activist group meetings. By harnessing the expressive power of the Ensemble engine, and moreover by cleverly modeling abstract concepts such as issues and identities as characters, the authors are able to simulate a variety of rich personal–political phenomena. Check out the paper for more info on the project’s intellectual underpinnings, technical approach, and future directions!

Back To Top

~*~

A Sandbox For Modeling Social AI

Designing and testing AI is like making jam: you need a big clean space to do it in, or it’s going to spill everywhere and make everything sticky and the dog’s going to come and try and eat your codebase. Those big clean spaces are sometimes called ‘sandboxes’ – software that creates a tiny example environment where AI can be tested and ideas can be explored. But sandboxes take time to make, and often aren’t very representative of the real software AI has to run on. Worse still, if everyone has their own sandbox, it’s really hard to compare systems to one another.

This paper is all about a new sandbox, made specifically for testing social AI, the kind you might see in The Sims. The best part is that it’s based on a real indie game – Project Highrise – so it’s polished, and represents the kind of systems you might find in a commercial game. This is a really valuable contribution that will help other people test their ideas in a shared space, compare results, and do it all in a well-built, stable and good-looking environment!

Back To Top

~*~

A Proposal for a Unified Agent Behaviour Framework

Ways of scripting non-player character behavior have seen many waves of popular techniques. The early days were dominated by simple finite state machines. Over time, developers got savvy and added techniques for weighing different options – utility systems – and new ways to combine sequences of decisions – behavior trees. Each of these methods offers advantages and disadvantages, trading off ease of coding, interpretability, reactivity to near-term events, and the ability to make long-term plans.

What if we wanted the best of all worlds? How could we combine the ability to react to near-term events with long-term planning, while making decisions that balance the value of different choices? In this paper the authors introduce a framework that leverages thinking from event handling to create an NPC scripting language with dynamic trees that make choices by weighing options using a utility system. Check out the paper for details on this promising new approach to NPC scripting!
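As a rough illustration of the utility-scoring side of this (a generic sketch, not the authors’ framework), each candidate behaviour scores itself against the current world state and the highest scorer wins whenever an event fires:

```python
# Generic utility-selection sketch: pick the behaviour whose utility function
# scores highest for the current world state.
def choose_behaviour(behaviours, world_state):
    """behaviours: list of (name, utility_fn) pairs; utility_fn(state) -> float."""
    return max(behaviours, key=lambda b: b[1](world_state))[0]

behaviours = [
    ("flee",   lambda s: 1.0 if s["enemy_near"] and s["health"] < 30 else 0.0),
    ("attack", lambda s: 0.8 if s["enemy_near"] else 0.0),
    ("patrol", lambda s: 0.2),  # low-priority fallback
]
print(choose_behaviour(behaviours, {"enemy_near": True, "health": 20}))  # flee
```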

Back To Top

~*~

A General Level Design Editor for Co-creative Level Design

Co-creativity is one of the hot topics in AI research, focusing on tools that allow humans and AI systems to collaborate on generative work. These tools offer different means of letting the human and the machine collaborate, but one of the open problems is a question of UI: how best to design an interface that allows human authors to interact in meaningful ways with an AI system.

Here, the authors present a demo of a level editor (targeted at the ever-popular-in-game-AI-research domain of Super Mario Bros) designed to allow a human to work together with an AI system. While the paper’s length requires the authors to point at other works for the actual underlying AI, they end by stating that they hope to use the system as a means to test how working with different AIs feels for humans.

Back To Top

~*~

ProcDefense – A Game Framework for Procedural Player Skill Training

One of the motivations behind using procedural content generation in games is to enable automatic customization of gameplay challenges and experiences for players. Typically, this customization targets metrics such as difficulty and enjoyment (e.g., enabling players to experience games at an appropriate difficulty). But can this be taken further? Instead of just enabling experiences that match a player’s skill, can PCG be used to help train players by adjusting their experiences to improve their skills as they play?

Motivated by the success of simulation and AR games for training, this paper presents ProcDefense, a game intended to be used as a platform for player skill training. In ProcDefense, players engage in a top-down 2D action game – part bullet-hell, part tower-defense – in which they must defend a core object from incoming projectiles using a circular paddle. The game’s interface allows adjustment of a variety of parameters related to game mechanics – such as projectile speed or paddle size – which can then be exposed as features to a difficulty-adjustment system (future work that we can’t wait to see!).

Back To Top

~*~

That concludes this year’s paper summaries! We hope to see you at the workshop or on the livestream in a couple of days!

(Special shout-outs to ex-organizers Mike Cook and Alexander Zook for helping out with this year’s summaries!)

EXAG 2017 Schedule

We are pleased to announce this year’s schedule for the EXAG workshop!

Thursday, October 5, 2017
9:00am – 9:30am
Welcome

9:30am – 10:30am
Long Papers
Marahel: A Language for Constructive Level Generation
Ahmed Khalifa, New York University
Julian Togelius, New York University

Leveraging Multi-layer Level Representations for Puzzle-Platformer Level Generation
Sam Snodgrass, Drexel University
Santiago Ontañón, Drexel University

10:30am – 11:00am
Break

11:00am – 12:30pm
Long Papers
A Vision For Continuous Automated Game Design
Michael Cook, Falmouth University

Towards General RPG Playing
Joseph Osborn, University of California, Santa Cruz
Benjamin Samuel, University of New Orleans
Adam Summerville, University of California, Santa Cruz
Michael Mateas, University of California, Santa Cruz

A Generative Framework of Generativity
Kate Compton, University of California, Santa Cruz
Michael Mateas, University of California, Santa Cruz

12:30pm – 2:00pm
Lunch (on your own; no sponsored lunch provided)

2:00pm – 3:00pm
Long Papers
Effects of Self-knowledge: Once Bitten Twice Shy
Vadim Bulitko, University of Alberta

Poetic sound similarity vectors using phonetic features
Allison Parrish, New York University

3:00pm – 4:00pm
Break

4:00pm – 5:30pm
Short Papers
Deep Learning for Speech Accent Detection in Videogames
Astrid Ensslin, University of Alberta
Tejasvi Goorimoorthee, University of Alberta
Shelby Carleton, University of Alberta
Vadim Bulitko, University of Alberta
Sergio Poo, University of Alberta

Dynamic Epistemic Logic for Game Design
Javier Torres, Brainific SL

“Press Space to Fire”: Automatic Video Game Tutorial Generation
Michael Green, New York University
Ahmed Khalifa, New York University
Gabriella Barros, New York University
Julian Togelius, New York University

Generominos: Ideation Cards for Interactive Generativity
Kate Compton, University of California, Santa Cruz
Edward Melcer, New York University
Michael Mateas, University of California, Santa Cruz

Friday, October 6, 2017
9:00am – 10:30am
Industry Case Studies
Designing Stronger AI Personalities
Tanya X. Short, Kitfox Games

Secret Identities in Dwarf Fortress
Tarn Adams, Bay 12 Games

10:30am – 11:00am
Break

11:00am – 12:00pm
Long Papers
Answer Set Programming in Proofdoku
Adam M. Smith, University of California, Santa Cruz

Towards Positively Surprising Non-Player Characters in Video Games
Vadim Bulitko, University of Alberta
Shelby Carleton, University of Alberta
Delia Cormier, University of Alberta
Devon Sigurdson, University of Alberta
John Simpson, University of Alberta

12:00pm – 2:00pm
Lunch (on your own; no sponsored lunch provided)

2:00pm – 3:30pm
Demo Presentations
A Sandbox for Modeling Social AI
Ethan Robison, Northwestern University

A Proposal for a Unified Agent Behaviour Framework
Javier Torres, Brainific SL

A General Level Design Editor for Co-creative Level Design
Matthew Guzdial, Georgia Institute of Technology
Jonathan Chen, Georgia Institute of Technology
Shao-Yu Chen, Georgia Institute of Technology
Mark Riedl, Georgia Institute of Technology

ProcDefense – A Game Framework for Procedural Player Skill Training
Brandon Thorne, North Carolina State University
Hiru Nelakkutti, North Carolina State University
Joseph Reinhart, North Carolina State University
Arnav Jhala, North Carolina State University

Late-Breaking Demo Presentations
Microbial Art: An Implicit Cooperation Musical Experience
Mário Escarce Junior, Phersu Interactive
Georgia Rossmann Martins, Phersu Interactive
Leandro Soriano Marcolino, Lancaster University
Anderson Tavares, Universidade Federal de Minas Gerais
Yuri Tavares Rocha, Universidade Federal do Recôncavo da Bahia

Grammar-Based Generation of 2D Boss Designs
Eric Butler, University of Washington
Kristin Siu, Georgia Institute of Technology

(Plus any more late-breaking work!)

3:30pm – 4:00pm
Break

4:00pm – 5:00pm
Demo Showcase

Behind the Scenes: Program Committee

We’re happy to finally announce the wonderful folks on this year’s EXAG program committee!

  • Sasha Azad
  • Eric Butler
  • Martin Cerny
  • Michael Cook
  • Melanie Dickinson
  • Squirrel Eiserloh
  • Mirjam Palosaari Eladhari
  • Jeremy Gow
  • Kazjon Grace
  • Matthew Guzdial
  • Sarah Harmon
  • Ian Horswill
  • Dominic Kao
  • Max Kreminski
  • Boyang Li
  • Antonios Liapis
  • Chong-U Lim
  • Peter A. Mawhorter
  • Mark J. Nelson
  • Joseph Osborn
  • Jonathan Pagnutti
  • Allison Parrish
  • Justus Robertson
  • Ben Samuel
  • Gillian Smith
  • Andrew Stockdale
  • Anne Sullivan
  • Adam Summerville
  • Jonathan Tremblay
  • Alexander Zook

Deadline Extension!

The deadlines for papers have been extended to July 7th!

We look forward to seeing your papers!

Meet the Organizers!

EXAG was founded by Mike Cook and Alex Zook in 2014, and Antonios Liapis joined the organizing committee in 2015. The workshop has been held at AIIDE each of the last three years, and it returns now for its fourth annual meeting!

This year’s organizing committee features three new additions:

  • Jo Mazeika is a researcher exploring constraint-based generative methods and computational encodings of style. She is currently a Ph.D. student in the Augmented Design Lab at UC Santa Cruz and one of the core members of ScholarsPlay, a Twitch stream featuring game scholars critiquing games through play. She’ll be the tall lady with the bag of Legos, so feel free to say hi!
  • Kristin Siu is a researcher and independent game developer, working in artificial intelligence and human computer interaction for games. She is currently a Ph.D. candidate in the Entertainment Intelligence Lab at Georgia Tech: by day, her thesis is on human computation games and by night, she works on generative methods for boss encounters. She is also one of the engineers behind Elsinore, a time-looping Shakespearean adventure game. She likes tea drinking and hamsters.
  • James Ryan is a researcher and practitioner exploring creative and expressive applications of artificial intelligence, especially in the areas of simulation, narrative, and natural language. His collaborative projects include Bad News, a hybrid physical–digital game combining simulation and live improvisation, and GameSpace, a playable visualization of the videogame medium built using techniques from natural language processing and machine learning. He is currently finishing up his PhD at the Expressive Intelligence Studio at UC Santa Cruz, and he also works part-time as an AI Specialist at Spirit AI.

Additionally, EXAG is excited to announce the addition of a new workshop role, our Industry Expert, who is tasked with providing advice and expertise aimed at strengthening the bridge between academics and industry (particularly, indie) games folks. This year’s Industry Expert is none other than:

EXAG 2017 Call For Papers

What is EXAG?

The Experimental AI in Games (EXAG) workshop is an open, friendly, and laidback workshop hosted by AIIDE that aims to foster experimentation in AI research and all forms of game development. In addition to presenting traditional academic talks and live demos of AI technologies, EXAG hopes to foster a welcoming and diverse community of AI researchers and practitioners by including activities such as a show-and-tell demo and gameplay session.

Topics:

  • Echoing AIIDE-17’s special topic of “Beyond Games,” applications of experimental AI to expressive or creative areas of entertainment beyond games, such as music generation, poetry generation, bots, and many more.
  • New games and other related projects powered by academic research—like Sure Footing or Bad News.
  • New technology and tools made possible by AI, from roguelike Unexplored’s procedurally-generated dungeons and puzzles to stealth game Third Eye Crime’s visualization of AI logic.
  • Cross-pollination from AI subfields not typically used in games, like computational linguistics, machine vision, and procedural music.
  • Traditional AI techniques being applied in new ways, like Left 4 Dead’s drama management or Black And White’s learning creatures.
  • Better living through AI—improving game development and design through new and interesting applications of AI, from intelligent design tools to automated QA.
  • Discussion of interesting but relatively unknown historical examples of experimental AI in games and related areas, such as Captain Blood’s (1988) modular icon-based interface for procedural communication with NPCs, Intellivision World Series Baseball’s (1983) telecast-influenced procedural camera system, or Skool Daze’s (1984) real-time simulation of NPC agendas.
  • Discussion of the provenance of now widely adopted game AI techniques that were at one time experimental. Here, an example case study could trace the introduction of behavior trees to the game industry by Damian Isla in Halo 2 (2004) and Michael Mateas and Andrew Stern in Façade (2005).
  • Reports on failed experiments related to any topic in our purview, with insight into what went wrong and how others can learn from the failure.
  • Not sure if your topic is a fit? Drop us a line!

Submission:

EXAG 4 will be accepting three types of submissions:

  • Full papers: Regular papers submitted for oral presentation (4-6 pages in length, excluding references). These will be incorporated into the proceedings and presented as 20-minute talks during paper sessions.
  • Short papers (new track): Short papers (up to four pages, excluding references) describing a position, project, or proposal related to any topic of interest to the workshop. These papers will be incorporated into the proceedings and will be presented as five-minute talks during a lightning session.
  • Demonstration: Very short papers (up to two pages, excluding references) describing demonstrable systems that will be showcased in a show-and-tell session. These papers will be incorporated into the proceedings and will be presented during the demo session.

Important Dates:

EXAG 4 will be held on October 5-6th, 2017, co-located with the Artificial Intelligence in Interactive Digital Entertainment (AIIDE) 2017 conference at the Snowbird Ski and Summer Resort in Snowbird, Utah, USA.

Paper Submission deadline: JULY 7TH (Now extended!) (at 23:59 UTC-10:00)
Paper Acceptance notification: July 20th
Paper Camera-ready deadline: July 30th
Demo submissions (for proceedings): July 13th
Demo acceptance (for proceedings) notification: July 20th
EXAG 4: October 5-6

Submission Directions:

Please submit papers to our EasyChair site here. Papers must follow the AAAI format and must be anonymized for double-blind review.

Questions?
We want to make EXAG the friendliest, most fun and most open workshop we can. If you have any questions at all, please get in contact with one of our organizers: Jo, Kristin, and James.

Welcome To EXAG!

The Experimental AI in Games workshop is returning for its fourth year at AIIDE. We’re just getting ourselves set up here – check back soon for a Call for Papers and more information about the workshop!