This year, we’ve brought back more EXAG Paper Summaries – short descriptions written about each paper – so you can find out what papers might interest you or what talks you might want to tune into later this week!
Click on the links below to see what’s in this year’s workshop! (The list is in presentation order.)
- Marahel: A Language for Constructive Level Generation
- Leveraging Multi-layer Level Representations for Puzzle-Platformer Level Generation
- Towards General RPG Playing
- A Vision For Continuous Automated Game Design
- A Generative Framework of Generativity
- Effects of Self-knowledge: Once Bitten Twice Shy
- Poetic sound similarity vectors using phonetic features
- Deep Learning for Speech Accent Detection in Videogames
- Dynamic Epistemic Logic in Game Design
- “Press Space to Fire”: Automatic Video Game Tutorial Generation
- Generominos: Ideation Cards for Interactive Generativity
- Designing Stronger AI Personalities
- Secret Identities in Dwarf Fortress
- Answer Set Programming in Proofdoku
- Towards Positively Surprising Non-Player Characters in Video Games
- Social Simulation for Social Justice
- A Sandbox for Modeling Social AI
- A Proposal for a Unified Agent Behaviour Framework
- A General Level Design Editor for Co-creative Level Design
- ProcDefense – A Game Framework for Procedural Player Skill Training
Marahel: A Language for Constructive Level Generation
Level generation is an ongoing problem in games AI research, and this is exacerbated by the fact that level generation is a highly domain specific problem – levels designed for a given game are not necessarily compatible with other games. Enter Marahel: a little language for describing level generators for 2D spaces. This paper proposes Marahel, describes the current syntax and shows off some examples of the language in action.
In Marahel, generators are constructed from five parts: Metadata, Entities, Neighborhoods, Regions, and Explorers; the first four set up information for the Explorers – the actual generation agents – to use in their generation process. After providing detailed system notes, the authors offer some examples, as well as performing an expressive range analysis of a few different generators. In general, it’s a neat approach to making level generation more universal – by providing a language that can describe different generators.
Leveraging Multi-Layer Level Representations For Puzzle-Platformer Level Generation
There’s an old joke in computer science that if you can’t solve a problem, just add another layer of abstraction. It’s honestly not that funny unless you’re a computer scientist. And even then. But in every old joke there’s always a grain of truth, and an extra layer of information or ideas can sometimes crack a problem wide open. In this paper, the authors look at level generation and see if adding extra layers of information to an AI’s training data helps it understand the nuances of level design better.
To use a crude metaphor, suppose I wanted you to design me a new house, and I showed you photos of houses I liked, but you had absolutely no idea how to build a house. Maybe you paint a big cardboard box to look like one of the houses I showed you. But now suppose I give you extra layers of information for each of these houses: I show you blueprints, architectural drawings, maps of the plumbing and electrical wiring. You’d still probably build a really bad house, but you’d have a much better idea of what goes into one!
By showing extra layers of level information for each example level – like paths a player takes or zones that different puzzles control – as well as raw information about the example level, the idea is that machine learning systems will learn faster and better, and be able to produce more complex and interesting levels. The paper explains it much better, and with fewer construction-based metaphors – be sure to check it out for more information!
Towards General RPG Playing
Can an AI play a game? Well, it depends on the game. Game-playing agents for a wide variety of genres, from platformers (e.g., Mario Bros.) to real-time strategy games (e.g., Starcraft), have been enabled by a wide variety of techniques and advances in AI, from simple heuristics to machine learning. But some kinds of games still elude us even now, in the ongoing quest for the crystals of light – er, for general (video)game playing.
This paper introduces general RPG playing and the challenge of building agents that can play console role-playing games. RPGs (think Dragon Warrior or Final Fantasy here) are particularly complicated, as they consist of many different mechanical systems that require cultural knowledge, numerical reasoning, and the ability to determine the content and context of a random text box on a screen (among many other things). In this paper, the authors break down RPG playing into a number of subproblems. They describe their initial progress on some of these: automatic mapping, overlay detection, and (menu) text recognition. As for next steps – there are quite a number of sidequests – er, subproblems – such as learning combat systems or using human coaching to train agents. It looks like this exciting quest for RPG playing has only just begun!
A Vision for Continuous Automated Game Design
ANGELINA is a system that’s well known within the EXAG community. That’s what you get when the creator is one of the founders of your workshop, I guess. Regardless, ANGELINA has always been an ambitious project – the ultimate goal has always been to build a system capable of designing games that humans will be able to play. This iteration of ANGELINA, though, approaches the problem from a slightly different angle – specifically, the observation that human designers generate far more ideas than they use: they pick up one, scrap two others, merge two half-baked ideas into one that’s three-quarters baked, and so on.
In this paper, the authors define the notion of continuous design – the act of flowing freely between different design states, using a continuously growing bank of knowledge and ideas to inform the current state of the design. This new version of ANGELINA aims to operate in the space between micro, code-level tweaks and high-level means of piecing rules together. The paper then details a few areas of work for ANGELINA to focus on: designing levels, building mechanical patterns, and innovating on individual mechanics.
A Generative Framework of Generativity
What is generativity? Depending on whom you ask, it can mean different approaches or artifacts. What’s certain, though, is that generative methods have been applied to all kinds of content – music, architecture, games, simulation, crafts (to name but a few). Traditionally, taxonomic efforts (particularly in the games research community) have focused primarily on specific classes of generative algorithms and/or the domain-specific artifacts of generation. This makes it difficult to generalize these frameworks across different domains, or to understand how techniques from one domain might apply to another.
This paper describes a framework for generative methods that emphasizes the methods of generativity and their applications across research fields. Here, each generative method is treated as a transformation on specified inputs that returns a kind of output – but the framework is relatively agnostic to the artifacts being generated. (Did you know, for example, that tile-based approaches have been used not just for games, but also for music generation?) The authors outline a variety of generative methods, from random selection to machine learning approaches, focusing first on the construction of content. Next, they describe different techniques for optimizing the generated content, such as heuristic-based search techniques, constraint solvers, or user interaction. Throughout the paper, the framework is illustrated with examples of real systems, concluding with examples from games, art installations, and more that show how it can generalize across different domains.
Effects of Self-knowledge: Once Bitten Twice Shy
When it comes to the various use cases for AI in games, pathfinding would seem to be among the ones with the least expressive potential. Whereas application areas like dialogue generation and social simulation seem inherently expressive, pathfinding serves a more practical purpose: get characters from point to point by the path that is optimal in terms of both travel time and computational cost. Here at EXAG, we don’t tend to see much work in this area, because it’s not really an experimental one at this point (there are tried and true techniques) and it doesn’t (initially) seem to have much expressive potential. In this paper, however, Vadim Bulitko cleverly hacks a conventional technical approach to turn character pathfinding into an expressive mechanism.
“Consider for instance, a character who entered a castle, got lost in it and thus took a long time to get out,” Bulitko writes. “It would be natural for a human to regret getting lost and therefore hesitate to enter the castle in the future despite the fact that going through the castle may be the shortest way.” This is the premise for the paper’s “once bitten, twice shy” approach to pathfinding. By hacking the conventional formulation of real-time heuristic search to include this kind of self-knowledge, NPCs can be made to pathfind in ways that are less rational, but more believable. Moreover, the way that NPCs move about the gameworld can actually express subtle information about who that NPC is. As such, this project represents an interesting contribution to background believability, the application area of game AI that pertains to improving the believability and expressivity of NPCs appearing in the backgrounds of gameworlds. Check out the paper for details about the technique and some of Bulitko’s interesting experimental findings!
Poetic sound similarity vectors using phonetic features
With the introduction of techniques like word2vec and GloVe, there’s been a burst of excitement in recent years about vector-space models of semantics. In such a model, texts (such as individual words or phrases, or whole documents) are represented as vectors in a semantic space, where texts with related meanings have similar vectors. Such a vector-space representation of meaning enables a series of cool tricks from vector algebra. The classical application of these techniques is to automatically determine how related text documents are, but recent work has demonstrated other striking affordances. For instance, the meanings of texts can be manipulated using arithmetic operations such as addition and subtraction. Here, a classic result is the query “king – man + woman = ____”, for which a word2vec model famously returned ‘queen’.
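If you haven’t seen it before, the analogy trick is simpler than it sounds. Here’s a minimal sketch using tiny hand-made vectors (real word2vec/GloVe embeddings have hundreds of learned dimensions; these toy values and the three-dimensional space are my own illustration):

```python
import math

# Toy 3-d "embeddings" (dims: royalty, maleness, femaleness) -- hand-made
# for illustration; real embeddings are learned from large corpora.
VECS = {
    "king":     [1.0, 1.0, 0.0],
    "queen":    [1.0, 0.0, 1.0],
    "man":      [0.0, 1.0, 0.0],
    "woman":    [0.0, 0.0, 1.0],
    "prince":   [0.7, 1.0, 0.0],
    "princess": [0.7, 0.0, 1.0],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def analogy(a, b, c):
    """Answer 'a - b + c = ___' by finding the word whose vector is
    nearest (by cosine) to the arithmetic result, excluding the query words."""
    target = [x - y + z for x, y, z in zip(VECS[a], VECS[b], VECS[c])]
    candidates = {w: v for w, v in VECS.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(target, candidates[w]))

print(analogy("king", "man", "woman"))  # -> queen
```

The same nearest-neighbour-after-arithmetic recipe is what Parrish applies in sound space rather than meaning space.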
Inspired by these recent advances in vectorial semantics, in this paper Allison Parrish introduces a technique for vectorial phonetics. While techniques like word2vec allow one to automatically characterize and manipulate the meaning of text, Parrish’s method makes this possible for the sound of text. The paper is rife with cool examples of what the technique can afford, including phonetic analogy (‘light’ is to ‘slide’ as ‘lack’ is to ‘slag’), sound “tinting” (adding a spiky-sounding “kiki” filter to a text, or a round-sounding “bouba” filter), and random walks through sound space (moving from one line of poetry in a corpus to its most similar-sounding counterpart, and so on). The applications for computer poetry and other areas of expressive text generation are clear and considerable. Check out the paper to find out more about how the method works and what all it can do!
Deep Learning for Speech Accent Detection in Videogames
What if an AI system could automatically detect what accent a person had? That might be a bit interesting, but most people can place most accents most of the time.
But what about taking this idea to games? What accents are used in games? How are those accents used?
In daily life accents serve as a marker for where a person grew up and learned language. But we typically infer much more than just a person’s home town from an accent – we often make assumptions about that person’s social, economic, and ethnic background as well. In games this goes further: characters in games like Dragon Age: Inquisition are designed to represent social groups through the use of their accents. Villains often have British accents, and heroes speak with standard American English. These choices are used to encourage players to adopt a worldview about characters based on their accent, potentially creating a narrative for players – like that British people are evil and Americans good.
In this work the authors demonstrate an initial effort to train an AI system to detect which accents are being used by characters in games, with the long-term goal of building a corpus of information about how accents are used in games. With this knowledge we can begin to understand and question how we use accents in games to lead to more balanced use of accents in the future. Check out the paper to learn more!
Dynamic Epistemic Logic in Game Design
Branches of logic are kind of like rabbits – leave a few of them alone for long enough with some researchers to nibble on, and eventually you’ll have more than you know what to do with. However, unlike rabbits, branches of logic have obvious applications in AI research, since they provide models and techniques for reasoning formally about a world. In this paper, the authors propose applying Dynamic Epistemic Logic (or DEL) to a number of different areas of games, providing a list of different spaces where this formalism could be useful.
DEL is a branch of logic that combines dynamic operators (for reasoning about actions with non-deterministic outcomes) and epistemic operators (for reasoning about theory of mind) with classical logic statements, providing powerful tools for AI systems to leverage.
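For a flavour of what those operators look like, here is a toy illustration in standard DEL notation (my own example, not formulas from the paper):

```latex
% Epistemic operator: K_a \varphi reads "agent a knows \varphi".
% Knowledge is factive -- what an agent knows must be true:
K_a\,\varphi \rightarrow \varphi
% Dynamic operator: [!\varphi]\psi reads "after \varphi is publicly
% announced, \psi holds". For a simple (non-epistemic) fact p,
% announcing it makes agent a know it:
[!p]\,K_a\,p
```

(The second schema holds for simple facts; it can famously fail for self-referential “Moore sentences,” which is part of what makes DEL interesting to study.)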
“Press Space to Fire”: Automatic Video Game Tutorial Generation
When we think about using AI to generate bits of games, or maybe even entire games, there are things our minds immediately jump to – like generating level designs, or maybe coming up with a funny name. But there’s so much that goes into making a game, and a lot of these things are very rarely looked at by AI researchers. One of those is generating tutorials – every good game has to be understandable, whether it walks the player through step by step, or designs clever ways for the player to teach themselves. This paper looks at how we can use AI to analyse a game and then figure out what to tell the player about – and, as you may have guessed, it turns out to be a really tricky problem.
One of the ways this paper proposes thinking about the problem is getting the system to explore the game’s rule space, and look at the rules that cause winning, and the rules that cause losing. These rules – the ones tied directly to failure or success – might be great starting points to teaching the very basics to players. Avoid alien bullets. Shoot bullets at aliens. From there, an AI system might be able to build bigger steps to teach players about different strategies, or hint at things without giving the whole game away. It’s a really exciting AI problem, and one we’re only just beginning to look at – check out this paper if you want to see some intriguing first steps towards solving it!
Generominos: Ideation Cards for Interactive Generativity
What kind of design tools are available to help design generative systems? Can this ideation process be treated as a game? What about a generative game about generating generative systems? (Okay, maybe that’s a bit much.) But here’s a playful solution!
This paper presents Generominos, a series of design cards for modeling and prototyping generative systems. Each card contains a series of input and output datatypes – from people to sensors to voxels to vectors. When laid down with matching inputs and outputs, these cards form playful dataflow sequences that allow users to visualize generative systems! And if you’re wondering just how fun Generominos can be, take a peek at the paper! The authors describe example use cases for the cards – studying systems and inspiring interactive art installations, to name a few – as well as a preliminary exploration with students in a design class who found Generominos to be both understandable and enjoyable!
Designing Stronger AI Personalities
As part of EXAG’s commitment to fostering the exchange of ideas between industry and academic games folks, this year’s workshop features a first series of invited industry case studies. These will be published just like the other papers—and will also be presented as talks at the workshop—but they’re invited contributions from distinguished non-academic games practitioners. In this invited industry case study, Tanya X. Short introduces eight mechanisms that designers can utilize to better harness procedural character personalities in games. Tanya is co-founder and captain of Kitfox Games, a Montreal-based independent game studio, and a veteran developer and designer known for her expertise in procedural generation and systems-driven game design; her credits include Age of Conan, Shattered Planet, Moon Hunters, and Shrouded Isle.
As Tanya explains in this paper, there is an emerging pattern in game design that utilizes character personality as a central gameplay system. Games in this area leverage new technologies to produce reactive NPCs with procedural (and often generated) personalities. As we know well here at EXAG, new experimental game AI techniques often raise new design challenges at the level of gameplay, and this paper provides a wealth of design knowledge that will be of great interest to indie developers and academics. Using a number of examples spanning her experiences as both developer and player, Tanya provides an actionable recipe for building better character personality systems. Check it out and soak up the hard-earned design knowledge!
Secret Identities in Dwarf Fortress
Behold Tarn Adams’ thrilling return to academia! In this invited industry case study, the Dwarf Fortress creator (and one-time mathematics PhD/postdoc) discusses recent extensions to the game’s systems for character deception. A noted opus in the history of videogames, Dwarf Fortress is a roguelike game set in procedurally generated fantasy universes. It has been shown at the Museum of Modern Art and has been featured in The New York Times, The New Yorker, Wired, and many other press publications. Currently, Tarn and his brother, Zach Adams, are roughly midway through its famous 30-year development cycle. Here at EXAG, we are proud to present the first academic paper on Dwarf Fortress that has been written by the creator himself.
As Tarn explains in this paper, an upcoming update centered around artifacts—and what characters know about them—has had the fun consequence of necessitating that a certain class of non-player characters cultivate secret identities. While most civilizations in the game participate in trade, migration, and other mechanisms for information propagation, goblins do not. As such, members of that civilization must go undercover, by adopting secret identities, to acquire information about artifacts (which is becoming one of the game’s critical resources). The notion of procedural espionage, it turns out, has been on the Adams brothers’ minds ever since they watched the 1979 television miniseries Tinker Tailor Soldier Spy as children. In this paper, Tarn outlines their new approach to secret identities, discusses some of the technical and design challenges that have emerged, and outlines plans for the future. This industry case study will be of special interest to Dwarf Fortress fans in particular, but anyone who’s into experimental game AI will find a discussion of an intriguing system at the cutting edge of character deception. Read it!
Answer Set Programming in Proofdoku
There’s an old story about how a small group of undergraduates was given what sounded like a simple summer task: the task we now call “the entire field of computer vision.” In this paper, the authors present some of the challenges they faced in making what would initially sound like a straightforward variation on Sudoku. In the game, players don’t specify the value of a particular cell, but instead choose the cells that offer proof that the cell’s value is correct. To do this, the authors leverage the power of Answer Set Programming (ASP), which is powerful enough to handle some of the problems the authors faced in the design – but not in a straightforward way.
Additionally, the authors discuss some of the technical challenges of designing a game for this strange new world of “cell phones with bad batteries,” “reception vanishing,” and “sometimes solving these puzzles can take a very long time even on the cloud, so what’s the best way to cache solutions?” All in all, even if some of the ASP goes over your head, this paper is worth a read for the wisdom it provides about marrying game design to AI.
Towards Positively Surprising Non-Player Characters In Video Games
What would a surprising NPC behaviour look like? If you’re playing Spelunky, ‘not stealing my shotgun and getting me killed’ would probably be nice. But for the real, true meaning of the word, NPCs in games don’t really intentionally surprise us very often. When they do, it’s usually because someone has coded some very special behaviours in, or something’s gone wrong and it’s ended up on a YouTube glitch compilation. In this paper, the authors ask whether we could build systems that seek out surprising behaviour, and automatically curate unusual NPCs to be added to a game automatically.
The example given uses sheep grazing in a field, with an evolutionary system governing how they respond to other sheep, food, wolves and the general space around them. Then, they used a neural network to watch many simulations of these sheep, and try and identify which ones had been set to evolve wildly and unpredictably, and which ones were evolving more calmly. The authors note that this isn’t necessarily ‘interestingness’ and that being interesting or surprising may be a particularly difficult challenge. But the basic idea of evolving new behaviours and training a curator to pick the ones that deviate or surprise you the most is definitely a cool approach – and surprising NPC behaviour in general feels like a timely problem to be working towards solving.
Social Simulation for Social Justice
There is an emerging mode of scholarly practice that views computational media through the lens of social justice and practices social justice through the medium of computational media. In this area, practitioners build systems that explore, through computation and gameplay, central issues in social justice. Here, examples include Fox Harrell’s Advanced Identity Research project, which explores the computational modeling of identity issues, and Vi Hart and Nicky Case’s Parable of the Polygons, which explains segregation by means of a playable system. Earlier this year, a new academic workshop on Computational Creativity and Social Justice held its first meeting, and it looks to become an ongoing series that may serve as a home for this kind of work. In this spirit, we here at EXAG are happy to provide a venue for Melanie Dickinson, Noah Wardrip-Fruin, and Michael Mateas’s paper on a system that explores issues of social justice through the medium of social simulation.
Inspired by the feminist credo “the personal is political,” Dickinson and her collaborators are interested in exploring personal–political phenomena through the medium of social simulation. This approach has two kinds of benefits, the authors argue: first, “writing social justice theories in code forces us to understand them in a different way,” and, second, “interacting with computational models of them affords a different kind of audience understanding and engagement than otherwise possible, due to the unique affordances of simulation and computational media.” In this paper, the authors introduce an ongoing project that leverages the social-simulation framework Ensemble—a descendant of the ‘social physics’ engine used in Prom Week—to critically model a specific personal–political domain: activist group meetings. By harnessing the expressive power of the Ensemble engine, and moreover by cleverly modeling abstract concepts such as issues and identities as characters, the authors are able to simulate a variety of rich personal–political phenomena. Check out the paper for more info on the project’s intellectual underpinnings, technical approach, and future directions!
A Sandbox For Modeling Social AI
Designing and testing AI is like making jam: you need a big clean space to do it in, or it’s going to spill everywhere and make everything sticky and the dog’s going to come and try and eat your codebase. Those big clean spaces are sometimes called ‘sandboxes’ – software that creates a tiny example environment where AI can be tested and ideas can be explored. But sandboxes take time to make, and often aren’t very representative of the real software AI has to run on. Worse still, if everyone has their own sandbox, it’s really hard to compare systems to one another.
This paper is all about a new sandbox, made specifically for testing social AI, the kind you might see in The Sims. The best part is that it’s based on a real indie game – Project Highrise – so it’s polished, and represents the kind of systems you might find in a commercial game. This is a really valuable contribution that will help other people test their ideas in a shared space, compare results, and do it all in a well-built, stable and good-looking environment!
A Proposal for a Unified Agent Behaviour Framework
Ways to script non-player character behavior have seen many waves of popular techniques. The early days were dominated by simple finite state machines. Over time people got savvy and we added techniques for weighing different options – utility systems – and new ways to combine sequences of decisions – behavior trees. Each of these methods offers advantages and disadvantages, trading off ease of coding, interpretability, reactivity to near-term events, or ability to make long-term plans.
What if we wanted the best of all worlds? How could we combine the ability to react to near-term events with long-term planning, while making decisions that balance the value of different choices? In this paper the authors introduce a framework that leverages thinking from event handling to create an NPC scripting language with dynamic trees that make choices by weighing options using a utility system. Check out the paper for details on this promising new approach to NPC scripting!
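To make the utility-system half of that combination concrete, here’s a minimal sketch of utility-based action selection (my own illustration of the general technique, not the authors’ framework – the option names and scoring functions are invented): each option scores itself against the current world state, and the NPC picks the highest-scoring one.

```python
# Each option is a scoring function: world state in, utility score out.
# Higher utility means the option is more appealing right now.

def utility_flee(state):
    # Fleeing only matters when threatened and badly hurt.
    return 10.0 if state["enemy_near"] and state["health"] < 30 else 0.0

def utility_attack(state):
    # Attacking is appealing whenever an enemy is in range.
    return 6.0 if state["enemy_near"] else 0.0

def utility_patrol(state):
    # A weak constant default, so the NPC always has something to do.
    return 1.0

OPTIONS = {"flee": utility_flee, "attack": utility_attack, "patrol": utility_patrol}

def choose_action(state):
    """Score every option against the state and return the best one's name."""
    return max(OPTIONS, key=lambda name: OPTIONS[name](state))

print(choose_action({"enemy_near": True, "health": 20}))   # -> flee
print(choose_action({"enemy_near": False, "health": 100})) # -> patrol
```

The paper’s contribution is in wiring decisions like this into event-driven, tree-structured behaviours, so the same weighing of options can happen at different points in a longer plan.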
A General Level Design Editor for Co-creative Level Design
Co-creativity is one of the hot topics in AI research, focusing on systems that allow humans and machines to collaborate on generative tasks. These systems all take different approaches to human–machine collaboration, but one of the open problems is a question of UI: how best to design an interface that allows human authors to interact in meaningful ways with an AI system.
Here, the authors present a demo of a level editor (targeted at the ever-popular-in-game-AI-research domain of Super Mario Bros) designed to allow a human to work together with an AI system. While the paper’s length requires the authors to point at other works for the actual underlying AI, they close by stating that they hope to use the system as a means to test how working with different AIs feels for humans.
ProcDefense – A Game Framework for Procedural Player Skill Training
One of the motivations behind using procedural content generation in games is to enable automatic customization of gameplay challenges and experiences for players. Typically, this customization targets metrics such as difficulty and enjoyment (e.g., enabling players to experience games at an appropriate difficulty). But can this be taken further? Instead of just enabling experiences that match a player’s skill, can PCG be used to help train players by adjusting the experiences to improve their skills as they play?
Motivated by the success of simulation and AR games for training, this paper presents ProcDefense, a game intended to be used as a platform for player skill training. In ProcDefense, players engage in a top-down 2D action game – part bullet-hell, part tower-defense – in which they must defend a core object from incoming projectiles using a circular paddle. The game’s interface allows adjustment of a variety of parameters related to game mechanics – such as projectile speed or paddle size – and can then expose these as features to a difficulty-adjustment system (which is future work that we can’t wait to see!)
That concludes this year’s paper summaries! We hope to see you at the workshop or on the livestream in a couple of days!
(Special shout-outs to ex-organizers Mike Cook and Alexander Zook for helping out with this year’s summaries!)