In the demo session we got the chance to take a closer look at the applications presented at the conference. The demo session reminds me of the one at the ACE conference, but here at FDG it was more woven into the program: it was scheduled late, so that one could go and look closer at the applications that had sparked one's interest during the paper presentations. All conferences in this field should do it like this! As you can see in the picture below, the room was packed.
Here is Ken Hullett from the EIS lab, showing his scenario generation app for emergency rescue training games.
Ken is using a hierarchical task network planner that (I asked) can be reused for other systems. (Note to self: don’t forget.)
In the corner next to the KODU showcase I saw people wearing space-like goggles, holding pieces of cardboard with patterns on them. They looked totally immersed as they carefully tilted the cardboard at different angles. I asked to try, and lo and behold (as Yasmin Kafai would say): when I was wearing the goggles and looking at the patterned piece of cardboard, things appeared on it! By tilting the cardboard I could steer a ball that rolled in different directions through a maze.
I remember looking at other applications for augmented games. A group at Fraunhofer made the cross-media game Epidemic Menace, where goggles provide an overlay of fictional content on reality – the goggle covered only one eye, and through it one could see how the otherwise invisible viruses were moving around in the environment where one was standing. Another augmented gaming solution that comes to mind is Sony’s The Eye of Judgment, which I saw at TGS 2006.
(1) Fraunhofer’s virus game, me trying on the goggles, (2) poster of the Epidemic Menace game from Fraunhofer, (3) Sony’s The Eye of Judgment (TGS 2006)
There are many other systems, but somehow… well, that could be me being ignorant, but somehow they don’t seem to fly out into the world. I see an initial presentation, I get enthusiastic, but then they fade into silence. On the other hand: these are early, brave, expensive projects that are heavy on the tech. They fly by showing what is possible. Paving the way.
Now, the system shown at FDG is called Goblin XNA, for augmented reality research and games; it is open source and builds upon XNA. This, combined with the fact that the goggles are cheap, suddenly makes it very accessible to work with! Perhaps this could be one of the applications that start trotting along the pavement laid out by the sweat, blood and tears of the earlier projects. We will see. For my part I’m putting Goblin into my “future box” (things I want to play with when I’m done with the dissertation).
Damian Isla, Founder, Naimad Games, Next-Gen Content Creation for Next-Gen AI
Damian Isla approached the area of AI-aided content creation for games by looking at two possible solutions: (1) Michael Mateas’ standpoint that we need a new breed of engineering-competent designers, and (2) Chris Hecker’s standpoint that we need better authoring paradigms (“the Photoshop of AI”). Damian showed examples of state-of-the-art applications from three categories: (1) causal (“when A happens, do B”), (2) learning (and behavior capture), and (3) planning.
For each of the categories he showed screenshots of interfaces illustrating the approaches, among them Endorphin (NaturalMotion), Havok Behavior, Autodesk, The Restaurant Game (Jeff Orkin), AC Knowledge Viewer (TruSoft), Assassin’s Creed (Ubisoft), Halo 3, The Sims, F.E.A.R. (Monolith Productions), Final Fantasy 12 (Square Enix), SpirOps AI, Zombie (Steve Marotti, Nihilistic Software), Situation Editor (Brian Schwab, Sony), BT Editor Prototype (Alex Champandard), Façade (Mateas & Stern).
Damian went on quoting Stanislavsky (whereupon I almost fell in love with the speech),
and then showed a mockup of the office assistant helping out with a suicide letter (where I DID fall in love),
and closed his circle of argument by returning to the two possible solutions. Damian thinks we need both, and I agree. It’s not new to say that AI needs to be done in cooperation with designers, but what Damian is saying is that it should be done BY designers – either in code, or by using technical solutions for content creation. (And we need more of those.) I might of course be biased given the work I do… but hey, there is a reason for it. Yay for Damian!
I understood later in the day that I had missed a really good speech in the morning: Tan Le’s presentation about Emotiv, “The Brain – Revolutionary Interface for Next-Generation Digital Media”. Emotiv has developed a helmet that listens to the EEG waves of the brain, and has managed to build a system that can filter the noise from the signals well enough to enable a player to move 3D objects with pure thought! While still in Orlando, after having disembarked the ship, I watched a presentation of the system that is out on YouTube. …Would the Marvin who is up on stage there be Marvin Minsky? I really hope that I can swing some time after I’m done with the dissertation to play around with the system: there is an SDK for it. (Thanks David Gibson for sending me the link and summarizing her whole speech in conversation :))
We spent the afternoon ashore, and after returning to the ship it was time for the poster session and a panel about Academic/Industry Collaboration.
Here are the posters that I found most interesting:
Panel about Academic/Industry Collaboration. Steve Berman (Founder & CEO, Transformative Media Consortium and COO, IP Pacific/Canada), Mark Overmars (Utrecht University), Magy Seif El-Nasr (Simon Fraser University), Kurt Squire (University of Wisconsin) and Bill Swartout (USC/ICT)
The dinner theme was pirate costumes, and the EISers (joined by Ian and Maggie) went all out! (I kept wishing I had bought the pirate hat I found in Nassau)
Paper Session #5: Game Studies Session chair: TL Taylor
Hardcore casual: Game culture return(s) to Ravenhearst, Mia Consalvo
Mia Consalvo has been studying the player base of the casual games produced by Big Fish Games in order to see if players of casual games differ from players of other types of games – but she found lots of similarities between these players and players of more ‘hardcore’ games. I took some notes while listening:
Easy to use and incredibly difficult: On the mythical border between Interface and Gameplay, Jesper Juul and Marleigh Norton
Jesper Juul and Marleigh Norton gave some illustrative examples of games juxtaposing an easy interface with complicated gameplay, and games with easy gameplay but extremely complicated interfaces. They argued that in some cases an inefficient interface can be part of the game – something to learn to master.
Here are the notes I jotted down while listening:
Characterizing and Understanding Game Reviews, Jose Zagal, Amanda Ladd and Terris Johnson
Jose and his coauthors have been analyzing a large number of game reviews in order to understand their influence and character. According to Jose they are much more than shopping guides, and have the following nine common themes:
Invited Talk: Yasmin Kafai, Ph.D., Professor of Learning Sciences, University of Pennsylvania Graduate School of Education. Beyond Barbie and Mortal Kombat: New Perspectives on Girls and Games
Yasmin Kafai gave an interesting speech on the history of designing games for girls, providing us with three named categories for these design endeavors:
- Games for girls only
- Games for social change
- Games for expression
Kafai conducts her research in a teen virtual world where the percentage of female players is higher than that of males (yes, they exist). She gave a fascinating overview of her work, where she had analyzed six months of play logs and could see patterns in behaviors and very diverse playing styles. I don’t think anyone would believe that all girls play the same way just because they are girls, but if there is anyone out there assuming this, Kafai’s work has solid proof to the contrary.
During this session I was distracted: I thought I had lost my camera, so I was running around searching for it, and filled in a form at the guest services in case someone had found it. Once I gave up I had missed Ken’s speech (Scenario generation for emergency rescue training games, Kenneth Hullett and Michael Mateas).
The room was full, so I settled down on the floor listening to Gillian presenting ( Rhythm-Based Level Generation for 2D Platformers, Gillian Smith, Mike Treanor, Jim Whitehead and Michael Mateas). I started to empty my handbag… and found the camera. Happily I took this photo:
At least I was in time to listen to Christina present her generative conversation tool for game writers. She has built it in cooperation with Telltale Games, and has thus gotten continuous input from game writers using the system.
The system generates small talk: it picks a “topic”, which proposes a “fact” on the topic, which results in a “quip” – the actual dialog line. The speakers in the application are zombies in a cocktail party environment, very forgiving and charming. A stroke of genius, since a user would then be impressed by the intelligence of a zombie, rather than disappointed in a presumed human. These are the pieces of the system:
Parameters used by the conversational agents include alcohol, appropriateness and silence tolerance. The influence of alcohol (it’s a cocktail party context) raises the threshold for appropriateness, while it shrinks the memory for dialog lines, thus possibly generating less appropriate and more repetitive dialog (i.e. the characters don’t remember what they just said). The agents have different levels of tolerance to silence, so an agent with low tolerance would start talking sooner if there is an awkward pause due to inappropriateness. For topic generation, input is taken from… I can’t remember if it was from WordNet or ConceptNet. But from a net. Great work this is. It will be exciting to see how it progresses.
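To make the description above concrete, here is a toy sketch of how such a topic → fact → quip pipeline with alcohol and appropriateness parameters could hang together. All names, numbers, and example facts are my own guesses for illustration, not the actual system:

```python
import random

# Each topic proposes candidate facts; a fact carries an
# "inappropriateness" score in 0..1 (hypothetical data).
TOPICS = {
    "weather": [("it rained all week", 0.0), ("the fog smells of graves", 0.6)],
    "brains":  [("fresh brains are hard to find", 0.3),
                ("your brain looks delicious", 0.9)],
}

def make_quip(fact_text):
    """Turn a proposed fact into the actual dialog line (the 'quip')."""
    return f"You know, {fact_text}."

class ZombieGuest:
    def __init__(self, name, alcohol=0.0, silence_tolerance=0.5):
        self.name = name
        self.alcohol = alcohol                      # 0..1
        self.silence_tolerance = silence_tolerance  # lower => breaks silence sooner
        self.said = []                              # memory of recent lines

    def memory_size(self):
        # More alcohol => shorter memory => more repeated lines.
        return max(1, int(4 * (1.0 - self.alcohol)))

    def acceptable(self, inappropriateness):
        # More alcohol raises the threshold for what counts as appropriate.
        return inappropriateness <= 0.3 + 0.7 * self.alcohol

    def speak(self, topic, rng=random):
        candidates = [f for f in TOPICS[topic] if self.acceptable(f[1])]
        if not candidates:   # nothing appropriate enough to say:
            return None      # an awkward pause someone else may break
        text, _ = rng.choice(candidates)
        quip = make_quip(text)
        self.said = (self.said + [quip])[-self.memory_size():]
        return quip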
Before lunch the second day of FDG Magy Seif El-Nasr held the Working Group Meeting: Towards Acknowledging the Diversity of Game Research Methodologies. These were our starting points:
We gathered in groups and presented what research we were doing and what methods we were using. In our group this took 20 of the 30 minutes we had. Our methods were diverse, of course. The discussion concluded that we need to continue the conversation, and that we, as David Gibson put it, need a sustained workgroup on methods.
This is important. It’s easy to become reactive when looking at other people’s methods and research. I for one, when I started my PhD, thought that I would build experimental prototypes, and build AI solutions according to principles from authoring, inspired by psychology, specifically behavior-related fields. Well, one would think that using methods from computing science, social science theory and the humanities would be quite a lot to start with. But during the past three years this has proved to not be enough at all: no way will a paper be accepted unless there is a study of the prototype using methods from the HCI field. At least it has been so in the venues where I have gone. I don’t think this is bad; on the contrary, it is a good way of finding knowledge. But it could also be a sign that it is difficult to quantify results in other ways. And, this area being comparatively young, there is a need to be sure of one’s legitimacy. It can give a sense of safety to add a bar chart and point at the data underlying it. But we all know that there are results not that easily quantified that are still good research. I would like to see this problematised more for our specific field. If we need to clutch our teddy bears of bar charts, then let it be so. There is nothing wrong with transitional objects.
In the group discussion someone noted that even Jesper Juul has started to use quantitative methods, whereupon Jesper merrily exclaimed: "I'm a reformed formalist!"
Luckily for us, Magy is going to put together a wiki on game research methodologies, and I expect that this can lead to interesting and necessary discussions.
On the morning of the 2nd day of FDG I listened to the AI session, where David Olsen presented his take on computational humor in games (Beep! Beep! Boom!: Towards a Planning Model of Coyote and Road Runner Cartoons). He recognizes the following problems:
David is specifically looking at physical humor, and is studying Coyote and Road Runner cartoons. Here, there is a set of rules to lean on: David is building a solution – named ACME :) – using HTN planning in order to generate gags.
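For readers unfamiliar with HTN planning: it works by recursively decomposing compound tasks into primitive actions via methods. Here is a minimal sketch of that idea applied to a cartoon gag; the task and method names are hypothetical illustrations of mine, not David’s actual ACME system:

```python
# Compound tasks map to methods, i.e. ordered lists of subtasks
# to try; primitive actions are executed directly.
METHODS = {
    "deliver_gag": [["set_trap", "lure_victim", "trap_backfires"]],
    "set_trap": [["order_acme_kit", "assemble_kit"]],
    "trap_backfires": [["beat", "beat", "explode_on_coyote"]],  # timing beats
}

PRIMITIVES = {"order_acme_kit", "assemble_kit", "lure_victim",
              "beat", "explode_on_coyote"}

def plan(task):
    """Depth-first HTN decomposition: expand compound tasks into primitives."""
    if task in PRIMITIVES:
        return [task]
    for method in METHODS.get(task, []):
        steps = []
        for sub in method:
            sub_plan = plan(sub)
            if sub_plan is None:
                break
            steps.extend(sub_plan)
        else:
            return steps
    return None  # no applicable method
```

In a real planner the “beat” primitives would carry precise durations, which connects to David’s point about fine-grained timing being essential to the comedy.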
In the discussion there was a question about time, and David explained the importance of having an extremely fine granularity in the timing. This is the first attempt I have seen at generating humorous content computationally. As far as I know David is breaking new ground here. I so look forward to seeing what more will come from his research.
Magy Seif El-Nasr presented next in the session. Joseph Zupko has continued developing the system for automated lighting that Magy worked on for her dissertation (“System for Interactive Automated Lighting (SAIL)”).
In games, as opposed to movies, it is often the player who controls the camera and thus the viewing angle. The creators can’t control these angles, and therefore not the lighting either. This is where SAIL comes in, providing automatic lighting and an authoring interface for the creators. I don’t know what to say except that I’m continuously and profoundly impressed by SAIL.
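To illustrate the core problem (not SAIL’s actual, far more sophisticated method): because the player moves the camera at runtime, lights must be recomputed on the fly to preserve the author’s intended look. A naive toy version of this idea, with all names and numbers my own:

```python
import math

def key_light_position(camera_angle_deg, subject_pos,
                       offset_deg=45.0, distance=5.0):
    """Place the key light at a fixed angular offset from the camera,
    so the subject keeps the same authored contrast no matter where
    the player points the camera (2D top-down sketch)."""
    angle = math.radians(camera_angle_deg + offset_deg)
    x = subject_pos[0] + distance * math.cos(angle)
    y = subject_pos[1] + distance * math.sin(angle)
    return (x, y)
```

Here the authoring interface would expose parameters like `offset_deg` (the authored look), while the runtime side re-solves the light position every frame as the camera angle changes.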
In the afternoon, when we were all safely aboard, Chris Satchell (Chief Technology Officer for the Interactive Entertainment Business (IEB), Microsoft) gave the opening keynote: Evolution of the medium: Positioning for the future of gaming
Next on the schedule was a panel about “Creating and Managing an Academic Games Program” with Ian Bogost (Georgia Tech), Gary Brubaker (The Guildhall at SMU), Andrew Phelps (Rochester Institute of Technology), Walker White (Cornell University), Jim Whitehead (UC Santa Cruz), Michael Zyda (USC Gamepipe)
In the discussion one of the questions was about how to keep students from “crashing and burning” during their games projects. Michael Zyda said that at USC they use “best practices” and weekly planning meetings, i.e. putting a heavy focus on the work process. Ian said: well, let them crash and burn, that’s the best way to learn. This is a problem that most student groups encounter, and the general sense of the audience and the panel was that it is good for students to learn early on, during their education, what it is to crash and burn, in order to avoid doing it later, both in their final exam projects and when they go into the industry.
After this I went to another panel: “Funding Landscape for Games-Related Research” with Mary Lou Maher (National Science Foundation), John Nordlinger (Microsoft Research), Ben Sawyer (Digital Mill), Roger Smith (US Army PEO STRI).
This panel focused on the funding landscape in the US. A question about the funding landscape on other continents was put to the audience, but no one stepped up, so the panel continued focusing on the US.
Mary Lou Maher from the National Science Foundation (of the US) recommended that people volunteer as reviewers and develop a communication channel with NSF. Focus areas that people doing games-related research can tap into include social computing, health-related issues and education (“No Child Left Behind”).
John Nordlinger from Microsoft stressed the importance of, once funding is in place, making sure that the results are communicated to the funding organization. (I.e. don’t embarrass the person who believed in your project.)
Ben Sawyer ended the session by asking his little son, who was also on stage, what to say when one needs funding, and got the answer “daddy daddy daddy daddyyy”. A nice illustration of persistence and persuasiveness!
I just got home from the conference Foundations of Digital Games (FDG). FDG has been running for a few years now, hosted by Microsoft, and was this year larger than in previous years, with 220 attendees.
FDG is special in a number of ways. First, it is an academic game conference that explicitly focuses on the educational aspects of games and how we teach them, providing a forum for talking about these issues. To me this was very valuable, since I mostly go to conferences where I present research results and listen to others presenting theirs. FDG recognizes that we who do research in the field of games in the academic realm also spend time teaching about it. So at FDG I got both – the latest work in my research area, expressive AI for games, AND fuel for thoughts about teaching games. Speaking of expressive AI – there were 14 (!!) people from the Expressive Intelligence Studio at UCSC, where I was a visiting scholar last year. It was absolutely wonderful to meet almost the whole lab and get to listen to presentations of the latest work done there. (But I really missed Josh McCoy, who couldn’t make it. I guess I’ll need to email him now to ask how his work on simulating social stigma progresses.)
Another aspect of FDG that I appreciate is its technical focus that still embraces research from the humanities and the social sciences. It’s like a “technical DiGRA”. For the research community this is useful because many of us need peer review of full papers rather than reviews based only on abstracts. FDG might fill a needed space – a conference that is broad enough to address the most important issues in game research no matter what discipline the results come from, but that still has review processes that can both recognize what to publish and do it in a manner that is acceptable to our home departments. I hope that FDG can be a kind of mix of AIIDE and DiGRA and still keep the educational focus. Jim Whitehead had a good comment about knowledge transfer in the closing session of the conference: academics go to GDC to listen, and people in the games industry go to FDG to listen. It would be great if FDG can fill this role in the future. As an academic I go to several research-oriented conferences each year, and if I have time and can find the resources I go to one industry event: GDC – and this doesn’t happen every year, given the expense and the travel time. It would be useful if the industry had one conference to attend to take the pulse of the state of the art in games research. Given the constant interest in Ian’s, Jane’s and Mia’s yearly GDC session on the top ten research results of the year, I think this would be interesting to many.
The brightest highlight for me was getting advice from Michael Mateas, who is one of my advisors for my dissertation. (I glued myself into a place of proximity until he gave up :)) I showed him and Noah (Wardrip-Fruin) the current state of the prototype Pataphysic Institute, and then went to get my swimsuit and stuff to bring ashore (yes, the conference was on a cruise ship!). We had the actual meeting on the beach! If someone had told me when I was 14 that I would at some point in my life be in a situation where I was working on a dissertation about AI and game mechanics in MMOs… I wouldn’t have understood much, given that I hadn’t even played an MMO back then and had only tried programming GOTO statements on a “COMPIS” …but I would have understood “meeting at a beach in the Bahamas”, and I’m sure my adolescence would have been much nicer!
During the conference I took many pictures of slides and speeches – I use them afterwards to get myself a summary of the conference. I’ll put up a small photo diary here as a post.