The Paris Game AI Conference

It feels really good to be part of this amazing community of AI-minded individuals that joined forces over a few days in Paris for the Game AI Conference. High-quality keynotes and almost three hundred people to network with made it both enjoyable and very important for me to be there.

For all the coverage head over to http://aigamedev.com/open/coverage/paris10-report/

For news about next year's event I recommend http://gameaiconf.com/

Richard

Reinforcement learning

As an important part of my thesis I have been reading and learning about this subject. My effort has been split between an existing implementation in the sandbox and a new implementation that can take advantage of a regression tree a fellow sandbox AI programmer has been working on.

For the first part I loaded the step animations I had created and modified the reward function of the existing RL solution in order to achieve the desired behaviour. The demo works by simply having Marvin (the sandbox's star character) react to the mouse cursor; in my case this meant stepping away and to the side of the cursor.
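To give an idea of what such reward shaping can look like, here is a hypothetical stand-in for that reward function. The names, weights and structure below are my own illustration, not the sandbox's actual code: the agent is paid for increasing its distance to the cursor, with a bonus for lateral movement, minus a small constant cost per update.

```python
import math

def step_reward(agent_prev, agent_new, cursor, step_cost=0.05):
    """Hypothetical reward for stepping away and to the side of the cursor.

    Positions are (x, y) tuples; all weights are illustrative only.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    # Pay for the distance gained away from the cursor.
    away_gain = dist(agent_new, cursor) - dist(agent_prev, cursor)

    # Bonus for the component of the move perpendicular to the
    # cursor-to-agent direction, so purely lateral steps still earn reward.
    dx, dy = agent_prev[0] - cursor[0], agent_prev[1] - cursor[1]
    norm = math.hypot(dx, dy) or 1.0
    mx, my = agent_new[0] - agent_prev[0], agent_new[1] - agent_prev[1]
    lateral = abs(mx * (-dy / norm) + my * (dx / norm))

    # Small per-update cost so the agent does not dawdle.
    return away_gain + 0.5 * lateral - step_cost
```

For example, with the cursor at the origin and the agent at (1, 0), a side-step to (1, 0.5) earns a positive reward while standing still comes out negative, so the learned policy is pushed toward stepping clear of the cursor.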

After being pretty happy with that result I was presented with the opportunity to work on a new implementation, with the primary source of inspiration being a paper on Real-Time Planning for Parameterized Human Motion by Wan-Yen Lo and Matthias Zwicker. I followed this paper at first but eventually ended up doing things pretty differently; nevertheless, a new implementation was completed. The biggest difference from the former implementation is that with a regression tree you get a continuous space for each dimension, and can use several dimensions.
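To illustrate why the regression tree matters, here is a toy sketch of fitted Q-iteration on a single continuous state dimension (the agent's signed lateral offset from the cursor's path), using scikit-learn's DecisionTreeRegressor as a stand-in for the sandbox's regression tree. The states, actions and rewards are my own simplification, not the sandbox implementation:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Continuous state: signed lateral offset from the cursor's path, in [-1, 1].
# Discrete actions: step left, stay put, step right.
ACTIONS = np.array([-0.3, 0.0, 0.3])

def transition(s, a):
    """Deterministic toy dynamics: a step shifts the offset, clipped to [-1, 1].
    Reward pays for distance from the cursor's path, minus a stepping cost."""
    s2 = np.clip(s + a, -1.0, 1.0)
    reward = np.abs(s2) - 0.05 * (a != 0.0)
    return s2, reward

def fitted_q(iterations=15, gamma=0.8):
    # Sample transitions on a grid of states x actions.
    states = np.linspace(-1.0, 1.0, 41)
    S, A = np.meshgrid(states, ACTIONS, indexing="ij")
    S, A = S.ravel(), A.ravel()
    S2, R = transition(S, A)
    X = np.column_stack([S, A])

    tree = None
    for _ in range(iterations):
        if tree is None:
            targets = R  # first sweep: Q is just the immediate reward
        else:
            # Bootstrapped target: r + gamma * max_a' Q(s', a')
            q_next = np.column_stack([
                tree.predict(np.column_stack([S2, np.full_like(S2, a)]))
                for a in ACTIONS
            ])
            targets = R + gamma * q_next.max(axis=1)
        tree = DecisionTreeRegressor(max_depth=8, random_state=0).fit(X, targets)
    return tree

def greedy_action(tree, s):
    q = [tree.predict([[s, a]])[0] for a in ACTIONS]
    return ACTIONS[int(np.argmax(q))]
```

With this toy setup the greedy policy steps right when the agent stands slightly right of the cursor's path and left when slightly left, always increasing the offset; exactly the behaviour the reward encodes. The key point from the post is that the tree approximates Q over the continuous state directly, so no discretisation grid is needed, and additional state dimensions are simply additional feature columns.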

I am going to try to take some footage of the result at a later date, closer to the end of the thesis work.

Some links for Reinforcement Learning:

http://webdocs.cs.ualberta.ca/~sutton/RL-FAQ.html

http://aigamedev.com/open/reviews/planning-parameterized-human-motion/

Richard

Salsa as only Winterson can describe it

SALSA DI POMODORI

Take a dozen plum tomatoes and slice them lengthways as though they were your enemy. Fasten them into a lidded pot and heat for ten minutes.

Chop an onion without tears.
Dice a carrot without regret.
Shard a celery stick as though its flutes and grooves were the indentations of your past.

Add to the tomatoes and cook unlidded for as long as it takes them to yield.

Throw in salt, pepper and a twist of sugar.

Pound the lot through a sieve or a mouli or a blender.

Remember – they are vegetables, you are the cook.

Return to a soft flame and lubricate with olive oil. Add a spoonful at a time, stirring like an old witch, until you achieve the right balance of slippery firmness.

Serve on top of fresh spaghetti. Cover with rough new parmesan and cut basil. Raw emotion can be added now.

Serve. Eat. Reflect.

This recipe is from The PowerBook by Jeanette Winterson and is my strongest takeaway from the book. It made nothing short of good sense for me to simply cook the recipe and see if it tastes as good as it reads.

Slicing, chopping, cooking and blending brought me to this result. While following the recipe I also made a few modifications: some ham, chilli and more spices. The result was nothing short of brilliant: tasty, lots of vegetables, and very little sugar (compared to a ready-made pasta sauce).

I can't help but wish there were an entire cookbook of recipes written in such a way, as this one did a lot to inspire me and made me realise this project.

Richard

Steps for a rainy day

With Marvin and me getting along well, I set out to create a number of steps for stepping out of the way. I am new to animation, and the part of human motion that I find the hardest is getting the inertia just right. In order to reach a better result I have done some motion tracking using camera footage: my Ixus, a couple of friends and I spent an hour filming the other day to achieve this.

The idea was that I would film myself doing the movement and then superimpose Marvin (the rig) on top of myself and animate inside of this frame. The motion tracking gives me the scene and animates any camera movement, letting me watch the scene as with augmented reality.

Each green marker is where I have tracked pixels; the one in the far left corner of the corridor is set as (0, 0, 0) in 3D space, the one on our bottom left closest to the camera is positive in Z, and so forth. Taking down the measurements for each post-it in the corridor, I could enter these directly into the program, making it very easy to compute the exact camera position and the corresponding 3D scene.
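The geometry behind this is standard camera resectioning: given the measured 3D positions of the markers and their tracked 2D pixel positions, the camera's projection matrix, and from it the camera position, can be recovered with a direct linear transform (DLT). The sketch below is my own minimal NumPy version of that standard technique, not the tracking program's actual algorithm:

```python
import numpy as np

def camera_from_markers(points_3d, points_2d):
    """Recover the 3x4 projection matrix P and the camera centre from
    six or more marker correspondences via the Direct Linear Transform."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        Xh = [X, Y, Z, 1.0]
        # Two rows per correspondence, from the constraint x cross (P X) = 0.
        A.append([0.0, 0.0, 0.0, 0.0] + [-x for x in Xh] + [v * x for x in Xh])
        A.append(Xh + [0.0, 0.0, 0.0, 0.0] + [-u * x for x in Xh])
    # P (up to scale) is the null vector of A: the last right singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)
    # The camera centre C satisfies P @ [C, 1] = 0: take the null space of P.
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]
    return P, C[:3] / C[3]
```

Given at least six non-coplanar markers, feeding in the measured 3D coordinates and the tracked pixel positions reproduces the camera position for each frame, which is exactly what lets the software re-project the 3D scene on top of the footage.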

All in all I am really happy with the result, and I might post a video of it in the future.

Richard

Meet Marvin

This is Marvin: he is the AiGameDev rocketeer, stuntman and puppet of choice for any AI- and animation-related project done in the sandbox. A project of mine has been to give him some new moves, and in order to do this we needed to come to an agreement on exactly how to communicate with each other.

To help Marvin express himself I am able to…

Move the feet around and use a footroll slider for lifting the heel and toes.

Control the hands, elbows and shoulders.

Pose the spine, which has complete support and was made using the Awesome spine! It includes hips, back and upper-body controls.

Control the head and gaze.

Richard

Teaching autonomous agents to get out of your way

Or for short, taatgooyw.

Over the coming months I am going to be working on my thesis project. My goal is to solve a pretty simple problem in an interesting way. I see this problem all over the place, however: games where NPCs don't even blink to indicate that you are running into them.

The work will be done using the AIGameDev.com sandbox. The benefits of this are an array of great people to ask for help, a website with a lot of information on my subject, and an engine with all the components I need to start working on this project.

The idea is to start from a simple model of the interaction and design a system that can solve this. The system will use reinforcement learning and a policy aimed at teaching the agent how to step out of the way of the player. With everything in place, a demo will be constructed to show this behaviour.

  • Design a simple model for the interaction.
  • Animate a number of steps appropriate for this behaviour.
  • Learn the sandbox animation system.
  • Implement the behaviour using reinforcement learning in the sandbox.
  • Create a small demo.
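At the heart of the reinforcement-learning step in the plan above sits the classic tabular Q-learning update, which is worth writing out. This is a generic sketch with hypothetical state and action names, not sandbox code:

```python
# Hypothetical action set for the step-aside behaviour.
ACTIONS = ("step_left", "stay", "step_right")

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).
    Q is a dict mapping (state, action) pairs to values."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

Repeated over many interactions of the player walking into the agent, updates like this nudge the table toward values whose greedy policy steps out of the way, which is exactly the behaviour the demo is meant to show.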

My goal is to work with reinforcement learning, as it is an area that seems to need more work before it can fully bloom. Hopefully the result will be simple and effective, and open the way for other uses for the system in an animation context.

Richard