Thursday, May 22, 2014

Math libraries for C++

Stumbled upon this useful article: http://www.mfoot.com/blog/2012/01/23/choosing-a-maths-library-for-cross-platform-c-game-development/

I've been using OpenCV for generic linear algebra, and a few other libraries for fixed-size 3D matrices and vectors. However, I've grown tired of OpenCV's quirks, especially how hard it is to debug: internally, void pointers are passed around and a custom RTTI system tries to make sense of the data (a legacy of the first, C-only OpenCV versions). It seems to be fairly easy to trigger an exception for which one doesn't even get a stack trace. Maybe it's time to switch to Eigen, which is apparently equally useful for large and small matrices, and reportedly also faster than GLM.

Edit May 27: Eigen definitely seems to work well and is easy to use. Creating math-heavy applications in C++ just got so much easier.

Edit June 12: I'm a full Eigen convert now, preaching to everybody who cares to listen =) Many matrix size and alignment errors are caught at compile time, and the API is flexible - for example, one can declare a matrix m and then write Vector3f v = m.rowwise().sum(). More in this API showcase.
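
To make the compile-time checking and the rowwise/colwise reductions concrete, here is a minimal sketch (assuming Eigen 3 on the include path; the values are just illustrative):

    #include <Eigen/Dense>
    #include <iostream>

    int main()
    {
        // Fixed-size 3x3 matrix: dimensions are known at compile time.
        Eigen::Matrix3f m;
        m << 1, 2, 3,
             4, 5, 6,
             7, 8, 9;

        // Sum each row into a 3-vector. Assigning the result to a vector of the
        // wrong fixed size would be rejected by the compiler instead of failing
        // at runtime like an OpenCV size mismatch.
        Eigen::Vector3f v = m.rowwise().sum();
        std::cout << v.transpose() << std::endl;  // prints: 6 15 24
        return 0;
    }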




Saturday, May 17, 2014

Beyond Ragdolls (we're hiring)

I'm proud to share the first public results from our Future Game Animation research project, supported by 7 game and animation companies and Tekes (Finnish Funding Agency for Innovation). Our paper "Online Motion Synthesis Using Sequential Monte Carlo" was accepted to this year's ACM SIGGRAPH conference.

Note: we're hiring! If you'd like to do a doctoral or Master's thesis about character motion synthesis using physics simulation, email me at perttu.hamalainen@aalto.fi.

Here's the video accompanying the paper:


Here are also some image sequences:

Executive summary: We've developed an "intelligent ragdoll" that can dodge projectiles, land on its feet, and get up after falling. All movement is emergent and does not require a library of animation or motion capture data. The goal is to eliminate animation-related delays in game prototyping and development, and enable new forms of gameplay.

I've previously posted a Unity script for turning a Mecanim character into a ragdoll that can get up after falling. While that script is quite simple, the resulting motion does not respect physical constraints like non-penetration, and the script also needs the get-up animations. Using considerably more code and CPU, our research system - also integrated with Unity - can generate getting up and other behaviors without any animation data, by optimizing the control parameters of the physics simulation. This will probably be how intelligent characters are implemented in future games, although more work is still needed on computational efficiency and movement quality. We will be working on this at least until the end of 2015. In addition to improving the technology, we also plan to build some proof-of-concept prototypes of the novel gameplay it enables.
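
To give a rough idea of what "optimizing the control parameters of physics simulation" can look like, here is a heavily simplified C++ sketch. It is not the actual algorithm of the paper (which uses sequential Monte Carlo sampling), and State, Control, simulateStep and costOf are hypothetical placeholders standing in for a real physics engine and a task-specific objective:

    #include <limits>
    #include <random>
    #include <vector>

    struct State { /* rigid-body positions, velocities, contacts, ... */ };
    struct Control { std::vector<float> jointTargets; };  // e.g., target joint angles

    // Placeholder stubs: a real system would step the physics engine here and
    // score the resulting pose (balance, target orientation, effort, ...).
    State simulateStep(const State& s, const Control& /*c*/) { return s; }
    float costOf(const State& /*s*/) { return 0.0f; }

    // Each frame: sample random candidate controls, simulate each one a short
    // horizon into the future, and keep the cheapest candidate.
    Control planOneFrame(const State& current, int numSamples, int horizon,
                         int numJoints, std::mt19937& rng)
    {
        std::normal_distribution<float> noise(0.0f, 0.1f);
        Control best;
        float bestCost = std::numeric_limits<float>::max();

        for (int i = 0; i < numSamples; ++i)
        {
            Control candidate;
            candidate.jointTargets.resize(numJoints);
            for (float& p : candidate.jointTargets)
                p = noise(rng);                     // random exploration around zero

            State s = current;
            float cost = 0.0f;
            for (int t = 0; t < horizon; ++t)
            {
                s = simulateStep(s, candidate);     // forward-simulate the candidate
                cost += costOf(s);                  // accumulate how "bad" the outcome is
            }
            if (cost < bestCost)
            {
                bestCost = cost;
                best = candidate;
            }
        }
        return best;  // apply this control for the current frame, then replan
    }

    int main()
    {
        std::mt19937 rng(1234);
        State ragdoll;
        Control c = planOneFrame(ragdoll, /*numSamples*/ 100, /*horizon*/ 10,
                                 /*numJoints*/ 30, rng);
        (void)c;  // in a game, c would now be fed to the physics engine
        return 0;
    }

The actual system uses sequential Monte Carlo sampling rather than this naive best-of-N search, plus a lot more machinery, but the sketch hopefully illustrates why no animation data is needed: the movement emerges from sampling controls, forward-simulating them, and scoring the outcomes.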


FAQ:

Q: Isn't this like NaturalMotion's Morpheme/Endorphin?

A: Yes and no. What I've seen from NaturalMotion (e.g., Clumsy Ninja) is more like my Mecanim ragdoll script in that it relies on a lot of premade animation. While the press and NaturalMotion's marketing like to portray it as "one big simulation", Clumsy Ninja's kicking, trampoline jumping, getting up, and similar behaviors are premade animations. There is some degree of procedural footstep generation when dragging the character, but also artificial, non-physical glitches and sliding, e.g., when the character adjusts its distance to the target before kicking or punching.

I'm not saying that Clumsy Ninja isn't a great product. However, creating such believable and intelligent characters is presently highly expensive due to the amount of custom motion capture, animation, and scripting. The main point of our work is that it requires no motion capture, animation, or precomputation and is fully based on physics simulation.

Q: What about the recent viral video where characters were walking?

A: The video (by Geijtenbeek et al., whom we cite in our paper and have enormous respect for) shows the results of offline optimization of a controller that can then generate walking in real time. The drawbacks are that 1) the offline optimization takes a lot of time, and 2) the controller is motion-specific: the characters can't, e.g., get up after falling (which also applies to NaturalMotion's procedural balancers used in Endorphin). Note that the boxes thrown at the characters in the video appear very light, and the steering and ground-angle variations are quite modest. Our system does not require offline optimization or other precomputation and produces a wider range of movements, although the quality is not yet as good (we're working on that...).

Q: Can I download the code?

A: The system is presently quite complex and very much a work in progress. We're planning on an open source release sometime in 2015.

Augmented climbing and HCI in sports

At this year's ACM CHI conference, we had three submissions: a video showcase and a poster/extended abstract about our augmented climbing wall (a project of Raine Kajastila, a post-doc in our group), and an HCI in sports workshop paper about designing for empowerment in full-body human-computer interaction.