Thursday, October 17, 2013

Unity and version control

Here's some info on version control in Unity for this fall's game project course. Summary: it's OK to use git, svn, etc. if you know a few tips, and Aalto doesn't have licenses for Unity's own version control system (the Asset Server). Only add the Assets and ProjectSettings folders to version control. Don't add the automatically generated Library folder or the MonoDevelop and Visual Studio solution files.
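
As a concrete example, a minimal .gitignore along these lines might look as follows (assuming git and the Unity project folder as the repository root - extend it with your own editor's leftover files as needed):

# Unity-generated folders - don't commit these
Library/
Temp/

# MonoDevelop / Visual Studio solution and project files
*.csproj
*.unityproj
*.sln
*.userprefs
*.pidb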

Also: DON'T SWITCH BACK TO UNITY WHILE YOU'RE GETTING/UPDATING FILES FROM VERSION CONTROL. This will cause major errors, as Unity starts importing and reimporting the files right away, and in the process it may create new .meta files if the correct .meta file was not yet updated.

Basic info about setting up external version control can be found in Unity's manual page on using external version control systems.

Some additional points:
  • If you are using Unity Pro or an academic license (e.g., on school computers), go to Edit/Project Settings/Editor and set Asset Serialization to "Force Text", which enables svn, git, etc. to merge the files.
  • However, the merging doesn't always work, so to minimize conflicts, you should avoid modifying your scene files. The way to do this is to have everything as prefabs and to edit the prefabs instead of modifying the scene. This way, it's less likely that several people modify the same file.
  • Enabling external version control as explained in the document above creates a .meta file for each asset file. Always remember to add the metafiles to your repository. The metafile contains the unique identifier (GUID) that Unity uses, e.g., when you link a prefab to a script's public GameObject property. If you don't commit a metafile to the repository, it will be regenerated on each person's machine with different GUIDs, which breaks the object linking and causes null reference errors. (A stripped-down example of a metafile is shown after this list.)
  • Be careful when moving and deleting assets. If you delete or move assets inside the Unity editor, svn etc. will of course bring them back when you update before committing your changes. However, once the metafiles are in use, you can safely move and delete files outside Unity as long as you move and delete the metafiles too. Unity will recognize the moved assets as the same ones based on the metafiles.
  • When googling to check whether there's a good tutorial or whether I missed something above, I noticed quite a lot of "the free version of Unity doesn't support version control". However, things have changed: as of version 3.5, metafiles are no longer a Pro-only feature.
  • The "force text" option is apparently a pro-only feature, but there are workarounds: http://answers.unity3d.com/questions/8740/version-control-workflow.html & https://github.com/terravision/UnityTextScene
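
For reference, a metafile is a small text file that Unity creates next to each asset. It starts roughly like this - the GUID below is made up, and asset-type-specific import settings follow these lines:

fileFormatVersion: 2
guid: 0f3c7ad1b2c94e6a8d5e9b1c2a3d4f56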

Unity and FMOD Studio integration

Edit 12 June 2014: This post was based on the very first FMOD Unity integration from fall 2013. There have been changes since, and the project file should probably be updated. Please (also) check some more recent tutorials.

There's finally an official FMOD Studio integration for Unity, available from http://www.fmod.org/download/. It has just been released and contains no Unity examples, so I prepared a bare bones Unity project and an example FMOD Studio project that you can download from here https://www.dropbox.com/s/vyk29kyhmvm22dc/part6_sound.zip

The main reason to use this is that it enables your sound designers to edit, mix, and master your game audio in the FMOD Studio interface while the game is running. Using FMOD Studio, you can also create, e.g., a footstep sound event that randomizes the pitch and the selected sound sample without the game having to know anything other than the name of the event. FMOD is free for non-commercial purposes, and the license prices are also pretty reasonable for commercial work.

Most commercial games with decent sound design use such a live mixing and centralized sound management approach, using either FMOD, Wwise, or some custom tools. Unity itself uses FMOD for its own sound, but it does not provide any mixing tools, and a sound designer will go insane if he/she has to manage hundreds of sound objects in the Unity editor. The main limitation of the FMOD Studio integration is that it only supports iOS, Android, Windows, and Mac, but not the web player. If you want your sounds to work in the web player, I suggest you check out the Clockstone Audio Toolkit, which provides mixing and management on top of Unity's internal sound system.

Here are the steps you need to add FMOD Studio support to your Unity project. These instructions can also be found in the readme.txt of the .zip above.

  1. Import the fmodstudio10203.unitypackage from this folder. You can also check fmod.org for a later version. To import it in the Unity editor, select Assets/Import Package/Custom Package.
  2. Build your FMOD banks (File/Build).
  3. In Unity, select FMOD/Refresh Event List and select the GUIDs.txt from your FMOD project's Build folder. Note: you need to do this whenever you rebuild the FMOD project. This will create the FMODAssets and StreamingAssets folders in the Unity project and copy the FMOD banks to the StreamingAssets folder.
  4. When your scene starts, load a bank (see the Start() method in main.cs and the sketch after this list).
  5. Use the FMOD_StudioSystem helpers such as PlayOneShot() to play sounds; see the OnGUI() method of main.cs. You might also have to use the lower-level API (fmod.cs, fmodstudio.cs), as the integration package has been released very recently and doesn't contain any examples. For example code, consult the C++ examples in the FMOD Studio API install, which you can download from fmod.org.
  6. To enable live mixing using FMOD Studio, uncomment the first line of FMOD_StudioSystem.cs, found in the plugins/FMOD folder. Also check the "Run in background" checkbox in the Unity editor: select Edit/Project Settings/Player and then the Resolution and Presentation panel in the Inspector. Now press Play in Unity, select File/Connect to game in FMOD Studio, and click OK. You can now adjust audio event properties on the fly while the game is running.
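
To give an idea of steps 4 and 5, here's a rough C# sketch. The bank-loading call is an assumption modeled on the FMOD Studio API's loadBankFile, and the event path is made up - the wrapper in fmodstudio.cs may expose things slightly differently, so treat main.cs in the example project as the authoritative reference:

using UnityEngine;

public class FmodSoundSketch : MonoBehaviour
{
    void Start()
    {
        //step 4: load a bank when the scene starts (compare with the
        //Start() method of main.cs for the exact wrapper signature)
        FMOD.Studio.Bank bank;
        FMOD_StudioSystem.instance.System.loadBankFile(
            Application.streamingAssetsPath + "/Master Bank.bank",
            FMOD.Studio.LOAD_BANK_FLAGS.NORMAL, out bank);
    }

    void OnGUI()
    {
        //step 5: play a one-shot event through the FMOD_StudioSystem helper
        //(the event path "event:/footstep" is a made-up example)
        if (GUI.Button(new Rect(10, 10, 150, 30), "Play footstep"))
            FMOD_StudioSystem.instance.PlayOneShot("event:/footstep", transform.position);
    }
}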

Thursday, October 10, 2013

Unity Mecanim and ragdolls

Update 17 May 2014: If you're interested in physics simulation and character animation, also check out our latest research. Please note that Mecanim has had some changes since the original post, and the script could now be implemented more elegantly. I also haven't tested the script with different characters - the comments suggest that some characters with a different bone structure might not work out of the box.

I'm sharing a Unity project with a RagdollHelper.cs script that can be used to turn a Mecanim character into a ragdoll and then back. Using the script is simple: there's just one public property, "ragdolled", which can be set to true or false. When the property is changed from true to false, the script blends from the ragdolled pose to a get up animation. This is illustrated in the video below.

Transitioning from Mecanim animation to a ragdoll is as easy as disabling the Animator component of the character and setting all ragdoll rigid bodies to non-kinematic. The reverse is more complex, and my solution is not without kludges, as Mecanim doesn't allow as low-level control of the animation as Unity's older animation system. The main problems are that one can't use scripting to force an immediate state transition without a blend, and apparently one also can't add states programmatically to an animation controller.
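
In code, the animation-to-ragdoll direction really is about this simple (a minimal sketch of the idea; RagdollHelper.cs in the project linked below does this with some extra bookkeeping):

using UnityEngine;

public class RagdollEnablerSketch : MonoBehaviour
{
    public void EnableRagdoll()
    {
        //stop Mecanim from driving the bones
        GetComponent<Animator>().enabled = false;

        //hand the body parts over to the physics simulation
        foreach (Rigidbody body in GetComponentsInChildren<Rigidbody>())
            body.isKinematic = false;
    }
}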

In the old animation system, one could create a new animation clip from the ragdoll's current pose, turn off ragdolling, and blend from the created clip to a recovery animation. In Mecanim this is not possible: after turning Mecanim back on, we instead have to manually lerp the hip position and slerp all body part rotations back towards the last ragdolled pose in LateUpdate() to achieve a smooth blend. Additionally, we have to wait a few frames for Mecanim to transition to the get up animation while using LateUpdate() to keep the pose at the ragdolled one, then read back the body orientation and position at the start of the get up animation, and update the Mecanim character's root so that the animated orientation and position match the ragdolled ones as well as possible. In the old system, we could simply update the character transforms with a single script command according to a specific frame of a given animation, and then read whatever info we need. We could possibly use the legacy animation system for that, but then the script would need additional handles to the animation etc., so there seems to be no really simple and beautiful way. One could also perhaps use an editor script to precompute the info.
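
Here's a stripped-down sketch of the LateUpdate() blending described above, with made-up field names - RagdollHelper.cs in the project below contains the full logic, including the root alignment:

using UnityEngine;

public class RagdollBlendSketch : MonoBehaviour
{
    Transform[] bones;               //all body part transforms, bones[0] assumed to be the hips
    Vector3 ragdolledHipPosition;    //hip position stored at the moment ragdolling ends
    Quaternion[] ragdolledRotations; //body part rotations stored at the same moment
    float ragdollBlendAmount = 1f;   //1 = fully ragdolled pose, 0 = fully Mecanim-driven
    const float blendTime = 0.5f;    //assumed blend duration in seconds

    //runs after Mecanim has written this frame's pose, so we can pull
    //the pose back towards the stored ragdolled one
    void LateUpdate()
    {
        ragdollBlendAmount = Mathf.Max(0f, ragdollBlendAmount - Time.deltaTime / blendTime);

        //lerp the hip position...
        bones[0].position = Vector3.Lerp(bones[0].position, ragdolledHipPosition, ragdollBlendAmount);

        //...and slerp all body part rotations
        for (int i = 0; i < bones.Length; i++)
            bones[i].rotation = Quaternion.Slerp(bones[i].rotation, ragdolledRotations[i], ragdollBlendAmount);
    }
}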

Here's the Unity project as a zip file in case someone finds it useful: https://drive.google.com/file/d/0B8lAtheEAvgmYUlqX1FjNm84cVU/view?usp=sharing&resourcekey=0-kVdY0_KU-BRQP9xZibN0Fw

Open the test scene, click with mouse on a body part to add an impact to the character, press space to make it get back up.

Update 5 Dec 2013: Today I noticed that Unity 4.3 has added Animator.Update(), which could be used to implement the ragdoll-to-animation blend more elegantly, without the LateUpdate() trickery. I'll have to update this example when I have time.
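
In case someone tries this before I do, the idea would be roughly the following (an untested sketch, assuming an animator reference and the pose-reading code described above):

//force Mecanim to evaluate immediately (Unity 4.3+) so that the get up
//animation's starting pose can be sampled in the same frame...
animator.Update(0f);
//...then read the pose and align the character root against the ragdolled
//pose right away, instead of waiting several frames in LateUpdate()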

Monday, September 23, 2013

Homepage

Restored and updated my old research homepage. http://perttu.info is now redirecting there. Nothing special, mainly lists of publications and projects.

Friday, May 10, 2013

Trampolines as a business

Related to our Kinect trampoline and circus games, I just checked that googling "trampoline park" gives 628,000 results. Trampoline centers/parks seem to be a growing branch of the family entertainment center business, as discussed, e.g., in this article about the Sky Zone franchise, which had 15 centers in the U.S. and planned to more than double that number when the article was written in 2012.

Here are two pretty compelling videos, the first one from House of Air in the U.S. and the second one from Bounz, one of the few European ones.

Despite the thrill, it remains unclear whether trampoline parks are a craze that will fade like the soft-contained-play (SCP) and inflatable centers did, as explained in this article. According to the author, trampoline centers are similar to SCP and inflatable centers in that they too haven't proven the long-term viability of their business model, and they are at the same time more dangerous. However, I think that while trampolines may be more dangerous than bouncy castles, they also offer much more potential for long-term excitement, as the increased air time and softer landings make it possible to learn a wider variety of skills and tricks. For example, it's not very comfortable to land on one's back or belly on an inflatable bouncing surface, whereas it's quite easy on a trampoline, and one can also do many interesting variations, such as turntabling:

Monday, April 8, 2013

Action Games Workshop info (Aalto course code 25447)

Since we are using a generic course code and I can't edit the contents and learning outcomes in Noppa, I'm posting the material here.

Contents

Many games are simplifications and abstractions of reality. The designer's job is to abstract away irrelevant and/or cumbersome details, and choosing what to focus on is not trivial. Background research helps, but in addition to gathering mediated experiences in the form of text, images, and video, it's useful to experience things in person.

On this course, we will experience various sports and their digital representations to identify yet uncharted possibilities for action games. The course consists of physical activities mixed with lectures, gaming sessions, and interaction sketching exercises. In spring 2013, the course is organized for the first time, as a workshop including parkour, trampolining, archery, swordfighting, contemporary dance, and bouldering (climbing), with the help of professional instructors.

An additional goal of the course is to share the latest insights from Aalto’s games and motion related research. 

Learning outcomes  

Gain insights and ideas for gameplay design from the points of view of, e.g.,  embodied experiences, biomechanics, motor learning and performance, physics simulation, and virtual cinematography. Understand the variety of embodied experiences, ranging from passive media to console games to experimental mixed reality games. 

How to register

You can sign up in Oodi: https://oodi.aalto.fi/a/opintjakstied.jsp?Kieli=6&Tunniste=25447&html=1.

Click "register" at the bottom of the page. Note that the Oodi page is a generic one, since the course doesn't yet have its own course code and page. Note also that you can find the updated course description by clicking the "Workshop I" teaching event at the bottom of the page, but Oodi doesn't let you register on that page - I would have linked directly to it otherwise.

The number of participants is limited to 14, not including me and Miikka of course.

Prerequisites

Preliminary exercise: for the first gathering, prepare a few minutes' presentation about a physical activity, such as your favorite sport. Illustrate your presentation with image(s) or video. Answer these questions: What's interesting, fun, and motivating about it? How do you feel when practicing/competing? Has it affected the way you experience the world? How? Is there a digital version of it or a digital dimension to it? If so, how does the digital user experience differ from the real thing, or what does the digital dimension contribute?


Schedule

The course is organized during 22-26.4.2013. We'll start each day with a physical activity.

Monday 22nd 9-11am: Trampolining and parkour at Taitoliikuntakeskus. How to get there: http://www.taitoliikuntakeskus.fi/?q=node/15

Tuesday 23rd 9am-12pm: Archery at Archery Club Wilhelm Tell. Place: Katri Valan puiston väestönsuoja (the civil defence shelter in Katri Vala park): http://bit.ly/16Ivuva

Wednesday 24th 9:30-12:30: Medieval swordsmanship at The School of European Swordsmanship. Address: Luiskatie 8, Helsinki (e.g., bus 77)

Thursday 25th 9:30-11:30: Contemporary dance at Helsingin Tanssiopisto. Address: Kaikukatu 4, http://bit.ly/Zi5LoQ

Friday 26th 10am-12pm: Bouldering (climbing) at Boulderkeskus, Konala. Address: Ruosilantie 1, http://bit.ly/16IknEq

Remember to bring indoor sports clothes and shoes. Climbing shoes will be provided by Boulderkeskus.

In addition to the activities, we'll have lectures and exercises each afternoon from 13:00 to 16:00 at the Media Lab games classroom. I'm currently preparing the materials, and so far the tentative themes for the afternoons are controls & virtual cinematography, character animation & biomechanics, physics-based gameplay, story & emotion, and motion games & games for learning.

Monday, January 28, 2013

Visualizing Fitness Function Landscapes

I'm currently preparing a project about game animation tools and technology. The project is exploring and leveraging technologies like physics simulation, optimization and machine learning. If you are interested, the recent review by Geijtenbeek and Pronost is a good starting point.

Related to the optimization of the control parameters of physics simulation, I'm trying to form an intuition about the properties of the related fitness function landscapes. It's impossible to accurately visualize the landscapes of high-dimensional biped motor control and planning problems, but here's a simple 2d physics example: throwing a ball in 2d with the angle and speed as the optimized variables, and allowing damped rebounds from the ground so that the ball trajectory is not just a simple parabola. This isn't exactly rocket science, but I wrote the code as an exercise in interactive visualization in Matlab, which I had stopped using while working outside Aalto.


This was the first time I tried to implement the "up and down the ladder of abstraction" idea, that is, letting the user explore a summary view (the fitness landscape) to browse concrete examples (the ball trajectories). There was some non-intuitive searching for the right places to put the getCursorInfo(), pause() and subplot() commands to prevent the display from freezing - I hope this helps if anyone wants to do the same. I also tried to implement the UI using Matlab's linked data, but couldn't get the display to refresh.

Edit 29.1.2013: I was asked why there are peaks where there should be a continuous ridge. It's apparently because the large timestep causes a simulation error that varies as a function of how the floor contacts align with the timestep grid. It also depends on how the contact handling is implemented. The Matlab code below uses a smaller timestep and produces a more continuous ridge.

Here's the Matlab code:


minVel=1;
maxVel=5.0;
minAngle=-0.5*pi;
maxAngle=0.5*pi;
timeStep=0.02;
nSteps=100;
startPos=[0,1.5];
velSteps=64;
angleSteps=64;
%the loops below sample velSteps+1 x angleSteps+1 grid points (0:velSteps),
%so preallocate accordingly instead of letting Matlab grow the arrays
E=zeros(velSteps+1,angleSteps+1);
target=[2, 0.5];
curve=zeros(nSteps,2);
curves=zeros(nSteps,2,velSteps+1,angleSteps+1);

%step over all ball launch speed and angle combinations
for velStep=0:velSteps,
    speed=minVel+(maxVel-minVel)*velStep/velSteps;
    for angleStep=0:angleSteps,
        angle=minAngle+(maxAngle-minAngle)*angleStep/angleSteps;
        pos=startPos;
        vel=[speed*cos(angle), speed*sin(angle)];
        closestDist=10000;
        closest=[0; 0];
        closestTime=0;
        
        %simulate the ball trajectory for this launch velocity
        for n=1:nSteps,
            %save this trajectory point
            curve(n,:)=pos;
            
            %update closest to target
            dist=norm(pos-target,2);
            if (dist<closestDist)
                closestDist=dist;
                closest=pos;
                closestTime=n*timeStep;
            end
            
            %physics update
            pos=pos+vel*timeStep;
            vel(2)=vel(2)-9.81*timeStep;
            
            %bounce from floor: estimate how long ago the ball crossed
            %the floor, then flip and damp the vertical velocity and move
            %the ball back above the floor for the time since the crossing
            if (pos(2)<0 && vel(2)<0)
                timeSinceCrossing=pos(2)/vel(2);
                vel(2)=-0.7*vel(2);
                pos(2)=timeSinceCrossing*vel(2);
            end
            
        end    
        %store the whole trajectory
        curves(:,:,velStep+1,angleStep+1)=curve;

        %update fitness: soft threshold on the closest distance to target
        distanceThreshold=0.1;
        fitness=0.5+0.5*tanh(-(closestDist-distanceThreshold)/0.05);

        %soft threshold for hitting time
        timeThreshold=1;
        fitness=fitness*(0.5+0.5*tanh(-(closestTime-timeThreshold)/0.05));

        %store fitness function value for these params
        E(velStep+1,angleStep+1)=fitness;
    end
end
%plot the fitness surface
subplot(1,2,1);
surf(E);

%get the handle for querying data cursor info about the fitness
h=gcf;
dcm_obj = datacursormode(h);

%plot a trajectory of the ball
subplot(1,2,2);
curve=curves(:,:,1,1);
plot(curve(:,1),curve(:,2));

%in a loop, keep plotting the ball trajectories
pause on;
while 1
    info_struct = getCursorInfo(dcm_obj);
    if ~isempty(info_struct)
        curve=curves(:,:,floor(info_struct.Position(2)),floor(info_struct.Position(1)));
        subplot(1,2,2);
        plot(curve(:,1),curve(:,2));
        axis([0 4.5 0 3])
        hold on;
        plot(target(1),target(2),'c+:')
        hold off;
    end
    %have to pause to let the display refresh
    pause(0.01);
end

Thursday, January 24, 2013

Game marketing resources


To help our students promote their games, I did about an hour's worth of searching, both using Google and digging up old stuff from my brain. I'm dumping the results here for future reference, roughly in order of importance.

About creating press releases etc.

http://gdcvault.com/play/1014971/Indie-s-Got-PR-Talent

http://arstechnica.com/gaming/2011/12/how-to-market-your-indie-games-ben-kucheras-lecture-at-run-jump-dev//

http://gamasutra.com/view/news/167706/Ask_Gamasutra_How_to_annoy_a_games_journalist_with_a_press_release.php#.UQE7syfZZ8E

http://www.gamestyleguide.com/VideoGameStyleGuideeBook.pdf (Talk to the journalists using their own language)

http://dopresskit.com/ (a press kit making tool - note: I haven't tried it personally)

How to make good trailers

http://indiegames.com/2012/04/ask_indiegames_what_makes_an_e.html

http://blog.kertgartner.com/2012/03/making-entertaining-and-engaging-video-game-trailers/

http://www.indiegamegirl.com/how-to-make-a-video-game-trailer-kick-ass/

http://www.ign.com/articles/2012/05/01/five-sure-signs-of-an-awesome-game-trailer

http://www.reddit.com/r/Games/comments/16g1x7/that_great_cyberpunk_2077_teaser_got_me_thinking/

http://www.reddit.com/r/Games/comments/zd84h/what_have_been_the_best_game_trailers_you_have/

http://gametheoryonline.com/2011/05/06/video-game-marketing-creating-game-tr

Stats to inform your decisions: http://engage.tmgcustommedia.com/2011/04/101-online-video-stats-to-make-your-eyes-glaze-over/

Tuesday, January 22, 2013

A note on backstory and child's play

The backstory is perhaps the most common literary device used in games. Understanding its importance is easy if one has followed the mobile game space, where many hugely successful games have a story-based motivation that can be expressed in a single sentence, such as "The frog wants the candy" or "The birds are angry because the pigs stole their eggs".

What wasn't obvious to me until I became a dad was that the backstory works in exactly the same way in child's play. It's been amazing to see firsthand how big a difference it can make in motivating play. For example, when it's -10 degrees centigrade outside, a common temperature this winter, my son hates to go out because putting on all the layers of clothes is such a tedious process. However, he immediately jumps up from the sofa if I throw in a backstory.

Me: "Let's go outside sled riding"
Son: "I don't want to go outside. It's boring. I don't want to wear the overalls."
Me: "But we can pretend to be Angry Birds. You can be the laser bird and have a snowball as a laser."
Son: "Cool! Let's go right away! Come on, dad!"

Malone and Lepper's taxonomy of intrinsic motivation comprises four main motivating factors: challenge, curiosity, control, and fantasy. Having a backstory provides a fantasy setting, and also material for curiosity, because one can explore how far the fantasy can be extended using the play space, e.g., the snowballs. It can also inspire the child to come up with challenges. Many times, if I'm too tired to play cops and robbers, my son is happy to do all the running around by himself while I just sit on the couch and press imaginary buttons to control him on an imaginary iPad.

"Now I'm in level 13, and this is really hard. There's dynamite and I must not fall into the water (floor). Now press the button that makes me run to the door."

"You are the laser piggy. I must not hit the lasers."

Pretty contradictory, considering that digital games are supposed to promote a sedentary lifestyle. Of course my son would never let go of the iPad voluntarily, but allowing him a controlled amount of daily gameplay seems to only boost his physical play by providing new backstories, much like the books we read.

Saturday, January 19, 2013

Quick time events vs. timeline-based rhythm games


Using quick time events (QTEs) is a form of trying to have your cake and eat it too. The game designer wants to control the pacing and drama, but also let the player control the game. The two are difficult to combine, and often the player's control over events is illusory, with the branching storyline soon converging back into one or a few options to save production costs. But do we always have to create the illusion of control? As it happens, there's a genre where players are perfectly happy to play along with completely choreographed action: dancing and rhythm games. Can we combine the rhythm and action adventure genres in new and fruitful ways?


Yesterday, I finally got to use my PS3 at work for playing. I spent some time on Dark Souls, Uncharted 3, and Heavy Rain, which got me thinking about quick time events again. At some point, I was thinking that there's no real analogous mechanic for Kinect games, but actually Kinect Star Wars does have one in the form of "anim GIF" prompts that appear contextually, indicating that one has to kick, jump, etc. to proceed.

When playing Heavy Rain, it struck me that it's actually pretty similar to Guitar Hero. Both have action that you play along with in an abstract, simplified manner. In the ideal case, there's an illusion of actually performing a fight choreography or playing the notes of a song, to the extent that a Guitar Hero guitarist can play pretty plausibly on stage at the Video Games Live concerts, as shown in this video (or actually not shown very well, since the camera is behind the player; I've seen better footage, but this is the best I could find right now):

In GH, there are five buttons that represent all the notes one can play. Actual melodies use more, and the meaning of the buttons changes based on the previous ones. Even with just five buttons, one can approximate a melody so that it mostly goes up and down as it should, although the actual intervals are incorrect. If I remember correctly, this actually corresponds to the first stages of the developmental continuum of singing (I couldn't check, because I don't have access to the full paper from where I'm writing). A video for refreshing one's memory:

In Heavy Rain, you use the right analog stick and shake the controller to approximate the movements of the characters. A video:

The differences between the two

  • Heavy Rain, like most games with QTEs, features some level of branching action based on whether the player reacts to the events in time, whereas in Guitar Hero, the song goes on, and the feedback from misses is implemented through scoring and sound effects. Also, if you make enough mistakes, the song ends prematurely and you get booed off stage (if I remember correctly - later music games usually let you finish the songs).
  • Guitar Hero and many other music games feature some form of a visual timeline, where one can see multiple approaching events and plan one's actions. Action adventure QTEs usually pop up one by one, without such a 'prediction horizon'.
  • Action adventure QTEs present the player with a reaction challenge, whereas rhythm games require coordination and precision.

I'm not that fond of QTEs in fight scenes, because the reaction time is really difficult to tune - there seems to be only a tiny sweet zone between frustrating and boring, and it's different for every player. I'm more of a precision and coordination kind of guy, which makes me wonder whether one could overlay a rhythm-game-style QTE timeline on an action adventure fighting sequence. This would allow a more rapid pace of fighting, similar to many Hong Kong kung-fu movies:


An alternative to a timeline is to use multiple symbols that appear a predetermined time before the player should act, which most QTEs do anyway, but in rhythm games the visual display of the time remaining until the correct moment is more prominent, as in Dance Evolution:

The benefit of the approach I'm proposing is that the timing and coordination challenges could be more interesting than the reaction challenge of traditional QTEs, and only a single choreography without branches would be needed. The obvious problem is designing feedback that makes sense when the player's performance is less than perfect - do we want the "perfect", "good", etc. notifications of dance games, or something else? Is the fight over if the player misses too many events? Should there be branching after all, and do we actually need two branches per event, one for being too late and the other for acting too early?

As a Google search will tell you, there are already many games with both rhythm and fighting elements. However, fighting to the beat of music, as in this Kick Beat gameplay video or in Rhythm Fighter, is a bit too much for me.

I really like the choreography, music, and soundscape in Uncharted 3 and Heavy Rain. I just wish I could see more than one event in advance, to get into the flow of the choreography and still have a reasonable challenge, similar to reading sheet music when playing an instrument. This is what, e.g., Dance Central and Guitar Hero do, although due to the complexity of the player's realistic movements in DC, the choreography is not fully visualized on a timeline. Instead, one sees a timeline of flashcards, which have to be memorized in the training mode.

For Uncharted 3 and Heavy Rain, I think the action abstraction level is pretty much correct, but I guess I just feel strange trying to react quickly to button symbols shown on screen instead of the opponent's actual actions. The translation of symbols to actions feels more natural when given more time and when seeing multiple symbols at a time, maybe because the relations of the symbols to each other (e.g., the intervals between musical notes) help in decoding and executing the correct motor sequence.

If anyone reads this and knows about experiments of this kind, please let me know.

Additional thoughts: The illusion of playing a guitar or fighting with QTEs is not there at first, but at least for me it emerges once I've become accustomed to the interface, no longer have to think about the real-world actions, can focus on the results and feedback, and get the timing right. Principles at work: we project ourselves onto the avatar, our motor system assimilates the game interface through repetition, correlation is perceived as causation, and our sensory integration system perceives two events as one if they happen close to each other in time. The same assimilation of the interface etc. of course happens with all action games, but the interesting thing here is that it happens despite the player not having control or initiating the actions.

About dance game timeline displays and other choreography visualizations: visualizing full-body movement over time in a compact and intuitive manner remains a research challenge, although various notations have been developed.

Friday, January 11, 2013

Trampoline games update

Uploaded a new video to YouTube and linked it to the work-in-progress manuscript submitted for CHI 2013. My previous post explains some of the goals and rationale behind the design.