Introducing Kintouch

I began adding Kinect skeletal tracking and speech helpers to the Neat Game Engine when Kinect SDK Beta 2 came out. These helpers are available through the KinectEngine class in the Neat.Components namespace. After the release of SDK v1.0, I started having trouble using my Kinect, and I still don't know why. Often, when I try to run my code, I get Low Bandwidth errors from the SDK, even with all my other USB devices disconnected. Maybe Kinect still doesn't work well on Windows 8 (I'm using Build 8250), or maybe it's because I'm using Xbox 360 Kinect hardware. Anyway, these annoying development problems made me lose interest in playing with Kinect until Kinect SDK v1.5 was released and I learned that it includes facial tracking tools. Awesome. So I started playing with Kinect again and updated the Kinect code in Neat Game Engine to be compatible with SDK v1.5.

During my experiments with Kinect, I worked on a component that enables multi-touch-like interactions in games by tracking the players' hands with skeletal tracking. The idea is pretty simple; the most important part of implementing such a feature is the tweaking. Since joints move spherically, moving a hand forward to push an item often changes the position of the pointer, making it difficult for players to focus on an item and push it. Adjusting the sensitivity and freezing the position of the pointers during push gestures is the key part of this project, and I am still tuning the parameters to find the best experience for these "air touch" interactions.
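To illustrate the freezing idea, here is a minimal sketch. This is not the actual Kintouch code: the class name, the threshold value, and the velocity test are all made up for the example. While the hand's forward velocity (toward the sensor, i.e. decreasing Z) exceeds a threshold, the on-screen pointer keeps its last position so the push gesture doesn't drag it off the item the player is aiming at:

```csharp
using System;

// Hypothetical sketch of "freeze the pointer during a push".
public class AirTouchPointer
{
    // Illustrative tuning parameter: forward speed (m/s) that counts as a push.
    public float PushVelocityThreshold = 0.6f;

    public float ScreenX { get; private set; }
    public float ScreenY { get; private set; }

    float lastHandZ;
    bool hasLast;

    // handZ is the hand joint's distance from the sensor in meters,
    // (screenX, screenY) is where the pointer would land this frame,
    // dt is the elapsed time in seconds.
    public void Update(float handZ, float screenX, float screenY, float dt)
    {
        bool pushing = false;
        if (hasLast && dt > 0f)
        {
            // Positive when the hand moves toward the sensor.
            float forwardVelocity = (lastHandZ - handZ) / dt;
            pushing = forwardVelocity > PushVelocityThreshold;
        }
        if (!pushing)
        {
            ScreenX = screenX; // track the hand normally
            ScreenY = screenY;
        }
        // While pushing, keep the frozen position.
        lastHandZ = handZ;
        hasLast = true;
    }
}
```

In a real component the threshold would be one of the parameters being tuned, and some smoothing would be applied on top.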

Kintouch is available right now in the latest changeset of Neat Game Engine's source code on Codeplex. To enable Kinect features in Neat Game Engine, compile the project with the KINECT directive. For more information about Kintouch, visit its page on my blog.

Speech and Skeletal Tracking in River Raid X

I’ve been experimenting with the Speech SDK lately, trying to find a way to control games with voice. In this video, you can see me controlling River Raid X with Kinect skeletal tracking and voice commands.

Helper classes for both the Speech SDK and NUI (Kinect Natural User Interface) are implemented in the latest changeset of Neat Game Engine's source code. Implementing voice commands for simple tasks such as starting the game, or pausing and resuming it, is not difficult, because the Speech SDK handles these easily with Grammars and related classes. What's really interesting about controlling games with sound is finding a natural place for it inside the game. Things like locating the position of a sound source in 3D, or finding uses for "meaningless" voices in the game, are what make voice integration really fun.
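As a rough sketch of the Grammar approach, here is a minimal command recognizer. This uses System.Speech.Recognition (the Kinect samples use Microsoft.Speech.Recognition, which has the same shape); the command words and the confidence threshold are made up for the example:

```csharp
using System;
using System.Speech.Recognition;

// Illustrative sketch: wire a handful of voice commands with a Grammar.
class VoiceCommands
{
    public static void Main()
    {
        // Build a grammar from a fixed set of command words.
        var commands = new Choices("start", "pause", "resume", "quit");
        var grammar = new Grammar(new GrammarBuilder(commands));

        using (var engine = new SpeechRecognitionEngine())
        {
            engine.LoadGrammar(grammar);
            engine.SpeechRecognized += (s, e) =>
            {
                // Reject uncertain matches; 0.7 is an arbitrary example value.
                if (e.Result.Confidence > 0.7f)
                    Console.WriteLine("Command: " + e.Result.Text);
            };
            engine.SetInputToDefaultAudioDevice();
            engine.RecognizeAsync(RecognizeMode.Multiple);
            Console.ReadLine(); // keep listening until Enter is pressed
        }
    }
}
```

In a game, the recognized handler would map each command word to a game action instead of printing it.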

I’ll keep experimenting with voice and speech in games and keep you updated here!

Testing Speech SDK

I am planning to use the Microsoft Speech SDK and Kinect’s microphone array in one of my future projects, codenamed “Kintouch.”

I wrote a few lines of code to test the speech recognition engine, and I am not yet satisfied with the results:

In this video, I am standing around 2 meters away from the Kinect, in an almost quiet room.

Kinect SDK and XNA

It seems many people are looking for a way to draw skeletons and store the RGB camera data into Texture2Ds. The KinectEngine class included in the Neat game engine does all of this, but here’s how to do it directly in XNA:

1. Initialization

First, we have to initialize the NUI Runtime. Assuming we are going to use the first available Kinect device connected to the system, we can write:

public Runtime Nui = Runtime.Kinects[0];
Nui.Initialize(RuntimeOptions.UseSkeletalTracking | RuntimeOptions.UseColor);

The Runtime class lives in the Microsoft.Research.Kinect.Nui namespace. The second line tells the SDK that we are going to use both skeletal tracking and the RGB (color) camera.

Now that the SDK is initialized, we have to tell it what to do when a video frame or a skeleton frame is ready:

Nui.VideoFrameReady += 
   new EventHandler<ImageFrameReadyEventArgs>(nui_VideoFrameReady);
Nui.SkeletonFrameReady += 
   new EventHandler<SkeletonFrameReadyEventArgs>(nui_SkeletonFrameReady);

2. RGB Stream

One of the most common things XNA developers want to do with the Kinect SDK is to show the video captured by the RGB camera in their game. To do this, you first have to store the stream in a Texture2D object. Therefore, before we begin, we have to create our target texture:

public Texture2D KinectRGB =  new Texture2D(GraphicsDevice, xMax, yMax);

Now, we can perform the conversion when video frames become ready:

void nui_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    PlanarImage image = e.ImageFrame.Image;

    int offset = 0;
    Color[] bitmap = new Color[xMax * yMax];
    for (int y = 0; y < yMax; y++)
        for (int x = 0; x < xMax; x++)
        {
            // Pixels arrive in BGR32 order: blue first, red last.
            bitmap[y * xMax + x] = new Color(image.Bits[offset + 2],
                image.Bits[offset + 1], image.Bits[offset], 255);
            offset += 4;
        }
    KinectRGB.SetData(bitmap);
}

The raw image data is stored in e.ImageFrame.Image. The usual way to fill an XNA texture is to create a Color array, fill it with pixel data, and feed it to the texture using its SetData method. The color data is stored in BGR32 format, meaning the blue channel value comes first and the red value last. For more information about reading the camera data, watch this video.

To begin reading from the RGB camera, we have to open the video stream:

Nui.VideoStream.Open(ImageStreamType.Video, 2,
    ImageResolution.Resolution640x480, ImageType.Color);

In this example, the value of xMax is 640 and yMax is 480.

3. Skeletal Tracking

Getting skeleton data from the Kinect SDK is easy. This example stores the data in an array for later use in the game:

void nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    var trackedSkeletons = from s in e.SkeletonFrame.Skeletons
                           where s.TrackingState == SkeletonTrackingState.Tracked
                           select s;
    trackedSkeletonsCount = trackedSkeletons.Count();
    for (int i = 0; i < trackedSkeletonsCount; i++)
        Skeletons[i] = trackedSkeletons.ElementAt(i);
}

To draw a skeleton, you just have to get each joint’s position and draw a line between adjacent joints.


This helper function draws a skeleton using Neat engine’s LineBrush class.

public void DrawSkeleton(SpriteBatch spriteBatch, LineBrush lb, Vector2 position,
    Vector2 size, Color color, int skeletonId = 0)
{
    if (Skeletons.Length <= skeletonId || Skeletons[skeletonId] == null)
    {
        // Skeleton not found. Draw an X
        lb.Draw(spriteBatch, position, position + size, color);
        lb.Draw(spriteBatch, new LineSegment(position.X + size.X, position.Y,
            position.X, position.Y + size.Y), color);
        return;
    }

    // Right Hand
    lb.Draw(spriteBatch, ToVector2(JointID.HandRight, size, skeletonId),
        ToVector2(JointID.WristRight, size, skeletonId), color, position);
    lb.Draw(spriteBatch, ToVector2(JointID.WristRight, size, skeletonId),
        ToVector2(JointID.ElbowRight, size, skeletonId), color, position);
    lb.Draw(spriteBatch, ToVector2(JointID.ElbowRight, size, skeletonId),
        ToVector2(JointID.ShoulderRight, size, skeletonId), color, position);

    // Head & Shoulders
    lb.Draw(spriteBatch, ToVector2(JointID.ShoulderRight, size, skeletonId),
        ToVector2(JointID.ShoulderCenter, size, skeletonId), color, position);
    lb.Draw(spriteBatch, ToVector2(JointID.Head, size, skeletonId),
        ToVector2(JointID.ShoulderCenter, size, skeletonId), color, position);
    lb.Draw(spriteBatch, ToVector2(JointID.ShoulderCenter, size, skeletonId),
        ToVector2(JointID.ShoulderLeft, size, skeletonId), color, position);

    // Left Hand
    lb.Draw(spriteBatch, ToVector2(JointID.HandLeft, size, skeletonId),
        ToVector2(JointID.WristLeft, size, skeletonId), color, position);
    lb.Draw(spriteBatch, ToVector2(JointID.WristLeft, size, skeletonId),
        ToVector2(JointID.ElbowLeft, size, skeletonId), color, position);
    lb.Draw(spriteBatch, ToVector2(JointID.ElbowLeft, size, skeletonId),
        ToVector2(JointID.ShoulderLeft, size, skeletonId), color, position);

    // Hips & Spine
    lb.Draw(spriteBatch, ToVector2(JointID.HipLeft, size, skeletonId),
        ToVector2(JointID.HipCenter, size, skeletonId), color, position);
    lb.Draw(spriteBatch, ToVector2(JointID.HipRight, size, skeletonId),
        ToVector2(JointID.HipCenter, size, skeletonId), color, position);
    lb.Draw(spriteBatch, ToVector2(JointID.Spine, size, skeletonId),
        ToVector2(JointID.HipCenter, size, skeletonId), color, position);
    lb.Draw(spriteBatch, ToVector2(JointID.Spine, size, skeletonId),
        ToVector2(JointID.ShoulderCenter, size, skeletonId), color, position);

    // Left foot
    lb.Draw(spriteBatch, ToVector2(JointID.HipLeft, size, skeletonId),
        ToVector2(JointID.KneeLeft, size, skeletonId), color, position);
    lb.Draw(spriteBatch, ToVector2(JointID.KneeLeft, size, skeletonId),
        ToVector2(JointID.AnkleLeft, size, skeletonId), color, position);
    lb.Draw(spriteBatch, ToVector2(JointID.AnkleLeft, size, skeletonId),
        ToVector2(JointID.FootLeft, size, skeletonId), color, position);

    // Right foot
    lb.Draw(spriteBatch, ToVector2(JointID.HipRight, size, skeletonId),
        ToVector2(JointID.KneeRight, size, skeletonId), color, position);
    lb.Draw(spriteBatch, ToVector2(JointID.KneeRight, size, skeletonId),
        ToVector2(JointID.AnkleRight, size, skeletonId), color, position);
    lb.Draw(spriteBatch, ToVector2(JointID.AnkleRight, size, skeletonId),
        ToVector2(JointID.FootRight, size, skeletonId), color, position);
}

Be sure to read the source code of Neat’s KinectEngine class for more information.


The first Kinect-based sample for Neat game engine is available at

To build this sample, first install the official Kinect SDK Beta 2, then obtain the latest version of the Neat source code and build it with the KINECT directive.

Neat currently supports Kinect through a component class named KinectEngine. To start using Kinect in your Neat-based XNA games, follow these simple steps:


Initialize KinectEngine

Add an instance of KinectEngine to your game’s Components. Write these lines in your LoadContent method to initialize KinectEngine:

Kinect = new KinectEngine(this);
CalibrateScreen.Kinect = Kinect;

Just remember to call Kinect.Uninitialize() when you are shutting down the game.


The Calibration Screen

You can monitor the stream from the RGB camera and see the elevation angle in a screen named CalibrateScreen. To use this screen, simply add it to your game’s screens:

Screens["kinect"] = new CalibrateScreen(this);

The k_tilt [int] console command changes the elevation angle of the Kinect sensor.

Convert positions to XNA Vectors

Use the KinectEngine.ToVector2 and KinectEngine.ToVector3 methods to scale and convert each joint’s position to XNA vectors. Now you are ready to interact with your game using Kinect.
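The idea behind such a conversion is a simple scale from skeleton space to screen space. The following is a hypothetical sketch, not KinectEngine's actual code: it assumes the joint's X and Y have already been normalized to the [-1, 1] range, with Y pointing up in skeleton space but down on screen:

```csharp
using System;

// Hypothetical joint-to-screen scaling, for illustration only.
public static class JointScaling
{
    public static void ToScreen(float jointX, float jointY,
        float screenWidth, float screenHeight,
        out float screenX, out float screenY)
    {
        // Map [-1, 1] to [0, width] / [0, height].
        screenX = (jointX + 1f) * 0.5f * screenWidth;
        screenY = (1f - jointY) * 0.5f * screenHeight; // flip the Y axis
    }
}
```

A centered joint (0, 0) lands in the middle of the screen, which is usually what you want for pointer-style interactions.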

Draw Skeletons and the RGB Stream

If you want to draw a skeleton, all you have to do is call KinectEngine.DrawSkeleton(…) and pass a SpriteBatch and a LineBrush to it. The image from the RGB camera can be accessed through the “kinectrgb” texture:

	new Rectangle(0, 0, 640, 480),