
Animu Torrent Checker

I have a few resolutions for this year, and one of them is to create and/or share my hobby projects with the public. They will be open source and released monthly, which could mean releasing an old project or creating a new one within the month. So... this is it for January.

AnimuTorrentChecker: a program that automatically downloads anime torrents, because I'm that lazy, that's why.

Background

Anime series are usually aired per season, for example in Winter, Spring, or Summer. Every season, a number of anime are released weekly, each on its own schedule. After an episode airs, fansubbers create English subtitles for it, re-encode the files, and release the translated episode to the public. These files are distributed using torrents, and the torrent files are usually hosted on tracker sites such as tokyotosho or nyaa.

I already said I'm lazy, and it takes a lot of effort to check, on some days (especially weekends), which anime have already been subbed and released by a certain fansub team. I'd also say that everyone has their own favourite fansub team for certain series, because some teams are good, some are fast, and some are amusing. I think it was around the AFAID 2013 dates when I needed to travel to another city for several days. I could just imagine coming home exhausted and needing to download all the shows I missed, when I just wanted to watch them quickly and sleep.

Implementation

All of the above led me to a solution: of course, to create a simple program made by me, to be used just by me. Only me, no other users, no target market, whoever they are. I then programmed the functionality based on the use case I'd actually have for it: download on my home PC, and check progress and change settings from wherever (the cloud). Because the fastest language I can code in is C#, the program itself is a console application implemented in .NET.

When running, AnimuTorrentChecker checks the anime list I want to follow (which title, and from which subber) every few minutes, downloads the torrent file when it shows up, and updates its data so it starts checking for the next episode of the series. The downloaded torrent itself is automatically picked up by the usual torrent client (e.g., uTorrent).
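For a taste of how little there is to it, here's a rough sketch of that loop. This is illustrative only, not the actual project code; Entry, FindTorrent, and the watch-folder path are made-up names for the example.

using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Threading;

class Entry
{
    public string Title;     // series name
    public string Subber;    // preferred fansub group
    public int NextEpisode;  // episode we are waiting for
}

class Checker
{
    // uTorrent is set to watch this folder and load any torrent dropped into it.
    const string WatchFolder = @"C:\Users\Me\Dropbox\Torrents";

    static void Main()
    {
        var list = new List<Entry>
        {
            new Entry { Title = "Some Anime", Subber = "SomeSubs", NextEpisode = 5 }
        };

        while (true)
        {
            foreach (var entry in list)
            {
                string release = string.Format("[{0}] {1} - {2:00}",
                    entry.Subber, entry.Title, entry.NextEpisode);

                // Hypothetical tracker lookup; a real version would query and
                // parse a search page or RSS feed, returning null if not out yet.
                string torrentUrl = FindTorrent(release);
                if (torrentUrl == null) continue;

                using (var client = new WebClient())
                    client.DownloadFile(torrentUrl, Path.Combine(WatchFolder, release + ".torrent"));

                entry.NextEpisode++; // start waiting for the next one
            }

            Thread.Sleep(TimeSpan.FromMinutes(10)); // "every few minutes"
        }
    }

    static string FindTorrent(string releaseName)
    {
        return null; // placeholder for the actual tracker query
    }
}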

[Screenshot: Animu Torrent Checker in action]

I usually run the program from Thursday to Monday, checking the progress from Dropbox. This works because I set the checker to download the torrent files into a Dropbox folder and configured uTorrent to load and download whatever torrents appear in that folder. I can simply see whether the torrents are there and already loaded by uTorrent, because it changes their extension to .loaded.

Source files and usage instructions are on GitHub. No binary as of now, because I want to refactor, change the technology, or add new features sometime in the future. Overall, it's working great for me, and I can now put the time I used to spend waiting for and checking released anime into something more productive. Like browsing reddit aimlessly or something.

Starting Experience in Kinect

My latest project requires me to utilize Microsoft Kinect as a way to interact with a game. So far, it’s been a good experience working with it.

Before I go further, I will answer the obligatory questions from the usual aspiring Kinect developer:

  • Yes, you can use the Kinect for Xbox 360 to develop your application or whatever it is. The hardware works fine using either the Microsoft SDK or the OpenNI + NiTE stack.
  • If you're using the Microsoft Kinect SDK (1.7 at the time of writing), the release build requires the Kinect for Windows hardware. If you don't want to distribute commercially, other people can still use the Xbox Kinect with an executable that you build in Debug mode.


[Image: I can't post a picture of my Kinect right now, so here's a comparison between the hardware and a dog]

When you're developing an application that uses a specific kind of hardware (e.g., an accelerometer, the Oculus Rift, or in this case, the Kinect), it's best to design and integrate the interaction from the very beginning of development. Microsoft provides a great guideline for the interface you should design, and it's a great reference even if you're not using the official SDK and WPF. The Real World™ is not as ideal, though; sometimes we don't have the luxury of building the experience in from the start.

Assume we have a finished game and we want to add Kinect integration to it. The game is designed for touch interaction, so there are no complicated, tight controls like in the usual PC/console games.

Integrate the SDK into the Game

Include the library, listen to the device input, and do things in the game according to it. You could wrap it all nicely and create a singleton that you can call in your update loop, or wherever you listen for input; a sketch of such a wrapper is shown below.
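For example, a minimal wrapper around the official SDK's skeleton stream might look like this. It assumes the managed Kinect SDK 1.x API (KinectSensor, SkeletonStream); the KinectInput class itself and its property names are made up for the sketch.

using System;
using System.Linq;
using Microsoft.Kinect;

// Hypothetical singleton that hides the SDK behind one "current hand position" value.
public sealed class KinectInput
{
    private static readonly KinectInput instance = new KinectInput();
    public static KinectInput Instance { get { return instance; } }

    private KinectSensor sensor;
    private Skeleton[] skeletons = new Skeleton[0];

    // Right-hand position in skeleton space, updated every frame.
    public SkeletonPoint RightHand { get; private set; }

    private KinectInput()
    {
        sensor = KinectSensor.KinectSensors.FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += OnSkeletonFrameReady;
        sensor.Start();
    }

    private void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;

            if (skeletons.Length != frame.SkeletonArrayLength)
                skeletons = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(skeletons);

            // Take the first tracked skeleton and remember its right hand.
            Skeleton tracked = skeletons.FirstOrDefault(
                s => s.TrackingState == SkeletonTrackingState.Tracked);
            if (tracked != null)
                RightHand = tracked.Joints[JointType.HandRight].Position;
        }
    }
}

The game can then poll KinectInput.Instance.RightHand from its update loop without knowing anything about the SDK underneath.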

The problem with this approach is, of course, the integration. The Microsoft SDK is only available for .NET languages (C#, Visual Basic), and OpenNI + NiTE uses C++. If the game is written in a different language, you'll need to write a wrapper or some other glue. And of course, being Microsoft, their SDK is only available on the Windows platform. OpenNI + NiTE, on the other hand, is open source AND cross-platform.

Note: I think the Microsoft Kinect SDK also has a native version, as opposed to the managed (.NET) one, but I haven't gotten around to it yet. The point about Microsoft-verse vs. cross-platform still stands, though.

Another point is speed of development. OpenNI + NiTE is more low-level than the Microsoft SDK, so in some cases you will need to implement functionality yourself. The Microsoft SDK also has more features and is easier to develop with, seeing as they are the people who created the hardware.

Let’s use hand gesture detection as an example:

  • AFAIK, if you're using OpenNI + NiTE, you will have to use OpenCV to create and integrate the gesture detection yourself. There are a lot of resources, though, and a lot of people have already tackled the problem.
  • The Microsoft SDK, on the other hand, doesn't have crazy hand gesture detection either, but it has built-in hand position tracking and press + grip interaction.

In my case, the game is on the Windows platform and only needs simple detection (hand position, press, and drag). The problem is that because the game is designed for touch interaction, the basic interaction in the Windows version automatically uses the Windows mouse. For that, we would need to implement another input manager that handles an in-game cursor and receives its data from the Kinect, mapping hand positions to screen coordinates along the lines of the sketch below.
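The mapping itself is mostly arithmetic. As an illustration (the interaction-zone size here is a made-up tuning constant, and the whole class is hypothetical), the hand position relative to the shoulder can be normalized into a 0..1 box and scaled to screen pixels:

using System;
using Microsoft.Kinect;

static class HandToCursor
{
    // Size (in meters) of the "interaction zone" around the shoulder.
    // Purely tuning constants for this example.
    const float ZoneWidth = 0.40f;
    const float ZoneHeight = 0.30f;

    // Map a hand position, relative to the shoulder, to pixel coordinates.
    public static void Map(SkeletonPoint hand, SkeletonPoint shoulder,
                           int screenWidth, int screenHeight,
                           out int x, out int y)
    {
        // Normalize into 0..1, clamped; skeleton Y points up, screen Y points down.
        float u = Clamp01(((hand.X - shoulder.X) + ZoneWidth / 2) / ZoneWidth);
        float v = Clamp01(1.0f - ((hand.Y - shoulder.Y) + ZoneHeight / 2) / ZoneHeight);

        x = (int)(u * (screenWidth - 1));
        y = (int)(v * (screenHeight - 1));
    }

    static float Clamp01(float value)
    {
        return Math.Max(0.0f, Math.Min(1.0f, value));
    }
}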

Simulate the Mouse Interaction

Another way is to simulate the Windows mouse using the Kinect, and then use that to interact with the game. This is more of a hacky way, because in this case we override the mouse input outside the game world in order to interact with the inside of it. Imagine creating an application that runs in the background receiving Kinect input, and then starting the game application. Because the game already receives mouse input, we don't (or shouldn't) need to change anything in the game code at all.

The experience will suck a little, but we can hack around that too. One thing I thought of is to polish the game's mouse interaction, such as drawing a big cursor and changing its image when pressing and gripping.

There are many applications and popular Kinect hacks already available. Man, when you get a device that can recognize gestures, what do you want to do first? Control your computer with it, of course! I found three alternatives that I think are quite usable:

  • Winect: an implementation using OpenNI + NiTE + OpenCV. It uses one hand to control the cursor and two hands for more complex controls (e.g., zoom). The source code is not available, and it always crashes on my computer. Expecting users to install prerequisite libraries is never good.
  • Kinect Mouse Cursor: one of the first available Kinect mouse cursor hacks, using an early version of the Microsoft SDK. Functionality is minimal, but the source code is there.
  • Kinecursor: a pretty good implementation. The grip detection is sometimes wrong, so the experience is kind of wacky. The 1.1 source code is available; the latest is not. TangoChen has a pretty nice blog if you're interested in Kinect overall.

One thing that interests me is how an application can take control of the native OS cursor. The magic apparently comes from importing a library that is available in System32 and calling its functions. For example, Kinecursor's implementation is:

using System;
using System.Runtime.InteropServices;

// Moves the OS cursor to the given screen coordinates.
[DllImport("user32.dll")]
static extern bool SetCursorPos(int X, int Y);

// Synthesizes mouse motion and button events.
[DllImport("user32.dll")]
static extern void mouse_event(MouseEventFlag flags, int dx, int dy, uint data, UIntPtr extraInfo);

// Flag values from WinUser.h (the MOUSEEVENTF_* constants).
[Flags]
enum MouseEventFlag : uint
{
    Move = 0x0001,
    LeftDown = 0x0002,
    LeftUp = 0x0004,
    RightDown = 0x0008,
    RightUp = 0x0010,
    MiddleDown = 0x0020,
    MiddleUp = 0x0040,
    XDown = 0x0080,
    XUp = 0x0100,
    Wheel = 0x0800,
    VirtualDesk = 0x4000,
    Absolute = 0x8000
}


Using user32.dll, we can call native methods that are already available. After we receive and process the Kinect input, we can simply call the SetCursorPos function to set the mouse cursor position and the mouse_event function to fire the appropriate mouse event.
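Putting the two together, a hand-tracking callback could drive the cursor and synthesize a click roughly like this (a sketch built on the declarations above; MoveAndClick is just an illustrative name):

// Move the OS cursor to (x, y) and simulate a left click there,
// e.g. when a "press" gesture is detected.
static void MoveAndClick(int x, int y)
{
    SetCursorPos(x, y);
    mouse_event(MouseEventFlag.LeftDown, 0, 0, 0, UIntPtr.Zero);
    mouse_event(MouseEventFlag.LeftUp, 0, 0, 0, UIntPtr.Zero);
}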

As an alternative for the game interaction, none of the solutions above is usable out of the box. If we want to go this way, the right approach is to create our own implementation based on the two that use the Microsoft SDK for detecting the hand position, and to update it to the 1.7 version, which already has simple hand gesture detection.

Implementing that is another problem, of course. KinectInteraction, the feature that lets us use the Kinect gesture detection in an easy way (push, grip, cursor override with feedback), is only available inside a KinectRegion. A KinectRegion is a region of a window that we define in the application, which means we can't get the cursor override and all the KinectToolkitControls working outside the application window.


[Image: Example of KinectInteraction working in a WPF window]

The good news is that we can access the Kinect Interactions stream by going a little bit lower in the SDK. This means we can do the hand gesture detection (press, grip, wave) ourselves to enhance it and make the detection better, and use that in our mouse cursor application. I found two blog posts that are awesome introductions to the Kinect Interactions concept and how it works inside.
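To give a flavor of what "going lower" looks like: as far as I can tell, the 1.7 Toolkit exposes an InteractionStream that you feed depth and skeleton frames, and it hands back per-hand state (pressed, gripped). The sketch below is written from memory of that API and may not match it exactly; treat the names as assumptions rather than gospel.

using System;
using Microsoft.Kinect;
using Microsoft.Kinect.Toolkit.Interaction;

// Minimal client; the stream asks it which targets exist at a location.
class DummyInteractionClient : IInteractionClient
{
    public InteractionInfo GetInteractionInfoAtLocation(
        int skeletonTrackingId, InteractionHandType handType, double x, double y)
    {
        return new InteractionInfo { IsPressTarget = true, IsGripTarget = true };
    }
}

class InteractionExample
{
    private InteractionStream stream;

    public void Start(KinectSensor sensor)
    {
        stream = new InteractionStream(sensor, new DummyInteractionClient());
        stream.InteractionFrameReady += OnInteractionFrameReady;
        // Depth and skeleton frames must be forwarded to the stream, e.g.:
        // stream.ProcessDepth(depthPixels, timestamp);
        // stream.ProcessSkeleton(skeletons, sensor.AccelerometerGetCurrentReading(), timestamp);
    }

    private void OnInteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
    {
        using (InteractionFrame frame = e.OpenInteractionFrame())
        {
            if (frame == null) return;

            var users = new UserInfo[InteractionFrame.UserInfoArrayLength];
            frame.CopyInteractionDataTo(users);

            foreach (UserInfo user in users)
            {
                foreach (InteractionHandPointer hand in user.HandPointers)
                {
                    if (hand.HandEventType == InteractionHandEventType.Grip)
                        Console.WriteLine("{0} hand gripped", hand.HandType);
                    if (hand.IsPressed)
                        Console.WriteLine("{0} hand pressed", hand.HandType);
                }
            }
        }
    }
}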

There will still be tweaks to make (threshold, sensitivity, and others), but once it's done, it will be a better alternative for the interaction.

Summary

You have two choices when you want to integrate the Kinect into your game:

  • You can integrate it into the game: create a reusable implementation that receives the Kinect stream, use that to build a manager that feeds the relevant input into your game, and act on it accordingly.
  • You can create an application that receives the Kinect stream and uses it to simulate mouse control, and then hack your game for better interaction (e.g., change the cursor image and use bigger buttons, which assumes we are working with the Kinect from 2-3 meters away, not with a mouse in front of our face).

Choose your battles carefully, because sometimes the best way is not the right way.
