Hitnoodle Blog

Axe to the Shoulder – An Epic Mega Jam Chronology

We’re bored with Unity3D. A bit bored with 2D development. We’ve seen the awesome prototypes and games created using Unreal Engine 4. There was a gamejam around the corner.

Sum it up and we’ve got ourselves an Oh My Viking!


Before the storm

We’d known for about two weeks that there would be a big game jam for Unreal Engine. With such limited time to learn, we decided that getting a grip on the overall concepts would be nice as preparation.

Theme…. STARTS

Friday 9 October, 3 AM. It’s “Standing on the Shoulder of Giants”. Literal meaning? Newton meaning? Attack on Titan meaning? We decided to sleep, hopeful that a muse would give us ideas.

Morning comes and nobody has had a eureka moment. CrazyMaul has only just re-learned 3D modelling, and I don’t even know if Trois knows at that point how rigging works in Blender.

Well, first things first: we decided it would be simple, one-off gameplay, which fits our nonexistent Unreal capabilities. We also decided to go with our usual local multiplayer (see Dance of Thrones).

We also wanted some “cooperative” element in the 4-person local game, so we made it two teams of two going around competing against each other. Our basic idea was the SpongeBob rope minigame, but with Vikings, axes, and dynamic grappling hooks.

So: two people attached to each other, holding on with axes, moving with grappling hooks, the arena on the back of a giant, and you win by killing the enemies and then standing on the giant’s shoulder.

Prototyping

Oh boy. This is where the madness begins.

Characters:
  • Physics constraints are fine; we can actually use them to connect two objects.
  • But wait, one of them always has to be a static actor, and it goes wacky.
  • We should have tested with some kind of rope model made of multiple physics constraints. I don’t think that works either.
Arena:
  • Characters can attach to the arena, sure, but we actually need a flat surface. So we made a hidden one, an invisible box on the back of the titan.
  • Sometimes the titan needs to “move” and all attached characters will fall. Camera Shake and Events are good here.
Local Multiplayer:
  • Needed time to get used to PlayerStart and how Unreal handles multiple characters.
  • Deciding split screen or not, using two gamepads or one input (keyboard) for multiple characters.
Gameplay:
  • Spawning objects seems easy; getting the objects something collides with and doing stuff to them is also easy.
  • The Level Blueprint is our GameSceneController. Really.
3D Assets:
  • A character with a shield (a different texture for each of the four characters) and an axe. Well, we didn’t have time for shields. Modelled.
  • Rigged, but then it’s wrooong. We actually needed to rig it again and assign better weights. Someone correct me if I’m wrong.

Sunday evening, almost two days in. We decided to scrap the cooperation mechanic because of the rope-not-connecting-two-actors-dynamically problem and made it just 4 players with hook and axe against each other. Kill someone, stand on the shoulder before they respawn, and you win.

Weekday and Jam

Monday starts and we have to go back to reality and work. WORK, NOT JAM. We only have evenings to work on the game, and I needed one day to recharge. I don’t think we made progress on Monday and Tuesday except rewriting the prototype code into the real project and realizing that our 3D flow was wrong. Oh, and we created the landscape around the titan using the Ice assets, with snow and trees.

Wednesday is surprisingly a holiday and we can jam from dusk till dawn. The thing is, not all of us could make it, and we needed to finalize all the modules. Also.. this is funny.. we didn’t have a working player, or even a working game, by Wednesday.

Huge red flag. Assigned peeps new jobs to do and went with it. This is also around the time I realized git was useless for this project. Animation is up and running.

Last 24 hours

Gave up on texturing the 3D models. Gave up on UI that follows the players. Gave up on not-split-screen. Gave up on four players to make room for two.

That last thing.. it’s because we actually had to test it using authentic gamepads, and we only have two of them.

  • Sound effects and BGM inserted. Feels epic already. /seewhatididthere
  • Battle done and tested with around 6 hours remaining.
  • Created the title and tutorial screens, plus a HUD for winning. I think there were 4 hours remaining.
  • Combined gameplay logic, death-and-respawn logic, bug fixing, and seeing-a-random-bug-and-realizing-it’s-a-cursor-issue fixing. 2 hours remaining.
  • First time packaging and building, then testing. 1 hour remaining.
  • Uploading is slow as shit. Safe and submitted with around 20 minutes remaining.

Afterword

I think I learned a few important things in this jam, like how spaghetti Blueprints can get, how easily they relate to C++, how Unreal works, and how the tools correlate with each other. Cascade is great, Blueprint is great, Montage is great.

Also, it’s crazy that it actually followed our usual jam schedule (2-3 days). Sure, the game isn’t up to our standard, but we couldn’t really “win” this jam anyway. There were so many great submissions, and I’m personally grateful for the experience. Exciting to see what we can do next! It can only get better!

Audience Driven Design

So a few months ago I went to Unite 2015 in Bangkok, Thailand. It’s basically a conference for Unity3D users and everyone who is interested in learning more about the tools, best practices, and such. It’s also a good place to network with other South East Asian game studios and shatter your confidence, something like “you’re not as good as you think”.

By the way, I still want to compile all those things I learned into a single open document, but that’s for a later discussion.

One lesson that resonates with me is a concept called Audience Driven Design. It’s very obvious, true, but somehow that simple concept eluded me for so long. Too long.

Simple Definition

As the name suggests, Audience Driven Design is a concept where you, as a game designer, create a game based on what your specific audience wants and prefers.

If you want to create a game that everybody can play, scratch that: there is no such thing.

If you want to create a game for 40-year-old moms in Asia, also scratch that. There is no (or too little) common ground. Country, upbringing, what they buy, income, and a whole lot of other things are different.

Be as Specific as Possible with your Audience

Let’s say you want to make a JRPG: choose which kind, and who it’s for. Final Fantasy, Tales, Star Ocean, and Suikoden are all different. Tales, for example, is more of a niche anime RPG, and the people playing it are looking for different things.

Let’s say you want to make a JRPG like Final Fantasy: choose which Final Fantasy. There are more than fourteen iterations of it, and every entry has its own strengths and weaknesses.

If you want to make an FPS, Halo, Half-Life, Unreal, Doom, Prey, Call of Duty, and Battlefield are all different. If you want to make a platformer, Braid, Trine, Mega Man, Ori, and Shank are all different.

Each game has its own unique characteristics and audience. Even when they overlap, your game’s strong suit should appeal to your most specific audience.

Again, Know your Audience

For example, say I want to create a thrilling murder mystery visual novel, something like Virtue’s Last Reward:

  • Some of the audience also play 999 and Ever17. Which elements do they like from each game? Do I want to make it more mystery, or more thriller? Do I want puzzles in it? Does the audience I want to appeal to even like puzzles?
  • Some of the audience also like Umineko (the game) but hate the anime. Why? Do I want to write thorough descriptions? Do I focus on the character interactions?
  • The audience likes to hang out on r/visualnovel; who among them likes mystery the most? Also on NeoGAF? Mystery threads on /a/? AnimeSuki Umineko threads? Be specific, and interact with them elegantly.

Another example: I have an imaginary client who wants a game made for her event:

  • She doesn’t really care about technical difficulties and game balance. She wants it fun.
  • She wants the game plastered with advertising of the events and the products.

So then screw balance, screw artificial coolness, and create the perfect game for her. Instant-gratification rewards, flashy animation, simple gameplay, advertising plastered everywhere (and I mean everywhere). She is, after all, the audience who pays you money.

Yourself

And for those idealist indie developers like me who want to create a game for themselves: do you even know yourself?

  • For a last example, I recently acknowledged that I like high-level raiding in FFXIV.
  • Specifically, I like scripted reaction fights like Titan and Shiva Extreme and Turn 9. On the other hand, I don’t like Ramuh, Moggle, and Bismarck Extreme.
    • Why? Because I like the thrill paired with upbeat, fast music.
  • I like the Coil of Bahamut but not Alexander. I also usually stop playing when there are no more quests to do.
    • Why? Because I care more about the story than the shiny gear treadmill.

So when I create a boss fight for myself, in a game I want to play, I’ll take those specific things I know I love, twist them, and make them better. And I know that any serious game I want to make will always have a great story that I will enjoy.

Conclusion

I believe that by knowing your audience, you’re going to make a game that is more personal to them. It will appeal more, be remembered more, and they will have more fun.

Commercial “get rich buy a house on a beach” success? Not likely.

Touching them indirectly, making them your devoted players and fans, spawning conspiracy theorists, YouTubers having fun, word-of-mouth niche success? I hopefully believe so.

Animu Torrent Checker

I have a few resolutions for this year, and one of them is to create and/or share my hobby projects with the public. They will be open source and released monthly, which could mean releasing an old project or creating a new one within the month. So.. this is it for January.

AnimuTorrentChecker: a program to automatically download anime torrents, because I’m that lazy, that’s why.

Background

Anime series are usually aired per season, for example in Winter, Spring, or Summer. Every season a number of anime are released weekly, each according to its own schedule. After an episode airs, fansubbers create English subtitles for it, re-encode the files, and release the translated episode to the public. These files are distributed via torrent, and the torrent files are usually hosted on tracker sites such as TokyoTosho or Nyaa.

I already said I’m lazy, and on some days, especially weekends, it takes a lot of effort to check which anime have already been subbed and released by a certain fansub team. I can also say that every person has their own favourite fansub team for certain series, because some fansub teams are good, some are fast, some are amusing. I think it was around the AFAID 2013 dates that I needed to travel to another city for several days. I could just imagine coming home exhausted and needing to download all the missed shows, when I just wanted to watch them quickly and sleep.

Implementation

All of the above led me to a solution, of course: create a simple program made by me to be used just by me. Only me, not other users, not a target market, whoever they are. I then programmed the functionality based on the use case in which I would actually use the program: download on the home PC, check progress and change settings from anywhere (the cloud). Because the fastest language I can code in is C#, the program itself is a console application implemented in .NET.

When running, AnimuTorrentChecker checks my anime list (which title and which subber) every few minutes, downloads the torrent file when it shows up, and updates its data so it starts checking for the next episode of the series. The downloaded torrent itself is automatically picked up and run by the usual torrent program (e.g. uTorrent).

Animu Torrent Checker in Action

I usually run the program from Thursday to Monday, checking the progress from Dropbox. This works because I set the checker to download the torrent files into a Dropbox folder and make uTorrent load and download whatever torrents are in that folder. I can easily see whether the torrents are there and already loaded by uTorrent, because it changes their extension to .loaded.
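For illustration, here is a minimal sketch of what that checking loop boils down to. The watchlist format and the FindTorrentUrl helper are hypothetical stand-ins, not the actual AnimuTorrentChecker code:

// Minimal sketch of the checking loop; the entries and FindTorrentUrl are hypothetical.
using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Threading;

class Entry
{
    public string Title;
    public string Subber;
    public int NextEpisode;
}

class Checker
{
    // uTorrent is set to watch this Dropbox folder and auto-load any .torrent in it.
    const string TorrentFolder = @"C:\Users\Me\Dropbox\Torrents";

    static readonly List<Entry> Watchlist = new List<Entry>
    {
        new Entry { Title = "Some Anime", Subber = "SomeSubs", NextEpisode = 5 }
    };

    static void Main()
    {
        while (true)
        {
            foreach (Entry entry in Watchlist)
            {
                // e.g. "[SomeSubs] Some Anime - 05"
                string query = string.Format("[{0}] {1} - {2:00}",
                    entry.Subber, entry.Title, entry.NextEpisode);

                string torrentUrl = FindTorrentUrl(query);
                if (torrentUrl == null)
                    continue; // not released yet, try again next cycle

                string target = Path.Combine(TorrentFolder, query + ".torrent");
                using (var client = new WebClient())
                    client.DownloadFile(torrentUrl, target);

                entry.NextEpisode++; // start watching for the next episode
            }

            Thread.Sleep(TimeSpan.FromMinutes(15)); // "check every few minutes"
        }
    }

    static string FindTorrentUrl(string query)
    {
        // Placeholder: search the tracker's RSS/search feed for the query and return
        // the .torrent link, or null when the episode has not shown up yet.
        return null;
    }
}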

Source files and usage instructions are on GitHub. No binary for now, because I want to refactor, change technology, or create new features sometime in the future. Overall, it’s working great for me, and I can now put the time I used to spend waiting for and checking released anime into something more productive. Like browsing reddit aimlessly or something.

Unity3D Single Scene Architecture

One of my eureka moments when learning Unity3D was when Brett Bibby introduced the concept of Single Scene Architecture in one of his talks. As the name says, it means that you only work with one scene in your release builds. By optimizing down to only one scene, we can handle resource loading better and faster: asynchronous loading, less overhead, and less memory needed.

Disclaimer: All of the development examples come from Tinker Games’ INheritage BoE. I only speak from my experience and I’m sure there are better ways, so take this (hell, all of the information in this blog) with a grain of salt, again. Yes, I can’t create a reusable tutorial/framework yet.

Introduction

When I’m developing a game in Unity3D, I usually create and design scenes according to the flow and functionality I want to achieve, for example SplashScene, StartScene, StageSelectScene, GameScene, and HighScoreScene, to name a few. I then create a script that uses Application.LoadLevel() to navigate between them.
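That navigation script is trivial; here is a tiny sketch just for contrast (Application.LoadLevel() was the Unity API at the time):

// The usual multi-scene navigation, hooked up to a button for example.
using UnityEngine;

public class SceneNavigator : MonoBehaviour
{
    public void GoToStageSelect()
    {
        Application.LoadLevel("StageSelectScene");
    }
}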

In contrast, in a single scene world we only work with one scene (I usually name mine MainScene) and one object in that scene hierarchy: the MainCamera. There are no other objects at first, because they will be created from resource files (XML, JSON, prefabs, what have you) later when needed. Of course, because we’re not using Unity3D’s scene management, we have to handle that ourselves.

Because of that, we have to mind (at minimum) three things when implementing a single scene architecture: how we serialize and deserialize our objects, how we manage our scenes, and how we manage our projects.

The concept of single scene architecture is to minimize the objects at the application’s starting point and to load all resources on the fly, only when needed.

Serialization

Serialization itself is quite a simple topic: it’s how we save information or an object into a physical resource file and how we load that back into the original. Simple cases of serialization are, for example, saving progress data and loading level data.

We want to minimize the objects down to one, so we have to serialize all of the other objects. In INheritage, we basically have two kinds of scenes, Menu and Game. A Menu scene is where the buttons and images and scrolls do their thing, whereas the Game scene is where the magic happens. The key thing to notice is that every Unity3D scene besides the Game scene is a Menu scene, so we can treat them all the same, serialization-wise!

MenuScene design example in Unity3D.

I began by creating a model of all the elements in the menu. What kind of information is needed to make the menu? Sprites, text, actions, glows, etc.? I then created a class to hold all of that information; that class can serialize all of the scene data to/from an XML file, and it can also create game objects according to its data.

To sum up, I will have a MenuData class (a singleton for easy access), sketched out below the list:

  • Holds all menu data information in arrays.
  • Initialize() and Clear() for basic data handling.
  • Save() and Load() for serialization.
  • CreateMenu() for creating game objects.
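Here is a minimal sketch of that MenuData idea, using plain public fields so XmlSerializer can handle them. The element fields and names are illustrative, not the exact INheritage implementation:

// Illustrative sketch of a serializable menu model; not the actual INheritage code.
using System.IO;
using System.Xml.Serialization;
using UnityEngine;

public class MenuElementData
{
    public string Name;
    public string SpritePath;   // resolved later with Resources.Load
    public float PositionX;
    public float PositionY;
    public string Action;       // e.g. "GoToStageSelect", wired to a button delegate
}

public class MenuData
{
    static MenuData instance;
    public static MenuData Instance
    {
        get { return instance ?? (instance = new MenuData()); }
    }

    public MenuElementData[] Elements;

    public void Initialize() { Elements = new MenuElementData[0]; }
    public void Clear()      { Elements = null; }

    public void Save(string path)
    {
        var serializer = new XmlSerializer(typeof(MenuElementData[]));
        using (FileStream stream = File.Create(path))
            serializer.Serialize(stream, Elements);
    }

    public void Load(string path)
    {
        var serializer = new XmlSerializer(typeof(MenuElementData[]));
        using (FileStream stream = File.OpenRead(path))
            Elements = (MenuElementData[])serializer.Deserialize(stream);
    }

    public void CreateMenu()
    {
        foreach (MenuElementData element in Elements)
        {
            var go = new GameObject(element.Name);
            go.transform.position = new Vector3(element.PositionX, element.PositionY, 0f);

            var renderer = go.AddComponent<SpriteRenderer>();
            renderer.sprite = Resources.Load<Sprite>(element.SpritePath);
            // Button actions, glows, text, etc. would be wired up here as well.
        }
    }
}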

For serializing, I have another game object that saves all of the current scene’s menu data into the MenuData class and uses that to save to the XML file. Deserializing just takes one method to load that XML file and another to generate the game objects.

The usual flow for my menu serialization is:

  • Design the menu as usual in Unity3D.
  • Serialize: Save all of the menu game objects into the MenuData class (using a script) and save that to XML.
  • Deserialize: Call MenuData.Load() on the XML file and then MenuData.CreateMenu() from an empty scene.

The example is for menus, but the basics are the same for the Game scene and whatever specific scene/object you need to serialize.

Managing Scenes

So we can serialize all of the game objects in one scene; how do we make the scenes transition and interact with each other? We handle this the usual non-Unity3D way: a SceneManager / ScreenManager / StateManager. There are a lot of excellent tutorials about managing scene states, so please check them out and implement it however you think best.

For me, I start by defining how a scene is supposed to behave over its lifetime. It has default behavior to implement when it is awoken, enabled, and destroyed: load the XML and other resources in Awake(), start the menu transition in OnEnable(), and release the resources in OnDestroy(). This SceneController script was created as a template and is used to implement the scenes.
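A minimal sketch of that SceneController template, building on the MenuData sketch above. The DataPath property and the self-disable trick are my own illustration, not the exact production code:

// Template for scene controllers; concrete scenes only supply their data path.
using UnityEngine;

public abstract class SceneController : MonoBehaviour
{
    protected abstract string DataPath { get; } // e.g. "Data/about_scene.xml"

    protected virtual void Awake()
    {
        // Load the scene resources as soon as the manager attaches us, then stay
        // disabled so OnEnable (the transition) does not fire until we're ready.
        MenuData.Instance.Load(DataPath);
        enabled = false;
    }

    protected virtual void OnEnable()
    {
        // Build the scene objects and start the transition (fades, glows, etc.).
        MenuData.Instance.CreateMenu();
    }

    protected virtual void OnDestroy()
    {
        // Release the scene resources when we switch away; real code would also
        // destroy the game objects that CreateMenu() spawned.
        MenuData.Instance.Clear();
        Resources.UnloadUnusedAssets();
    }
}

public class AboutSceneController : SceneController
{
    protected override string DataPath { get { return "Data/about_scene.xml"; } }
}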

In my menu design, the interaction/transition was implemented in one script per scene. This script handles how the objects transition between scenes (which objects fade in/out, when to glow, etc.), implements the button delegates (what to do when a button is pressed), and other things. All of this functionality is also copied and used in the SceneController class.

SceneManager is the usual singleton manager that handles the scene transitions. My implementation is simple and only slightly changed from Brett’s initial version. Here is the basic flow (with a rough sketch after the list):

  • When we need to change scenes, we call SwitchScene in code. For example: SceneManager.SwitchScene(Scene.AboutScene).
  • It will then find the AboutSceneController script and attach it to the main camera, but NOT enable it yet. It will then load the scene resources.
  • When the loading is finished, it will enable the new scene script (AboutSceneController). The game will then transition to that newly loaded scene. It will also destroy the previous SceneController script that was attached before.
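The sketch below follows that flow; the Scene enum and the reflection-based controller lookup are illustrative stand-ins for the real implementation:

// Singleton manager that swaps SceneController scripts on the main camera object.
using System;
using System.Collections;
using UnityEngine;

public enum Scene { TitleScene, AboutScene, StageSelectScene, GameScene }

public class SceneManager : MonoBehaviour
{
    public static SceneManager Instance { get; private set; }

    SceneController current;

    void Awake() { Instance = this; }

    public void SwitchScene(Scene scene)
    {
        StartCoroutine(SwitchSceneRoutine(scene));
    }

    IEnumerator SwitchSceneRoutine(Scene scene)
    {
        // Attach the new controller; its Awake() loads the resources and keeps
        // itself disabled, so nothing is shown yet.
        Type controllerType = Type.GetType(scene + "Controller");
        var next = (SceneController)gameObject.AddComponent(controllerType);

        // Wait a frame here (or poll an async-loading flag) until loading is done.
        yield return null;

        // Tear down the previous scene and enable the new one to start its transition.
        if (current != null)
            Destroy(current);
        next.enabled = true;
        current = next;
    }
}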

SceneManager in action.

Multiple Projects

Because of the minimal nature of single scene architecture, it can be difficult to debug or change things. The straightforward solution is to use two projects: one (or more) for designing the game and another, single-scene project for building the release version.

Unity3D has great editor capabilities, and it’s in the developer’s hands to extend them. The development project is where you want to go crazy implementing editors that are easy to use and scenes that are polished and usable. Serialize those into XML and use that XML in the single scene project for the best of both worlds.
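As a tiny example of the kind of editor helper I mean (assuming the MenuData sketch from earlier), one menu click could dump the open design scene’s sprites to XML:

// Editor-only helper for the design project; the export path is illustrative.
using System.Linq;
using UnityEditor;
using UnityEngine;

public static class MenuExporter
{
    [MenuItem("Tools/Export Current Menu To XML")]
    static void Export()
    {
        // Grab every sprite in the open scene and record what is needed to rebuild it;
        // real code would also capture actions, glows, text, and so on.
        MenuData.Instance.Elements = Object.FindObjectsOfType<SpriteRenderer>()
            .Select(r => new MenuElementData
            {
                Name = r.gameObject.name,
                SpritePath = r.sprite != null ? r.sprite.name : "",
                PositionX = r.transform.position.x,
                PositionY = r.transform.position.y
            })
            .ToArray();

        MenuData.Instance.Save("Assets/Data/mainmenu.xml"); // assumes the folder exists
        Debug.Log("Exported " + MenuData.Instance.Elements.Length + " menu elements.");
    }
}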

Conclusions

Using a single scene architecture in your Unity3D release project will make your game better optimized. In your workflow, you will have a minimum of two projects: one design and one build.

  • Your design project is the usual development project. All of your scenes’ objects should be serialized into their own XML or whatever format you prefer.
  • Your build project will consist of a single scene, with a MainCamera object and a SceneManager script attached. Maybe other things too, but it should be minimal. This SceneManager handles all of the scene management and attaches the corresponding SceneController script according to the current scene.
  • The SceneController then loads the XML and uses it to create the scene objects. It also manages its scene’s interaction and transition according to the implementation in the design project, copied partially and manually.

Starting Experience in Kinect

My latest project requires me to utilize Microsoft Kinect as a way to interact with a game. So far, it’s been a good experience working with it.

Before I go further, I will answer the obligatory question from the usual aspiring Kinect developer:

  • Yes, you can use the Kinect for Xbox 360 for developing the application or whatever it is. The hardware works fine with either the Microsoft SDK or the OpenNI + NiTE stack.
  • If you’re using the Microsoft Kinect SDK (1.7 at the time of writing), the release version requires Kinect for Windows hardware. If you don’t want to distribute commercially, other people can still use the Xbox Kinect with an executable that you build in Debug mode.


I can’t post the picture of my Kinect right now, so here’s the comparison between the hardware and a dog

When you’re developing an application that uses a specific kind of hardware (e.g. an accelerometer, the Oculus Rift, or in this case, Kinect), designing and integrating the interaction is best started at the beginning of development. Microsoft has created a great guideline for the interface you should design, and it’s a great reference even if you’re not using the official SDK and WPF. The Real World™ is not as ideal; sometimes we don’t have the luxury of creating the experience from the start.

Assume we have a finished game and we want to add Kinect integration to it. The game is designed for touch interaction, so there are no complicated, tight controls like in the usual PC/console games.

Integrate the SDK into the Game

Include the library, listen to the device input, and do things in the game according to it. You could wrap it all nicely and create a singleton that you can call in your update loop or wherever you listen for input.

The problem with this approach is, of course, the integration. The Microsoft SDK is only available for .NET languages (C#, Basic), and OpenNI + NiTE uses C++. If the game uses a language other than those, you’ll need to write a wrapper or some other glue. Of course, being Microsoft, their SDK is only available for the Windows platform. On the other hand, OpenNI + NiTE is open source AND cross-platform.

Note: I think the Microsoft Kinect SDK also has a native version, as opposed to the managed (.NET) one, but I haven’t gotten around to it yet. The point about the Microsoft-verse vs cross-platform still stands, though.

Another point is speed of development. OpenNI + NiTE is more low-level than the Microsoft SDK, so in some cases you will need to implement functionality yourself. The Microsoft SDK also has more features and is easier to develop with, seeing as they are the guys who created the hardware.

Let’s use hand gesture detection as an example:

  • AFAIK, if you’re using OpenNI + NiTE, you will have to use OpenCV to create and integrate the gesture detection yourself. There are a lot of resources, though, and a lot of people have already tackled the problem.
  • On the other hand, the Microsoft SDK doesn’t have crazy hand gesture detection either, but it has built-in hand position and press + grip interactions.

In my case, the game is on the Windows platform and only needs simple detection (hand position, press, and drag). The problem is that because the game was designed for touch interaction, the basic interaction for the Windows version automatically uses the Windows mouse. For that, we need to implement another input manager that handles an in-game cursor and receives its data from the Kinect.
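A rough sketch of what such a wrapper could look like, assuming the managed Kinect SDK 1.x skeleton stream; the game-facing part (the hand position the game polls) is my own illustration:

// Singleton wrapper around the Kinect skeleton stream; the game reads HandX/HandY
// in its update loop and maps them to an in-game cursor.
using System.Linq;
using Microsoft.Kinect;

public class KinectInputManager
{
    static KinectInputManager instance;
    public static KinectInputManager Instance
    {
        get { return instance ?? (instance = new KinectInputManager()); }
    }

    KinectSensor sensor;
    Skeleton[] skeletons;

    // Right-hand position in meters relative to the sensor; map to screen space in-game.
    public float HandX { get; private set; }
    public float HandY { get; private set; }

    public void Start()
    {
        sensor = KinectSensor.KinectSensors.FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += OnSkeletonFrameReady;
        sensor.Start();
    }

    void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;

            if (skeletons == null || skeletons.Length != frame.SkeletonArrayLength)
                skeletons = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(skeletons);

            Skeleton tracked = skeletons.FirstOrDefault(s => s.TrackingState == SkeletonTrackingState.Tracked);
            if (tracked == null) return;

            SkeletonPoint hand = tracked.Joints[JointType.HandRight].Position;
            HandX = hand.X;
            HandY = hand.Y;
        }
    }

    public void Stop()
    {
        if (sensor != null) sensor.Stop();
    }
}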

Simulate the Mouse Interaction

Another way is to simulate the Windows mouse using the Kinect, and then use that to interact with the game. This is a hackier way, because in this case we override the mouse input outside the game world to interact with the inside of it. Imagine creating an application that runs in the background and receives Kinect input, and then starting the game application. Because the game already receives mouse input, we don’t (or shouldn’t) need to change anything in the game code at all.

The experience will suck a little, but we can hack that too. One thing I thought of is polishing the game’s mouse interaction, such as drawing a big cursor and changing its image when pressing and gripping.

There are many applications and popular Kinect hacks already available. Man, when you’ve got a device that can read gestures, what would you want to do first? Controlling your computer with it, of course! I found three alternatives that I think are quite usable:

  • Winect: Implemented using OpenNI + NiTE + OpenCV. Uses one hand to control the cursor and two hands for more complex controls (e.g. zoom). Source code is not available, and it always crashes on my computer. Expecting users to install prerequisite libraries is never good.
  • Kinect Mouse Cursor: One of the first Kinect mouse cursor hacks available. Uses an early version of the Microsoft SDK. Functionality is minimal; the source code is there.
  • Kinecursor: A pretty good implementation. Grip detection is sometimes wrong, so the experience is kind of wacky. The 1.1 source code is available; the latest is not. TangoChen has a pretty nice blog if you’re interested in Kinect overall.

One thing that interests me is how an application can take control of the native OS cursor. The magic apparently comes from importing the libraries available in System32 and calling their functions. For example, the Kinecursor implementation is:

[DllImport("user32.dll")]
static extern bool SetCursorPos(int X, int Y);

[DllImport("user32.dll")]
static extern void mouse_event(MouseEventFlag flags, int dx, int dy, uint data, UIntPtr extraInfo);

[Flags]
enum MouseEventFlag : uint
{
    Move = 0x0001,
    LeftDown = 0x0002,
    LeftUp = 0x0004,
    RightDown = 0x0008,
    RightUp = 0x0010,
    MiddleDown = 0x0020,
    MiddleUp = 0x0040,
    XDown = 0x0080,
    XUp = 0x0100,
    Wheel = 0x0800,
    VirtualDesk = 0x4000,
    Absolute = 0x8000
}

Using user32.dll, we can call the native methods that are already available. After we receive and process the Kinect input, we can simply call the SetCursorPos function to set the mouse cursor position and call the mouse_event function to fire the appropriate mouse event.
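For example, a hedged sketch of how those declarations might be used once the Kinect hand data has been mapped to screen coordinates (this assumes the method lives in the same class as the declarations above):

static void MoveAndClick(int screenX, int screenY)
{
    // Move the OS cursor to where the hand is pointing.
    SetCursorPos(screenX, screenY);

    // A click is just a left-down followed by a left-up at the current position.
    mouse_event(MouseEventFlag.LeftDown, 0, 0, 0, UIntPtr.Zero);
    mouse_event(MouseEventFlag.LeftUp, 0, 0, 0, UIntPtr.Zero);
}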

As an alternative for the game interaction, none of the solutions above are usable out of the box. If we want to go this way, the right approach is to create an implementation based on the two that use the Microsoft SDK for detecting the position, and update it to the 1.7 version, which already has simple hand gesture detection.

Implementing that is another problem, of course. KinectInteraction, the feature that lets us use the Kinect gesture detection in an easy way (push, grip, cursor override with feedback), is only available inside a KinectRegion. A KinectRegion itself is a region of the application window that we define, which means that we can’t get the cursor override and all the Kinect toolkit controls working outside the application window.


Example of KinectInteraction working on a WPF window

The good news is that we can access the Kinect Interactions stream by going a little bit lower in the SDK. This means we can do the hand gesture detection (press, grip, wave) ourselves to enhance the detection, and use that in our mouse cursor application. There are a couple of blog posts that do an awesome job of introducing the Kinect Interactions concept and how it works inside.

There will still be tweaks to make (threshold, sensitivity, and others), but when it’s done, it will be a better alternative for the interaction.

Summary

You have two choices when you want to integrate Kinect into your game:

  • You can integrate it into the game: create a reusable implementation that receives the Kinect stream, use that to build a manager that feeds the relevant input into your game, and use it accordingly.
  • You can create an application that receives the Kinect stream and uses it to simulate mouse control, and then hack your game for better interaction (e.g. change the cursor image and use bigger buttons, assuming we are working with a Kinect from 2-3 meters away, not a mouse in front of our face).

Choose your war carefully, because sometimes the best way is not the right way.

SGA: A Cross-Platform Story Part I (Introduction)

It’s been a long time since the last post, and even now I’m trying to escape writing more important things *cough* final project *cough* by writing this.

This is my subjective experience of developing Soccer Girl Adventure, a game that has been in development from February 2012 until now. You know, the usual: writing code, refactoring, changing mechanics, rewriting things, rewriting features, and on and on and on. Thankfully, I can say that it’s now at release version for four mobile platforms.

Status: we’re still discussing things with our publisher for iOS and Android, we released on our own on the Windows 8 store, and we’re currently in QA at BlackBerry App World for BB10.

I plan to divide the story by platform, the exception being this part about the game’s introduction. That split works because it more or less follows the timeline in which I developed the game. The next part is iOS, and then Android, Windows 8, and finally BlackBerry 10 (and back to Android).

What, What, What

Title

Soccer Girl Adventure is an adventure runner game. Our heroine, Sora, loves to play soccer all day. The Soccer Boys, on the other hand, don’t like it when Sora plays soccer. Why? Because she is a girl!

Angry, Sora then goes on a journey. She wants to get past all of the boys and play in the soccer stadium. That way, she will finally prove that a girl can play soccer too!

Interesting. Corny. Well, it’s quite cool and it works for a theme.

Technology

We had already decided that iOS would be our first platform to develop on. I have used Objective-C a bit, and to be honest, I’m not a fan. Someone then recommended that you can’t go wrong using C++ for game development: it’s more familiar, the performance is great, and if in the future I want to make the game cross-platform (which I did, hence this post), I can do it easily.

Cocos2d-x is the one I put my faith in from then on. It’s open, meaning I can go to the source if I want to know how certain things work and don’t work. They also have a great community and support on the forum.

At First, It Was Rhythm

Let’s talk mechanics. The first iteration came with DDR-like gameplay. Simply put, the player has to swipe in certain directions according to the rhythm prompts that come up. If successful, our heroine Sora passes the enemy by tricking him.


After a few plays, we decided that it totally sucked. The player is too focused on the bottom part of the screen to enjoy the great illustrations of the character and the backgrounds.

Then, Someone Invented Buttons

The challenge is now remembering which button does what. With buttons, the player can enjoy the art while still being immersed in playing.


Of course, it didn’t come easily. When facing multiple enemies, the majority of players messed up and pressed the wrong button. One of the important things we realized is: why the fuck does it need buttons? We don’t have platforms and Sora always runs in a straight line. Why force the player to remember and use the buttons? It also seems very counter-intuitive to use virtual buttons on a touch device.

Gestures Save the Day

Remove all the buttons and other ideas, and go with simple things: swipe for tricks, tap to get items and buy lemons, and touch the bar to use the special trick.


Testing and testing, we got good responses and went with the gesture mechanics. So, the remaining challenge is the actual game implementation, from beginning to end, on the iOS platform.

To be continued in the next part!

Barousel

So, my morning was spent playing with Barousel, “a pretty nice jQuery plugin for generating simple carousels”. It sure is quite easy, and it only needs a few tweaks to make it usable.

Result: http://tinkerworlds.com/

Orange, shiny, and stuff. This site is only for tomorrow’s purposes.. so yeah.. I (or someone) will definitely make a more awesome version of Tinker’s site in the future.

Pig Rider

This post is about Tinker Games’ latest release, Pig Rider. Our site is not finished yet so I’ll just post about it here.

Pig Rider is a drag racing game. You help the Rider by changing gears perfectly to make him accelerate faster, and by jumping to avoid all the obstacles.

YouTube Trailer

The inspiration comes from the Android/iOS hit Drag Racing: simple mechanics, addictive with its upgrades and races. We added a gameplay twist, jumping, to make it not so simple and give it more variety.

Pig Rider has a twist: in a world of endless games, you can actually finish this race. By doing quests and collecting money from doing various cool things, you can make your ride more powerful, and so your chance of finishing will be higher. 500 meters is quite a long journey though, and nearly impossible to finish without upgrading the rider’s armor (or so we planned it to be). You get a special screen if you win the race.

Oh, you can also ask any questions about it here.
