Posted on 11 Oct 2019
This is the twelfth article of the Android Game Programming series. In this article, I would like to discuss how to manage user input on Android for our video game.
In an Android application, graphic elements such as buttons, text fields, and radio buttons compose the user interface. All these objects derive from the View class and are grouped together using ViewGroup objects such as a Layout. With these components, input management is simple: if we click a button, Android itself takes care of running the associated action code.
For example, the Button class extends the View class, which allows you to set up a listener to respond to click events. If you want to execute some code when a button is clicked, you need to write code like this:
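A minimal sketch of such a listener (the button id and the toggleSound helper are hypothetical):

```java
// Standard Android way of reacting to a button click.
// R.id.sound_button and toggleSound() are hypothetical names.
Button soundButton = (Button) findViewById(R.id.sound_button);
soundButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        // code executed when the button is clicked
        toggleSound();
    }
});
```

This is exactly the convenience we will have to give up in our game, where buttons are drawn by hand.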
Unfortunately, writing a video game is very different from writing a normal Android application. Since we draw our game objects ourselves, we cannot leverage the Android widgets to manage buttons.
We have to do everything ourselves, relying on a few classes (such as View and ViewGroup) and building our infrastructure on top of them. In an application, input can occur in several ways: through the keyboard, the touch screen, the accelerometer, or the gyroscope.
This article series presents only the concepts necessary for the implementation of our video game, so we won't talk about keyboard, accelerometer, or gyroscope input, but only about events resulting from touching the screen.
However, we will create such a flexible interface that adding other input modes will be a piece of cake. In this article, we will add to our Start Screen a button to turn the sound on and off (even though there is no audio management yet). The button will have a speaker icon that indicates its purpose.
When the button is clicked, the icon will change to show a muted speaker. As we have seen, we cannot use the Android Button class: we'll have to draw the button ourselves, capture the touch event, detect that the button has been clicked, and execute the corresponding action.
Luckily, the View class (and hence also our AndroidFastRenderView class) allows you to register for touch events with the setOnTouchListener method, creating listeners that implement the OnTouchListener interface to intercept these events. This interface has only one method to implement:
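The interface in question is android.view.View.OnTouchListener, whose single method is:

```java
public interface OnTouchListener {
    // Called when a touch event is dispatched to a view.
    // Return true if the listener has consumed the event.
    boolean onTouch(View v, MotionEvent event);
}
```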
The idea is to collect all incoming events in a buffer and, when the user code requests them, return them. To do this we extend the OnTouchListener interface: in the package org.androidforfun.framework.impl we add the TouchHandler interface, defined as follows:
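The framework's actual declaration may differ in detail; a sketch consistent with the description above (the query methods are assumptions) could be:

```java
// TouchHandler extends Android's OnTouchListener so it can be
// registered on the view, and adds methods to query buffered input.
public interface TouchHandler extends View.OnTouchListener {
    boolean isTouchDown(int pointer);
    int getTouchX(int pointer);
    int getTouchY(int pointer);
    List<TouchEvent> getTouchEvents();
}
```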
The TouchEvent static class will represent touch events in our framework. Its public fields record what kind of touch event occurred, for example a touch (TOUCH_DOWN), a release (TOUCH_UP), or a finger dragged across the screen (TOUCH_DRAGGED); the x, y coordinates of the point touched; and a pointer identifier that we will look at more closely later.
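A minimal sketch of the class just described (in the framework it is a static class nested inside the Input interface; here it is shown standalone for readability):

```java
// Plain data holder for a touch event in our framework.
class TouchEvent {
    public static final int TOUCH_DOWN = 0;
    public static final int TOUCH_UP = 1;
    public static final int TOUCH_DRAGGED = 2;

    public int type;     // one of the constants above
    public int x, y;     // screen coordinates of the touch
    public int pointer;  // which finger generated the event
}
```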
Up to version 1.6, Android was only able to detect single touch events; with later versions you can also manage multi-touch events. What is a multi-touch event?
Have you ever zoomed in on a web page by spreading two fingers outwards? That is an example of multi-touch. To avoid exceptions on older versions of Android, it will be necessary to implement the TouchHandler interface in two ways, depending on whether the Android version is higher or lower than 1.6. We introduce two classes, SingleTouchHandler and MultiTouchHandler, which implement the TouchHandler interface depending on the Android version:
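A sketch of the version check (the constructor arguments are assumptions; Build.VERSION.SDK is read as a string because SDK_INT did not exist on the oldest versions):

```java
// Pick the right handler for the device's Android version.
// SDK level 5 is Android 2.0, the first multi-touch-capable version.
if (Integer.parseInt(android.os.Build.VERSION.SDK) < 5)
    touchHandler = new SingleTouchHandler(view, scaleX, scaleY);
else
    touchHandler = new MultiTouchHandler(view, scaleX, scaleY);
```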
SDK 5 corresponds to Android 2.0, the version after 1.6. Because today it is really difficult to find phones running Android below 2.0, we can safely say that in 99% of cases we will use the MultiTouchHandler class, which is the one we will analyze in this article; the other is similar. The implementation of the MultiTouchHandler class must take into account some very important technical requirements so that input is managed correctly and without degrading performance.
Touch events can be single or multi-touch. In the former case, we need to know whether the screen has been touched, released, or dragged, and at which point of the screen the touch occurred. In the case of multi-touch, we need this information for each finger. Let's see what Android offers to manage user touch events.
Every time the user touches the screen, Android generates a MotionEvent and delivers it to the View currently in the foreground. The action field describes what kind of event it is: when a finger touches the screen, it will be ACTION_DOWN; when the finger is released, it will be ACTION_UP; in the case of dragging, it will be ACTION_MOVE. The event also provides the x, y coordinates where the finger (or mouse, pen, etc.) touched the screen. For single touches, the pointer will always be 0. For multiple touches, a MotionEvent is generated for each finger, and a pointer field tells us which finger touched the screen: each additional finger produces the corresponding ACTION_POINTER_DOWN and ACTION_POINTER_UP actions, with the pointer index encoded inside the action value.
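Decoding that packed action value is just bit masking. The following self-contained sketch mirrors the numeric values of the relevant MotionEvent constants, so it can be read (and run) without the Android runtime:

```java
// Self-contained sketch: the constants mirror android.view.MotionEvent.
class ActionDecoder {
    public static final int ACTION_MASK = 0xff;              // MotionEvent.ACTION_MASK
    public static final int ACTION_POINTER_INDEX_SHIFT = 8;  // MotionEvent.ACTION_POINTER_INDEX_SHIFT
    public static final int ACTION_POINTER_DOWN = 5;         // MotionEvent.ACTION_POINTER_DOWN

    // The low byte of the raw value holds the action type.
    public static int action(int rawAction) {
        return rawAction & ACTION_MASK;
    }

    // The next byte holds the index of the finger that generated the event.
    public static int pointerIndex(int rawAction) {
        return (rawAction & ~ACTION_MASK) >> ACTION_POINTER_INDEX_SHIFT;
    }
}
```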
Now let’s see how to implement the MultiTouchHandler class.
The class manages a maximum of 20 pointers (typically 2 or 3 are enough), a pool of 100 events (touchEventPool), a buffer of incoming events (touchEventsBuffer), and the scaleX and scaleY scaling factors. The constructor is quite simple and requires no comment.
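The pool deserves a word: allocating a new TouchEvent for every touch would trigger the garbage collector and cause stutters, so used events are recycled. A minimal generic pool might look like this (a sketch, not necessarily the framework's exact code):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal generic object pool: recycles objects instead of letting
// them be garbage collected, to avoid GC pauses in the game loop.
class Pool<T> {
    public interface PoolObjectFactory<T> {
        T createObject();
    }

    private final List<T> freeObjects;
    private final PoolObjectFactory<T> factory;
    private final int maxSize;

    public Pool(PoolObjectFactory<T> factory, int maxSize) {
        this.factory = factory;
        this.maxSize = maxSize;
        this.freeObjects = new ArrayList<T>(maxSize);
    }

    // Reuse a free object if one is available, otherwise create a new one.
    public T newObject() {
        if (freeObjects.isEmpty())
            return factory.createObject();
        return freeObjects.remove(freeObjects.size() - 1);
    }

    // Return an object to the pool; discard it if the pool is already full.
    public void free(T object) {
        if (freeObjects.size() < maxSize)
            freeObjects.add(object);
    }
}
```

The touchEventPool is then simply a `Pool<TouchEvent>` created with maxSize 100.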
The most important method of the class is onTouch, which converts Android events into events understandable by our framework. As is easy to observe, ACTION_DOWN and ACTION_POINTER_DOWN events are converted to TOUCH_DOWN events, taking the x, y coordinates and the pointer of the event. Likewise, ACTION_UP, ACTION_POINTER_UP, and ACTION_CANCEL are converted to TOUCH_UP, and ACTION_MOVE is converted to TOUCH_DRAGGED. Note that every incoming event is stored in the touchEventsBuffer buffer.
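A sketch of onTouch along these lines (details may differ from the actual source; the synchronized block is there because onTouch runs on the UI thread while the game loop reads the buffer from another thread):

```java
@Override
public boolean onTouch(View v, MotionEvent event) {
    synchronized (this) {
        int action = event.getAction() & MotionEvent.ACTION_MASK;
        int pointerIndex = (event.getAction() & MotionEvent.ACTION_POINTER_INDEX_MASK)
                >> MotionEvent.ACTION_POINTER_INDEX_SHIFT;
        int pointerId = event.getPointerId(pointerIndex);
        TouchEvent touchEvent;

        switch (action) {
        case MotionEvent.ACTION_DOWN:
        case MotionEvent.ACTION_POINTER_DOWN:
            touchEvent = touchEventPool.newObject();
            touchEvent.type = TouchEvent.TOUCH_DOWN;
            touchEvent.pointer = pointerId;
            touchEvent.x = (int) (event.getX(pointerIndex) * scaleX);
            touchEvent.y = (int) (event.getY(pointerIndex) * scaleY);
            touchEventsBuffer.add(touchEvent);
            break;
        case MotionEvent.ACTION_UP:
        case MotionEvent.ACTION_POINTER_UP:
        case MotionEvent.ACTION_CANCEL:
            touchEvent = touchEventPool.newObject();
            touchEvent.type = TouchEvent.TOUCH_UP;
            touchEvent.pointer = pointerId;
            touchEvent.x = (int) (event.getX(pointerIndex) * scaleX);
            touchEvent.y = (int) (event.getY(pointerIndex) * scaleY);
            touchEventsBuffer.add(touchEvent);
            break;
        case MotionEvent.ACTION_MOVE:
            // ACTION_MOVE reports the position of every active pointer.
            for (int i = 0; i < event.getPointerCount(); i++) {
                touchEvent = touchEventPool.newObject();
                touchEvent.type = TouchEvent.TOUCH_DRAGGED;
                touchEvent.pointer = event.getPointerId(i);
                touchEvent.x = (int) (event.getX(i) * scaleX);
                touchEvent.y = (int) (event.getY(i) * scaleY);
                touchEventsBuffer.add(touchEvent);
            }
            break;
        }
        return true;
    }
}
```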
The video game, as we have seen, is a game loop that updates and draws itself N times per second, with N corresponding to the frame rate. At each update, the code must get all the user input, with code like this:
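In our framework the update step might drain the buffered events like this (a sketch; getTouchEvents is assumed to return and clear the events collected since the last frame):

```java
public void update(float deltaTime) {
    // Get all touch events collected since the previous frame.
    List<TouchEvent> touchEvents = Gdx.input.getTouchEvents();
    for (int i = 0; i < touchEvents.size(); i++) {
        TouchEvent event = touchEvents.get(i);
        // react to event.type, event.x, event.y, event.pointer here
    }
}
```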
Now that we've added the infrastructure to manage user input, let's see how to add a button to our screen and how to change its appearance when it is clicked repeatedly. First, let's copy the buttons.png image into the assets folder. You can find this image with the source code here. In the Assets class we add a new bitmap:
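Something like the following, assuming Pixmap is the framework's bitmap type used by the other assets:

```java
public class Assets {
    // ... bitmaps loaded in the previous articles ...
    public static Pixmap buttons;  // will hold buttons.png
}
```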
and in the LoadingScreen class we load it into memory:
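A sketch of the loading line (the newPixmap signature and the PixmapFormat value are assumptions, mirroring how the other assets are loaded):

```java
// Inside LoadingScreen.update: load the image through the graphics subsystem.
Assets.buttons = Gdx.graphics.newPixmap("buttons.png", PixmapFormat.ARGB4444);
```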
At this point, in the StartScreen class we define a soundEnabled attribute that indicates whether the sound is active or not; by default it is. In the update method, we take the user input and check whether the button has been pressed, toggling the soundEnabled attribute accordingly. In the draw method, we draw the button with the normal or muted speaker icon depending on the value of soundEnabled.
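A sketch of that logic (the button bounds, the Rectangle helper, and the drawPixmap coordinates are assumptions; the two icons are assumed to sit side by side in buttons.png):

```java
private boolean soundEnabled = true;
private final Rectangle soundButtonBounds = new Rectangle(32, 370, 64, 64);

public void update(float deltaTime) {
    List<TouchEvent> touchEvents = Gdx.input.getTouchEvents();
    for (int i = 0; i < touchEvents.size(); i++) {
        TouchEvent event = touchEvents.get(i);
        // Toggle the sound when the button area is released.
        if (event.type == TouchEvent.TOUCH_UP
                && soundButtonBounds.contains(event.x, event.y)) {
            soundEnabled = !soundEnabled;
        }
    }
}

public void draw(float deltaTime) {
    Graphics g = Gdx.graphics;
    if (soundEnabled)
        g.drawPixmap(Assets.buttons, 32, 370, 0, 0, 64, 64);   // speaker icon
    else
        g.drawPixmap(Assets.buttons, 32, 370, 64, 0, 64, 64);  // muted speaker icon
}
```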
Gdx.input is the reference to the input subsystem of the framework. It must be initialized in the onCreate method of the MyActivity class, as we have already done for the file and graphics subsystems.
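For example (the AndroidInput class name and its constructor arguments are assumptions, mirroring how the graphics subsystem was initialized):

```java
// In MyActivity.onCreate, after the render view has been created:
Gdx.input = new AndroidInput(this, renderView, scaleX, scaleY);
```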
To run the code, you can follow the procedure seen in paragraph 1.4. The source code of the exercise can be found here.