Author: omerjerk

Getting the instance of a system service without an application context in Android


I have been working on a project where I am supposed to execute some Java code, which uses Android APIs, outside the application environment. This means I don't have access to any Android component, i.e. Activity, Service, BroadcastReceiver, etc. I was supposed to get the display's width and height, hence the need to hack a way to achieve what the title says: getting the instance of a system service without having an application context.

Normally, you need an instance of the WindowManager service to get the device's dimensions, and the code would be something like this:
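The original snippet is missing; the usual context-based version looks roughly like this (assuming a valid `Context` named `context` is in scope):

```java
// Standard, context-based way to read the display size.
WindowManager wm = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
DisplayMetrics metrics = new DisplayMetrics();
wm.getDefaultDisplay().getMetrics(metrics);
int width = metrics.widthPixels;
int height = metrics.heightPixels;
```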

This works fine in normal situations. However, if your code is not running inside an Android app, you don't have an Android context, which means you can't call getSystemService() in the first place.

A large part of the Android OS is written in Java and that Java code also accesses the system services without having the application context. We’ll follow the same approach with our “app”.

There’s a way of doing inter-process communication in Android using AIDL. AIDL is short for Android Interface Definition Language. Basically, you define the same AIDL interface on the server side and the client side, and the client is able to call functions of a server service like a normal procedure call.

Specifically for getting a reference of WindowManager service, we do something like this. First of all, create an aidl folder next to the java folder in Android Studio. Create a package called android.view inside the newly created aidl directory. There, create a file named IWindowManager.aidl which should contain the methods you want to execute. For instance, if you want to get the device’s dimensions it could look like this –
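The original interface listing is missing; a minimal sketch might look like the following. The exact method name and signature vary between Android versions, so check the framework's own IWindowManager.aidl for your target release:

```aidl
// aidl/android/view/IWindowManager.aidl
// Keep only the methods you need; names and signatures must match the
// framework's IWindowManager for your Android version.
package android.view;

// Point is a framework parcelable; depending on your build setup you may
// also need a Point.aidl declaration.
import android.graphics.Point;

interface IWindowManager {
    // Present in recent framework versions; older releases used
    // getDisplaySize/getRealDisplaySize instead.
    void getInitialDisplaySize(int displayId, out Point size);
}
```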

Note: For any other system service, look for the corresponding aidl file for that service in the Android OS source tree.

After this, you need to get the IBinder object the service exposes for IPC. For this, we need to call some hidden API methods via reflection.
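A sketch of that reflection dance, assuming the hidden android.os.ServiceManager class (which is how the framework itself looks up binders):

```java
// Fetch the "window" service binder through the hidden ServiceManager
// class, then wrap it with the AIDL-generated stub.
Class<?> serviceManagerClass = Class.forName("android.os.ServiceManager");
Method getService = serviceManagerClass.getDeclaredMethod("getService", String.class);
IBinder binder = (IBinder) getService.invoke(null, "window");
IWindowManager windowManager = IWindowManager.Stub.asInterface(binder);
```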

After getting the IWindowManager instance, we can easily call its methods like a normal procedure call.
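For instance, with the hypothetical getInitialDisplaySize() method from the interface sketched earlier (and the instance in a variable named windowManager), the call is just:

```java
// Display 0 is the default (built-in) display.
Point size = new Point();
windowManager.getInitialDisplaySize(0, size);
Log.d("Dimensions", "width=" + size.x + " height=" + size.y);
```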

Lastly, the API exposed by the service through the AIDL interface is not exactly the same as the one the system service exposes when you go the normal way. However, you can easily find corresponding methods for whatever you need, and sometimes even extra ones.

Create touch events programmatically in Android


Creating a touch event in Android is pretty easy. But is the easy way the best? Let's find out!
The simplest way is to use the "input" binary, which can create touch events anywhere on the screen (via the shell user). Creating a simple tap event is as simple as: adb shell input tap x y.

However, there’s a latency of more than 1 second while executing this command. If we’re making some automation related app, we definitely wouldn’t want this.

Another approach is to write values directly to the input device node files under /dev. This approach has slightly lower latency, but file writes are expensive anyway. On top of that, some devices/OEMs use different locations or names for the device nodes, which makes this approach a nightmare to implement.

Can we do better both in terms of consistency and latency? Definitely!

Luckily, there’s a class called InputManager in Android which has some interesting methods, although the methods of our interest are either private or hidden.

Now’s when Java’s Reflection API comes to the rescue. We’ll follow a series of hacky reflection API calls to get our input injection working.

First of all, we need the instance of InputManager class. For that, we’ll just invoke the getInstance method via reflection.
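A sketch of that reflective lookup (getInstance is hidden, hence the setAccessible call):

```java
// InputManager.getInstance() is a hidden static method.
Class<?> inputManagerClass = Class.forName("android.hardware.input.InputManager");
Method getInstanceMethod = inputManagerClass.getDeclaredMethod("getInstance");
getInstanceMethod.setAccessible(true);
Object inputManager = getInstanceMethod.invoke(null);
```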

Next, we need to make the obtain method of the MotionEvent class accessible. Then we'll get a reflective reference to the injectInputEvent method.
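Something like the following; the obtain overload chosen here is an assumption, and injectInputEvent(InputEvent, int) is the hidden method the framework itself uses:

```java
// Make one of MotionEvent's obtain(...) overloads accessible.
Method obtainMethod = MotionEvent.class.getDeclaredMethod("obtain",
        long.class, long.class, int.class, float.class, float.class, int.class);
obtainMethod.setAccessible(true);

// Grab the hidden injectInputEvent(InputEvent, int) method.
Method injectInputEventMethod = Class.forName("android.hardware.input.InputManager")
        .getMethod("injectInputEvent", InputEvent.class, int.class);
```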

We’re all set now, and just need to write the code to actually pass the touch events to the Android system. The code involves creating a MotionEvent object, and calling the injectInputEvent via reflection.

Another awesome thing about this approach is that the framework itself figures out whether the current event is a normal tap, a swipe, or a long press.

The full implementation of this approach is present in RemoteDroid –

Where can you create touch events?

Of course, a random user (every app runs as its own user) can't just create touch events anywhere on the screen. An app can create touch events only on views owned by that same app.

But there’s a catch, shell user (uid 200) can create touch events all over the screen.

Now the question becomes: how do you start your app so that its code is executed by the shell user? Either figure this out yourself or search my blog for something close to that. 😉

Take screenshot programmatically without root in Android


Here’s another post related to Android hacking.

One can find a lot of ways on the internet to take a screenshot of your own app, and that's pretty easy too; it's also allowed by the Android framework. Note that I'm not talking about the screenshots you take (as a user) by pressing Power + Volume Down (that screenshot is taken by SystemUI, which has extra privileges). Let's talk about the case where you want to take a screenshot programmatically from your own app/service and want to cover the whole screen.

Because of Android's security model, if you take a screenshot of the screen programmatically, the only views visible will be the ones created by your own app. It won't contain views created by any other app.

So, I present a way to programmatically take screenshots containing other apps' views from your own app. The trick is to use Android's MediaProjection API. Also note that this sets the minimum API level to 21.

What we’ll do is to render the Android’s display contents on a Surface using the MediaProjection API. But then the toughest part is to get the contents of this Surface and create a proper Bitmap out of that. There’s no direct way of doing that.

Over the past few days, I tried different ways and failed –

  1. Create the Surface from the encoder, then attach a decoder to the encoder output, and create a Bitmap from the decoder’s raw output. (Didn’t work and I wasn’t surprised)
  2. Create an OpenGL texture, create a Surface using that texture, then whatever is drawn on the Surface will automatically get passed to the OpenGL texture, and then call glReadPixels() to get raw pixel data to create the Bitmap. (Should’ve worked but for some reason, I was just getting a green colored image)
  3. Then I tried Android’s ImageReader API and that finally worked out.

Enough of the bullshit, let’s start with some code.

First of all, we need to initialize the object of the ImageReader class.
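A sketch of that initialization; width and height are assumptions here and should match the real display size:

```java
// Two buffered images are enough for single screenshots.
ImageReader imageReader = ImageReader.newInstance(
        width, height, PixelFormat.RGBA_8888, 2 /* maxImages */);
Surface surface = imageReader.getSurface();
```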

Now, we need to pass this surface to the MediaProjection API. Please go through this demo code to learn how to create the object of MediaProjection class.
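Roughly like this, assuming mediaProjection was obtained as in the linked demo and densityDpi matches the display:

```java
// Mirror the default display onto the ImageReader's surface.
mediaProjection.createVirtualDisplay("screencap",
        width, height, densityDpi,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
        imageReader.getSurface(), null /* callback */, null /* handler */);
```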

Implement the ImageReader.OnImageAvailableListener in your class and do the following –
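The listener body is missing from the original; the standard RGBA-plane-to-Bitmap conversion goes roughly like this:

```java
@Override
public void onImageAvailable(ImageReader reader) {
    Image image = reader.acquireLatestImage();
    if (image == null) return;
    Image.Plane plane = image.getPlanes()[0];
    ByteBuffer buffer = plane.getBuffer();
    // Rows may be padded; account for that when sizing the bitmap.
    int pixelStride = plane.getPixelStride();
    int rowPadding = plane.getRowStride() - pixelStride * image.getWidth();
    Bitmap bitmap = Bitmap.createBitmap(
            image.getWidth() + rowPadding / pixelStride,
            image.getHeight(), Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(buffer);
    image.close();
    // bitmap now holds the screenshot; crop away the padding columns if needed.
}
```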

Also, make sure you have the following permission in your manifest.

In case you’re too lazy and when everything setup for you, here’s a small library I created –

Execute Java code as a root user in Android


You can find a lot of ways on StackOverflow of executing shell commands as root in Android, and it usually goes as follows:
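The snippet is missing; the typical recipe (a sketch, needing a rooted device with a su binary) is:

```java
// Spawn a root shell and pipe commands into its stdin.
Process process = Runtime.getRuntime().exec("su");
DataOutputStream os = new DataOutputStream(process.getOutputStream());
os.writeBytes("ls /data\n"); // any shell command
os.writeBytes("exit\n");
os.flush();
process.waitFor();
```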

What if you want to execute some Java code which uses Android APIs as a root user ?

This is not as straightforward as executing a shell command with root.

The idea is to compile the Java class with a static main method just like a normal apk is compiled.

First of all, we need to create a class which is to be executed as the root user. This class can be placed along with the normal classes inside your Android app. This should look like this (of course, the name of the class can be anything) :
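A minimal sketch of such a class (the uid-printing body is my own illustration):

```java
public class Main {
    public static void main(String[] args) {
        // Executed by app_process; runs with the caller's uid,
        // i.e. root if app_process itself was started as root.
        System.out.println("Running as uid " + android.os.Process.myUid());
    }
}
```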

Note that the package name of the above class is “” which will be used to identify the above class later.

The hack is to start the above class just like the Android framework starts an app when you tap its launcher icon: we use the app_process binary to load our Java class. app_process is on the shell's PATH (on some devices it is invoked as app_process32).
But here's the trick: we will start app_process as the root user, which will in turn load our class, again with root as the executing user.

The above paragraph boils down to the following command, which is supposed to be executed on a rooted device.
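The command itself is missing from the original; a plausible form (APK path and placeholders are assumptions and differ per device/Android version) is:

```shell
# CLASSPATH points at your installed APK so app_process can find the class.
adb shell su -c "CLASSPATH=/data/app/<your.package.name>-1/base.apk \
    app_process /system/bin Main"
```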

In the above command, replace the package name and Main with your own package name and class name respectively.

That’s all you need and your Java class will get executed as the root user.

GSoC 2015 – The Processing Foundation


GSoC 2015 has come to an end. I was lucky to have such an intelligent mentor, Andres Colubri.

I worked on maintaining the Android mode of Processing. My GSoC project was around the following main goals :

  • Update the Android Mode to work with the updated processing base code.
  • Move PApplet from Activity to Fragment so that it can be embedded inside other apps.
  • Create a video library for the Android Mode of Processing.

Let’s start with the first sub task. It was all about laying down the ground work for the development of the later stages. Processing base went ahead and processing-android was left as it is. I fixed the build system and the Android mode itself so that sketches were at least being compiled without any problem.

The second task was to move PApplet from Activity to Fragment. The biggest benefit of this change is that users can now embed a Processing sketch in their own app. Since a Fragment has its own life cycle, the developer doesn't have to worry about any of the intricacies at all: they just create an object of their PApplet (now a subclass of Fragment) and add that Fragment to their Activity.
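Embedding then looks roughly like this (a sketch, assuming PApplet extends Fragment and MySketch is your PApplet subclass; names are illustrative):

```java
// Inside an Activity whose layout has a container view.
PApplet sketch = new MySketch();
getFragmentManager().beginTransaction()
        .add(R.id.sketch_container, sketch)
        .commit();
```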

For a demo of this feature head over to my Github – ProcessingAndroidDemo

The third and main subtask of my project was to create a video library for Processing for Android. The video library can take input either from the device's camera or from a video file stored locally on the device. Processing has its own image format called PImage, on which we can do all the processing operations. This library provides two classes, Capture and Movie, both children of the PImage class. For example, the Capture class represents the camera frame at any given instant. Since Capture extends PImage, we can do things like using shaders or applying the PImage as a texture over a 3D shape.

The video library performs really well because there is no GPU-to-CPU data transfer at any stage; everything is handled inside the GPU itself. I have also implemented loadPixels(), which gives the user access to the raw pixels, but it is pretty slow because the data has to be transferred from the GPU to the CPU.

For a demo of the video library, have a look here – Processing Video for Android library – Demo

Using variadic templates in C++


I’ve lately been working on C++ and my main task was to improve the design of the code and make it as generic as possible, removing the redundant code and so. To achieve this aim Templates and void pointers helped me a lot. In the following post I’m going to explain about variadic templates in C++.

Let’s me first give a brief intro about what templates actually are before moving on to variadic templates.

Quoting from Wikipedia: "Templates are a feature of the C++ programming language that allows functions and classes to operate with generic types. This allows a function or class to work on many different data types without being rewritten for each one". That is, templates let you write one generic definition of a function that can operate on different data types. For example:
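A minimal illustration (my own example, not from the original post):

```cpp
// One generic definition works for any type supporting operator<.
template <typename T>
T maxOf(T a, T b) {
    return (a < b) ? b : a;
}
```

maxOf(3, 7), maxOf(2.5, 1.5) and maxOf('a', 'z') all work without rewriting the function.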

I’m not continuing with templates as there’s already a lot of material about it on the internet.

One of the cool features of C++11 is what we call "variadic templates". They allow us to define functions with a variable number of template parameters. A typical function using variadic templates looks like this:
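The original listing is missing; a representative sketch (the body is my own illustration):

```cpp
#include <iostream>

// Args is a "template parameter pack": it can bind to zero or more types.
template <typename... Args>
void SampleFunction(Args... args) {
    // sizeof...() yields the number of arguments in the pack.
    std::cout << "called with " << sizeof...(args) << " arguments\n";
}
```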

This definition is pretty much like that of a function which accepts a variable number of parameters in its call, e.g. our old friend printf().

Most of the other things can be found on the official C++ blog :

I’m heading on how to execute some particular code for each template parameter passed to the function via it’s call.

Let me first show how a call to this function is made.
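The original call site is missing; assuming a variadic definition like the one discussed, both of these forms are valid:

```cpp
template <typename... Args>
void SampleFunction(Args... args) {}

void demo() {
    SampleFunction<int, double, char>(1, 3.14, 'c'); // explicit template arguments
    SampleFunction(1, 3.14, 'c');                    // types deduced by the compiler
}
```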

Note that in the above call, even when the data types are not provided explicitly, the compiler automatically deduces them from the parameters passed. Explicit template arguments are only necessary when they cannot be deduced from the parameters.

Now let us write the implementation of our SampleFunction. The problem here is that data types are not available to us in the form of some list upon which we can iterate. The trick is to expand this argument pack by passing it to some dummy function. The following snippet of code illustrates this :
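The snippet is missing from the original; a reconstruction of the dummy-function trick (helper names are my own):

```cpp
#include <cstddef>
#include <iostream>

// Helper run once per argument: prints its size and returns it, so the
// call can appear inside a pack expansion.
template <typename T>
std::size_t printSize(T t) {
    std::cout << sizeof(t) << ' ';
    return sizeof(t);
}

// Dummy function that swallows any number of arguments.
template <typename... Args>
void pass(Args...) {}

template <typename... Args>
void SampleFunction(Args... args) {
    // The pack expansion turns this into
    // pass(printSize(a1), printSize(a2), ...), one call per argument.
    pass(printSize(args)...);
}
```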

This code will print the size of every template parameter passed to the SampleFunction call. The above code works because the compiler expands the argument pack into the following form during compilation:

So the output of the following call

will be
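The original call and its output were lost; a self-contained reconstruction (sizes assume a typical 64-bit platform, and the print order is unspecified with this approach, as noted next):

```cpp
#include <cstddef>
#include <iostream>

template <typename T>
std::size_t printSize(T t) { std::cout << sizeof(t) << ' '; return sizeof(t); }

template <typename... Args>
void pass(Args...) {}

template <typename... Args>
void SampleFunction(Args... args) { pass(printSize(args)...); }

void demo() {
    // Prints the sizes of an int, a double and a char,
    // e.g. "4 8 1" on a typical 64-bit platform (order unspecified).
    SampleFunction(1, 3.14, 'c');
    std::cout << '\n';
}
```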

There is one downside to this approach, though: it doesn't guarantee that the argument pack is expanded in exact left-to-right order, because function arguments may be evaluated in any order.
If you want to iterate in exactly left-to-right sequence, the other approach is to use an initialiser list.
The following code does the job:
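A reconstruction of the ordered variant (helper name is my own; braced-init-list elements are guaranteed to be evaluated left to right):

```cpp
#include <cstddef>
#include <initializer_list>
#include <iostream>

template <typename T>
std::size_t printSize(T t) {
    std::cout << sizeof(t) << ' ';
    return sizeof(t);
}

template <typename... Args>
void SampleFunctionOrdered(Args... args) {
    // Elements of a braced-init-list are evaluated strictly left to right,
    // so printSize runs once per argument, in order.
    (void)std::initializer_list<std::size_t>{ printSize(args)... };
}
```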

I hope you liked the post.

How to install an app to /system partition ?


Note : This article is for you if you're making an app for rooted Android devices.

Normally, Android's package manager installs the apk file to the /data partition. But to access hidden APIs and gain extra privileges, one may want the app installed in the /system partition. For pre-KitKat devices the privileged folder is /system/app, whereas for KitKat and later devices it is /system/priv-app.

The trick to install your app to /system partition is to first install it the normal way, then on the first run of the app, move the apk to /system/priv-app folder (/system/app for pre-kitkat devices). The following snippet of code makes your life easy to achieve this.
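The original snippet is missing; a plausible command sequence looks like this (paths and the %s placeholders are assumptions, to be filled in via String.format() as described next):

```shell
# Remount /system read-write, copy the APK into the privileged folder,
# fix permissions, remount read-only, then relaunch the app.
mount -o rw,remount /system
cat /data/app/%s.apk > /system/priv-app/%s.apk
chmod 644 /system/priv-app/%s.apk
mount -o ro,remount /system
am start -n %s/%s
```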

Use String.format() in your code to format the above commands as shown :
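Something along these lines, assuming the commands live in a hypothetical COMMANDS constant with %s placeholders:

```java
// Order of arguments must match the placeholders in COMMANDS:
// APK file name (twice), then package name and activity name.
String formatted = String.format(COMMANDS,
        "MyApp", "MyApp",                 // hypothetical APK file name
        "com.example.myapp",              // hypothetical package name
        "com.example.myapp.MainActivity"  // hypothetical activity name
);
```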

Execute the above formatted String after replacing the apk name, package name and the activity name to yours as shown :
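With libsuperuser (mentioned below), that boils down to a single call; `formatted` here is assumed to hold the String.format() result:

```java
// Runs the commands in a root shell and returns their output lines.
List<String> output = Shell.SU.run(formatted);
```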

I use libsuperuser to execute the su commands on Android. How you execute these commands is up to you.

Working example :

Getting video stream from Android’s display


This is something people have tried to achieve in various ways. What they usually do is take screenshots at regular intervals and stitch them together into a video. What I'm doing here is quite different and much better than that approach.

So here I present a way to capture video frames from Android's default display and do further processing as one pleases.

I’ll broadly use two main APIs viz. MediaCodec (added in API level 16) and DisplayManager (added in API level 19). So this will limit our app to a minimum API level of 19 which is Kitkat. And further if you want to mirror the output of secure windows as well, then you’ll have to push your apk to /system/priv-app/ which will require having root access on your phone.

My logic is :

  • Create a video encoder.
  • Get an input Surface from the encoder using createInputSurface() method of the encoder object.
  • Pass this surface to DisplayManager so that the display manager routes its output to this surface.
  • Use the dequeueOutputBuffer() method, which returns the H.264-encoded frames of your video.
  • And if you want to get raw video frames you can further pass these AVC encoded frames to a video decoder and get the raw frames. However as of now I’m not covering that in this blog post.

Let’s start with the code :

First we need to create an encoder, configure it and get an input surface out of it.
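A sketch of the encoder setup; width, height and the bitrate/framerate values are assumptions to tune for your use case:

```java
// Configure an AVC encoder that takes its input from a Surface.
MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface();
encoder.start();
```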

We will then pass the above created surface to createVirtualDisplay() method.
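Roughly like this, assuming the encoder's input surface is in a variable named inputSurface and densityDpi matches the display:

```java
DisplayManager displayManager =
        (DisplayManager) context.getSystemService(Context.DISPLAY_SERVICE);
VirtualDisplay virtualDisplay = displayManager.createVirtualDisplay(
        "screen-mirror", width, height, densityDpi, inputSurface,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR);
```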

The DisplayManager will keep drawing the contents of the Android screen on our virtual display which in turn will feed the contents to video encoder through the surface.
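The drain loop referenced below is missing from the original; a sketch (the stopped flag is an assumption controlled by your own code):

```java
// Pull AVC-encoded frames out of the encoder as they become available.
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
ByteBuffer[] outputBuffers = encoder.getOutputBuffers(); // API 19-compatible
while (!stopped) {
    int index = encoder.dequeueOutputBuffer(bufferInfo, 10000 /* microseconds */);
    if (index >= 0) {
        // On API 21+ you can use encoder.getOutputBuffer(index) instead.
        ByteBuffer encodedData = outputBuffers[index];
        encodedData.position(bufferInfo.offset);
        encodedData.limit(bufferInfo.offset + bufferInfo.size);
        // ... write encodedData to a file, a socket, or a decoder ...
        encoder.releaseOutputBuffer(index, false);
    }
}
```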

The variable encodedData contains the AVC-encoded frame and is updated on every loop iteration.
Now it's up to you what you do with it. You can write it to a file, or stream it over the network (although that will require a lot more effort).
You can also pass it to a video decoder to get raw frames and convert them to a bitmap or something. I've implemented the passing of video frames to a video decoder in one of my projects here :

Make HTTP requests from SIM300 GSM module


SIM300 is a GSM module by Simcom.

And believe me, it's no less than a piece of shit. I was unlucky enough to get my hands on it. I spent 3-4 days figuring out and testing the correct sequence of AT commands needed to make HTTP requests via this module. In the following post I'm going to explain how to use it with an Arduino to send and receive data to a web server over HTTP.

Let’s start with the code :

First we need to create an object of the SoftwareSerial class; let's call it GPRS. The object must be created at the start of the program. The code for this is :
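A sketch of that setup (pin numbers follow the wiring described below):

```cpp
#include <SoftwareSerial.h>

// Pin 2 = Arduino RX (wired to the module's TX),
// pin 3 = Arduino TX (wired to the module's RX).
SoftwareSerial GPRS(2, 3);

void setup() {
    GPRS.begin(9600);   // SIM300's default baud rate
    Serial.begin(9600); // hardware serial, used for debug output
}
```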

The SoftwareSerial library ships as a default library with the Arduino IDE. Here, 2 and 3 are the pin numbers on the Arduino board where we connect our GSM module: pin 2 is used as the Rx pin and pin 3 as the Tx pin. The Rx of the Arduino should be connected to the Tx of the module, and the Tx of the Arduino to the Rx of the GSM module. The line GPRS.begin(9600) initialises the serial communication with the module, setting a baud rate of 9600, which is the default for the SIM300. The line Serial.begin(9600) initialises the Arduino's default hardware serial communication; its Rx and Tx are at pins 0 and 1 of the Arduino, and the output you see in the Serial monitor also goes over this default hardware serial connection.

Up to this point we have done our initialisation.

Let’s setup our HTTP request with the following code. I’ve provided comments at each line to tell what it does.

Upto this point we’ve initialized our connection to the web server. The following code makes the actual HTTP request.

In the above code you'll have to put in the domain name of your own server. Update the line yourphpfile.php?key=value as per your needs; you can add extra parameters if you want.

You can browse the whole code on my Github :

And lastly I would say: if you have a choice, go for the SIM900 GSM module instead. The SIM900 is way better and more reliable than the SIM300.

My work at Cube26


Cube26 has it’s innovations in the areas of Image Processing, Machine Vision, Machine Learning. They basically focus on gesture and image based controlling of the device. At this time Cube26 has partnerships with 6 Indian phone manufacturers including Micromax, Intex and Spice.

I’m in the Android development team of Cube26. At the time of writing this blog I’m working on the release of Micromax A290. We currently don’t have access to the whole source but some parts of the OS.

Below is the list of the features that I implemented/partly implemented in the Canvas A290 release :

  1. Low light reading flux mode : On the first day at Cube26 I started working on this flux mode. For the beta release I implemented it with the Launcher. Later I implemented it in the SystemUI.
  2. Stamina Mode : Ah, there was a lot of code involved in Stamina Mode. Its job was to kill all background apps, remove all apps from the recents, and disable GPS, WiFi and Bluetooth. At 14% battery it is enabled automatically, and at 4% it kills the phone's RF by enabling Airplane mode.
  3. Vault : Vault itself was part of a feature called Eyeris. With Eyeris, your phone scans your retina, and if it matches the owner's retina it automatically unlocks and decrypts the phone. I implemented the Eyeris functionality in the Gallery to hide images and in the messaging app to hide messages.
  4. Camera : For some of the later days I kept working on features in the Camera app, including "say cheese" to capture a selfie, an Anti-Shake mode, and a Front-and-Back mode. Implementing all of them took quite a while.
  5. Look away to pause : To tell you the truth, I didn't write its image-processing code; that was written by some other genius soul. I just implemented it in the Gallery app (it's one of the best features). While watching a video, if you look away somewhere else, the video automatically pauses.
  6. Auto Collage Maker : The core code of this feature was also written by someone else. I improved and enhanced it and merged it into the Gallery app.

Other projects:

  1. An app for LAVA tablets to search the internet for education-related content, given a topic as a query.
  2. A launcher to search your whole phone, providing suggestions in real time. Search was based on Lucene.