
Take screenshot programmatically without root in Android

Here’s another post related to Android hacking.

There are plenty of guides on the internet that tell you how to take a screenshot of your own app, and that's pretty easy; the Android framework explicitly allows it. Note that I'm not talking about the screenshots you take (as a user) by pressing Power + Volume Down (that screenshot is taken by SystemUI, which has extra privileges). Let's talk about the case where you want to take a screenshot programmatically from your own app or service and capture the whole screen.

Because of Android's security model, if you take a screenshot of the screen programmatically, the only views visible in it will be the ones created by your own app. It won't contain views created by any other app.

So, I present a way to capture other apps' views programmatically from your own app. The trick is to use Android's MediaProjection API. Do note that this raises the minimum API level to 21.

What we'll do is render the Android display's contents onto a Surface using the MediaProjection API. The toughest part, then, is getting the contents of this Surface and creating a proper Bitmap out of them; there's no direct way to do that.

Over the past few days, I tried different ways and failed –

  1. Create the Surface from the encoder, then attach a decoder to the encoder output, and create a Bitmap from the decoder’s raw output. (Didn’t work and I wasn’t surprised)
  2. Create an OpenGL texture, create a Surface using that texture, then whatever is drawn on the Surface will automatically get passed to the OpenGL texture, and then call glReadPixels() to get raw pixel data to create the Bitmap. (Should’ve worked but for some reason, I was just getting a green colored image)
  3. Then I tried Android’s ImageReader API and that finally worked out.

Enough of the bullshit, let’s start with some code.

First of all, we need to initialize the object of the ImageReader class.
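A minimal sketch of that initialization, assuming it runs inside an Activity; the choice of RGBA_8888 and two buffered images is my assumption, not the only valid setup:

```java
import android.graphics.PixelFormat;
import android.media.ImageReader;
import android.util.DisplayMetrics;
import android.view.Surface;

// Measure the full screen so the capture matches it exactly.
DisplayMetrics metrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getRealMetrics(metrics);
int width = metrics.widthPixels;
int height = metrics.heightPixels;

// The ImageReader's Surface is what we'll hand to MediaProjection.
ImageReader imageReader = ImageReader.newInstance(
        width, height, PixelFormat.RGBA_8888, 2 /* maxImages */);
Surface surface = imageReader.getSurface();
```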

Now, we need to pass this surface to the MediaProjection API. Please go through this demo code to learn how to create the object of MediaProjection class.
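Once you have the MediaProjection instance (granted through the user's consent dialog), passing the Surface amounts to creating a virtual display that mirrors the screen onto it. A hedged sketch, reusing the width, height, and metrics from the ImageReader setup:

```java
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;

// "screenshot" is just a debug name; the flag mirrors the real display
// onto our ImageReader's Surface.
VirtualDisplay virtualDisplay = mediaProjection.createVirtualDisplay(
        "screenshot",
        width, height, metrics.densityDpi,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
        imageReader.getSurface(),
        null /* callback */, null /* handler */);
```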

Implement the ImageReader.OnImageAvailableListener in your class and do the following –
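Roughly, the listener acquires the latest Image, copies its single RGBA plane into a Bitmap, and compensates for row padding (the buffer's row stride is often wider than width × pixelStride). A sketch of what that can look like:

```java
import android.graphics.Bitmap;
import android.media.Image;
import android.media.ImageReader;
import java.nio.ByteBuffer;

@Override
public void onImageAvailable(ImageReader reader) {
    Image image = reader.acquireLatestImage();
    if (image == null) return;

    Image.Plane plane = image.getPlanes()[0];
    ByteBuffer buffer = plane.getBuffer();

    // Rows may be padded; create the bitmap at the padded width,
    // then crop it back down to the real screen width.
    int pixelStride = plane.getPixelStride();
    int rowPadding = plane.getRowStride() - pixelStride * width;

    Bitmap padded = Bitmap.createBitmap(
            width + rowPadding / pixelStride, height, Bitmap.Config.ARGB_8888);
    padded.copyPixelsFromBuffer(buffer);
    Bitmap screenshot = Bitmap.createBitmap(padded, 0, 0, width, height);

    image.close();
    // `screenshot` now holds the full-screen capture.
}
```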

Also, make sure you have the following permission in your manifest.

In case you're too lazy and want everything set up for you, here's a small library I created – https://github.com/omerjerk/Screenshotter

GSoC 2015 – The Processing Foundation

GSoC 2015 has come to an end. I was lucky to have such an intelligent mentor, Andres Colubri.

I worked on maintaining the Android mode of Processing. My GSoC project was built around the following main goals:

  • Update the Android Mode to work with the updated processing base code.
  • Move PApplet from Activity to Fragment so that it can be embedded inside other apps.
  • Create a video library for the Android Mode of Processing.

Let's start with the first subtask. It was all about laying the groundwork for the later stages. The Processing base had moved ahead while processing-android was left as it was. I fixed the build system and the Android mode itself so that sketches at least compiled without any problem.

The second task was to move PApplet from Activity to Fragment. The biggest benefit of this change is that users can now embed a Processing sketch in their own app. Since Fragment has its own lifecycle, the developer doesn't have to worry about any of the intricacies at all. The user just needs to create an object of their PApplet subclass (PApplet is now a child class of Fragment) and add this Fragment to his/her Activity.
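As a rough illustration of the embedding (the sketch class and the layout ID here are hypothetical, not from the demo):

```java
// Hypothetical sketch; PApplet itself is now a Fragment subclass.
public class MySketch extends PApplet {
    @Override
    public void draw() {
        background(0, 102, 153);
        ellipse(mouseX, mouseY, 50, 50);
    }
}

// In the host Activity (R.id.sketch_container is a placeholder ID):
MySketch sketch = new MySketch();
getFragmentManager().beginTransaction()
        .add(R.id.sketch_container, sketch)
        .commit();
```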

For a demo of this feature head over to my Github – ProcessingAndroidDemo

The third and main subtask of my project was to create a video library for Processing for Android. The video library can take input either from the device's camera or from a video file stored locally on the device. Processing has its own image format called PImage, on which all the usual Processing operations can be performed. This library provides two classes, Capture and Movie, both of them child classes of PImage. For example, the Capture class represents the camera frame at any given instant of time. Since Capture extends PImage, we can do things like applying shaders or using this PImage as a texture on a 3D shape.
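To make the "Capture is a PImage" point concrete, here's a minimal hedged sketch; the names follow the desktop Processing video library's conventions, which this port mirrors:

```java
import processing.video.*;

Capture cam;

void setup() {
  size(640, 480, P2D);
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();  // pull the newest camera frame
  }
  // Because Capture extends PImage, it can be drawn directly
  // or bound as a texture on a 3D shape.
  image(cam, 0, 0);
}
```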

The video library performs really well because there is no GPU-to-CPU data transfer at any stage; everything is handled inside the GPU itself. I have also implemented loadPixels(), which gives the user access to the raw pixels, but it is pretty slow because the data has to be transferred from the GPU to the CPU.

For a demo of the video library, have a look here – Processing Video for Android library – Demo