
Finally, a way to speed up Android emulator

The Android emulator's speed is a killer. Android developers know this all too well: regardless of the platform you use, be it Windows, Linux or Mac, and regardless of your hardware specification, the emulator that ships with the Android SDK crawls like a snail.

Thankfully, Intel has released a hardware-assisted virtualization engine, HAXM (Hardware Accelerated Execution Manager), which uses the hardware virtualization features of your CPU to boost the performance of the Android emulator. Here is how you can configure it:

1. First, enable Intel Virtualization Technology (VT-x) in your computer's BIOS settings. Different vendors place this option in different menus, so you may have to search the web for the exact steps for your machine. If you are not sure whether your CPU supports it at all, see the quick check below.
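A minimal sketch of such a check, assuming you are on Linux (on Windows you can use Intel's Processor Identification Utility instead):

# Count the CPU flags that indicate hardware virtualization support.
# A result greater than 0 means the CPU supports VT-x (vmx) or AMD-V (svm);
# the feature may still be disabled in the BIOS, though.
egrep -c '(vmx|svm)' /proc/cpuinfo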

2. Next, download and install the Intel Hardware Accelerated Execution Manager (HAXM) from Intel. During installation, you will be asked how much memory to reserve for HAXM; you can keep the default (1024 MB). You can verify that it is running as shown below.
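On Windows, a quick way to confirm that HAXM is installed and running is to query its service from an administrator command prompt (the service name intelhaxm below is what the installer typically registers; treat it as an assumption for your version):

REM Check the status of the Intel HAXM service (service name assumed).
REM The output should report "STATE : 4 RUNNING" if HAXM is active.
sc query intelhaxm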

3. Now go to your Android SDK directory and launch "SDK Manager.exe".

4. Under Extras, check "Intel x86 Emulator Accelerator (HAXM installer)", and also select the "Intel x86 Atom System Image" for your SDK version. (If you prefer the command line, see the sketch below.)
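For the command-line inclined, the android tool in the SDK's tools directory can list and install the same packages. The package names below are assumptions based on older SDK Tools releases; copy the exact names from the list output before installing:

# List every available package; --extended prints the names usable with --filter.
android list sdk --all --extended

# Install the HAXM installer extra and an x86 system image by name
# (names assumed; replace them with the ones printed above).
android update sdk --no-ui --all --filter extra-intel-Hardware_Accelerated_Execution_Manager,sys-img-x86-android-19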



5. Launch "AVD Manager.exe" from your Android SDK directory and create a new Virtual Device (AVD). Select Intel Atom (x86) for the CPU. You may just copy the settings from the image below.



That's all. Launch your AVD and feel the speed. :-)
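If you start the AVD from the command line instead of the AVD Manager, you can also switch on GPU emulation. When HAXM is actually in use, the emulator typically prints a line like "HAX is working and emulator runs in fast virt mode" at startup:

# Start the AVD created above with host GPU acceleration enabled.
emulator -avd IntelX86 -gpu on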

Note: Unfortunately, this will not work if your computer does not support hardware virtualization, or if you are on an AMD platform. Here is a list from Intel of the CPUs that support hardware virtualization (I hope your CPU is on it).

Feel free to comment with suggestions and corrections...
