
Android - Using Async tasks and BroadcastReceivers

The Android ecosystem spawns a UI thread on which the main execution of your application occurs. A developer should never perform heavy operations on this thread (such as reading large amounts of data from SQLite, or storing or reading large bitmaps or files from storage), as doing so can slow the application down or, in the worst case, cause ANR (Application Not Responding) issues.

The Android SDK provides a class called AsyncTask which enables the developer to run heavy tasks on a separate thread, preventing the UI thread from being overloaded. This class allows the developer to perform operations in the background and publish the results to the UI thread, where the UI can then be updated accordingly.

This is the blueprint for creating an AsyncTask:

import android.os.AsyncTask;

public class SimpleAsyncTask extends AsyncTask<Void, Void, Void> {

    public SimpleAsyncTask() {
    }

    protected void onPreExecute() {
        // Operations to be performed before the task is run
    }

    protected Void doInBackground(Void... params) {
        // Main operations that the task is supposed to do
        return null;
    }

    protected void onPostExecute(Void o) {
        // Operations to be performed once the task has been completed
    }
}

As you can see the AsyncTask provides different methods which can be overridden to perform additional operations. For instance, onPreExecute and onPostExecute can be used to perform some operations before and after the task gets completed respectively. The main method which needs to be overridden in the AsyncTask is called doInBackground. This is the method where all the operations that need to run on a background thread are placed. They could be HTTP calls, SQLite database reads or any other tasks which are too heavy for the UI thread to perform.

In the definition of the AsyncTask
SimpleAsyncTask extends AsyncTask<Void, Void, Void>
we extend the AsyncTask class and supply three type parameters. In the above example I have used Void for all three. The first parameter defines the type of argument that doInBackground accepts
protected Void doInBackground(Void... params)
So if the first parameter of SimpleAsyncTask were Boolean, this is how doInBackground would look
protected Void doInBackground(Boolean... params)
The triple dot (varargs) signifies that you can pass zero or more objects (or an array of them) to that method.
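
The varargs behavior can be illustrated with a plain Java method (a standalone sketch, independent of Android; the `count` name is just for illustration):

```java
public class VarargsDemo {
    // A varargs parameter accepts zero or more arguments, or an explicit array.
    static int count(Boolean... params) {
        return params.length;
    }

    public static void main(String[] args) {
        System.out.println(count());                    // zero arguments
        System.out.println(count(true, false));         // two arguments
        System.out.println(count(new Boolean[]{true})); // an explicit array
    }
}
```

Inside the task, whatever arguments are passed to execute() arrive in doInBackground as this array.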

The AsyncTask class also provides another method called onProgressUpdate. The second type parameter is used as this method's input type. For example, if Integer were used as the second parameter instead of Void, this is how the signature would look.
protected void onProgressUpdate(Integer... values)
onProgressUpdate(Progress...) is invoked on the UI thread after a call to publishProgress(Progress...). The timing of the execution is undefined. This method is used to display any form of progress in the user interface while the background computation is still executing. For instance, it can be used to animate a progress bar or show logs in a text field.
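
The publishProgress/onProgressUpdate relationship can be sketched in plain Java with a callback (a simplified analog, not the Android API; `runTask` and the callback name are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntConsumer;

public class ProgressDemo {
    // Analog of doInBackground: performs the work and invokes the progress
    // callback, just as publishProgress(...) triggers onProgressUpdate(...).
    static void runTask(int steps, IntConsumer onProgressUpdate) {
        for (int i = 1; i <= steps; i++) {
            // ... a slice of the heavy work would happen here ...
            onProgressUpdate.accept(i * 100 / steps); // percentage complete
        }
    }

    public static void main(String[] args) {
        List<Integer> updates = new ArrayList<>();
        runTask(4, updates::add);
        System.out.println(updates); // [25, 50, 75, 100]
    }
}
```

In real AsyncTask code the callback additionally hops to the UI thread, which is why onProgressUpdate is safe for driving a progress bar.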

The third parameter specifies the return type of the doInBackground method and the input type of the onPostExecute method. For example, if the third parameter were Boolean, the AsyncTask would look like this
import android.os.AsyncTask;

public class SimpleAsyncTask extends AsyncTask<Void, Boolean, Boolean> {

    protected void onPreExecute() {
        // Operations to be performed before the task is run
    }

    protected Boolean doInBackground(Void... params) {
        // Main operations that the task is supposed to do
        return null;
    }

    protected void onPostExecute(Boolean o) {
        // Operations to be performed once the task has been completed
    }
}

The next question that should come to any developer's mind is: how do we communicate the result of this AsyncTask back to the UI thread? This is where BroadcastReceivers come into play.

We normally define a BroadcastReceiver inside the activity and have the AsyncTask broadcast the result to this receiver. Consider the following code where a BroadcastReceiver has been created in the Activity itself
public class SampleActivity extends Activity {

    private SimpleAsyncTask simpleAsyncTask;

    public static final String FILTER = "";

    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        registerReceiver(new MyReceiver(), new IntentFilter(FILTER));

        simpleAsyncTask = new SimpleAsyncTask(getApplicationContext());
        simpleAsyncTask.execute(/* Pass in some params here */);
    }

    public class MyReceiver extends BroadcastReceiver {
        public void onReceive(Context context, Intent intent) {
            Bundle extras = intent.getExtras();
            if (extras != null) {
                Toast.makeText(context, "Success - " + extras.getBoolean(SimpleAsyncTask.SUCCESS), Toast.LENGTH_LONG).show();
            }
        }
    }
}

BroadcastReceivers create a pub-sub style pattern: the Activity subscribes to a channel identified by the FILTER string, and the AsyncTask publishes the result to the same channel. The
public void onReceive(Context context, Intent intent)
method in the BroadcastReceiver receives an Intent from the AsyncTask with the result, and we can place code there to decide what to do when the AsyncTask completes. This is how the AsyncTask will look
public class SimpleAsyncTask extends AsyncTask<Void, Integer, Boolean> {

    private Context applicationContext;

    public static final String SUCCESS = "";

    public SimpleAsyncTask(Context context) {
        this.applicationContext = context;
    }

    protected void onPreExecute() {
        // Operations to be performed before the task is run
    }

    protected Boolean doInBackground(Void... params) {
        Boolean success = false;
        // Main operations that the task is supposed to do;
        // on completion they should set the success flag to true
        return success;
    }

    protected void onProgressUpdate(Integer... values) {
    }

    protected void onPostExecute(Boolean success) {
        // Operations to be performed once the task has been completed
        Intent intent = new Intent();
        intent.setAction(SampleActivity.FILTER);

        Bundle bundle = new Bundle();

        if (success) {
            bundle.putBoolean(SUCCESS, true);
        } else {
            bundle.putBoolean(SUCCESS, false);
        }

        intent.putExtras(bundle);
        applicationContext.sendBroadcast(intent);
    }
}

The main difference lies in the SimpleAsyncTask constructor, to which we pass the Application Context obtained via getApplicationContext(). We never pass in the Activity context (the reason why will be explained later in this document). The Context object has a method called sendBroadcast which is used to broadcast an Intent to the receiver (MyReceiver) defined previously. This Intent has to have its action set to the FILTER string from the Activity class (making the AsyncTask the publisher in the pub-sub pattern), and we can pass any data from the AsyncTask to the Activity via the Bundle object. In the above example I am passing back a boolean indicating whether the task completed successfully.
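
The filter/receiver relationship can be sketched in plain Java (a simplified stand-in for registerReceiver/sendBroadcast to illustrate the pub-sub channel, not the Android implementation; all names here are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class TinyBroadcast {
    // filter string -> list of subscribed receivers
    private final Map<String, List<Consumer<Map<String, Object>>>> receivers = new HashMap<>();

    // Analog of registerReceiver(receiver, new IntentFilter(filter))
    public void register(String filter, Consumer<Map<String, Object>> receiver) {
        receivers.computeIfAbsent(filter, f -> new ArrayList<>()).add(receiver);
    }

    // Analog of sendBroadcast(intent) where the intent's action is the filter
    public void sendBroadcast(String filter, Map<String, Object> extras) {
        for (Consumer<Map<String, Object>> r : receivers.getOrDefault(filter, List.of())) {
            r.accept(extras);
        }
    }

    public static void main(String[] args) {
        TinyBroadcast bus = new TinyBroadcast();
        // The "Activity" subscribes to the FILTER channel ...
        bus.register("com.example.FILTER", extras ->
                System.out.println("Success - " + extras.get("SUCCESS")));
        // ... and the "AsyncTask" publishes its result on the same channel.
        bus.sendBroadcast("com.example.FILTER", Map.of("SUCCESS", true));
    }
}
```

The Bundle plays the role of the extras map: the publisher never needs a reference to the subscriber, only to the shared channel name.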

The reason for passing in the Application Context to the AsyncTask
We pass the Application Context to the AsyncTask instead of the Activity (Activity inherits from Context) because the Activity can get destroyed due to configuration changes (for example, if the user changes the orientation from portrait to landscape or slides out a physical keyboard) even when the AsyncTask is running. But the Application Context will persist across these events and lives until the system kills the app.

How to cancel an AsyncTask
The easiest way is to call the cancel method on the AsyncTask with true as a parameter

simpleAsyncTask.cancel(true);

and now in the doInBackground function of our AsyncTask we can call the

isCancelled()

method to check whether the task was cancelled, and take the appropriate action.
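
The cooperative cancellation that cancel(true)/isCancelled() implement can be sketched in plain Java (a standalone analog; the class and method names mirror AsyncTask's but this is not the Android class):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class CancellableTask {
    private final AtomicBoolean cancelled = new AtomicBoolean(false);

    // Analog of asyncTask.cancel(true)
    public void cancel() {
        cancelled.set(true);
    }

    // Analog of isCancelled() inside doInBackground
    public boolean isCancelled() {
        return cancelled.get();
    }

    // Analog of doInBackground: returns how many steps actually ran
    public int doInBackground(int steps) {
        int done = 0;
        for (int i = 0; i < steps; i++) {
            if (isCancelled()) {
                break;      // stop early instead of finishing all steps
            }
            done++;         // one slice of the heavy work
            if (done == 3) {
                cancel();   // simulate cancellation arriving mid-task
            }
        }
        return done;
    }

    public static void main(String[] args) {
        System.out.println(new CancellableTask().doInBackground(10)); // prints 3
    }
}
```

The key point is that cancellation is cooperative: the background work must poll isCancelled() at safe points, or the task will run to completion regardless.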


The AsyncTask pattern is one possible solution for performing heavy tasks in the background on a separate thread so that we do not overload the UI thread. Continue Reading

Architecting a radio player application for the Android

1. What is Android?

Android is the world's most popular mobile platform. It powers hundreds of millions of mobile devices in more than 190 countries around the world. It has the largest installed base of any mobile platform and is growing fast - every day another million users power up their Android devices for the first time and start looking for apps, games, and other digital content.

Android gives us a software platform for creating apps and games for Android users everywhere, as well as an open marketplace for distributing to them instantly.

The Android system provides various APIs (Application Programming Interfaces) for building applications. We declare the UI in lightweight sets of XML resources: one set for parts of the UI common to all form factors and other sets for optimizations specific to phones or tablets. The XML files are associated with different types of Activities or Fragments. At runtime, Android inflates these XML files into View classes to create the UI of the application.

2. About this document

A radio is a type of music player where the tracks to be played are decided on the fly and played on the device in queue fashion, i.e. one after the other. This paper will not discuss the algorithm behind song selection, but rather the architecture of an application for playing the tracks inside the Android ecosystem.

The Android SDK enables us to create applications which are inherently multithreaded and which, if not designed correctly, can lead to several synchronization and ANR (Application Not Responding) issues. For example: the UI thread needs to be updated even when the application is in the paused state; the UI thread of the radio player needs to be updated when a new track is to be played; and the UI thread must not be overloaded by spawning too many threads and objects (such as media player objects), which can cause an ANR issue. These issues can lead to an inconsistent user experience. This paper provides an overview of the various design patterns and some implementation details for creating a publisher-subscriber based radio application for Android which makes sure we do not run into the issues mentioned above.

To design a radio we need a server which will return the different tracks to be played in the queue. The server will return a streaming URL or some track information using which we could get the audio data from some music provider like Spotify, Soundcloud or iTunes. The server in this case will be treated like a black box whose function will be to simply provide the next track to be played. A typical radio player will need some basic components from the Android Ecosystem to function. They include the Activity, Service, BroadcastReceivers, IntentService, SQLite, AsyncTask and SharedPreferences for saving data.

3. Different Components

1. Activity - An activity is a single, focused thing that the user can do. Almost all activities interact with the user, so the Activity class takes care of creating a window in which you can place your UI with Activity.setContentView(View). The UI in this case will be either a ListView or a GridView showing the queued tracks and the track currently playing.

2. Service - A Service is an application component representing either an application's desire to perform a longer-running operation while not interacting with the user or to supply functionality for other applications to use. Each service class must have a corresponding <service> declaration in its package's AndroidManifest.xml. Services can be started with Context.startService() and Context.bindService(). In this case we will create a long lived background service which will play the music.

3. SQLite - The Android framework provides SQLite database management classes that an application would use to manage its own private database. We can manage SQLite using a class that extends SQLiteOpenHelper, which comes with a constructor and two required methods; onCreate and onUpgrade. The SQLiteOpenHelper checks whether the database exists and, if not, will call the onCreate method. If the database does exist, it will check whether the existing database version number differs from the one implemented in the constructor, so as to determine if the database has been updated. If it has, the onUpgrade method will be called.

4. BroadcastReceiver - BroadcastReceivers simply respond to broadcast messages from other applications or from the system itself. These messages are sometimes called events or intents. For example, applications broadcast intents to let other applications know that some data has been downloaded to the device and is available for them to use; a BroadcastReceiver listens for these intents and, when it receives them, initiates the appropriate action.

5. IntentService - IntentService is a base class for Services that handle asynchronous requests (expressed as Intents) on demand. Clients send requests through startService(Intent) calls; the service is started as needed, handles each Intent in turn using a worker thread, and stops itself when it runs out of work.

6. AsyncTask - AsyncTask enables proper and easy use of the UI thread. This class allows you to perform background operations and publish results on the UI thread without having to manipulate threads and/or handlers.
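
The version check that SQLiteOpenHelper performs (item 3 above) can be sketched in plain Java; this is just the decision logic, with illustrative names, not the Android class:

```java
public class OpenHelperSketch {
    // Mirrors the decision SQLiteOpenHelper makes when the database is opened:
    // no database yet -> onCreate; older version on disk -> onUpgrade.
    static String open(boolean databaseExists, int existingVersion, int codeVersion) {
        if (!databaseExists) {
            return "onCreate";
        }
        if (existingVersion < codeVersion) {
            return "onUpgrade";
        }
        return "open";
    }

    public static void main(String[] args) {
        System.out.println(open(false, 0, 2)); // first launch -> onCreate
        System.out.println(open(true, 1, 2));  // schema changed -> onUpgrade
        System.out.println(open(true, 2, 2));  // up to date -> open
    }
}
```

The codeVersion here corresponds to the version number passed to the SQLiteOpenHelper constructor.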

4. Approach

We create a few basic components here -

a) An Activity for displaying the UI.

b) A Music Player Background Service which will be responsible for taking the unique track id or the streaming url to fetch the audio stream from our servers or Spotify or Soundcloud or iTunes.

c) An IntentService for getting the next track to play in the radio.

d) A BroadcastReceiver inside the Music Player Background Service.

e) Another BroadcastReceiver inside the Music Player Background Service.

f) A 3rd BroadcastReceiver inside the UI Activity.

  • When the activity gets created we start the Music Player Background Service, which will be a long lived background service. The Android service lifecycle states that the Android ecosystem can kill the service and recreate it based on the return value of onStartCommand. START_STICKY should be the return value of onStartCommand, which tells the Android OS to recreate the Music Player Service if it is killed (calling onStartCommand again with a null Intent). But we do not want the Android OS to kill this service at all, so we promote it to a foreground service to make sure the OS will never kill the Music Player Service.

  • Now, with the above setup, we have a service which will basically run forever, or until the application has been force killed (by going to the recently used apps list and killing the application from there).

  • The Activity and the Music Player Background Service will have to be updated when a new track is fetched by the IntentService. We also need to have some controls on the UI for the application to pause, play or skip the current track being played.

  • To achieve this bi-directional communication between an Activity and the Music Player Service and multicast communication between the IntentService, Music Player Service and Activity (where the IntentService sends a multicast message to the Background Service and Activity) we use the BroadcastReceivers and the LocalBroadcastManager.sendBroadcast(intent).

  • We create two BroadcastReceivers inside the Music Player Service,

  1. one for receiving the pause or skip events that the Activity publishes for controlling the media playbacks and

  2. the other one for receiving the new track information that the IntentService publishes whenever the IntentService has been started to get a new track from the server.

  • We create a third BroadcastReceiver inside the UI Activity for receiving events to update the UI when the IntentService publishes a new track.

  • Whenever the radio needs another track from the server side, it issues a simple HTTP call via the IntentService to get the next track.
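
The queue-and-worker behavior of the IntentService described above can be sketched in plain Java (a simplified stand-in built on a single-threaded executor, not the Android class; names are illustrative):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class IntentServiceSketch {
    // A single worker thread, like IntentService's internal worker thread.
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final List<String> handled = new CopyOnWriteArrayList<>();

    // Analog of startService(intent): the request is queued and handled
    // one at a time, in order, off the caller's thread.
    public void startService(String intent) {
        worker.submit(() -> handled.add("handled:" + intent));
    }

    // Analog of the service stopping itself when it runs out of work.
    public List<String> shutdownAndGetHandled() {
        worker.shutdown();
        try {
            worker.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return handled;
    }

    public static void main(String[] args) {
        IntentServiceSketch service = new IntentServiceSketch();
        service.startService("track-1");
        service.startService("track-2");
        System.out.println(service.shutdownAndGetHandled()); // [handled:track-1, handled:track-2]
    }
}
```

Because there is a single worker, requests are always handled serially and in submission order, which is exactly the IntentService guarantee the architecture relies on.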

5. Drawback with the approach

  • There is one drawback to this approach. The BroadcastReceiver in the Activity which listens for UI updates from the IntentService has to be unregistered when the Activity is paused, i.e. when onPause() has been called on the Activity.

  • Consider this scenario - the Music Player Service calls startService on the IntentService to fetch a new track from the server. The IntentService successfully gets the track and publishes an event to the Music Player Service and the Activity. The Music Player Service will successfully get this event, but the Activity will not: because the BroadcastReceiver inside the Activity has been unregistered, the Activity misses the event and cannot update the UI.

6. How to solve this issue?

  • Here is where SQLite comes into play. When new track information has been fetched via the IntentService, we do two things:

  1. Send the broadcasts to the Music Player Service and the Activity, the same as before.

  2. Also store that information in SQLite; each track has its own unique id (unique to that Android device), and the state of the current track being played (by state I mean the unique track id of the currently playing track) is stored inside a SharedPreference.

  • This happens even when the activity is paused, since the SQLite insertion occurs inside the IntentService, which runs on a separate worker thread and has no connection to the UI thread.

  • Now, when onResume is called on the Activity, we create an AsyncTask to retrieve the track info and update the UI. This AsyncTask uses the SharedPreference state (the unique track id) to retrieve from SQLite all the tracks after that track id (i.e. all the tracks after the currently playing track).

  • The SQLite db is like a central source of truth.

  • The reason for using AsyncTasks to update the UI is that the getter functions the Android framework provides for retrieving information from SQLite are synchronous; if there are 100 or more records to retrieve from SQLite, this could block the UI thread and the ANR error dialog would be displayed.
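
The catch-up retrieval described above can be sketched in plain Java. A hypothetical tracksAfter helper stands in for the SQLite query, and lastPlayedId for the value read from SharedPreferences:

```java
import java.util.ArrayList;
import java.util.List;

public class CatchUpQuery {
    // Analog of the AsyncTask's doInBackground: given all tracks stored in
    // SQLite (ordered by their device-unique id) and the last-played id saved
    // in SharedPreferences, return every track queued after it.
    static List<String> tracksAfter(List<String> allTracks, String lastPlayedId) {
        int index = allTracks.indexOf(lastPlayedId);
        if (index < 0) {
            return new ArrayList<>(allTracks); // unknown state: resync everything
        }
        return new ArrayList<>(allTracks.subList(index + 1, allTracks.size()));
    }

    public static void main(String[] args) {
        List<String> stored = List.of("t1", "t2", "t3", "t4");
        // "t2" was playing when the Activity paused; t3 and t4 were missed.
        System.out.println(tracksAfter(stored, "t2")); // [t3, t4]
    }
}
```

In the real application this logic runs inside doInBackground, and onPostExecute hands the resulting list to the UI.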

7. Flow Diagram


The above diagram covers 3 flows -

  1. The application gets a new track to play
     a) Music Player Service calls startService on the IntentService.
     b) IntentService makes an HTTP call to the server to get the next track to play.
     c) The server responds with a track.
     d) The IntentService publishes a message to the new track inbound BroadcastReceiver and the UI update inbound BroadcastReceiver.
     e) The new track inbound BroadcastReceiver publishes the new track received in step (d) to the music player inbound BroadcastReceiver inside the Music Player Service and also stores that track into SQLite.

  2. The UI of the application sends play, pause or skip events to the Music Player Service.

  3. The UI of the application checks for tracks it may have missed while it was in the paused state.
     a) When onResume() is called on the Activity, an AsyncTask is started to update the UI. The AsyncTask queries SQLite.
     b) The tracks that were missed while the UI update inbound BroadcastReceiver was unregistered in the paused state are retrieved from SQLite by the AsyncTask.

8. Conclusions

The Android ecosystem enforces the various restrictions listed in this paper, but the architecture described above leverages the different components and APIs the Android SDK provides to offer a solution that helps any developer who wants to create this type of application bypass those restrictions.

Continue Reading

Configuration Management With CloudFormation

If you are using AWS, then you already know how useful CloudFormation is for creating application infrastructure. Create a cluster of servers, load balancers, DNS entries, security groups, and install your application all in a single click or command line.

But what about after the stack is created and the application is deployed? How do you manage configuration changes? Maybe you need to deploy a new version of your application. Or just change a configuration option and restart a service. Do we really need to build an entirely new CloudFormation stack just for these changes? Continue Reading

Android - creating a long living Foreground Service

A Service is an application component that can perform long-running operations in the background and does not provide a user interface. Another application component can start a service and it will continue to run in the background even if the user switches to another application. The service lifecycle states that the Android Operating System can kill a service at any point and recreate it depending on the value returned by onStartCommand. The different return values of onStartCommand are

  1. START_NOT_STICKY[1] - If the system kills the service after onStartCommand() returns, do not recreate the service, unless there are pending intents to deliver. This is the safest option to avoid running your service when not necessary and when your application can simply restart any unfinished jobs.
  2. START_STICKY[2] - If the system kills the service after onStartCommand() returns, recreate the service and call onStartCommand(), but do not redeliver the last intent. Instead, the system calls onStartCommand() with a null intent, unless there were pending intents to start the service, in which case, those intents are delivered. This is suitable for media players (or similar services) that are not executing commands, but running indefinitely and waiting for a job.
  3. START_REDELIVER_INTENT[3] - If the system kills the service after onStartCommand() returns, recreate the service and call onStartCommand() with the last intent that was delivered to the service. Any pending intents are delivered in turn.
This article describes how to create a foreground service, a type of service which is not a candidate for the system to kill even when the device is low on memory. A foreground service must provide a notification for the status bar, which is placed under the "Ongoing" heading; this means the notification cannot be dismissed unless the service is either stopped or removed from the foreground. A foreground service is created by calling the startForeground method inside the Service class definition.

The following is an implementation of how to create a foreground service which would run forever
public class SampleService extends Service {

    protected Integer NOTIFICATION_ID = 23213123; // Some random integer

    private LoadNotification loadNotification;

    public IBinder onBind(Intent intent) {
        return null;
    }

    public int onStartCommand(Intent intent, int flags, int startId) {

        loadNotification = new LoadNotification("someTitle", "someMessage");
        loadNotification.notifyMessage();

        return START_STICKY;
    }

    class LoadNotification {

        private String titleMessage;
        private String textMessage;

        public LoadNotification(String titleMessage, String textMessage) {
            this.titleMessage = titleMessage;
            this.textMessage = textMessage;
        }

        public void notifyMessage() {
            NotificationCompat.Builder builder = getNotificationBuilder(ActivityClass.class);

            startForeground(NOTIFICATION_ID, builder.build());
        }

        protected NotificationCompat.Builder getNotificationBuilder(Class clazz) {
            final NotificationCompat.Builder builder = new NotificationCompat.Builder(getApplicationContext());

            builder.setSmallIcon(R.drawable.some_icon_id);  // icon id of the image
            builder.setContentTitle(titleMessage);
            builder.setContentText(textMessage);

            Intent foregroundIntent = new Intent(getApplicationContext(), clazz);
            foregroundIntent.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP
                    | Intent.FLAG_ACTIVITY_SINGLE_TOP);

            PendingIntent contentIntent = PendingIntent.getActivity(getApplicationContext(), 0, foregroundIntent, 0);
            builder.setContentIntent(contentIntent);

            return builder;
        }
    }
}

Before calling the startForeground method we need to implement onBind and onStartCommand. The onBind[4] method gets called when another component wants to bind with the service (such as to perform RPC) by calling bindService(). In the implementation of this method you must provide an interface that clients use to communicate with the service, by returning an IBinder. You must always implement this method, but if you don't want to allow binding you should return null, which is what I have done here to create the forever running service.

The main section of the code above is the LoadNotification class which creates the foreground service when the notifyMessage method is called.

The notifyMessage method creates a builder object of the NotificationCompat.Builder type (a class for creating notifications) and assigns an icon, a title and a message to the notification.

We also have to set some flags on the Intent that the notification will launch, namely
  1. FLAG_ACTIVITY_CLEAR_TOP[5] - If set, and the activity being launched is already running in the current task, then instead of launching a new instance of that activity, all of the other activities on top of it will be closed and this Intent will be delivered to the (now on top) old activity as a new Intent.
  2. FLAG_ACTIVITY_SINGLE_TOP[6] - If set, the activity will not be launched if it is already running at the top of the history stack.
So now when the notification is clicked, the "clazz" component will be opened.

With this builder object we call the startForeground with a NOTIFICATION_ID. The NOTIFICATION_ID is a unique integer for that notification.

To start the above service we have to call the startService along with an intent. Example
startService(new Intent(SomeComponent.this, SampleService.class))

The service also has to be declared in the AndroidManifest.xml, for example
<service android:name=".SampleService" />
Stopping the service safely
The safest way to stop the foreground service is to call the stopForeground method, passing true so that the notification with the NOTIFICATION_ID mentioned previously is also removed.

[4] onBind -
[6] FLAG_ACTIVITY_SINGLE_TOP -

Continue Reading

Building a VR App in 14 Days - A Designer's First Time Designing and Building with Unity3D

At AOL Alpha we constantly explore new and emerging platforms. One of the hottest new platforms is Virtual Reality, so we've begun to experiment with experiences for VR. My most recent task proved to be a challenge and tested my patience, but was extremely rewarding. The challenge was: build a demo version of a VR app on my own in two weeks. It needed to be a functional Google Cardboard demo app, loosely based on AOL's Autoblog site, with the basic functionality of allowing a user to select from a list of cars and visualize the inside of that car in 360°, and it needed to be finished in time for AOL's internal VR Summit in 14 days.

To a seasoned game designer this probably sounds easy. I am by no means a VR expert. I'm a mobile app designer, new to using Unity 3D, with limited knowledge of scripting.

Since VR is still in its infancy, design and development is challenging when you begin. I spent some down time between projects researching VR design using Unity3D engine and quickly found it's like the wild west - it lacks rules, structure, and most of all, information. Creating my demo often felt like fishing in a polluted lake - you keep throwing out your line and reeling it in, only to find a bunch of trash stuck to your hook and have to start over again.

This is a brief overview of my approach and how I learned new skills, found solutions and hacked a demo together.

Continue Reading

Solving for A New Era of VR Fragmentation

Ever feel like you've been thrown back in time? When I look at the current Virtual Reality landscape, I sometimes feel like I am stuck in 2009. That was a time when mobile was truly an "emerging technology" and the buzzword on everyone's lips was "fragmentation." Virtual Reality in 2016 is an even more fragmented platform than mobile was back then. With Google, Facebook, Sony, Microsoft, Samsung, HTC, Valve and many more all vying for elbow room, developers can be left trying to figure out the best direction to go in navigating this congested mess.

Continue Reading

Cross Account AWS Deployments In Jenkins

There are plenty of tools for creating resources and deploying applications in AWS from Jenkins. But when application environments span multiple AWS accounts, it can be a little more complicated. As with all things AWS and Jenkins, there are several ways to accomplish the same goal. Here is one approach that we've used.

Continue Reading

Taming Callback Hell in Node.js


As you may know, JavaScript uses a single-threaded event loop that processes queued events. If you were to execute a long-running task within a single thread, then the process would block, causing other events to have to wait to be processed. To solve this blocking problem, JavaScript relies heavily on callbacks, which are functions that run after a potentially long-running task has finished.

While the concept of callbacks is great in theory, in many situations it can lead to difficult-to-manage and difficult-to-read code. In addition, handling errors becomes hard. In this post, we'll look at how JavaScript promises can provide a solution, and illustrate it with a real-world example.
Continue Reading

The Practical Benefits of Database Column Prefixing

If you've ever worked with SQL / RDBMS, then you may have stumbled across an argument that it's bad practice to prefix your columns and/or table names. More than a few people will side with the idea that you should never prefix columns. The argument often contests that standards lean toward support of a non-prefixing approach. Although that may be true, I would argue that prefixing columns is a more practical approach for the average developer.

I have come to believe that prefixing columns is a cleaner solution in terms of consistency and code comprehension--whereas the benefits of the contrasting approaches discussed here are more theoretical and conventional in nature, and can have fragile footing in practical design.

Continue Reading

Hydro - A Real-time Data Manipulation Framework

Hydro, a real-time data manipulation framework, or Extract-Transform-Render (ETR) framework as we used to call it, is an open source project developed at AOL. In order to better understand what the framework does, let's have a look at some use cases that many engineers are tasked with solving nowadays:

  • Quite often we find that we need to apply extra business logic after the data is loaded and before it is rendered. This logic sometimes gets written multiple times, for different sub-systems.

  • Before presenting the data, some JOIN needs to take place, yet this JOIN requires more than a single data source. For instance, the results from an analytic database requires some mapping data that exists in a metadata database. Usually, since these are two different databases, this is overcome by loading the data from the metadata database to the analytic one. For systems utilizing the lambda architecture - doing so breaks the architecture. Alternatively, some ETL tool could be used to pre-materialize the results, but this makes the results stale between ETL executions. This is not efficient, could be costly and, even worse, hurts the user experience.

  • There's logic that needs to be applied on the actual query, based on parameters to the query, and this logic would require rewriting parts of the query. For instance, based on the selected dimensions and data range, one might want to apply sampling. Moreover, different levels of sampling might apply given those or other parameters. Furthermore, sometimes run time decisions must be made based on criteria or statistics.

It is obvious that not every challenge today can be tackled with ETL solutions. Continue Reading

Dynamically Create Pages Using JSON Objects

One of my recent projects at AOL On was a single-page application generated using a JSON object. We'll take a look at this technique and where it can be useful.

Here's a CodePen that demonstrates the concept. I recommend checking out the code while reading this article.

Continue Reading

Grace Hopper 2015 - Beyond Women in Tech

As a professional and a technologist for over 15 years, there are some truths I've come to accept as the norm. We will always have the best of intentions, but inevitably requirements will be incomplete. There will always be ghost issues in production. Pagers can and will go off when I least expect it, and no matter how much free food is put into the office kitchen, it will disappear within 15 minutes. Another fact of our day-to-day in 2015 is that the number of men in the technology field significantly exceeds the number of women.

I was fortunate enough to be one of five Golden Ticket winners from AOL (@AOLCSR) to attend this year's Grace Hopper Celebration. Grace Hopper (1906-1992) was a computer scientist, a US Navy Rear Admiral, and one of the first computer programmers in the world. GHC happens every year as a celebration of women in computing and is the world's largest gathering of women technologists.

Continue Reading

Grace Hopper Celebration - Key Takeaways

The Grace Hopper Celebration of Women In Computing (GHC) conference was held on October 14-16 in Houston, Texas with over 12,000 female technologists.

First off, thanks to AOL for the opportunity to attend GHC and be surrounded by such talented and remarkable individuals. Having been to other conferences in the past, I can say this was indeed a special one. This conference was filled with abundant energy and excitement from every attendee and speaker. The best thing about GHC was feeling connected right away, as you shared similar experiences with everyone else in the room. I took away great inspiration and lessons from the stories and experiences shared at GHC. AOL had a booth at the career fair with very cool swag (the AOL brands cupcakes were a hit!), and I had the opportunity to speak with college students who were eagerly looking to take on internships and start their careers.

Continue Reading

Grace Hopper 2015 - Our Time To Lead

The flight I took from San Jose to Houston on Oct 13th 2015 was a unique experience. 90% of the passengers were women techies heading to Grace Hopper 2015. Their faces were beaming with a sense of pride, internalizing this year's Grace Hopper theme "Our Time to Lead."

Downtown Houston had been transformed into something special. There were women in corporate branded t-shirts everywhere. New connections and friendships were being made in many corners. Mentors were being sought. Mentees were being accepted gracefully.

On the opening day of the conference, I walked into a sea of 12,000 attendees! I've never seen so many women techies under one roof before. It was an overwhelming, yet gratifying experience.

Continue Reading

AOL-Cornell Tech Connected Experiences (Cx) Lab Launches

We held our official kickoff for the AOL - Cornell Tech Connected Experiences (ConnX) laboratory this week in New York City. Here is the official site:

The lab is a four-year collaboration between AOL and Cornell Tech to explore new ways of using technology: to enable immersive recommendations and deep content consumption experiences, especially on mobile devices; to explore new interfaces and modalities for content, community, and advertising (virtual reality, wearable devices, and other emerging platforms); and to help better connect communities and families in close-knit settings.

Continue Reading

Useful Plugins for Your Development Environment

Like most users, developers often fail to customize their tools' default settings. Take, for instance, the Windows startup sound. How many times in the past decade have you been in a meeting or conference and heard it? It is exactly in this setting, where you don't want to disturb anyone, that you wish laptops came with a mute switch. But ask yourself: why do you even let your laptop keep this default configuration? Windows has a perfectly working settings application that lets you tweak this to your needs and turn off that "good for marketing" sound forever. So don't be "that guy"; tweak away!

As software developers, however, we need to explore even further than just customizing our settings.
Continue Reading

A comparative study of distributed caches

Posted on behalf of Aishwarya Afzulpurkar:
I came to the AOL Mail team as an intern looking to gain experience in developing for large scale applications. I joined their middle-tier services team to be part of their major initiative of moving to a distributed cache model instead of the in-memory cache model that exists in the system today.

The Need for a Distributed Cache

In general, distributed caches, compared to in-memory caches, provide much better consistency and reliability; they also reduce the impact of system crashes and cut down on time-consuming calls to back-end databases.

Companies that deal with large and unpredictable volumes of data often don't need all of their servers running at full capacity at all times. In order to efficiently scale servers in or out (as per user traffic), the team decided to make the shift to Amazon Web Services. Additionally, we needed to ensure that cache data isn't lost in the process.

What is a Distributed Cache?

Distributed caching, as opposed to in-memory caching, is the preferred caching solution of companies that deal with large and unpredictable volumes of data. In a system with multiple application instances, consistency of cache data (at all times) and reliability in case of node failures are important issues. An in-memory cache resides in the memory of every single instance, or node, of the application. So if a node happens to go down, its cached data is lost as well. This means a second instance of the application will have to fetch user data all the way from the core database and repopulate its in-memory cache.

Distributed caches, on the other hand, stay outside of the application instance. Although there are several 'cache nodes' that are part of this distributed cache, each object is passed from the application instance to the cache through a single interface. This preserves the state of the cache, and results in consistency of the data 24/7. The object itself is stored in a single cache node (maybe more, if replication is involved, but I'll go over this later). The cache engine can then determine which node to get the object from, using a hashing algorithm. Also, since the distributed cache isn't linked to any one application node, node failures wouldn't always make a difference to the performance of the system. An exception is when a node goes down while sending a request to the cache. In this case, the client communicating with the application instance will have to resend the request, regardless of the caching solution.

Continue Reading

Tuning Java Garbage Collection for Performance

AOL Mail's application platform was recently migrated from .NET to a Java-based stack to offer a unified, scalable platform that can power multiple mail apps across mobile and web. This mid-tier platform provides a suite of services for Mail, Contacts, Calendar and Content. These services need to be very responsive, and even small optimizations can make a big impact on overall user experience. Certain performance improvements can be achieved by understanding Garbage Collection and tuning it as per your application's needs.

In this post I'll cover key aspects of Garbage Collection and the steps to tune it, and share some of what we learned tuning GC in the AOL Mail mid-tier platform.

Java GC Refresher:

The garbage collector is responsible for:

  1. Allocating memory

  2. Ensuring that any referenced objects remain in memory, and

  3. Recovering memory used by objects that are no longer referenced in executing code.

Objects that are referenced are said to be live. Objects that are no longer referenced are considered dead and are termed garbage. The process of finding and freeing (aka reclaiming) the space used by these objects is known as Garbage Collection.

The Java heap is the memory allocated to applications running in the JVM. The heap is divided into the Young Generation, the Old (or Tenured) Generation, and the Permanent Generation. All objects are first created in the Young Generation (Eden), and surviving objects are aged and eventually moved to the Old Generation. The Permanent Generation (PermGen), also called the "method area," stores class metadata and interned strings.

When the Young Generation fills up, it causes a minor GC; when the Old Generation fills up, a major GC occurs. Major GCs are stop-the-world events, and a major collection is often much slower because it involves all live objects. For responsive applications, the number of major GCs should be minimized.

Optimizing the Garbage Collector:

Here are some high level guidelines to optimize GC for high-throughput, low-latency requirements.

1. Define GC Characteristics
Garbage Collection needs to be observed over a period of time and optimized so it doesn't become a performance bottleneck. Based on application requirements around throughput and latency, the desired GC characteristics need to be defined.

Different garbage collectors have different side effects. For example, a stop-the-world algorithm pauses the application threads to collect garbage; care should be taken so that the duration and frequency of these pauses don't impact the application. On the other hand, if you go with a concurrent garbage collector, its GC threads will contend with application threads for CPU cycles.

We used HotSpot Java 7 on Linux with the CMS (Concurrent Mark Sweep) collector, 512 MB of YoungGen and 512 MB of PermGen. However, we observed that with those settings, major and full GCs were happening frequently, which was not desirable.

2. Understand GC activity

Understanding the verbose details of GC logs (with these options: -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime) helps with the overall understanding of the application's GC characteristics.
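Put together, a launch command enabling these logging options might look like the following sketch; the log path and jar name are placeholders, not our actual configuration:

```shell
# Placeholder command line: enables the verbose GC logging options
# discussed above and writes them to a dedicated log file.
java -XX:+PrintGCDetails \
     -XX:+PrintGCTimeStamps \
     -XX:+PrintGCDateStamps \
     -XX:+PrintTenuringDistribution \
     -XX:+PrintGCApplicationStoppedTime \
     -Xloggc:/var/log/app/gc.log \
     -jar app.jar
```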

Watch the GC log to understand the object allocation rate, the object promotion rate, the death percentage of objects in the survivor area, the total time taken in Old Gen GC (i.e., from CMS-initial-mark to the next CMS-concurrent-reset), and Young Gen collection times.

We use home-grown graphs and a variety of open source tools to understand GC activity. Some of the popular open source tools are GCViewer, gclogviewer, VisualVM, and jstat.

3. Evaluate number of GC pause activities

In the HotSpot JVM, the number of GC pauses is directly related to the rate at which objects are created and reclaimed. If you have smaller Eden and Tenured areas, GC will happen more frequently. A key fact is that there is usually a high ratio of short-lived objects to long-lived shared state.

In our case, a high number of objects were created only for the span of an HttpRequest, and a very small number of survivor objects were retained for a longer period. However, due to the small Eden area, GCs were happening frequently.

4. Evaluate duration of GC pause activities

The young GC pause duration can be reduced by decreasing the young generation size as it may lead to less data being copied in survivor spaces or promoted per collection. However, as previously mentioned, we have to observe the impact of reduced young generation size and the resulting increase in GC frequency on the overall application throughput and latency.

Heap fragmentation is another factor which can affect GC duration. With CMS, try to minimize heap fragmentation and the associated full GC pauses in the old generation collection. You can achieve this by controlling the object promotion rate and by reducing the -XX:CMSInitiatingOccupancyFraction value to trigger the old GC at a lower threshold. For a detailed understanding of all tunable options and associated tradeoffs, check out the reference links at the end of the post.

GC tuning in the context of AOL Mail mid-tier platform:

We improved the performance of our servers after careful observation of GC logs, new-object creation activity, and the number of survivor objects retained. We mainly increased the young generation so that minor collections occur less often; a larger Eden or young generation increases the spacing between full GCs. We also avoided heap-resizing full GCs by setting -Xms and -Xmx to the same value.
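As a sketch, settings along these lines might look like the following. The young generation size and the occupancy fraction here are illustrative values, not our exact production settings, and app.jar is a placeholder:

```shell
# Equal initial and maximum heap avoids full GCs caused by heap resizing;
# a larger young generation spaces out full GCs.
java -Xms6g -Xmx6g \
     -XX:NewSize=1g -XX:MaxNewSize=1g \
     -XX:+UseConcMarkSweepGC \
     -XX:CMSInitiatingOccupancyFraction=70 \
     -XX:+UseCMSInitiatingOccupancyOnly \
     -jar app.jar
```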

In the graph below, you can see full GCs reduced from 24 to 11 and full GC duration reduced from 33.745 seconds to under 1 second.

Data is based on the configuration below:

Hardware – Xen Virtual Machine – 8 CPU X 32 GB RAM.

Max Heap allocation configured – 6 GB.

Java Version = 1.7.0_17


GC Params Quick Reference:

-Xms: Sets the initial heap size when the JVM starts.

-Xmx: Sets the maximum heap size.

-XX:NewSize: Sets the minimum size of the New Generation.

-XX:MaxNewSize: Sets the maximum size of the New Generation.

-XX:PermSize: Sets the starting size of the Permanent Generation.

-XX:MaxPermSize: Sets the maximum size of the Permanent Generation.

Adventures in Developing for the Apple Watch

In this post I'd like to discuss AOL's recent experience writing WatchKit apps for the Apple Watch. We've released Huffington Post, Pip, and the AOL native app with WatchKit support on the App Store. I worked on getting the AOL app up and running, and it was quite a challenge. The AOL WatchKit app shows your current unread email messages, as well as the top 4 news articles; these top news articles are also shown on the home page. There are quite a few challenges associated with bringing this capability to the Apple Watch.

Before we start, you should know that to deliver an application for the Apple Watch, you will need to embed a WatchKit Extension and a WatchKit app into your deliverable to Apple.

  • The WatchKit App is nothing more than the UI storyboard, which defines the layout and flow of the app, plus any static resources your app needs, like static images.
  • The WatchKit Extension is where you write code to get the behavior you want in your application. You can also have other resources in your extension if you need to do extra processing there, but I'd argue that you should do as little in your extension as possible.

Of course you should also read all of the official documentation of WatchKit development on Apple's website.

There are three main things I'd like to discuss with regard to our experience developing with WatchKit:

  • You really need an Apple Watch to build a quality app.
  • You need to do as little work as possible in your WatchKit extension.
  • Use your resources wisely.
Let's dive in!
Continue Reading