
Google I/O App Insights

A while ago I came across the Google I/O app in one of the latest Android Developers blog posts. I thought it would be interesting to look at some of its internals, and to gain some insight into how Android applications are developed at Google and which third-party libraries are actually used there.

The source code for the Google I/O app is available on GitHub.

build.gradle

My journey through the source code began with the settings.gradle file. It contains the information about the project's Gradle modules. This app consists of two modules: one for the wearable version and one for the Android phone/tablet version.

This article will not cover the implementation of the wearable version; I will create a separate blog post for that.

I have to say that the Android version was of particular interest to me, so I went on to the Android module's build.gradle dependencies section, which holds all the external dependencies needed by the implementation:

dependencies {
    wearApp project(':Wearable')

    compile 'com.google.android.gms:play-services:5+' 
    compile 'com.android.support:support-v13:20.+'
    compile 'com.android.support:support-v4:20.+'
    compile 'com.google.android.apps.dashclock:dashclock-api:+'
    compile 'com.google.code.gson:gson:2.+'
    compile('com.google.api-client:google-api-client:1.+') {
        exclude group: 'xpp3', module: 'shared'
        exclude group: 'org.apache.httpcomponents', module: 'httpclient'
        exclude group: 'junit', module: 'junit'
        exclude group: 'com.google.android', module: 'android'
    }
    compile 'com.google.api-client:google-api-client-android:1.17.+'
    compile 'com.google.apis:google-api-services-plus:+'
    compile 'com.github.japgolly.android:svg-android:2.0.6'
    compile fileTree(dir: 'libs', include: '*.jar')
    compile files('../third_party/glide/library/libs/glide-3.2.0a.jar')
    compile files('../third_party/basic-http-client/libs/basic-http-client-android-0.88.jar')

    compile('com.google.maps.android:android-maps-utils:0.3+') {
        exclude group: "com.google.android.gms"
    }

    compile 'com.google.http-client:google-http-client-gson:+'
    compile 'com.google.apis:google-api-services-drive:+'
}

As you can see in the code snippet above, several external dependencies have been included.

Dashclock API

Let's start with the first dependency that caught my attention:

compile 'com.google.android.apps.dashclock:dashclock-api:+'

The Android DashClock project provides an alternative lock screen clock widget that can show additional status items. Showing additional information on the lock screen is done by implementing descendants of the DashClockExtension class, as described in the DashClockExtension documentation. Although this API looked pretty interesting, I couldn't find any use of it in the Google I/O application, and removing it from the dependencies worked as well, so I guess its use was planned at some point but never actually implemented.
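For illustration only (this is not code from the I/O app), a minimal extension could look roughly like the following sketch; the extension additionally has to be declared as a service in the manifest:

import com.google.android.apps.dashclock.api.DashClockExtension;
import com.google.android.apps.dashclock.api.ExtensionData;

// Hypothetical example, not taken from the I/O app: a minimal extension
// that publishes a static status item to the DashClock widget.
public class HelloExtension extends DashClockExtension {

    @Override
    protected void onUpdateData(int reason) {
        // hand a new status item to the DashClock widget
        publishUpdate(new ExtensionData()
                .visible(true)
                .status("Hello")
                .expandedTitle("Hello DashClock")
                .expandedBody("A minimal DashClockExtension sketch"));
    }
}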

GSON

Next up is Google's JSON library: Gson:

compile 'com.google.code.gson:gson:2.+'

The Google I/O app's main purpose is to give an overview of all the scheduled talks at Google I/O and to let users give feedback about visited sessions. Gson is used to parse the JSON that comes from Google's web services and contains the entire conference data.

One particular piece of code that shows Gson usage is the ConferenceDataHandler. This handler is responsible for parsing most of the JSON data that holds information about the scheduled conference sessions, speakers, etc. Instead of parsing the JSON content directly into an object tree, it registers a "handler" for every JSON property in a map:

mHandlerForKey.put(DATA_KEY_ROOMS, mRoomsHandler = new RoomsHandler(mContext));
mHandlerForKey.put(DATA_KEY_BLOCKS, mBlocksHandler = new BlocksHandler(mContext));
mHandlerForKey.put(DATA_KEY_TAGS, mTagsHandler = new TagsHandler(mContext));
mHandlerForKey.put(DATA_KEY_SPEAKERS, mSpeakersHandler = new SpeakersHandler(mContext));
mHandlerForKey.put(DATA_KEY_SESSIONS, mSessionsHandler = new SessionsHandler(mContext));
mHandlerForKey.put(DATA_KEY_SEARCH_SUGGESTIONS, mSearchSuggestHandler = new SearchSuggestHandler(mContext));
mHandlerForKey.put(DATA_KEY_MAP, mMapPropertyHandler = new MapPropertyHandler(mContext));
mHandlerForKey.put(DATA_KEY_EXPERTS, mExpertsHandler = new ExpertsHandler(mContext));
mHandlerForKey.put(DATA_KEY_HASHTAGS, mHashtagsHandler = new HashtagsHandler(mContext));
mHandlerForKey.put(DATA_KEY_VIDEOS, mVideosHandler = new VideosHandler(mContext));
mHandlerForKey.put(DATA_KEY_PARTNERS, mPartnersHandler = new PartnersHandler(mContext));

With the registered handlers set up, it parses the JSON response body property by property in processDataBody:

private void processDataBody(String dataBody) throws IOException {
    JsonReader reader = new JsonReader(new StringReader(dataBody));
    JsonParser parser = new JsonParser();
    try {
        reader.setLenient(true); // To err is human

        // the whole file is a single JSON object
        reader.beginObject();

        while (reader.hasNext()) {
            // the key is "rooms", "speakers", "tracks", etc.
            String key = reader.nextName();
            if (mHandlerForKey.containsKey(key)) {
                // pass the value to the corresponding handler
                mHandlerForKey.get(key).process(parser.parse(reader));
            } else {
                LOGW(TAG, "Skipping unknown key in conference data json: " + key);
                reader.skipValue();
            }
        }
        reader.endObject();
    } finally {
        reader.close();
    }
}

When we have a look at one of the handler classes, say SessionsHandler, we see that it not only encapsulates the code for parsing the session JSON objects, but also code for building so-called "content provider operations". ContentProviderOperation is a class from the Android SDK used to build content provider actions such as inserting, updating or deleting entities stored by a content provider. The handler classes provide methods that create content provider operations based on the current state of an entity: for example, if a session is new, needs to be updated or must be deleted, the handler's makeContentProviderOperations method creates the appropriate operation. Let's now look at how the JSON parsing is actually done in SessionsHandler:

@Override
public void process(JsonElement element) {
    for (Session session : new Gson().fromJson(element, Session[].class)) {
        mSessions.put(session.id, session);
    }
}

The code is quite slick. It uses an array of Session model objects as the Gson target type, and Gson creates the instances and populates the available fields from the JSON values:

public class Session {
    public String id;
    public String url;
    public String description;
    public String title;
    public String[] tags;
    public String startTimestamp;
    public String youtubeUrl;
    public String[] speakers;
    public String endTimestamp;
    public String hashtag;
    public String subtype;
    public String room;
    public String captionsUrl;
    public String photoUrl;
    public boolean isLivestream;
    public String mainTag;
    public String color;
    public String relatedContent;
    public int groupingOrder;

    public String getImportHashCode() {
        StringBuilder sb = new StringBuilder();
        sb.append("id").append(id == null ? "" : id)
                .append("description").append(description == null ? "" : description)
                .append("title").append(title == null ? "" : title)
                .append("url").append(url == null ? "" : url)
                .append("startTimestamp").append(startTimestamp == null ? "" : startTimestamp)
                .append("endTimestamp").append(endTimestamp == null ? "" : endTimestamp)
                .append("youtubeUrl").append(youtubeUrl == null ? "" : youtubeUrl)
                .append("subtype").append(subtype == null ? "" : subtype)
                .append("room").append(room == null ? "" : room)
                .append("hashtag").append(hashtag == null ? "" : hashtag)
                .append("isLivestream").append(isLivestream ? "true" : "false")
                .append("mainTag").append(mainTag)
                .append("captionsUrl").append(captionsUrl)
                .append("photoUrl").append(photoUrl)
                .append("relatedContent").append(relatedContent)
                .append("color").append(color)
                .append("groupingOrder").append(groupingOrder);
        for (String tag : tags) {
            sb.append("tag").append(tag);
        }
        for (String speaker : speakers) {
            sb.append("speaker").append(speaker);
        }
        return HashUtils.computeWeakHash(sb.toString());
    }

    public String makeTagsList() {
        int i;
        if (tags.length == 0) return "";
        StringBuilder sb = new StringBuilder();
        sb.append(tags[0]);
        for (i = 1; i < tags.length; i++) {
            sb.append(",").append(tags[i]);
        }
        return sb.toString();
    }

    public boolean hasTag(String tag) {
        for (String myTag : tags) {
            if (myTag.equals(tag)) {
                return true;
            }
        }
        return false;
    }
}

What's interesting about this class (and the other model classes) is the getImportHashCode method. It is used to detect changes to already processed entities and is central to the data sync logic implemented by the SyncAdapter.
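A simplified sketch of how such a hash can drive the sync decision (this is just the idea, not the actual SyncAdapter code):

import java.util.Map;

// Simplified sketch, not the actual SyncAdapter code: the import hash
// allows deciding per entity whether an insert, an update or no
// operation at all is required.
static String decideSyncAction(Map<String, String> storedHashes, Session session) {
    String oldHash = storedHashes.get(session.id);  // hash stored during the last import
    String newHash = session.getImportHashCode();   // hash of the freshly parsed data

    if (oldHash == null) {
        return "insert";  // entity is new
    }
    if (!oldHash.equals(newHash)) {
        return "update";  // entity changed since the last import
    }
    return "skip";        // nothing changed, no operation required
}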

google-api-client

Next up in our list of dependencies are the Google APIs client library and its Android extension. Both libraries are used in conjunction with the Google+ API from the next dependency

compile 'com.google.apis:google-api-services-plus:+'

to fetch the latest announcements via the AnnouncementsFetcher class. Once the announcements are fetched from the Google+ profile, they are stored via the ScheduleProvider content provider:

Plus plus = new Plus.Builder(httpTransport, jsonFactory, null)
        .setApplicationName(NetUtils.getUserAgent(mContext))
        .setGoogleClientRequestInitializer(
                new CommonGoogleClientRequestInitializer(Config.API_KEY))
        .build();

ActivityFeed activities;
try {
    activities = plus.activities().list(Config.ANNOUNCEMENTS_PLUS_ID, "public")
            .setMaxResults(100l)
            .execute();
    if (activities == null || activities.getItems() == null) {
        throw new IOException("Activities list was null.");
    }

} catch (IOException e) {
    LOGE(TAG, "Error fetching announcements", e);
    return batch;
}

// ...

StringBuilder sb = new StringBuilder();
for (Activity activity : activities.getItems()) {
    // ...

    // Insert announcement info
    batch.add(ContentProviderOperation
            .newInsert(ScheduleContract
                    .addCallerIsSyncAdapterParameter(Announcements.CONTENT_URI))
            .withValue(SyncColumns.UPDATED, System.currentTimeMillis())
            .withValue(Announcements.ANNOUNCEMENT_ID, activity.getId())
            .withValue(Announcements.ANNOUNCEMENT_DATE, activity.getUpdated().getValue())
            .withValue(Announcements.ANNOUNCEMENT_TITLE, activity.getTitle())
            .withValue(Announcements.ANNOUNCEMENT_ACTIVITY_JSON, activity.toPrettyString())
            .withValue(Announcements.ANNOUNCEMENT_URL, activity.getUrl())
            .build());
}

Again, the ContentProviderOperation builder methods are used to create the appropriate operations and return them to the caller.

Android SVG

Next up is a very interesting dependency: the Android SVG library:

compile 'com.github.japgolly.android:svg-android:2.0.6'

The SVG Android project adds support for rendering scalable vector graphics files in an Android application. In the Google I/O application it is used to display the different floors of the Google I/O venue.

One place to look at the SVG processing is, once again, the ConferenceDataHandler:

private void processMapOverlayFiles(Collection<Tile> collection, boolean downloadAllowed) throws IOException, SVGParseException {
    boolean shouldClearCache = false;
    ArrayList<String> usedTiles = Lists.newArrayList();

    for (Tile tile : collection) {
        final String filename = tile.filename;
        final String url = tile.url;

        usedTiles.add(filename);

        if (!MapUtils.hasTile(mContext, filename)) {
            shouldClearCache = true;

            if (MapUtils.hasTileAsset(mContext, filename)) {

                MapUtils.copyTileAsset(mContext, filename);

            } else if (downloadAllowed && !TextUtils.isEmpty(url)) {
                try {
                    // download the file only if downloads are allowed and url is not empty
                    File tileFile = MapUtils.getTileFile(mContext, filename);
                    BasicHttpClient httpClient = new BasicHttpClient();
                    httpClient.setRequestLogger(mQuietLogger);
                    HttpResponse httpResponse = httpClient.get(url, null);
                    FileUtils.writeFile(httpResponse.getBody(), tileFile);

                    // ensure the file is valid SVG
                    InputStream is = new FileInputStream(tileFile);
                    SVG svg = new SVGBuilder().readFromInputStream(is).build();
                    is.close();
                } catch (IOException ex) {
                    LOGE(TAG, "FAILED downloading map overlay tile "+url+
                            ": " + ex.getMessage(), ex);
                } catch (SVGParseException ex) {
                    LOGE(TAG, "FAILED parsing map overlay tile "+url+
                            ": " + ex.getMessage(), ex);
                }
            } else {
                LOGD(TAG, "Skipping download of map overlay tile" +
                        " (since downloadsAllowed=false)");
            }
        }
    }

    if (shouldClearCache) {
        MapUtils.clearDiskCache(mContext);
    }

    MapUtils.removeUnusedTiles(mContext, usedTiles);
}

The code first checks whether the SVG graphic is available in the APK's assets directory. If so, it copies the file to a custom directory; if not, it downloads the SVG and uses the svg-android library to validate that it is a well-formed SVG graphic.

The main place where the SVG graphics are used later is the MapFragment implementation. It uses a TileOverlay and registers TileProvider implementations of the SVGTileProvider class. The SVGTileProvider uses the previously shown SVGBuilder to draw the currently selected floor onto the map.

public SVGTileProvider(File file, float dpi) throws IOException {
    // ...

    SVG svg = new SVGBuilder().readFromInputStream(new FileInputStream(file)).build();
    mSvgPicture = svg.getPicture();

    // ...
}

// later on when drawing:

public byte[] getTileImageData(int x, int y, int zoom) {
    mStream.reset();

    Matrix matrix = new Matrix(mBaseMatrix);
    float scale = (float) (Math.pow(2, zoom) * mScale);
    matrix.postScale(scale, scale);
    matrix.postTranslate(-x * mDimension, -y * mDimension);

    mBitmap.eraseColor(Color.TRANSPARENT);
    Canvas c = new Canvas(mBitmap);
    c.setMatrix(matrix);

    // NOTE: Picture is not thread-safe.
    synchronized (mSvgPicture) {
        mSvgPicture.draw(c);
    }

    BufferedOutputStream stream = new BufferedOutputStream(mStream);
    mBitmap.compress(Bitmap.CompressFormat.PNG, 0, stream);
    try {
        stream.close();
    } catch (IOException e) {
        Log.e(TAG, "Error while closing tile byte stream.");
        e.printStackTrace();
    }
    return mStream.toByteArray();
}

As can be seen in the code above, getTileImageData applies some scaling and translation, but in the end it draws mSvgPicture onto a newly created Canvas and writes it to the resulting ByteArrayOutputStream. To speed up tile creation, the CachedTileProvider implementation caches results on disk using a disk LRU cache.
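To illustrate the decorator idea, here is a simplified in-memory variant (a sketch only; the real CachedTileProvider persists the tiles with a disk LRU cache):

import android.util.LruCache;

import com.google.android.gms.maps.model.Tile;
import com.google.android.gms.maps.model.TileProvider;

// Sketch of a caching decorator around another TileProvider. The real
// CachedTileProvider writes to a disk LRU cache instead of memory.
public class MemoryCachedTileProvider implements TileProvider {

    private final TileProvider mDelegate;
    private final LruCache<String, Tile> mCache = new LruCache<String, Tile>(64);

    public MemoryCachedTileProvider(TileProvider delegate) {
        mDelegate = delegate;
    }

    @Override
    public Tile getTile(int x, int y, int zoom) {
        String key = x + "/" + y + "/" + zoom;
        Tile tile = mCache.get(key);
        if (tile == null) {
            tile = mDelegate.getTile(x, y, zoom); // expensive: renders the SVG
            if (tile != null) {
                mCache.put(key, tile);
            }
        }
        return tile;
    }
}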

I found it very refreshing to see the svg-android library in action. It's definitely an implementation option to keep in mind for future Android apps.

Glide

Another third party library in use is Glide:

compile files('../third_party/glide/library/libs/glide-3.2.0a.jar')

Glide is an image loading and caching library that comes with extensions for other commonly used libraries such as OkHttp and Volley. In the Google I/O application the Glide API is encapsulated in the ImageLoader class.

One interesting detail in this class is the VariableWidthImageLoader implementation:

// ...
private static final Pattern PATTERN = Pattern.compile("__w-((?:-?\\d+)+)__");
// ...

@Override
protected String getUrl(String model, int width, int height) {
    Matcher m = PATTERN.matcher(model);
    int bestBucket = 0;
    if (m.find()) {
        String[] found = m.group(1).split("-");
        for (String bucketStr : found) {
            bestBucket = Integer.parseInt(bucketStr);
            if (bestBucket >= width) {
                // the best bucket is the first immediately bigger than the requested width
                break;
            }
        }
        if (bestBucket > 0) {
            model = m.replaceFirst("w"+bestBucket);
            LOGD(TAG, "width="+width+", URL successfully replaced by "+model);
        }
    }
    return model;
}

The VariableWidthImageLoader is used by Glide to return a customized URL for a given width and height. The implementation above looks for an image-size indicator in the current URL (think of model as a URL to an image) that might look like __w-200-400-800__. If this indicator is present, it replaces it with w<desiredWidth> to fetch an image whose width is the first bucket at least as large as the requested width.

We used a similar pattern for image URLs in our own applications (though with a width request parameter), but I wasn't aware that Glide provides such a nice API to inject this behaviour.

Basic HTTP Client

Of course, the Android basic-http-client implementation must not go unmentioned either. It executes the actual HTTP requests, for example in the RemoteConferenceDataFetcher, which fetches the JSON content from Google's servers. In fact, it first fetches only a so-called manifest file and checks, based on that manifest, whether the data has changed at all. A detailed explanation of the actual synchronisation of the conference data can be found on the Android Developers blog.
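The idea behind the manifest check can be sketched as follows (illustrative only; the actual RemoteConferenceDataFetcher logic differs in its details, and the sketch reuses the app's HashUtils class):

// Illustrative sketch, not the actual RemoteConferenceDataFetcher code:
// only when the manifest's hash differs from the one remembered during
// the last sync does the full conference data have to be downloaded again.
static boolean conferenceDataChanged(String manifestBody, String lastSeenHash) {
    String currentHash = HashUtils.computeWeakHash(manifestBody);
    return !currentHash.equals(lastSeenHash);
}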

Conclusion

This article took a look at some places in the Google I/O Android application and showed some of the third-party libraries in use. The application has been open-sourced on GitHub and is available under the Apache license.

REST Interfaces and Android

As I already mentioned in my last blog post, I recently discovered the very good collection of open source projects from Square Inc., a commercial payment service.

One project among them that raised my interest was Retrofit. Retrofit allows accessing REST web interfaces via type-safe Java types and annotations.

Configuring the RestAdapter

Before we can declare our interface and start making REST requests, we need to configure the so-called RestAdapter. It lets us change various aspects, from HTTP settings to custom converters for the content found in the HTTP responses of the accessed web service (in our case, the REST interface returned JSON).

I put this setup code into our Dagger application module:

@Provides
@Singleton
public AppConnector providesAppConnector() {

  OkHttpClient okHttpClient = new OkHttpClient();
  okHttpClient.setConnectTimeout(ApplicationConstants.HTTP_TIMEOUT, TimeUnit.MILLISECONDS);
  okHttpClient.setWriteTimeout(ApplicationConstants.HTTP_TIMEOUT, TimeUnit.MILLISECONDS);
  okHttpClient.setReadTimeout(ApplicationConstants.HTTP_TIMEOUT, TimeUnit.MILLISECONDS);

  RestAdapter restAdapter = new RestAdapter.Builder()
    .setEndpoint(application.getString(R.string.appEndPoint))
    .setLogLevel(RestAdapter.LogLevel.FULL)
    .setClient(new OkClient(okHttpClient))
    .setConverter(new GsonConverter(gson))
    .build();

  return restAdapter.create(AppConnector.class);
}

The RestAdapter defaults to parsing JSON responses (utilizing Google's Gson library). The endpoint is the base URL that is used for all REST requests. The HTTP client in use is OkHttpClient, another great project from Square that is actually worth a blog post of its own. It supports falling back to alternative IP addresses if single hosts aren't reachable for some reason and, if so configured, also does HTTP response caching based on the given response headers. That's it for our Retrofit configuration. One detail glossed over above: in our case we needed to adapt the pre-defined Gson converter slightly.
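Such an adaptation boils down to handing a customized Gson instance to the GsonConverter, for example (a sketch; our actual tweaks differ):

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

import retrofit.converter.GsonConverter;

// Sketch: customize the Gson instance, e.g. with the date format used
// by the web service, and hand it to Retrofit's GsonConverter.
Gson gson = new GsonBuilder()
        .setDateFormat("yyyy-MM-dd'T'HH:mm:ss")
        .create();

GsonConverter converter = new GsonConverter(gson);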

Declaring the Java REST interface

Once the RestAdapter is available, it can be used to instantiate proxy implementations of our Java REST interfaces. So let's first create a simple Java REST interface:

public interface AppConnector {

  @Headers("Cache-Control: max-age=14400")
  @GET("/connector/contents/app-id/{app-id}")
  Contents getContents(@Path("app-id") String appId);

}

The example above is taken from one of our production apps (with only a small change). With this interface, calling restAdapter.create(AppConnector.class) returns a REST client object (a proxy) that implements AppConnector and does all the content parsing and conversion into Java objects for us. This works for plain Java types, collection types and custom Java classes used as return and/or parameter types.
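For completeness: Contents is just a plain model class whose fields mirror the JSON response. A hypothetical structure might look like this (our real class differs):

import java.util.List;

// Hypothetical model class: the real Contents class of our app differs,
// but any plain Java class whose fields match the JSON keys works.
public class Contents {
    public String appId;
    public String version;
    public List<String> items;
}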

The getContents method shown above makes a synchronous request. In fact, we use asynchronous requests in our application, and going from synchronous to asynchronous requests needs only a little change:

public interface AppConnector {

  @Headers("Cache-Control: max-age=14400")
  @GET("/connector/contents/app-id/{app-id}")
  void getContents(@Path("app-id") String appId, Callback<Contents> callback);

}

For asynchronous requests, a second parameter called callback is introduced to our interface method. The Callback instance is invoked once the request has succeeded or failed. We can also go reactive: Retrofit integrates with RxJava and allows returning rx.Observable instances that enable reactive programming in your Android code :-).
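Invoking the asynchronous variant then looks like this (a sketch with a placeholder app id):

import retrofit.Callback;
import retrofit.RetrofitError;
import retrofit.client.Response;

// Sketch: asynchronous invocation; "some-app-id" is a placeholder.
appConnector.getContents("some-app-id", new Callback<Contents>() {
    @Override
    public void success(Contents contents, Response response) {
        // request succeeded: update the UI with the fetched contents
    }

    @Override
    public void failure(RetrofitError error) {
        // request failed: show an error message, log, retry, ...
    }
});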

The @Headers, @GET and @Path annotations are all pretty self-explanatory. As you can imagine, there are also @POST, @PUT, @DELETE or, for example, @Query, which can be used to add request parameters to the configured URL.
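For illustration, here are hypothetical endpoint declarations using a few of these annotations (none of these exist in our real interface):

import java.util.List;

import retrofit.client.Response;
import retrofit.http.Body;
import retrofit.http.GET;
import retrofit.http.POST;
import retrofit.http.Path;
import retrofit.http.Query;

public interface SearchConnector {

  // hypothetical endpoint: GET /connector/search?q=<query>
  @GET("/connector/search")
  List<Contents> search(@Query("q") String query);

  // hypothetical endpoint: POST with the contents serialized as request body
  @POST("/connector/contents/app-id/{app-id}")
  Response updateContents(@Path("app-id") String appId, @Body Contents contents);
}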

Conclusion

This article should serve as a small pointer to and introduction of Retrofit, a lightweight library for calling REST web services from Android code. Retrofit supports the creation of REST clients based on type-safe Java interfaces and types, and supports synchronous, asynchronous and reactive programming styles. More information on Retrofit can be found on its GitHub project page.

Android Dependency Injection

I recently came across a very good collection of open source projects maintained by the guys from Square, a commercial payment service.

One of their projects came into use in one of my current Android projects: Dagger, a lightweight dependency injection library for Java and Android projects. Dagger introduced me to the world of dependency injection libraries for Android.

DI and Android

If you are like me, with a history (and present) in writing Java backend code, you will consider dependency injection an almost given pattern when writing applications. For Android applications, dependency injection gets interesting not only for injecting services or repositories, but also for injecting views.

Let's have a look at the more familiar pattern of injecting Java objects/beans like services into activities.

DI with Dagger

Dagger is a lightweight dependency injection library that comes with support for Java's JSR-330 annotations such as @Inject or @Singleton. Object instances that need to be injected at runtime are created in a so-called module definition:

@Module
public class ApplicationModule {

    @Provides
    @Singleton
    public AppConnector providesAppConnector() {

        OkHttpClient okHttpClient = new OkHttpClient();
        okHttpClient.setConnectTimeout(ApplicationConstants.HTTP_TIMEOUT, TimeUnit.MILLISECONDS);
        okHttpClient.setWriteTimeout(ApplicationConstants.HTTP_TIMEOUT, TimeUnit.MILLISECONDS);
        okHttpClient.setReadTimeout(ApplicationConstants.HTTP_TIMEOUT, TimeUnit.MILLISECONDS);

        try {
            File cacheDir = new File(application.getCacheDir(), "http-cache");
            Cache cache = new Cache(cacheDir, 1024 * 1024 * 5l); // 5 MB HTTP Cache

            okHttpClient.setCache(cache);
        } catch (IOException e) {
            Log.e("ApplicationModule", "Could not create cache directory for HTTP client: " + e.getMessage(), e);
        }

        RestAdapter restAdapter = new RestAdapter.Builder()
                .setEndpoint(application.getString(R.string.appEndPoint))
                .setLogLevel(RestAdapter.LogLevel.FULL)
                .setClient(new OkClient(okHttpClient))
                .setConverter(new GsonConverter(gson))
                .build();

        return restAdapter.create(AppConnector.class);
    }
}

In the case above, the module definition provides a REST interface called AppConnector; to do so, it configures an OkHttpClient instance and the REST endpoint. At runtime, the ApplicationModule needs to be instantiated and the dependency object graph defined by the module needs to be bootstrapped. This can be done in a custom Application implementation:

public class Application extends android.app.Application {

    private ObjectGraph objectGraph;

    @Override
    public void onCreate() {
        super.onCreate();

        objectGraph = ObjectGraph.create(new ApplicationModule());
    }

    public ObjectGraph getObjectGraph() {
        return objectGraph;
    }
}

In order to actually trigger the dependency injection for an activity, ObjectGraph#inject() needs to be called. This is typically done inside a common activity base class:

public abstract class BaseActivity extends FragmentActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        ((Application) getApplication()).getObjectGraph().inject(this);
    }
}

Once this initial setup is done, activities can be injected with the objects created in the module:

public class LoginActivity extends BaseActivity {

    @Inject
    AppConnector appConnector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        appConnector.doSomething(...);
    }
}

Of course, Dagger also comes with more advanced concepts and features, but with the features shown above you can already start using dependency injection for services, repositories and other patterns.

View Injection with Butterknife

When developing on Android, glue code is frequently needed to set instance variables that are View descendants:

public class LoginActivity extends Activity {

    private EditText username;
    private EditText password;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        setContentView(R.layout.login);

        username = (EditText) findViewById(R.id.username);
        password = (EditText) findViewById(R.id.password);

        // ...
    }

    // ...
}

View dependency injection is a way to get rid of the glue code that links instance variables with resources from the layout, and to speed up development. One library designed exactly for this use case is Butterknife. It comes with annotations that can be used to inject views, and also with method annotations that are transformed into listener instances at compile time.

But let's first have a look at view injection. It is pretty simple: the annotation to use is @InjectView, and it takes a single parameter, the resource ID of the targeted view:

public class LoginActivity extends BaseActivity {

    @InjectView(R.id.username)
    EditText username;

    @InjectView(R.id.password)
    EditText password;

    // ...
}

No more code is needed for manually retrieving and casting the view instances in the onCreate activity callback. One small detail hidden here is the call to ButterKnife.inject(this), which actually causes Butterknife to populate the instance fields with the target views at runtime. I usually place the inject call in my BaseActivity, which is shared across all activities in my project:

public abstract class BaseActivity extends FragmentActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        ButterKnife.inject(this);
    }

    // ...
}

Among the other nice Butterknife features, I really like the annotations for the various listeners, take @OnClick for example. This is a method-level annotation that marks methods which are wrapped with an actual listener implementation at compile time:

public class LoginActivity extends Activity {

    @OnClick(R.id.btnLogin)
    public void login() {
        String user = username.getText().toString();
        // ...
    }
}

Or suppose the layout has a Spinner and we want to react when an item is selected:

@OnItemSelected(R.id.mobilePrefix)
public void mobilePrefixSelected(int position) {
    // ...
}

The method targeted by @OnItemSelected may declare any of the parameters found in the actual listener interface.

Conclusion

I used Dagger and Butterknife in one of my latest Android projects, and I won't go without them anymore. Butterknife in particular comes in very handy, as it gets rid of the glue code otherwise needed to obtain references to UI view elements.

It should be noted that there is also the AndroidAnnotations project, which comes with an even richer set of annotations to do basically everything via annotations. In my current projects our needs were fully satisfied by the Butterknife feature set, so I can't comment on AndroidAnnotations in depth.

JUnit Rules and Spock

I recently had to implement a file upload for a Grails application. In this application we have quite a bunch of JUnit tests, but we want to use Spock for all newly added tests.

As is the nature of file uploads, the Spock specification needs to create quite a few temporary files and folders.

One option I've seen quite a few times is to use File#createTempFile or JDK 7's Files#createTempDirectory/Files#createTempFile methods. Both have the disadvantage of requiring initial setup and cleanup work, which results in helper code that is added to the test and distracts from the real test code.

JUnit Rules

JUnit provides a way to intercept test suite and test method execution through the concept of rules. A rule implementation can intercept test method execution and alter the behaviour of these tests, or add setup and cleanup work as otherwise done in @Before, @After, @BeforeClass and @AfterClass or in Spock's setup, cleanup, setupSpec and cleanupSpec methods. For a particular test case, an instance of the rule implementation must be available via a public instance field annotated with @Rule.

For example, the TestName rule implementation can be used to have access to the test case name in a test method at runtime.

public class NameRuleTest {
  @Rule
  public TestName name = new TestName();

  @Test
  public void testA() {
    assertEquals("testA", name.getMethodName());
  }

  @Test
  public void testB() {
    assertEquals("testB", name.getMethodName());
  }
}

Spock and JUnit Rules

As it turns out, Spock supports applying JUnit rules in specifications. This can be done by providing a Groovy property of the rule implementation's type, annotated with @Rule. This was really good news for my file upload tests, as it allowed me to use one of my favorite JUnit rules: org.junit.rules.TemporaryFolder.

As its name implies, the TemporaryFolder rule provides a convenient way to create temporary folders and files in test methods. The rule mechanism is used to intercept execution before and after each test method and do all the setup and cleanup work.

This makes testing my AttachmentService very slick:

@TestFor(AttachmentService)
@Mock([Attachment])
class AttachmentServiceSpec extends Specification {

    @Rule
    TemporaryFolder temporaryFolder = new TemporaryFolder()

    def "load the persistent file for a given attachment"() {
        setup:
          def tempFile = temporaryFolder.newFile('test.txt')
          def attachment = new Attachment(
                               uploadId: '123', 
                               originalFilename: 'test.txt', 
                               location: tempFile.toURL()).save()

        when: "an attachment with a URL reference is loaded"
          def file = service.loadFile(attachment)

        then: "the underyling File must be returned"
          file == tempFile
    }
}

As you can see in the code above, the TemporaryFolder can be used to create a new file with the newFile method; if we wanted to create a new folder, there is also a newFolder method. We do not have to specify any temporary directory or do any cleanup work, as this is all handled by the rule implementation itself.
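By the way, writing such a rule yourself is straightforward: JUnit's ExternalResource base class provides the before/after hooks that TemporaryFolder builds upon (a minimal sketch):

import java.io.File;

import org.junit.rules.ExternalResource;

// Minimal sketch of a custom rule: ExternalResource provides hooks that
// run before and after each test method.
public class ScratchFileRule extends ExternalResource {

    private File scratchFile;

    @Override
    protected void before() throws Throwable {
        // runs before each test method
        scratchFile = File.createTempFile("scratch", ".tmp");
    }

    @Override
    protected void after() {
        // runs after each test method, even when the test failed
        scratchFile.delete();
    }

    public File getFile() {
        return scratchFile;
    }
}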

There is a good overview of the base rules provided with JUnit on GitHub.

Conclusion

Spock comes with support for JUnit rules. A rule can intercept test method execution, do setup or cleanup work, and may even change test results. The TemporaryFolder rule is a useful rule that creates temporary files and folders in test cases, keeps track of them, and cleans them up after test execution.

Vagrant - Configuring a Solr Box

Recently I started experimenting with Vagrant. Vagrant is a tool that lets you (pre-)configure development environments. This can be done on top of VirtualBox, Hyper-V, VMware or many other so-called providers. Put another way, Vagrant is a tool that helps with configuring, building and running virtual machine instances.

The default provider is VirtualBox, but the documentation actually recommends using VMware for more stable and performant environments.

Welcome to Vagrant

Vagrant can be installed via a manual download or your preferred package manager, such as brew on Mac OS.

We needed to write (I guess they're called) system tests to cover integration scenarios between an Apache Solr server instance and a library. As you can imagine, this implies that an Apache Solr instance needs to be installed on every developer machine that intends to execute the entire test suite.

Now Vagrant comes into play. It's a command-line tool that is used to set up a virtual machine with all components pre-installed.

However, instead of configuring virtual machine images from scratch, Vagrant introduces the concept of boxes. Boxes are the package format used to bring up identical development environments. The easiest way to use a box is to choose one of the publicly available ones.

The "precise64" Box

For our purpose, we based our Vagrant box on the "precise64" box. It contains an Ubuntu 12.04 LTS (precise) installation with some tools pre-installed. The default box is specified in a file called Vagrantfile.

Running vagrant init will create a template Vagrantfile in the current directory.

The Vagrantfile is written in a Ruby DSL and contains the configuration of our custom box:

Vagrant.configure("2") do |config|
  # Defines the Vagrant box name, download URL, IP and hostname
  config.vm.define :vagrant do |vagrant|
    vagrant.vm.box = "precise64"
    vagrant.vm.box_url = "http://files.vagrantup.com/precise64.box"

    vagrant.vm.network :private_network, ip: "192.168.66.6"
    vagrant.vm.network "forwarded_port", guest: 8983, host: 8898

    vagrant.vm.hostname = "vagrant.dcl"
  end
end

The configuration specifies the pre-defined box precise64 as the "parent" VM box. In addition, it specifies the URL from which this box can be downloaded.

More Vagrantfile

The Vagrantfile may consist of various sections. For a detailed overview of all available configuration options, please have a look at the Vagrant documentation.

Next up in our configuration file is the network configuration. We use a private network, which means we can access the guest from the host machine, but the box won't be visible from the outside; it gets an IP address in the private address space. In addition, we define a forwarded port: this is the port Apache Solr listens on in its default configuration. Once we access localhost:8983 on the host machine, the request is forwarded to port 8983 of the Vagrant virtual machine.
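With that port forwarding in place, a test running on the host can reach the Solr instance inside the box, for example via SolrJ (a sketch; it assumes the solr-solrj dependency is on the classpath):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

// Sketch of a host-side smoke test talking to the Solr instance inside
// the Vagrant box through the forwarded port.
public class SolrSmokeTest {

    public static void main(String[] args) throws Exception {
        HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr");

        QueryResponse response = solr.query(new SolrQuery("*:*"));
        System.out.println("Documents in index: " + response.getResults().getNumFound());
    }
}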

If we ran vagrant up now, we would have a running Ubuntu 12.04 LTS instance within seconds. Unfortunately, Ubuntu 12.04 doesn't come with Solr pre-installed, so there's some work left for us.

As the pre-defined "precise64" box doesn't fit our use case, we need to alter the environment a bit: we need to install Java and Apache Solr in our custom box. The process of adding or tweaking things in a box is called provisioning. The simplest way of implementing provisioning is to write shell scripts; a more advanced way would be to use tools such as Chef or Puppet.

We decided to use plain shell scripts and added a shell script called provision.sh to our Vagrantfile configuration:

Vagrant.configure("2") do |config|

  config.vm.provision :shell, :inline => "
    sh /vagrant/scripts/provision.sh;
  "

  # Defines the Vagrant box name, download URL, IP and hostname
  config.vm.define :vagrant do |vagrant|
    vagrant.vm.box = "precise64"
    vagrant.vm.box_url = "http://files.vagrantup.com/precise64.box"

    vagrant.vm.network :private_network, ip: "192.168.66.6"
    vagrant.vm.network "forwarded_port", guest: 8983, host: 8983

    vagrant.vm.hostname = "vagrant.dcl"
  end
end

The provision.sh shell-script defines all the commands to install Java and run the Solr instance:

#! /bin/bash

##### VARIABLES #####

# Throughout this script, some variables are used, these are defined first.
# These variables can be altered to fit your specific needs or preferences.

# Server name
HOSTNAME="vagrant.dcl"

# Locale
LOCALE_LANGUAGE="en_US" # can be altered to your preferred locale, see http://docs.moodle.org/dev/Table_of_locales
LOCALE_CODESET="en_US.UTF-8"

# Timezone
TIMEZONE="Europe/Paris" # can be altered to your specific timezone, see http://manpages.ubuntu.com/manpages/jaunty/man3/DateTime::TimeZone::Catalog.3pm.html

VM_IP_ADDRESS="192.168.66.6"

#----- end of configurable variables -----#


##### PROVISION CHECK ######

# The provision check is intended to not run the full provision script when a box has already been provisioned.
# At the end of this script, a file is created on the vagrant box, we'll check if it exists now.
echo "[vagrant provisioning] Checking if the box was already provisioned..."

if [ -e "/home/vagrant/.provision_check" ]
then
  # Skipping provisioning if the box is already provisioned
  echo "[vagrant provisioning] The box is already provisioned..."
  exit
fi


##### PROVISION SOLR #####
echo "[vagrant provisioning] Updating mirrors in sources.list"

# prepend "mirror" entries to sources.list to let apt-get use the most performant mirror
sudo sed -i -e '1ideb mirror://mirrors.ubuntu.com/mirrors.txt precise main restricted universe multiverse\ndeb mirror://mirrors.ubuntu.com/mirrors.txt precise-updates main restricted universe multiverse\ndeb mirror://mirrors.ubuntu.com/mirrors.txt precise-backports main restricted universe multiverse\ndeb mirror://mirrors.ubuntu.com/mirrors.txt precise-security main restricted universe multiverse\n' /etc/apt/sources.list
sudo apt-get update

echo "[vagrant provisioning] Installing Java..."
sudo apt-get -y install curl
sudo apt-get -y install python-software-properties # adds add-apt-repository

sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update

# automatic install of the Oracle JDK 7
echo oracle-java7-installer shared/accepted-oracle-license-v1-1 select true | sudo /usr/bin/debconf-set-selections

sudo apt-get -y install oracle-java7-set-default

export JAVA_HOME="/usr/lib/jvm/java-7-oracle/jre"

echo "[vagrant provisioning] Installing Apache Solr..."

sudo apt-get -y install unzip

curl -O http://tweedo.com/mirror/apache/lucene/solr/4.7.1/solr-4.7.1.zip
unzip solr-4.7.1.zip

rm solr-4.7.1.zip
cd solr-4.7.1/example/

# Solr startup
java -jar start.jar > /tmp/solr-server-log.txt &

echo "[vagrant provisioning] Bootstrapping Apache Solr..."

sleep 1
while ! grep -m1 'Registered new searcher' < /tmp/solr-server-log.txt; do
    sleep 1
done

echo "[vagrant provisioning] Apache Solr started. Index test data ..."

# Index some stuff
cd exampledocs/
java -jar post.jar solr.xml monitor.xml

##### CONFIGURATION #####

# Hostname
echo "[vagrant provisioning] Setting hostname..."
sudo hostname $HOSTNAME

##### CLEAN UP #####

sudo dpkg --configure -a # when upgrade or install doesn't run well (e.g. loss of connection) this may resolve quite a few issues
apt-get autoremove -y # remove obsolete packages

##### PROVISION CHECK #####

# Create .provision_check for the script to check on during the next vagrant up.
echo "[vagrant provisioning] Creating .provision_check file..."
touch .provision_check

The provisioning process starts with the definition of some variables and the check for the .provision_check file. This file is touched once the provisioning process has run through successfully. All files within the directory containing the Vagrantfile are available in the guest at /vagrant, and during provisioning all file operations run relative to the home directory /home/vagrant. Vagrant allows configuring more of these so-called synced folders, but for our purposes the defaults were perfectly fine.

After the file check, sources.list will be updated and the following lines will be added at the beginning of the file:

sudo sed -i -e '1ideb mirror://mirrors.ubuntu.com/mirrors.txt precise main restricted universe multiverse\ndeb mirror://mirrors.ubuntu.com/mirrors.txt precise-updates main restricted universe multiverse\ndeb mirror://mirrors.ubuntu.com/mirrors.txt precise-backports main restricted universe multiverse\ndeb mirror://mirrors.ubuntu.com/mirrors.txt precise-security main restricted universe multiverse\n' /etc/apt/sources.list
sudo apt-get update

The deb mirror:... entries are used to optimize the apt-get mirror selection. This results in the fastest available mirror being chosen without hard-coding any local specifics.

The next lines are pretty straightforward. The ppa:webupd8team/java repository is used to fetch Oracle JDK 7u51. As the JDK installer comes with a modal confirmation dialog, the following line is needed:

echo oracle-java7-installer shared/accepted-oracle-license-v1-1 select true | sudo /usr/bin/debconf-set-selections

This quietly confirms and installs the JDK automatically.

Next up is the Apache Solr installation. The Solr 4.7.1 zip is downloaded and extracted into the home directory. Afterwards, the example configuration is run with java -jar start.jar, which starts a Jetty instance. The script uses the following code to detect when bootstrapping is done:

sleep 1
while ! grep -m1 'Registered new searcher' < /tmp/solr-server-log.txt; do
    sleep 1
done

Once the message "Registered new searcher" appears, we can safely assume the Solr instance has started.

Solr's post.jar can be used to add documents to the index. In this script we simply add two of the example documents from the exampledocs directory.

Conclusion

And that's it.

With this configuration the development environment can be started via vagrant up. The machine can be stopped with vagrant halt, and vagrant suspend/vagrant resume let us pause it and pick up where we left off.

What's really cool about this is that once we commit our Vagrantfile and provision.sh to our git repository, every developer can check it out and run the vagrant command-line tool with it. The result is exactly the same development virtual machine, which can now be used to run our system tests.