
Grails, GPars and Hibernate

The GPars library is a great way to add concurrency-supporting constructs to your Groovy code. It equips Groovy projects with powerful concurrency concepts like parallel collections, map/reduce operations, actors and dataflow variables.

Adding the GPars dependency

GPars can also be added to Grails applications. In our case we have a Grails 2.2.5 application, so this article was not tested against any other version. Simply add GPars as a dependency in BuildConfig.groovy:


dependencies {
  compile 'org.codehaus.gpars:gpars:some_version_number' // we use 1.1.0 for our code
}

That is actually enough to have the GPars features enabled. Once included, you can access the parallel collection methods on the targeted collection classes, like:


GParsPool.withPool {
  [1, 2, 3, 4].findParallel { it == 3 }
}

You can see in the above example that the GPars convention is to use the original method name with Parallel appended. The GParsPool.withPool method has to be called in order to initialise the GPars thread pool and equip the collection classes with the parallel methods. An overview of all the available methods can be found in GParsExecutorsPoolEnhancer.
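
A few more of the enhanced methods, as a minimal sketch (the *Parallel methods are provided by GPars, the sample data is made up):


import groovyx.gpars.GParsPool

GParsPool.withPool {
  def words = ['ant', 'bee', 'cat']

  def lengths = words.collectParallel { it.size() } // parallel variant of collect
  def hasBee = words.anyParallel { it == 'bee' }    // parallel variant of any
  words.eachParallel { println it }                 // parallel variant of each
}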

GPars and Hibernate

Our requirement was to speed up a Quartz job iterating over all our customers - which are all Hibernate entities. As the customers could easily be grouped into different sets, the preconditions for fork/join processing were met. But the question was how to enable Hibernate read-only processing inside the eachParallel GPars method, as we wanted to do something like:


differentCustomerGroups.eachParallel { CustomerGroup customerGroup ->
  // process all customers of the given customer group
  // IN THE CURRENT HIBERNATE SESSION
}

As every Hibernate session is bound to the current thread, it was necessary to create a new session for each thread created by GPars, attach it, and close it once processing was done.

The key to enabling this (in Grails 2.2.5 at least), was to use the persistenceInterceptor bean that can be injected into any Grails artefact:


def persistenceInterceptor

It implements the PersistenceContextInterceptor interface, which can be used to initialise and destroy the current persistence context. In the case of its Hibernate implementation, HibernatePersistenceContextInterceptor, the persistence context is the current Hibernate session. Thus, the persistence context interceptor bean can be used to initialise and destroy the session in our closure. The eachParallel closure uses the bean like this (a pattern that can also be found in other places in Grails, by the way):


differentCustomerGroups.eachParallel { CustomerGroup customerGroup ->
  // init the persistence context
  persistenceInterceptor.init()

  try {
    int offset = 0
    def customers = Customer.executeQuery("select c from Customer c where c.customerGroup = ?", [customerGroup], [readOnly: true, max: 100, offset: offset])

    // loop over customers till all are processed ...

    // flush the context
    persistenceInterceptor.flush()
  } finally {
    // destroy the context and release resources
    persistenceInterceptor.destroy()
  }
}
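
The elided paging loop could look roughly like this (a sketch; process() stands for a hypothetical per-customer method and the batch size of 100 matches the query above):


int offset = 0
while (true) {
  def customers = Customer.executeQuery("select c from Customer c where c.customerGroup = ?", [customerGroup], [readOnly: true, max: 100, offset: offset])
  if (!customers) {
    break
  }
  customers.each { customer -> process(customer) } // process(): hypothetical per-customer work
  offset += customers.size()
}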

This is effectively enough to create and destroy a new Hibernate session to be used by the GPars code. Note that we also had to disable the automatic Hibernate session creation done by Quartz by specifying the def sessionRequired = false property in the job class:


class CustomerJob {
  def concurrent = false
  def sessionRequired = false // do not create a Hibernate session on job startup

  def execute() {
    // ...
  }
}

Another thing worth noting is the readOnly option that was given to the executeQuery method. It disables snapshotting of entities, which was possible in our case since the customer instances themselves were not modified.

Conclusion

GPars is a library that adds concurrency constructs to your classes and also provides concurrency concepts like actors, agents and dataflow variables. This article showed how to handle the Hibernate session in code that is concurrently executed by GPars. The persistence context interceptor is a class provided by Grails that enables setting up and destroying the persistence context in arbitrary places. Note that this article is based on Grails 2.2.5.

Grails: Reconnecting JDBC Connections

At our company we are utilising the well-known Quartz library and Grails plugin for our batch jobs in Grails applications. When going through the server log files of our production server I lately came across this error:


org.apache.tomcat.jdbc.pool.ConnectionPool abandon
WARNING: Connection has been abandoned PooledConnection[com.mysql.jdbc.JDBC4Connection@650599cb]:java.lang.Exception
        at org.apache.tomcat.jdbc.pool.ConnectionPool.getThreadDump(ConnectionPool.java:967)
        at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:721)
        at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:579)
        at org.apache.tomcat.jdbc.pool.ConnectionPool.getConnection(ConnectionPool.java:174)
        at org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:111)
        at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:111)
        at org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy$TransactionAwareInvocationHandler.invoke(TransactionAwareDataSourceProxy.java:224)
        ...
        at grails.plugins.quartz.GrailsJobFactory$GrailsJob.execute(GrailsJobFactory.java:104)
        at org.quartz.Job$execute.call(Unknown Source)
        at grails.plugins.quartz.QuartzDisplayJob.execute(QuartzDisplayJob.groovy:29)
        at org.quartz.core.JobRunShell.run(JobRunShell.java:207)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:560)

The beginning of the stack-trace showed that the exception was thrown by a Quartz worker thread - so how come the current DB connection was abandoned?

I investigated the connection pool settings and found the following in the Tomcat connection pool configuration:


removeAbandoned="true"
removeAbandonedTimeout="3600"

The abandoned timeout was set to 3600 seconds, i.e. one hour. The batch job took over one hour, so the connection pool abandoned the connection, which led to the error above.
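
For reference, when using the default Tomcat JDBC pool in a Grails application, these settings typically live in the properties block of DataSource.groovy (a sketch; driver and credentials omitted):


dataSource {
  // driver, url, username, password, ...
  properties {
    removeAbandoned = true
    removeAbandonedTimeout = 3600 // seconds a borrowed connection may stay out before being abandoned
  }
}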

A Session in a Quartz Job

As long as the property sessionRequired is not explicitly set to false in a Grails Quartz job class, the Quartz plugin will create a Hibernate session that is bound to the Quartz worker thread. This is done by the SessionBinderJobListener that comes with the Quartz plugin; it uses the current persistence context interceptor of type HibernatePersistenceContextInterceptor and is only enabled when Hibernate is in use as the general data store.

For long-running Quartz jobs the binding to a Hibernate session is bad, as it means the session holds a JDBC connection from the connection pool for the lifetime of the session (starting once the session first has to use the connection to the database).

When skimming through the documentation for org.hibernate.Session, the disconnect() method caught my attention. The documentation mentions:

Disconnect the Session from the current JDBC connection. If the connection was obtained by Hibernate close it and return it to the connection pool; otherwise, return it to the application. This is used by applications which supply JDBC connections to Hibernate and which require long-sessions (or long-conversations)


And it adds:

Note that disconnect() called on a session where the connection was retrieved by Hibernate through its configured org.hibernate.connection.ConnectionProvider has no effect, provided ConnectionReleaseMode.ON_CLOSE is not in effect.


So this means that the DB connection used by the session has to be supplied by the application code in order to make use of disconnect(). In our application this was the case, so we could go in that direction to solve the issue.

Refreshing a Session

So the goal was to rewrite the long-running batch jobs so that they disconnect and close the JDBC connection from time to time. This releases the connection and puts it back into the pool; connection abandonment as seen above can then no longer happen, as long as the time span between disconnects stays smaller than an hour.

The first thing we did in the Quartz job was to disable the automatic Hibernate session creation via the sessionRequired property:


class SomeJob {

  static triggers = {
      // ...
  }

  def concurrent = false
  def sessionRequired = false

  GrailsApplication grailsApplication
  SessionFactory sessionFactory
  DataSource dataSource

  // ...
}

In order to get a connection from the DB pool we injected the javax.sql.DataSource and rewrote the code so that it would use a manually created Hibernate session (instead of the one provided by Grails):


def session = sessionFactory.openSession(dataSource.connection)

All the HQL queries and Hibernate operations were rewritten to run over this particular session.
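
As a sketch, a GORM-style query then becomes a query on the opened session (Customer and the HQL string are stand-ins for our actual entities):


def customers = session.createQuery('from Customer c where c.customerGroup = :group')
    .setParameter('group', customerGroup)
    .setReadOnly(true) // no snapshotting, as with the GORM readOnly option above
    .list()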

Keeping the session open for the entire lifetime of the job implies flushing and clearing the session from time to time to avoid running into performance or out-of-memory issues. As suggested in the Hibernate documentation, the session is flushed after a certain number of processed entities:


protected void refreshJDBCConnection(Session session) {
    session.flush()
    session.clear()

    // ...
}

Exactly at this place in the code, we added the code that takes the current connection, closes it, and returns it to the database connection pool:


protected void refreshJDBCConnection(Session session) {
    session.flush()
    session.clear()

    // we need to disconnect and get a new DB connection here
    def connection = session.disconnect()
    connection.close()
    
    session.reconnect(dataSource.connection)
}

The code above gets the JDBC connection out of the session, closes it, and then uses the dataSource to obtain a new database connection that is reconnected with the existing session. With this we can keep the abandoned-connection setting and still ensure that the job is not aborted after the configured abandonment time.
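
Putting it together, the job's execute method might look roughly like this (a sketch; loadEntities() and process() are hypothetical helpers, and refreshing every 100 entities is an example pace):


def execute() {
  Session session = sessionFactory.openSession(dataSource.connection)
  try {
    int processed = 0
    for (entity in loadEntities(session)) { // loadEntities(): hypothetical helper returning the entities to process
      process(entity)                       // process(): hypothetical per-entity work
      if (++processed % 100 == 0) {
        refreshJDBCConnection(session)      // flush, clear and swap the JDBC connection as shown above
      }
    }
  } finally {
    session.close()
  }
}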

Conclusion

Connection pools allow abandoning database connections: after a certain time, a DB connection can be forced back into the connection pool, no matter whether it is still in use by the application or not. With Grails Quartz jobs this feature can cause problems in combination with long-running jobs that exceed the configured abandonment time. This article showed how to disconnect and reconnect a Hibernate session so that it can be kept open during the lifetime of such long-running jobs.

Spock Quick-Tip: Grails Integration Tests

I'm currently working in a project with a nearly six-year-old Grails application. Over time, we changed from plain JUnit 4/GroovyTestCase unit and integration tests to Spock tests. Spock is a great library for writing both unit and integration tests. Normally we tend to write unit tests whenever possible, but there are cases (and code parts) where it is more practical to rely on a fully initialised application context and (Hibernate) data-store.

Integration Tests with Spock

The Spock Grails plugin comes with a distinct base class for Grails integration tests: IntegrationSpec.

IntegrationSpec initialises the application context, sets up an autowirer that autowires the bean properties of the specification class, and creates a transactional boundary around every feature method in the specification.

All our Spock integration tests extend IntegrationSpec.

Mocking with Groovy's meta-classes

One thing I love about Spock is that it comes out-of-the-box with great support for mocking and stubbing. But there are times when you actually need to stub certain parts of the Grails artifact that is currently under test by your integration test.

We do this with the help of Groovy's Meta Object Protocol (MOP), that is, by altering the underlying meta-class. The next example shows how getCurrentUser is overridden, as we want to stub out the Spring Security part of the StatisticsService.


class StatisticsServiceIntegrationTest extends IntegrationSpec {
    
    StatisticsService statisticsService

    void "count login to private area"() {

        setup:
            def user = new User(statistics: new UserStatistics())
            statisticsService.metaClass.getCurrentUser = { -> user }

        when:
            statisticsService.countLoginPA()

        then:
            user.statistics.logins == 1

    }    
}

Altering classes at runtime is a nice feature, but it can also become confusing when you don't know about the side effects it may cause. In integration tests, changes to the meta-class won't be reset, so once you make changes to a meta-class (we are working with per-instance meta-class changes here; the same is even more true for global meta-class changes) those persist through the entire test run.

To solve that, we added a helper method that allows revoking meta-class changes in between test runs:


public static void revokeMetaClassChanges(Class type, def instance = null)  {
    GroovySystem.metaClassRegistry.removeMetaClass(type)
    if (instance != null)  {
        instance.metaClass = null
    }
}

And applied it like this:


class StatisticsServiceIntegrationTest extends IntegrationSpec {
    
    StatisticsService statisticsService

    void "count login to private area"() {

        setup:
            def user = new User(statistics: new UserStatistics())
            statisticsService.metaClass.getCurrentUser = { -> user }

        when:
            statisticsService.countLoginPA()

        then:
            user.statistics.logins == 1

        cleanup:
            revokeMetaClassChanges(StatisticsService, statisticsService)

    }    
}

This actually resets the meta-class, so the service class is unaltered again when the next feature method is executed.

Be warned.

Meta-class overriding can become tricky. One thing we came across multiple times is that you can't replace super-class methods that are called from other super-class methods. Here is a simplified example:


class A {
    def a() {
        a2()
    }

    def a2() {
        println 'In class A'
    }
}

class B extends A {
    def b() {
        a()
    }
}

B b = new B()

b.metaClass.a2 = {
    println 'In class B'
}

b.b() // still prints 'In class A'

If we wanted to stub the implementation of a2 from our test code, this wouldn't work, as a and a2 are implemented in the same class A and therefore the method call won't be intercepted by a per-instance change to instance b. This might seem obvious now, but we had a hard time tracking it down.

If you start to experience weird issues of tests failing when you run the entire test suite, but being green when executed separately, it almost certainly has to do with meta-class rewriting that isn't undone between feature methods or even specifications. Just be aware of that.

@ConfineMetaClassChanges

Lately I became aware that our revokeMetaClassChanges is actually "part" of Spock, in the form of the @ConfineMetaClassChanges extension annotation.

The code behind it works a bit differently, but the purpose is the same; it can be used on methods or classes to roll back meta-class changes declaratively:


@ConfineMetaClassChanges([StatisticsService])
class StatisticsServiceIntegrationTest extends IntegrationSpec {
    
    StatisticsService statisticsService

    void "count login to private area"() {

        setup:
            def user = new User(statistics: new UserStatistics())
            statisticsService.metaClass.getCurrentUser = { -> user }

        when:
            statisticsService.countLoginPA()

        then:
            user.statistics.logins == 1

    }    
}

Speaking of Spock extensions: it's definitely worth having a look at the chapter on Spock extensions in the documentation. There is lots of great stuff already available (and more coming in Spock 1.0).

Conclusion

Besides Spock's great mocking and stubbing capabilities, writing Grails integration tests often involves meta-class changes. This article showed how to roll back these changes to avoid side effects and explained the usage of @ConfineMetaClassChanges, a Spock extension annotation.

Grails - Tracking Principals

We use the Grails auto timestamp feature in nearly all of our domain classes. It basically allows the definition of two special domain class properties, dateCreated and lastUpdated, and automatically sets the creation and modification date whenever a domain object is inserted or updated.

In addition to dateCreated and lastUpdated we wanted a way to define two additional properties, userCreated and userUpdated, to save the principal who created, updated or deleted a domain object (deletion matters because we have audit log tables that track all table changes; when an entry is deleted and the principal was set beforehand, we can see who deleted it).

PersistenceEventListener

Grails provides the concept of GORM events, so we thought its implementation might be a good hint on how to implement our requirement for having userCreated and userUpdated. And indeed, we found DomainEventListener, a descendant class of AbstractPersistenceEventListener. It turns out that DomainEventListener is responsible for executing the GORM event hooks on domain object inserts, updates and deletes.

The event listener is registered with the application context, as the PersistenceListener interface (which is implemented by AbstractPersistenceEventListener) extends Spring's ApplicationListener and therefore actually uses the Spring event system.

In order to create a custom persistence listener, we just have to extend AbstractPersistenceEventListener and listen for the GORM events which are useful to us. Here is the implementation we ended up with:


@Log4j
class PrincipalPersistenceListener extends AbstractPersistenceEventListener {

    public static final String PROPERTY_PRINCIPAL_UPDATED = 'userUpdated'
    public static final String PROPERTY_PRINCIPAL_CREATED = 'userCreated'

    SpringSecurityService springSecurityService

    PrincipalPersistenceListener(Datastore datastore) {
        super(datastore)
    }

    @Override
    protected void onPersistenceEvent(AbstractPersistenceEvent event) {

        def entityObject = event.entityObject

        if (hasPrincipalProperty(entityObject)) {
            switch (event.eventType) {
                case EventType.PreInsert:
                    setPrincipalProperties(entityObject, true)
                    break

                case EventType.Validation:
                    setPrincipalProperties(entityObject, entityObject.id == null)
                    break

                case EventType.PreUpdate:
                    setPrincipalProperties(entityObject, false)
                    break

                case EventType.PreDelete:
                    setPrincipalProperties(entityObject, false)
                    break
            }
        }
    }

    protected boolean hasPrincipalProperty(def entityObject) {
        return entityObject.metaClass.hasProperty(entityObject, PROPERTY_PRINCIPAL_UPDATED) || entityObject.metaClass.hasProperty(entityObject, PROPERTY_PRINCIPAL_CREATED)
    }

    protected void setPrincipalProperties(def entityObject, boolean insert)  {
        def currentUser = springSecurityService.currentUser

        if (currentUser instanceof User) {
            def propertyUpdated = entityObject.metaClass.getMetaProperty(PROPERTY_PRINCIPAL_UPDATED)
            if (propertyUpdated != null)  {
                propertyUpdated.setProperty(entityObject, currentUser.uuid)
            }

            if (insert)  {
                def propertyCreated = entityObject.metaClass.getMetaProperty(PROPERTY_PRINCIPAL_CREATED)
                if (propertyCreated != null)  {
                    propertyCreated.setProperty(entityObject, currentUser.uuid)
                }
            }
        }
    }

    @Override
    boolean supportsEventType(Class eventType) {
        return eventType.isAssignableFrom(PreInsertEvent) ||
                eventType.isAssignableFrom(PreUpdateEvent) ||
                eventType.isAssignableFrom(PreDeleteEvent) ||
                eventType.isAssignableFrom(ValidationEvent)
    }
}

As you can see in the code above, the implementation intercepts the PreInsert, PreUpdate and PreDelete events, plus the Validation event, where the presence of an id decides whether the object is about to be inserted or updated. If any of these events is fired, the code checks the affected domain object for the existence of either the userCreated or userUpdated property. If available, it uses the springSecurityService to access the currently logged-in principal and stores its uuid property, as this is the unique identifier of our users in this application.

To register the PrincipalPersistenceListener and attach it to a Grails datastore, we need to add the following code to BootStrap.groovy:


def ctx = grailsApplication.mainContext
ctx.eventTriggeringInterceptor.datastores.each { key, datastore ->

    def listener = new PrincipalPersistenceListener(datastore)
    listener.springSecurityService = springSecurityService

    ctx.addApplicationListener(listener)
}

To make this work, the springSecurityService needs to be injected into BootStrap.groovy; the same is true for grailsApplication.
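
For clarity, a sketch of the surrounding BootStrap.groovy (only the injected properties and the init hook are shown):


class BootStrap {

  def grailsApplication      // injected by Grails
  def springSecurityService  // injected by Grails

  def init = { servletContext ->
    // ... listener registration code from above goes here ...
  }
}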

But that's all we have to do to support our new domain class properties userCreated and userUpdated. The last step is to add both properties to the domain class(es) we want to track.
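
A hypothetical domain class with all four tracked properties could look like this (assuming the principal's uuid is stored as a String; the nullable constraints are an assumption):


class Customer {

  Date dateCreated    // set automatically by Grails on insert
  Date lastUpdated    // set automatically by Grails on update

  String userCreated  // set by PrincipalPersistenceListener on insert
  String userUpdated  // set by PrincipalPersistenceListener on insert, update and delete

  static constraints = {
    userCreated nullable: true
    userUpdated nullable: true
  }
}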

Conclusion

Grails integrates with Spring's event mechanism and provides the AbstractPersistenceEventListener base class to listen to certain GORM events. Grails uses this mechanism internally, for example for the GORM event hooks, but it can of course be used by application logic too. This article showed how to introduce support for userCreated and userUpdated, which are similar to dateCreated and lastUpdated but store the principal who created, updated or deleted a domain object.

Google I/O App Insights

A while ago I came across the Google I/O app in one of the latest Android Developers blog posts. I thought it would be interesting to have a look at some of its internals, to gain some insight into how Android applications are developed at Google and which third-party libraries are actually used there.

The source code for the Google I/O app is available on GitHub.

build.gradle

My journey through the source code began with the settings.gradle file. It contains the information about the project's Gradle modules. This app consists of two modules: one for the wearable version and one for the Android phone/tablet version.
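
The settings.gradle content is tiny; roughly (the ':Wearable' module name is referenced from build.gradle below, the name of the phone/tablet module is an assumption here):


include ':android'
include ':Wearable'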

This article will not talk about the implementation of the wearable version; I will write a separate blog post for that.

I have to say that the Android version was of particular interest to me, so I went on with the Android module's build.gradle dependencies section, which holds all the external dependencies needed by the implementation:


dependencies {
    wearApp project(':Wearable')

    compile 'com.google.android.gms:play-services:5+' 
    compile 'com.android.support:support-v13:20.+'
    compile 'com.android.support:support-v4:20.+'
    compile 'com.google.android.apps.dashclock:dashclock-api:+'
    compile 'com.google.code.gson:gson:2.+'
    compile('com.google.api-client:google-api-client:1.+') {
        exclude group: 'xpp3', module: 'shared'
        exclude group: 'org.apache.httpcomponents', module: 'httpclient'
        exclude group: 'junit', module: 'junit'
        exclude group: 'com.google.android', module: 'android'
    }
    compile 'com.google.api-client:google-api-client-android:1.17.+'
    compile 'com.google.apis:google-api-services-plus:+'
    compile 'com.github.japgolly.android:svg-android:2.0.6'
    compile fileTree(dir: 'libs', include: '*.jar')
    compile files('../third_party/glide/library/libs/glide-3.2.0a.jar')
    compile files('../third_party/basic-http-client/libs/basic-http-client-android-0.88.jar')

    compile('com.google.maps.android:android-maps-utils:0.3+') {
        exclude group: "com.google.android.gms"
    }

    compile 'com.google.http-client:google-http-client-gson:+'
    compile 'com.google.apis:google-api-services-drive:+'
}

As you can see in the code snippet above, several external dependencies have been included.

Dashclock API

Let's start with the first dependency that gained my attention:


compile 'com.google.android.apps.dashclock:dashclock-api:+'

The Android DashClock project comes with an alternative lock screen clock widget implementation that can be used to show additional status items. Showing additional information on the lock screen is done by implementing so-called DashClockExtension descendant classes, as described in the DashClockExtension documentation. Although this API looked pretty interesting, I couldn't find any use of it in the Google I/O application, and removing it from the dependencies also worked, so I guess its use might have been planned but was never implemented.

GSON

Next up is Google's JSON library: Gson:


compile 'com.google.code.gson:gson:2.+'

The Google I/O app's main purpose is to give an overview of all the scheduled talks at Google I/O and also to allow some interaction for the user to give feedback about visited sessions. Gson is used to parse the JSON that comes from Google's web services and contains the entire conference data.

One particular piece of code that shows some Gson usage is the ConferenceDataHandler. This handler is basically responsible for parsing most of the JSON data that holds information about the scheduled conference sessions, speakers, etc. Instead of parsing the JSON content directly into an object tree, it registers "handlers" for every JSON property in a map:


mHandlerForKey.put(DATA_KEY_ROOMS, mRoomsHandler = new RoomsHandler(mContext));
mHandlerForKey.put(DATA_KEY_BLOCKS, mBlocksHandler = new BlocksHandler(mContext));
mHandlerForKey.put(DATA_KEY_TAGS, mTagsHandler = new TagsHandler(mContext));
mHandlerForKey.put(DATA_KEY_SPEAKERS, mSpeakersHandler = new SpeakersHandler(mContext));
mHandlerForKey.put(DATA_KEY_SESSIONS, mSessionsHandler = new SessionsHandler(mContext));
mHandlerForKey.put(DATA_KEY_SEARCH_SUGGESTIONS, mSearchSuggestHandler = new SearchSuggestHandler(mContext));
mHandlerForKey.put(DATA_KEY_MAP, mMapPropertyHandler = new MapPropertyHandler(mContext));
mHandlerForKey.put(DATA_KEY_EXPERTS, mExpertsHandler = new ExpertsHandler(mContext));
mHandlerForKey.put(DATA_KEY_HASHTAGS, mHashtagsHandler = new HashtagsHandler(mContext));
mHandlerForKey.put(DATA_KEY_VIDEOS, mVideosHandler = new VideosHandler(mContext));
mHandlerForKey.put(DATA_KEY_PARTNERS, mPartnersHandler = new PartnersHandler(mContext));

With the registered handlers set up, it parses the JSON response body property by property in processDataBody:


private void processDataBody(String dataBody) throws IOException {
    JsonReader reader = new JsonReader(new StringReader(dataBody));
    JsonParser parser = new JsonParser();
    try {
        reader.setLenient(true); // To err is human

        // the whole file is a single JSON object
        reader.beginObject();

        while (reader.hasNext()) {
            // the key is "rooms", "speakers", "tracks", etc.
            String key = reader.nextName();
            if (mHandlerForKey.containsKey(key)) {
                // pass the value to the corresponding handler
                mHandlerForKey.get(key).process(parser.parse(reader));
            } else {
                LOGW(TAG, "Skipping unknown key in conference data json: " + key);
                reader.skipValue();
            }
        }
        reader.endObject();
    } finally {
        reader.close();
    }
}

When we have a look at one of the handler classes, say SessionsHandler, we see that it not only encapsulates the code for parsing the session JSON objects, but also code for building so-called "content provider operations". The ContentProviderOperation class from the Android SDK is used to build content provider actions such as inserting, updating or deleting entities stored by a content provider. The handler classes provide methods to directly create content provider operations based on the current state of an entity: e.g. if a session is new, needs to be updated or deleted, the handler's makeContentProviderOperations method will create the appropriate operation.

Let's now have a look at how the JSON parsing is actually done in the SessionsHandler:


@Override
public void process(JsonElement element) {
    for (Session session : new Gson().fromJson(element, Session[].class)) {
        mSessions.put(session.id, session);
    }
}

The code is quite slick. It uses an array of Session model classes as the Gson target type, and Gson will create the instances and populate the available properties from the JSON values:


public class Session {
    public String id;
    public String url;
    public String description;
    public String title;
    public String[] tags;
    public String startTimestamp;
    public String youtubeUrl;
    public String[] speakers;
    public String endTimestamp;
    public String hashtag;
    public String subtype;
    public String room;
    public String captionsUrl;
    public String photoUrl;
    public boolean isLivestream;
    public String mainTag;
    public String color;
    public String relatedContent;
    public int groupingOrder;

    public String getImportHashCode() {
        StringBuilder sb = new StringBuilder();
        sb.append("id").append(id == null ? "" : id)
                .append("description").append(description == null ? "" : description)
                .append("title").append(title == null ? "" : title)
                .append("url").append(url == null ? "" : url)
                .append("startTimestamp").append(startTimestamp == null ? "" : startTimestamp)
                .append("endTimestamp").append(endTimestamp == null ? "" : endTimestamp)
                .append("youtubeUrl").append(youtubeUrl == null ? "" : youtubeUrl)
                .append("subtype").append(subtype == null ? "" : subtype)
                .append("room").append(room == null ? "" : room)
                .append("hashtag").append(hashtag == null ? "" : hashtag)
                .append("isLivestream").append(isLivestream ? "true" : "false")
                .append("mainTag").append(mainTag)
                .append("captionsUrl").append(captionsUrl)
                .append("photoUrl").append(photoUrl)
                .append("relatedContent").append(relatedContent)
                .append("color").append(color)
                .append("groupingOrder").append(groupingOrder);
        for (String tag : tags) {
            sb.append("tag").append(tag);
        }
        for (String speaker : speakers) {
            sb.append("speaker").append(speaker);
        }
        return HashUtils.computeWeakHash(sb.toString());
    }

    public String makeTagsList() {
        int i;
        if (tags.length == 0) return "";
        StringBuilder sb = new StringBuilder();
        sb.append(tags[0]);
        for (i = 1; i < tags.length; i++) {
            sb.append(",").append(tags[i]);
        }
        return sb.toString();
    }

    public boolean hasTag(String tag) {
        for (String myTag : tags) {
            if (myTag.equals(tag)) {
                return true;
            }
        }
        return false;
    }
}

What's interesting about this class (and the other model classes) is the getImportHashCode method. It is needed to detect changes to already processed entities and is one of the main methods used by the data sync logic implemented in the SyncAdapter.

google-api-client

Next up in our list of dependencies is the Google APIs client library and its Android extension. Both libraries are used in conjunction with the Google Plus API from the next dependency


compile 'com.google.apis:google-api-services-plus:+'

to fetch the latest announcements via the AnnouncementsFetcher class. Once the announcements are fetched from the Google+ profile, they are stored by the content provider ScheduleProvider:


Plus plus = new Plus.Builder(httpTransport, jsonFactory, null)
        .setApplicationName(NetUtils.getUserAgent(mContext))
        .setGoogleClientRequestInitializer(
                new CommonGoogleClientRequestInitializer(Config.API_KEY))
        .build();

ActivityFeed activities;
try {
    activities = plus.activities().list(Config.ANNOUNCEMENTS_PLUS_ID, "public")
            .setMaxResults(100l)
            .execute();
    if (activities == null || activities.getItems() == null) {
        throw new IOException("Activities list was null.");
    }

} catch (IOException e) {
    LOGE(TAG, "Error fetching announcements", e);
    return batch;
}

// ...

StringBuilder sb = new StringBuilder();
for (Activity activity : activities.getItems()) {
    // ...

    // Insert announcement info
    batch.add(ContentProviderOperation
            .newInsert(ScheduleContract
                    .addCallerIsSyncAdapterParameter(Announcements.CONTENT_URI))
            .withValue(SyncColumns.UPDATED, System.currentTimeMillis())
            .withValue(Announcements.ANNOUNCEMENT_ID, activity.getId())
            .withValue(Announcements.ANNOUNCEMENT_DATE, activity.getUpdated().getValue())
            .withValue(Announcements.ANNOUNCEMENT_TITLE, activity.getTitle())
            .withValue(Announcements.ANNOUNCEMENT_ACTIVITY_JSON, activity.toPrettyString())
            .withValue(Announcements.ANNOUNCEMENT_URL, activity.getUrl())
            .build());
}

Again, the ContentProviderOperation builder methods are used to create the appropriate operations and return them to the calling class.

Android SVG

Next up is a very interesting dependency: the Android SVG library:


compile 'com.github.japgolly.android:svg-android:2.0.6'

The SVG Android project adds support for showing scalable vector graphic files in an Android application. In the Google I/O application it is used to show the location of different floors in the Google I/O venue.

One place to have a look at SVG processing is the ConferenceDataHandler implementation, again, a handler class:


private void processMapOverlayFiles(Collection collection, boolean downloadAllowed) throws IOException, SVGParseException {
    boolean shouldClearCache = false;
    ArrayList usedTiles = Lists.newArrayList();
    
    for (Tile tile : collection) {
        final String filename = tile.filename;
        final String url = tile.url;

        usedTiles.add(filename);

        if (!MapUtils.hasTile(mContext, filename)) {
            shouldClearCache = true;
            
            if (MapUtils.hasTileAsset(mContext, filename)) {
                
                MapUtils.copyTileAsset(mContext, filename);

            } else if (downloadAllowed && !TextUtils.isEmpty(url)) {
                try {
                    // download the file only if downloads are allowed and url is not empty
                    File tileFile = MapUtils.getTileFile(mContext, filename);
                    BasicHttpClient httpClient = new BasicHttpClient();
                    httpClient.setRequestLogger(mQuietLogger);
                    HttpResponse httpResponse = httpClient.get(url, null);
                    FileUtils.writeFile(httpResponse.getBody(), tileFile);

                    // ensure the file is valid SVG
                    InputStream is = new FileInputStream(tileFile);
                    SVG svg = new SVGBuilder().readFromInputStream(is).build();
                    is.close();
                } catch (IOException ex) {
                    LOGE(TAG, "FAILED downloading map overlay tile "+url+
                            ": " + ex.getMessage(), ex);
                } catch (SVGParseException ex) {
                    LOGE(TAG, "FAILED parsing map overlay tile "+url+
                            ": " + ex.getMessage(), ex);
                }
            } else {
                LOGD(TAG, "Skipping download of map overlay tile" +
                        " (since downloadsAllowed=false)");
            }
        }
    }

    if (shouldClearCache) {
        MapUtils.clearDiskCache(mContext);
    }

    MapUtils.removeUnusedTiles(mContext, usedTiles);
}

The code checks whether the SVG graphic is available in the APK's asset directory. If so, it copies the file to a custom directory. If not, it downloads the SVG and uses the svg-android library to validate that it is a well-formed SVG graphic.

The main place where the SVG graphics are used later on is the MapFragment implementation. It uses a TileOverlay and registers multiple TileProvider implementations of type SVGTileProvider. The SVGTileProvider uses the previously shown SVGBuilder to draw the currently shown floor onto the map.


public SVGTileProvider(File file, float dpi) throws IOException {
    // ...

    SVG svg = new SVGBuilder().readFromInputStream(new FileInputStream(file)).build();
    mSvgPicture = svg.getPicture();
    
    // ...
}

// later on when drawing:

public byte[] getTileImageData(int x, int y, int zoom) {
    mStream.reset();

    Matrix matrix = new Matrix(mBaseMatrix);
    float scale = (float) (Math.pow(2, zoom) * mScale);
    matrix.postScale(scale, scale);
    matrix.postTranslate(-x * mDimension, -y * mDimension);

    mBitmap.eraseColor(Color.TRANSPARENT);
    Canvas c = new Canvas(mBitmap);
    c.setMatrix(matrix);

    // NOTE: Picture is not thread-safe.
    synchronized (mSvgPicture) {
        mSvgPicture.draw(c);
    }

    BufferedOutputStream stream = new BufferedOutputStream(mStream);
    mBitmap.compress(Bitmap.CompressFormat.PNG, 0, stream);
    try {
        stream.close();
    } catch (IOException e) {
        Log.e(TAG, "Error while closing tile byte stream.");
        e.printStackTrace();
    }
    return mStream.toByteArray();
}

As can be seen in the code above, the getTileImageData method applies some scaling and translating, but in the end it draws mSvgPicture onto a newly created Canvas and writes the result to the ByteArrayOutputStream. To speed up the creation of the tile graphics, the CachedTileProvider implementation uses a disk LRU cache to cache results on disk.

I found it very refreshing to see an application of the svg-android library in action. It's definitely an implementation option to keep in mind for future Android apps.

Glide

Another third party library in use is Glide:


compile files('../third_party/glide/library/libs/glide-3.2.0a.jar')

Glide is an image loading and caching library that comes with extensions to other commonly used libraries such as OkHttp and Volley. In the Google I/O application the Glide API is encapsulated in the ImageLoader class.

One interesting detail in this class is the VariableWidthImageLoader implementation:


// ...
private static final Pattern PATTERN = Pattern.compile("__w-((?:-?\\d+)+)__");
// ...

@Override
protected String getUrl(String model, int width, int height) {
    Matcher m = PATTERN.matcher(model);
    int bestBucket = 0;
    if (m.find()) {
        String[] found = m.group(1).split("-");
        for (String bucketStr : found) {
            bestBucket = Integer.parseInt(bucketStr);
            if (bestBucket >= width) {
                // the best bucket is the first immediately bigger than the requested width
                break;
            }
        }
        if (bestBucket > 0) {
            model = m.replaceFirst("w"+bestBucket);
            LOGD(TAG, "width="+width+", URL successfully replaced by "+model);
        }
    }
    return model;
}

The VariableWidthImageLoader is used by Glide to return a customised URL for a given width and height. The implementation above looks for a width indicator in the current URL (think of model as being a URL to an image) that might look like __w-200-400-800__. If this indicator is present, it replaces it with w<bestBucket>, where the best bucket is the first width in the list that is at least as large as the requested one; for a requested width of 300, the URL above would end up containing w400.

We used a similar pattern in our applications for image URLs (though with a width request parameter), but I wasn't aware that Glide provides such a nice API to inject this behaviour.

Basic HTTP Client

Of course, the basic HTTP client implementation must not go unmentioned. It is needed to execute the actual HTTP requests, for example in the RemoteConferenceDataFetcher, which fetches the JSON content from Google's servers. In fact, it first fetches only a so-called manifest file and, based on that manifest, checks whether the data has changed. A detailed explanation of the actual synchronisation of the conference data can be found on the Android Developers blog.

Conclusion

This article took a look at some places in the Google I/O Android application and showed some of the third-party libraries in use. The application has been open-sourced on GitHub and is available under the Apache license.