First go at creating a Garmin watch face

For a while Garmin have been adding “wearable technology” functionality to their watches - notifications from the phone etc. I got a Garmin Fenix 3 for Christmas and had a play with the vast number of watch faces and widgets that were available.

The recent updates have added some great functionality, and I ended up settling on the out-of-the-box digital face as my everyday watch face, but I still wanted to have a go at creating my own watch face.

### Monkey C

The programming language used to develop faces, apps, widgets and data fields is called Monkey C, and the intended IDE is Eclipse.

It’s almost certainly worth your time looking at the Garmin developer site to get the full overview of what’s required, but I’m covering the headlines below.

#### Getting the SDK

First you need to download the Garmin SDK from here. I’m using a MacBook so I’ve dropped it in /usr/local/garmin-sdk.

#### Eclipse Add-in

If you look at the getting started pages here you’ll find all the information about adding the Eclipse add-in, which gives you the project templates and IntelliSense as well as access to some richer wizards for interacting with the simulator.

I’m not very comfortable using Eclipse, so I was pleased to see that there is an IntelliJ plugin that someone had started.

#### IntelliJ Plug-in

The IntelliJ plugin is added in the normal way; the details about it are here.

At the time of writing it was fairly basic and didn’t have many features, so I had to rely on the API docs to get the method signatures of everything I needed to use.

#### API Docs

The API docs are pretty good for getting the gist of what you want to do. You need to know what you’re trying to achieve and then check whether the API supports it, but that’s no different to any API I guess.

### My First Watch Face

And so to my first watch face. I wanted a clean digital watch without the clutter of bars and tickers, but I did want to see my relative step progress and the current battery level - I’ve been caught out a few too many times.

My understanding from the docs is that in low power mode your watch face updates once a minute, until the gesture of looking at the watch is detected, at which point it updates every second. There are some methods which are called during this state change, but I didn’t have any need for them.
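For reference, those hooks are onExitSleep() and onEnterSleep() on the WatchFace class. A minimal sketch of overriding them would look something like this - the requestUpdate() call is just my assumption of a sensible final redraw, not something the docs require:

// called when the wrist-raise gesture is detected - onUpdate() now fires every second
function onExitSleep() {
}

// called when the watch drops back to low power - onUpdate() returns to once a minute
function onEnterSleep() {
    Ui.requestUpdate(); // request one last redraw before sleeping (my assumption)
}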

I’m loading the components programmatically, so I don’t need much in my layouts file:

<layout id="WatchFace">
</layout>

Your watch face must extend the Ui.WatchFace class:

using Toybox.WatchUi as Ui;
using Toybox.Graphics as Gfx;
using Toybox.System as Sys;
using Toybox.Lang as Lang;
using Toybox.Time as Time;
using Toybox.ActivityMonitor as ActivityMonitor;

class FirstwatchfaceView extends Ui.WatchFace {

    function initialize() {
        WatchFace.initialize();
    }

    function onLayout(dc) {
        setLayout(Rez.Layouts.WatchFace(dc));
    }

    function onUpdate(dc) {
        // clear the display
        dc.clear();
        var font = Gfx.FONT_NUMBER_THAI_HOT;
        // get the info needed
        var activity = ActivityMonitor.getInfo();
        var stats = Sys.getSystemStats();
        var clockTime = Sys.getClockTime();
        var today = Time.today();
        var dateInfo = Time.Gregorian.info(today, Time.FORMAT_MEDIUM);
        var timeString = Lang.format("$1$ $2$", [clockTime.hour, clockTime.min.format("%02d")]);
        // get the text size to work out where to position it
        var textDim = dc.getTextDimensions(timeString, font);
        var x = dc.getWidth() / 2;
        var y = (dc.getHeight() / 2) - (textDim[1] / 2);
        var stepsX = x - (textDim[0] / 2);
        var batteryPercent = Lang.format("$1$%", [stats.battery.format("%02d")]);
        var date = Lang.format("$1$ $2$", [dateInfo.day, dateInfo.month]);
        var percent = (activity.steps * 100) / activity.stepGoal;
        // set the whole screen black
        dc.setColor(Gfx.COLOR_BLACK, Gfx.COLOR_BLACK);
        dc.fillRectangle(0, 0, dc.getWidth(), dc.getHeight());
        if (percent > 100) {
            // step goal hit - draw the time in green
            dc.setColor(Gfx.COLOR_GREEN, Gfx.COLOR_BLACK);
            dc.drawText(x, y, font, timeString, Gfx.TEXT_JUSTIFY_CENTER);
        } else {
            // fill the completed portion in red and the remainder in white,
            // then draw the time over the top
            dc.setColor(Gfx.COLOR_RED, Gfx.COLOR_BLACK);
            dc.fillRectangle(stepsX, y + 1, min(percent, textDim[0]), textDim[1]);
            dc.setColor(Gfx.COLOR_WHITE, Gfx.COLOR_BLACK);
            dc.fillRectangle(stepsX + percent, y + 1, textDim[0] - percent, textDim[1]);
            dc.setColor(Gfx.COLOR_TRANSPARENT, Gfx.COLOR_BLACK);
            dc.drawText(x, y, font, timeString, Gfx.TEXT_JUSTIFY_CENTER);
        }
        // date at the top, battery at the bottom
        dc.setColor(Gfx.COLOR_WHITE, Gfx.COLOR_BLACK);
        dc.drawText(x, dc.getHeight() - 10 - dc.getFontHeight(Gfx.FONT_TINY), Gfx.FONT_TINY, batteryPercent, Gfx.TEXT_JUSTIFY_CENTER);
        dc.drawText(x, 10 + dc.getFontHeight(Gfx.FONT_TINY), Gfx.FONT_TINY, date, Gfx.TEXT_JUSTIFY_CENTER);
    }

    function min(a, b) {
        if (a > b) {
            return b;
        }
        return a;
    }
}

Running in IntelliJ for me is a case of Shift+F10 and the Run Configuration loads the simulator.

In the simulator you can set the levels of activity and change properties of the device such as battery status and GPS etc.

Daily steps in progress

Daily steps completed


Scala eXchange 2015 - Embracing the community

Today I’m at my first Scala eXchange conference - the 5th annual one to be precise.

The day started with the keynote session by @jessitron which was both inspiring and enlightening. The general tone being that code should be clear and useful so that others can learn.

She drew on examples where opaque libraries with poor documentation or overly simplified examples limited accessibility for the new user. Words like “Simply”, “Just”, “Obviously” and “Clearly” are the scourge of documentation and, for the most part, lazy.

Contribution to any project should be welcomed; it’s an active attempt to say “hey, I’m here to help and I’m interested”. Even if the contribution isn’t entirely helpful or needs polishing it should still be embraced and encouraged.

I’ve mumbled for years about how to get involved with a project, assuming that I needed to offer huge value in the first pull request - Jessica has helped me see that the first step is to just do something.

If only the second speaker had been listening, maybe his talk would have been less obscure and contained less usage of the words “Simply” and “Just”.


Unit testing HDFS code

I need to write a couple of unit tests for some code that adds a log entry into HDFS, but I don’t want to rely on having access to a full-blown HDFS cluster or a local install to achieve this.

The MiniDFSCluster in org.apache.hadoop:hadoop-hdfs can be used to create a quick clustered file system for testing.

The following dependencies are required for the test to work.

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.6.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <type>test-jar</type>
    <version>2.6.0</version>
    <scope>test</scope>
</dependency>

The code is reasonably simple; I’m creating the cluster in the test setup and tearing it down during the teardown phase of the tests:

private MiniDFSCluster cluster;
private String hdfsURI;

public void setUp() throws Exception {
    super.setUp();
    Configuration conf = new Configuration();
    // use a clean directory under target/ as the mini cluster's storage
    File baseDir = new File("./target/hdfs/").getAbsoluteFile();
    FileUtil.fullyDelete(baseDir);
    conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, baseDir.getAbsolutePath());
    MiniDFSCluster.Builder builder = new MiniDFSCluster.Builder(conf);
    cluster = builder.build();
    // the cluster starts on a free port, so build the URI from it
    hdfsURI = "hdfs://localhost:" + cluster.getNameNodePort() + "/";
}

public void tearDown() throws Exception {
    cluster.shutdown();
}

This makes a cluster available for the tests. In this case the code under test writes a simple log entry and returns the path to it; because the name contains a GUID, I just need to make sure the path starts as expected:

public void testCreateLogEntry() throws Exception {
	String logentry = new LogEntry().createLogEntry("TestStage", "TestCategory", "/testpath", cluster.getFileSystem());
	String date = new SimpleDateFormat("yyyyMMdd").format(new Date());
	assertTrue(logentry.startsWith(String.format("/testpath/TestStage_%s_", date)));
}
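For context, the LogEntry class itself isn’t shown in this post. A rough sketch of the sort of implementation the test is exercising might look like the following - the naming scheme comes from the assertion above, everything else (what gets written, how the category is used) is an assumption:

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.UUID;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LogEntry {

    public String createLogEntry(String stage, String category, String basePath, FileSystem fs) throws Exception {
        // name the entry <stage>_<yyyyMMdd>_<guid> so callers (and the test) can rely on the prefix
        String date = new SimpleDateFormat("yyyyMMdd").format(new Date());
        String path = String.format("%s/%s_%s_%s", basePath, stage, date, UUID.randomUUID());

        // create the entry in HDFS; the real implementation presumably writes the category and details too
        fs.create(new Path(path)).close();
        return path;
    }
}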

Writing a Flume Interceptor

Here we are in June, some five months since the last post, and I finally have some time and content to sit and write a post.

In April 2013 I started working with Hadoop; the plan was to suck in server application logs to determine who was using what data within the business, to make sure it was being correctly accounted for. At the time, Flume seemed like the obvious choice to ingest these files, until we realised the timing, format and frequency made Flume overkill. As it happened, it was discounted before I could get my teeth into it.

Two years later and there is a reason to use Flume - high volumes of regularly generated XML files which need ingesting into HDFS for processing - clearly a use case for Flume.

There are two key requirements for this piece: one, that the file name be preserved somehow; the other, that the content be converted to JSON in flight. For this post I’m going to focus only on the former.

When setting up the configuration for the Flume agent, the Spooling Directory Source can be configured with fileHeader = true, which adds the full path of the originating file into an event header where it can be used by an interceptor. This can be appended to the destination path in HDFS, but as it contains the complete originating path it would recreate a similar directory structure to the source - in our case that isn’t desirable.

To solve this, I’m writing an interceptor which will mutate the path to just the file name with no extension.

Creating the interceptor requires a number of steps.

First, import the required dependency:
<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-core</artifactId>
    <version>1.5.0</version>
</dependency>

Then we need to create an abstract class implementing Interceptor, which will be used as a base for future interceptors.

import java.util.Iterator;
import java.util.List;

import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

public abstract class AbstractFlumeInterceptor implements Interceptor {

    public void initialize() {    }

    // subclasses implement the per-event logic
    public abstract Event intercept(Event event);

    public List<Event> intercept(List<Event> events) {
        for (Iterator<Event> eventIterator = events.iterator(); eventIterator.hasNext(); ) {
            Event next = intercept(eventIterator.next());
            if (next == null) {
                // returning null from intercept(Event) signals that the event should be dropped
                eventIterator.remove();
            }
        }
        return events;
    }

    public void close() {    }
}

Now that this class wraps up the logic of handling a list of events, we can create the concrete class, FilenameInterceptor, whose interesting part is the intercept(Event) override:

@Override
public Event intercept(Event event) {
    Map<String, String> headers = event.getHeaders();
    String headerValue = headers.get(header); // header in this case is 'file' as per the config
    if(headerValue == null) {
        headerValue = "";
    }
    Path path = Paths.get(headerValue);
    if (path != null && path.getFileName() != null) {
        headerValue = FilenameUtils.removeExtension(path.getFileName().toString());
    }
    headers.put(header, headerValue);
    return event;
}

The Flume conf file references a nested Builder class in our interceptor, which Flume uses to construct it, so the following is added:

public static class Builder implements Interceptor.Builder {
    private String headerkey = "HostTime";

    public Interceptor build() {
        return new FilenameInterceptor(headerkey);
    }

    public void configure(Context context) {
        // fall back to the default if no 'key' property is set in the agent conf
        headerkey = context.getString("key", headerkey);
    }
}
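Before wiring it in, a quick throwaway test reassured me the header really does end up as just the bare file name. This is my own sketch rather than anything from the post’s setup - JUnit 4 and the sample path are assumptions:

import static org.junit.Assert.assertEquals;

import java.nio.charset.Charset;
import java.util.HashMap;
import java.util.Map;

import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;
import org.junit.Test;

public class FilenameInterceptorTest {

    @Test
    public void stripsPathAndExtensionFromHeader() {
        Map<String, String> headers = new HashMap<String, String>();
        headers.put("file", "/data/incoming/orders_20150601.xml"); // hypothetical source file
        Event event = EventBuilder.withBody("<xml/>", Charset.forName("UTF-8"), headers);

        Event intercepted = new FilenameInterceptor("file").intercept(event);

        assertEquals("orders_20150601", intercepted.getHeaders().get("file"));
    }
}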

Now we have all this we can mvn clean package and copy the jar to the lib folder - in my case we’re using Cloudera so it’s in the parcels folder /opt/cloudera/parcels/CDHxxx/flume-ng/lib, from where it will be picked up when flume-ng starts.

The new additions to the conf file are:

# ... source1 props ...
agent1.sources.source1.fileHeader = true
agent1.sources.source1.interceptors = interceptor1
agent1.sources.source1.interceptors.interceptor1.type = [package].[for].[Interceptor].FilenameInterceptor$Builder
# ... hdfs1 props ...
agent1.sinks.hdfs1.filePrefix = %{file}

Quick introduction to pyspark

All the work I have been doing with AWS has been using Python, specifically boto3, the rework of boto.

One of the intentions is to limit bandwidth when transferring data to S3. The idea is to send periodic snapshots and then daily deltas, which get merged to form a “latest” folder, so a diff mechanism is needed. I originally implemented this in Scala as a Spark process, but in an effort to settle on one language I’m looking to redo it in Python using pyspark.
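To give an idea of the shape of the merge I’m after, it looks roughly like this in pyspark - very much a sketch, with made-up paths and the assumption that each record’s first comma-separated field is its key:

# snapshot = the current 'latest' data, delta = today's changes (paths are hypothetical)
snapshot = sc.textFile("/data/latest").map(lambda line: (line.split(",")[0], ("old", line)))
delta = sc.textFile("/data/delta/today").map(lambda line: (line.split(",")[0], ("new", line)))

# where a key appears in both datasets keep the delta record, otherwise keep whichever exists
merged = snapshot.union(delta).reduceByKey(lambda a, b: a if a[0] == "new" else b)

merged.map(lambda kv: kv[1][1]).saveAsTextFile("/data/latest_new")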

I’m using my MacBook, and to keep things quick and easy I’m going to download a package with Hadoop and Spark then dump it in /usr/share:

wget http://archive.apache.org/dist/spark/spark-1.0.2/spark-1.0.2-bin-hadoop2.tgz
tar -xvf spark-1.0.2-bin-hadoop2.tgz
mv spark-1.0.2-bin-hadoop2 /usr/share/spark-hadoop

I’m going to create a folder to do my dev in under my home folder; to keep things clean I like to use virtualenv:

cd ~/dev
virtualenv pyspark
cd pyspark

To start pyspark with IPython we need to launch it with IPYTHON_OPTS set:

IPYTHON_OPTS="notebook" /usr/share/spark-hadoop/bin/pyspark

This opens IPython notebook in the default browser.

Finally, a quick and dirty demo with word count:

lines = sc.textFile("/data/bigtextfile.txt")
counts = lines.flatMap(lambda line: line.split(" ")) \
              .map(lambda word: (word, 1)) \
              .reduceByKey(lambda a, b: a + b)
# write to a separate output path - it must not be the input file and must not already exist
counts.saveAsTextFile("/data/bigtextfile_counts")