Native vs. Web Apps (again)

When all you have is a hammer, everything looks like a nail.

Another example of developers coming to their senses – accounting software developers Xero are ditching HTML5 in favour of native iOS and Android apps.

In their blog post, the company explain that developing in HTML and JavaScript was not the wisest decision:

…building a complicated mobile application in HTML5 has been hard. Even with frameworks as amazing as Sencha Touch, we’ve found the ability to iterate as fast as we would like has become harder as our application has become more complex.

The HTML/JavaScript stack initially seems attractive as a time-saving route to development, but sadly this isn’t always the case. In the attempt to save the developer’s time, it’s the user who ends up with a third-rate experience. Xero say:

Xero prides itself on not compromising on customer experience, and when it comes down to it, the question isn’t “How can we use our existing skills to build a mobile application?” but “What is going to enable us to deliver the best customer experience on the mobile devices that our customers use?”

There has been a cost:

And the lesson we’ve learnt over the last 12 months has been that the cost in time, effort and testing to bring an HTML5 application to a native level of performance seems to be far greater than if the application was built with native technologies from the get-go.

Phil Libin, CEO of Evernote, wrote something similar two years ago in his guest post, Four Lessons from Evernote’s First Week on the Mac App Store:

…people gravitate towards the products with the best overall user experience. It’s very hard for something developed in a cross-platform, lowest-common-denominator technology to provide as nice an experience as a similar native app.

Sure, I agree, it would be nice to write once, run anywhere, but, as with Java desktop apps, you never get the best experience. Libin is realistic:

As the CEO of a software company, I wish this weren’t true. I’d love to build one version of our App that could work everywhere. Instead, we develop separate native versions for Windows, Mac, Desktop Web, iOS, Android, BlackBerry, HP WebOS and (coming soon) Windows Phone 7. We do it because the results are better and, frankly, that’s all-important. We could probably save 70% of our development budget by switching to a single, cross-platform client, but we would probably lose 80% of our users. And we’d be shut out of most app stores and go back to worrying about distribution.

When all you have is a team of HTML and JavaScript developers, everything looks…third-rate.

GitHub is getting easier

Today, GitHub announced that you can move and rename files within your repositories directly on the website, rather than by making the changes locally, committing and then pushing them. I confess that until today I didn’t even know you could create and edit files on the GitHub website in the first place – I’ve just been using it as a freetard’s on-line repository: commit, push and forgeddaboutit.

I think this changes the game both for the user and for GitHub itself. Normally, one would create and edit files locally, commit them to the local Git repository, and then push the changes up to GitHub (or any on-line Git repository such as those hosted by SourceForge or BitBucket). This workflow is so natural that I rarely ever visit the GitHub website, except when I need to create a new, blank repository. Now a novice user can very easily create and initialise a new Git repository, create and edit files, rename and move them, create branches, fork and merge – all without so much as a nod to a command line or a Git client. So the dirty mechanics of Git recede into the background and what we have is an easy-to-use web interface/client for creating and managing versions of files, with a social sharing aspect to boot.
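
For anyone who hasn’t lived in that workflow, the command-line version boils down to something like this (just a sketch – the repository and file names are made up):

```
# clone an existing repository from GitHub (or "git init" a new one locally)
git clone https://github.com/someuser/somerepo.git
cd somerepo

# create and edit files with whatever editor you like
echo "Hello" > hello.txt

# commit to the local repository...
git add hello.txt
git commit -m "Add hello.txt"

# ...and only then push the changes up to GitHub
git push origin master
```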

GitHub is therefore de-emphasising the difficult and opaque “Git” aspect and increasing its “Hub-ness”. I think this potentially increases the user base and positions it for services and features that are not necessarily tied to just geeky and boring Source Control Management. It’s not that Git is getting sexy, but that file version control and repository forking are getting easier. It also means that users might spend more time at the GitHub website rather than in their Git client or IDE. Ads next, maybe?

I propose a new word:

giterate
adjective
able to use Git literately.

As in, “Hey, Steve’s getting quite giterate these days!”

Why is Git scary?

There have been a few occasions where I’ve tried to explain the distributed nature of Git to an interested listener, usually a Subversion (or perhaps CVS) user, who just downright couldn’t get it and got annoyed at not getting it. Clearly I didn’t explain it well, but if you don’t know how Git works and you’ve only ever used a centralised repository system, it sure can sound kind of mysterious and scary. How the hell can you have more than one repository? Which one is the real, or canonical, one? What if someone codes something phenomenal, but it’s in their own repo; how do we get that into the build? What if there are a hundred different versions? How can you integrate the work of different developers? What if there’s a conflict? What if…

I think the missing piece of information that can help in understanding Git is that there actually is a central, or canonical, repository: it’s the one you do your build from. Let’s take the Linux kernel as an example. There are probably thousands of clones of it pulled from the main Git repository – by hobbyist developers, or by people actually working on the kernel – but when it comes time to do the build, the agreed main repository is the one that’s used. One of the most useful things you can do to get a better understanding of Git is to watch Linus Torvalds explain it at a Google Tech Talk in 2007 (Git has come a long way since then):

A great takeaway from that video is the slide (at about 12:30) that shows the difference between centralised and distributed systems. I’ve made my own versions.

Here’s a centralised system such as Subversion or CVS:

[Diagram: centralised system]

And a distributed system like Git:

[Diagram: distributed system]

In the first diagram, each user has to commit their work to the one central repository – let’s call it The Central Scrutinizer. To pay homage to The Central Scrutinizer you have to be online; you’re out of luck if you want to commit some code when you’re on a plane. Each user works in isolation, checking their changes in and checking things out, hoping there’s no conflict that requires a merge.

In the second diagram you can see that there is still one main repository in the centre – let’s call it Le Big Mac – but there are also many satellite repositories owned by users, who seem to have formed sub-groups. These can cluster together, creating and refining their own secret sauce that can be pushed to Le Big Mac when they’re ready.
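
In Git terms that clustering is nothing exotic – a satellite repository is just a clone with a few extra remotes. A sketch, with made-up names and URLs:

```
# clone your own satellite repository from Le Big Mac
git clone https://git.example.com/kernel.git
cd kernel

# commit locally, on a plane if you like; no server required
git commit -am "Add secret sauce"

# pull in work from another satellite in your sub-group
git remote add alice https://git.example.com/alice/kernel.git
git fetch alice
git merge alice/master

# and when the sauce is ready, push it up to the canonical repository
git push origin master
```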

So, if you have a problem visualising how Git works, just remember The Central Scrutinizer and Le Big Mac.

Bundling a Java Runtime Environment (JRE) with an Eclipse RCP application

I’ve figured out how to bundle the Oracle Java Runtime Environment (JRE) 7 with the Mac version of Archi, an Eclipse-based Rich Client Platform (RCP) application that I developed. There’s no real need to do this at the moment: the first time an application that depends on desktop Java, such as Archi, is run on a Mac without Java installed, the user is told that the application needs it and Java is automagically downloaded and installed. This of course ensures that you can easily run your favourite Java-based applications such as Archi, XMind, SmartGit, Eclipse, IDEA and a lot of other useful tools. (Note that Apple only supports JRE 6.)

For Archi I ensure that the user has reasonable choices – on Windows they can either use the forget-about-it installer, which includes its own local copy of the JRE (used by Archi and for no other purpose, not even the browser), or download the manual zip file, which means they need to install their own copy of Java. On Linux, the user will probably want to compile the source anyway and knows what type of Java framework they want on their system (probably OpenJDK). But on the Mac, the user has to let the system install Apple’s version of Java (version 6) or, if they prefer the latest version 7, manually download and install the JRE from Oracle’s website. This is a bit of a pain. I tried this myself and could only get the JDK to work, not the JRE. Whilst Lion, Mountain Lion and Mavericks will install JRE 6, this might not always be the case in future versions of OS X.

The advantages of bundling a local copy of the JRE with Archi are:

  • The user doesn’t have to worry about installing Java (or even care that the application requires it)
  • The JRE is local, is used only by the application, and is therefore “sandboxed”
  • It isn’t installed as an extension in the browser (this is the real vector for trojans and virii)
  • When the user deletes the application off their system, they also delete the local JRE – an instant complete uninstall

Disadvantages:

  • The download size and application footprint are bigger (adds about another 140MB when unpacked)
  • Each Java based application will have its own copy of the JRE when only one system-wide copy is necessary, so you could end up with some disk bloat
  • …can’t think of any more 🙂

So, how do we bundle a copy of the Mac JRE 7 with Archi, or any Eclipse-based RCP application for that matter?

I already do this for the Windows version of Archi by simply copying the “jre” folder of the official JRE (with its “lib” and “bin” sub-directories) and putting it at the root level of the Archi installation. This procedure works for any other Eclipse-based application, including Eclipse itself.

On OS X this has only been possible since later versions of Oracle’s JRE 7, and later versions of Eclipse itself. The same principle applies on a Mac as for Windows – include the JRE in a “jre” folder at the root level of the application. Of course, as Archi is delivered as a self-contained application bundle on the Mac (Archi.app), the “jre” folder sits inside the Archi.app bundle at the same level as the other folders:

So, how do we make a re-distributable copy of the JRE to add to the application bundle? The only way I could figure out was to first install the JDK on a Mac and then copy some of its sub-folders and files (steps 2 and 3 are sketched as shell commands after the list):

  1. Install Oracle’s JDK 7 on a Mac (not the JRE)
  2. Copy the “/Library/Java/JavaVirtualMachines/jdk1.7.0_xx.jdk” folder and rename the copy to “jre” (xx = the two-digit version number of the JDK)
  3. Delete everything in the copy’s “Contents/Home” sub-folder except for the “jre” sub-folder
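
Here are steps 2 and 3 as shell commands, for the avoidance of doubt (the jdk1.7.0_xx part of the path will be whatever version is actually installed):

```
# Step 2: copy the installed JDK and call the copy "jre"
cp -R /Library/Java/JavaVirtualMachines/jdk1.7.0_xx.jdk jre

# Step 3: inside the copy, keep only the Contents/Home/jre sub-folder
cd jre/Contents/Home
for f in *; do
  if [ "$f" != "jre" ]; then rm -rf "$f"; fi
done
```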

You end up with a slimmed-down JRE with this folder structure:

This “jre” folder then needs to be added to the Archi.app bundle.

Note – as I use a Windows build machine and an Ant script to create the installation archives for Archi, I found that some files in the JRE folder lost their executable bit and so didn’t work. To get around this I simply zipped up the copied JRE folder on a Mac and used that zip file as the source of the Mac JRE, so that the Ant script just copies the zip file’s contents to the overall target installation archive, preserving file attributes (including an “alias” type file).
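
If you’re doing the same, the zip step on the Mac is just this – zip records the Unix permission bits, and the -y flag stores symbolic links as links rather than following them:

```
# run from the folder that contains the slimmed-down "jre" copy
zip -ry jre.zip jre
```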

I haven’t rolled this out yet, but might do so for a future version of Archi. It can only be the 64-bit version, though, as Oracle’s JRE 7 for OS X only supports 64-bit.

Update – some people have emailed to ask how I get the Archi Eclipse application into an Archi.app application bundle in the first place. By default, an Eclipse product export doesn’t do this, so I’ve written an Ant script that moves the “plugins” and “configuration” folders and the “Archi” executable launcher file down one level into the Archi.app bundle, and modifies the Info.plist file to adjust the path to the launcher file. The folder structure inside the app bundle looks like this:

[Screenshot: Archi.app folder structure]

The Info.plist file is modified to set the launcher path:
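
By way of illustration only (my actual script is Ant, and the key name below is an assumption – check what your own exported Info.plist actually uses for the launcher), the edit is equivalent to something like this on a Mac:

```
# set the launcher entry in the bundle's Info.plist
# (CFBundleExecutable is assumed here as the key to change; verify against your own plist)
/usr/libexec/PlistBuddy -c "Set :CFBundleExecutable Archi" Archi.app/Contents/Info.plist
```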

Gamification and Vacuous Neologisms

I read a post this morning by Nigel Green – Four G’s: Gartner, Gamification Getting Things Done & Game Theory.

A nice post, fair enough. But the part that made me choke on my cornflakes* was this quote from Gartner’s Steve Prentice:

We all do Gamification already. Gamification is when we create a To Do List and enjoy the satisfaction of ticking items off and finally completing the list. It gives us focus and goals to achieve.

No, no, no, no, no! When I tick a task off of my To Do list it’s because I damn well made myself do it in spite of not wanting to. It’s called “discipline”. Do I need another word for this?

What’s the value in using another word for something I already do? When I’m on my daily jog I set myself small goals to make it more interesting, such as “run to the next lamp-post”. “Gamification!” goes the cry. Setting goals and achieving them is now “Gamification”. Groan. So what added value does this re-branding provide? To me, none. To a consultant, an academic, or an author, possibly a whole lot – opportunities for workshops, for consultancy, perhaps a paper, or a trend-setting “How To” book.

A quick Googling of the neologism led me to this article from How Stuff Works:

McGonigal believes that if people worldwide could play more, not less, in the right game scenario, their experience could help solve some of the world’s biggest problems like hunger, poverty and global conflict.

My heart sinks.

And this:

In his 2010 book “Game-Based Marketing,” co-authored with writer Joselin Linder, Zichermann defines a related term he coined: funware. Funware describes the everyday activities we’re already engaging in that we consider a game. Zichermann explains that business should look for ways to apply funware in their marketing. Funware, he says, is the core component in applying gamification to business.

My heart sinks even further. This is the kind of nonsense Douglas Adams would have included in the “B” Ark.

But, sadly, I need to get back to work, there’s a bug I need to fix. Damn, if only I had some Funware to fix it.

(* Disclaimer – I don’t actually eat cornflakes for breakfast, preferring instead that prince of foods, the muffin)

MOOCs

Most of the recent anti-MOOC commentary by the cleverati sounds more like sour grapes to me. One bogus argument is that the courses achieve a low completion rate, but ten per cent of several thousand students is doing OK by anybody’s book.

Here are some comments from Tucker Balch, who has actually taught a MOOC:

The cost for a MOOC is zero. All a student need do is provide an email address, and click a button labeled “sign me up.”

Failing a course at a university is costly in many ways for a student. Besides the time and funds lost, there’s the cost of that “F” on the transcript. There are no such costs associated with MOOCs.

But MOOC completion rates aren’t really low in the context of Internet engagement. A click through rate of 5% for a google ad is considered a strong success. Convincing 5% to engage intellectually for 8 weeks is, I think, a big deal.

A refreshing change from the tiresome armchair punditry of those who typically haven’t taken a MOOC or taught one. It reminds me of the brouhaha in the 1980s when the UK Musicians’ Union tried to limit the use of samplers because they feared that “real” musicians would be done out of a job. That’s the real issue here, isn’t it? The bogus edutech cleverati weren’t consulted, MOOCs have been launched without their (unwanted) say-so, and they’re basically out of a job.