301 – Moved Permanently

This blog now lives at http://blog.rraghur.in

If you’ve subscribed to feeds, please update the feed urls as well.

And we’re back to Windows

And back to Win 7

Well not really – but I have your attention now… So in my last post, I talked about moving my home computer from Win 7 to Linux Mint KDE. That went OK for the most part, apart from some minor issues.

Fast-forward a day and I hit my first user issue :)… my wife’s workplace has some video content that is distributed as DRM-protected swf files that will play only through a player called HaiHaiSoft Player!

Options

  1. Boot into Windows – painful and slow, and it kills everyone else’s session.
  2. Wine – thought it’d be worth a try, so I installed Wine and its dependencies through Synaptic. As expected, it wouldn’t run the HaiHaiSoft player – it crashed at launch.
  3. Virtualization: so the final option was a VM through VirtualBox. I installed VirtualBox and its dependencies (dkms, guest additions etc.) and brought out my Win 7 install disk from cold storage.

Virtualbox and Windows VM installation

Went through the installation and got Windows up and running. Once the OS was installed, I added the guest additions, and it runs surprisingly well. I’d only used VirtualBox for a Linux guest on a Windows host before, so it was a nice change to see how it worked the other way around.

Anyway, once the VM was set up, I downloaded and installed the player and put a shortcut to VirtualBox on the desktop. Problem solved!

Single Page Apps

We released the Scheduler service (cloud-hosted cron that does webhooks) on the 18th of Jan. It was our first release (still in beta) and you can sign up for it via the Windows Azure store as an add-on. The upcoming release will have a full portal and the ability to register without going via the Windows Azure portal.

We’ve been building the user portal for the Scheduler service as a Single Page App (SPA) and I wanted to share some background and insights we’ve gained.

SPA overview

To review, an SPA is a web app contained in a single page – where ‘pages’ are nothing but divs being shown/hidden based on the state of the app and user navigation.

The benefit is that you never have a full page refresh – essentially, page loads are instantaneous and data is retrieved and shown via AJAX calls. From a UX standpoint, this delivers a ‘speedier’ experience, since you never see the ‘static’ portions of your page reload as you navigate around.

All that speediness is great but the downsides are equally important:

SPA – Challenges

  1. Navigation – SPAs by nature break the browser’s normal navigation mechanism. Normally, you click a link, the browser fires off a request and updates the URL in the address bar; the response is then fetched and painted. In an SPA, however, a link click is trapped in JS, the state is changed and a different div is shown (with a background AJAX request being launched).
    This breaks Back/Forward navigation, and since the URL doesn’t change, bookmarkability is broken to boot.

  2. SEO – SEO also breaks, because links are wired up to JS and most bots cannot follow such links.

Now, none of this is really new. Gmail was probably the first well-known SPA implementation and that’s been around since 2004. What’s changed is that now there are better tools and frameworks for writing SPAs. So how do you get around the problems?

  1. Back/Forward nav and bookmarkability: SPAs use hash fragment navigation – links contain hash fragments. Hash fragments are meant for within-page navigation, so while the browser will update the address bar and push an entry onto the history stack, it will not make a request to the server. Client-side routing can listen for changes to the location hash and manipulate the DOM to show the right ‘section’ of the page (a minimal sketch follows this list).
  2. SEO – Google (and later Bing) support crawling SPA websites provided the links are formatted specifically. See Google’s AJAX crawling specification (the ‘#!’ style URLs) for the details.
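
Here’s the sketch promised in point 1 – a minimal, plain-JavaScript illustration of hash-based client-side routing. This isn’t our portal code (we use Sammy.js for this, as described later); the section ids are made up for the example and are assumed to exist as divs on the page.

    // Minimal sketch of hash-based routing in plain JavaScript.
    // The section ids ('home', 'jobs', 'history') are hypothetical;
    // a client-side router like Sammy.js does this (and more) for you.
    var sections = ['home', 'jobs', 'history'];

    function route() {
        // '#/jobs' -> 'jobs'; default to 'home' when there's no hash.
        var name = window.location.hash.replace(/^#\/?/, '') || 'home';
        sections.forEach(function (id) {
            var el = document.getElementById(id);
            if (el) {
                // Hide every 'page' div, then show the one matching the route.
                el.style.display = (id === name) ? 'block' : 'none';
            }
        });
        // The data for the visible section would be fetched via AJAX here.
    }

    // Back/Forward and bookmarked URLs both end up firing these events,
    // so history navigation and bookmarkability keep working.
    window.addEventListener('hashchange', route);
    window.addEventListener('load', route);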

Why we went the SPA way

When we started out with the portal, we needed to take some decisions on how to go about it:

  1. The Scheduler REST service is a developer-focused offering and the primary interaction for our users is the API itself. While the portal will have Scheduler management features, this is really to give our users a ‘manual’ interface to the Scheduler. The other important use case for the portal is seeing the history of a task’s executions. Given that the API was primary, we wanted to build the UI on top of the APIs to dogfood our API early and often.
  2. It just made sense to have the UI consume the APIs so that we weren’t re-writing the same capabilities again just to support the UI.
  3. Getting the portal to work across devices was important. In that sense, going with an approach that reduces page loads makes sense.
  4. We wanted public pages to be SEO friendly – so the SPA experience kicks in only after you login.
  5. Bookmarkability is important and it should be easy to paste/share links within the app.

Tools and frameworks

We evaluated different frameworks for building the SPA. We wrote a thin slice of the portal – a few public pages, a Social login page and a couple of logged in pages for navigation and bookmarkability.
1. KO+ approach – I’m calling this KO+ since KO is just a library for MVVM binding and we needed a bunch of other libraries to manage the other aspects of the SPA:
– Knockout.js – for MVVM binding
– Sammy.js – Client side routing
– Require.js – script dependency management.
– jQuery – general DOM manipulation when we needed it.
2. Angular.js – Google’s Angular.js is a full-suite SPA framework that handles all aspects of an SPA.

We chose the KO+ approach as there was knowledge and experience of KO in the team. The learning curve is also gentler since each library can be tackled one at a time. While Angular offers a full-fledged SPA framework, it also comes with more complexity to be grappled with and understood – essentially, the ‘Angular’ way of building apps.

That said, once you get over the initial learning curve, Angular does offer a pleasant experience, and you don’t have to deal with the integration issues that come up when using different libraries. Still, we had prior experience with KO on the team, so it made sense to pick it given our timelines.
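
To make the division of labour concrete, here’s a rough sketch of how the KO+ pieces fit together – Knockout owns the view-model state, Sammy.js just maps location-hash routes onto it, and jQuery does the AJAX. The route names, view-model fields and the ‘/api/jobs’ endpoint are made up for the illustration; this isn’t our portal code.

    // Sketch only – routes, fields and the '/api/jobs' endpoint are hypothetical.
    // Knockout owns the view state; Sammy just maps URL hashes onto it.
    var viewModel = {
        currentPage: ko.observable('home'),  // which 'page' div is visible
        jobs: ko.observableArray([])         // data shown on the jobs page
    };

    var app = Sammy(function () {
        // '#/jobs' – switch the visible section and load its data via AJAX.
        this.get('#/jobs', function () {
            viewModel.currentPage('jobs');
            $.getJSON('/api/jobs', function (data) {
                viewModel.jobs(data);
            });
        });

        // '#/' – the default/home section.
        this.get('#/', function () {
            viewModel.currentPage('home');
        });
    });

    // Bind the view model to the page and start routing.
    ko.applyBindings(viewModel);
    app.run('#/');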

I’ll post an update once we have it out of the door and ready for public consumption.

Rewriting history with Git

What’s this about rewriting history?

While developing any significant piece of code, you end up making a lot of incremental advances. Now, it would be ideal
if you could save your state at each increment with a commit and then proceed. This gives you the freedom to try out approaches, go one way or the other, and at each point have a safe harbour to return to. However, it also leaves your history looking messy, and the folks you’re collaborating with have to follow your mental drivel as you slowly built up the feature.
Now imagine if you could make those incremental commits but, before you share your epic with the rest of the world, clean up your history by reordering commits, dropping useless ones, squashing a few together (removing those ‘oops, missed a change’ commits), tidying up your commit messages and so on – and then let it loose on the world!
Git’s interactive rebase lets you do exactly this!!!

git rebase --interactive to the rescue

Git’s magic incantation for rewriting history is git rebase -i. It takes as an argument a commit (or branch) on top of which the rewritten commits will be replayed.

Let’s see it in operation:

squashing and reordering commits

Let’s say you made two commits, A and B. Then you realize that you missed something which should really have been part of A, so you fix that with an ‘oops’ commit and call it C. So your history looks like A->B->C whereas you’d like it to look like AC->B.

Let’s say your history looks like this:

    bbfd1f6 C                           # ------> HEAD
    94d8c9c B                           # ------> HEAD~1
    5ba6c52 A                           # ------> HEAD~2
    26de234 Some other commit           # ------> HEAD~3
    ....
    ....

You’d like to fix up all commits after ‘Some other commit’ – that’s HEAD~3. Fire up git rebase -i HEAD~3.

The HEAD~3 needs some explaining – you made the three commits A, B and C, and you’d like to rewrite history on top of the commit just before them: counting back from HEAD, that’s HEAD~3 (‘Some other commit’). The commit you specify as the base of the rebase is not itself rewritten – only the commits after it are. Alternatively, you could just pick the SHA1 of that commit from the log and use it in your rebase command.

Git will open your editor with something like this:

    pick 5ba6c52 A
    pick 94d8c9c B
    pick bbfd1f6 C
    # Rebase 7a0ff68..bbfd1f6 onto 7a0ff68
    #
    # Commands:
    #  p, pick = use commit
    #  r, reword = use commit, but edit the commit message
    #  e, edit = use commit, but stop for amending
    #  s, squash = use commit, but meld into previous commit
    #  f, fixup = like "squash", but discard this commit's log message
    #  x, exec = run command (the rest of the line) using shell
    #
    # These lines can be re-ordered; they are executed from top to bottom.
    #
    # If you remove a line here THAT COMMIT WILL BE LOST.
    #
    # However, if you remove everything, the rebase will be aborted.
    #
    # Note that empty commits are commented out

Basically, git is showing you the list of commands it will use to operate on all the commits since your starting point. It also gives instructions on how to pick (p), squash (s), fixup (f) or reword (r) each of your commits. To change the order of history, you simply reorder the lines. If you delete a line altogether, that commit is skipped entirely (however, if you delete all the lines, the rebase operation is aborted).

So here we say that we want to pick A, squash commit C into it and then pick commit B:

    pick 5ba6c52 A
    squash bbfd1f6 C
    pick 94d8c9c B

Save and close the editor and Git will perform the rebase. It will then pop up another editor window allowing you to give a single commit message for AC (helpfully pre-filled with the two original messages from A and C). Once you provide that, the rebase proceeds and your history now looks like AC->B, just as you wanted.

Miscellaneous tips

Using GitExtensions

  1. If you use Git Extensions, you can do the rebase, though it’s not very intuitive. First, select the commit on which you’d like to base the interactive rebase. Right-click and choose ‘Rebase on this’.
    [Screenshot: selecting the commit]
  2. This opens the rebase window. In this window, click ‘Show Options’.
    [Screenshot: the rebase window]
  3. In the options, select ‘Interactive rebase’ and hit the ‘Rebase’ button on the right.
  4. You’ll get an editor window populated much like the one shown earlier.

If the editor window comes up blank, the likely cause is that you have both Cygwin and msysgit installed and GitExtensions is using the Cygwin version of git. Making sure that msysgit is used in GitExtensions avoids the problem.

Using history rewriting

Rewrite history only for what you have not pushed. Modifying history for something that’s shared with others is going to confuse the hell out of them and cause global meltdown. You’ve been warned.

Handling conflicts

You could end up with a conflict – in which case, resolve the conflicting files, stage them with git add, and then continue the rebase with git rebase --continue.

Aborting

Sometimes, you just want a parachute to safety in the middle of a rebase. Here, the spell to use is git rebase --abort.

Final words

Being able to rewrite history is admittedly a powerful feature. It might even feel a little esoteric at first glance. However, embracing it gives you the best of both worlds – quick, small commits and a clean history.

Another and probably more important effect is that instead of ‘waiting to get things in shape’ before committing, commits happen all the time. Trying out that ingenious approach that’s still taking shape in your head isn’t a problem now since you always have a point in time to go back to in case things don’t work out.

Being able to work ‘messily’, commit anytime and be secure in the knowledge that you can fix things up later provides an incredible amount of freedom of expression and security. Avoiding the wasted mental cycles spent planning things carefully before you attack your codebase is worth its weight in gold!!!

Nexus 7 – First impressions and tips and tricks

So I got my Dad the 8GB Nexus 7. This is an awesome tablet – exactly what a good tablet should be. The UI is buttery smooth and things just fly. The hardware is no compromise, the price point is excellent and overall it’s a superb experience.

Of course, there are some things to deal with, like the 8 GB of storage, lack of mobile data connectivity, lack of expandable storage and no rear camera. These aren’t issues at all as far as I’m concerned.

If I’m traveling with the tablet, I always have the phone’s 3G data to tether to over WiFi. The 8 GB of storage is only an issue if you’re playing the heavyweight games or want to carry all your videos or a ton of movies with you. I’m more than happy to load up a few movies and some music before travel. Provided you have a good way to get files in and out of the tablet and are OK with not always carrying your complete library with you, you don’t have to worry about the storage. A camera would be nice, though – but hey, you can’t have everything your way :).

File transfer to/from PC

Which brings us to the topic of file transfers to and from your PC. WiFi is really the best way to go – and I couldn’t find a way to make WiFi Direct work with Windows 7. So for now, Connectify seems to be the best option. It runs in the background on your PC and makes your PC’s wireless card publish its own wireless network. You connect to this network from the tablet, and if you share folders on your PC, you’re set to move data around.

Now, on the Android side, ES File Explorer is free and gets the job done from a file management/copying/moving perspective. I also tried File Expert, but it’s more cumbersome. ES excels at multi-file selection and copying.

Ebooks

The one area where the N7 really excels is reading books. The form factor and weight are just right for extended reading sessions. However, Google Play Books doesn’t work in India, so you need an alternative app. I tried out Moon+ Reader, FBReader and Reader+ – and of the lot, FBReader was the best. Moon+ has a nicer UI but choked on some of my ebooks. Reader+ didn’t get the tags right and felt a little clunky. FBReader provided the smoothest experience of the lot. I’m already halfway through my first book and did not have any issues. I have a decent collection of ebooks on my PC, but once I copied them to the N7, all the metadata was messed up. Editing metadata and grabbing covers is a pain on the tablet and best done on the PC.

This is where Calibre comes in – a full-blown ebook library management app. It does a great job of keeping your ebooks organized and letting you edit their metadata. It can also fetch metadata and covers from Amazon and Google and update your collection. Once you’re done, transferring to the N7 is a little tricky. The first time, I just copied the library over to the N7 – but the N7 showed each book three times. Some troubleshooting later, I found that the best way is to create an export folder and use the ‘Connect to folder’ feature to mount it as a destination. Then you can select all the books you want and use ‘Send to destination in one format’ to publish the EPUB format to the folder. This generates one EPUB file per book with the metadata and cover embedded, and you can then copy the folder over to the N7’s Books folder using ES File Explorer.

Playing movies on your N7 over WiFi

My movie collection is on XBMC – and XBMC is DLNA/uPnP compatible. Dive into XBMC’s system settings and turn on the uPnP/DLNA services. Then, on the N7, you can use uPnPlay. For playing video, it relies on having a video player app installed. I like MX Player. Don’t forget to also install the MX Player codec for ARMv7 and to turn on HW decoding in the settings.

Playing movies on your TV from the N7

You won’t be doing much of this since there isn’t a rear camera – but if you do decide to take a video or pics with the N7’s front-facing camera, you can use uPnPlay to project them onto your TV (provided you have a DLNA/uPnP compatible TV or a compliant media center hooked up to your TV).
For XBMC, turn on uPnp in settings and you’re done. XBMC should be able to discover your tablet and you’ll be able to browse and play videos.
If you’d rather use the tablet to control what’s played on XBMC, turn on the setting to allow control via uPnP in XBMC’s settings. Now, in uPnPlay, you can select XBMC as the ‘play to’ device, and playing any video or song plays it on the TV.

That’s all for now… I’m loving this tablet and the stuff it can do… looks like I’ll be buying a few more soon 🙂

Websocket server using Jetty/Cometd

So I just wrote up a Websocket server using CometD/Bayeux. It’s a ridiculously simple app – but it went quite a long way in helping me understand the nitty-gritty of putting up a Websocket server with CometD/Bayeux. I thought I’d put it up for reference – it should help in getting a leg up on getting started with CometD.

The sample’s up on github at https://github.com/raghur/rest-websocket-sample

Here’s how to go about running it:

  1. clone the repo above
  2. run mvn jetty:run
  3. Now browse to http://localhost:8080 to see the front page
  4. There are two parts to the app

    1. A RESTful API at http://localhost:8080/user/{name} – hypothetical user info – GET retrieves a user, PUT creates a user and DELETE, obviously, deletes the user (a quick way to exercise this from the browser console is sketched after this list).
    2. The websocket server at localhost:8080/cometd has a broadcast channel at /useractivity which receives events whenever a user is added/deleted. The main page at http://localhost:8080 has a websocket client that updates the page with the user name whenever a user is added or removed.
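
If you want to poke at the API while keeping the front page open in another tab (to watch the broadcast updates arrive), here’s a rough sketch you can paste into a modern browser’s console. The user name is made up, and I’m assuming the endpoints need no request body – check MyResource for the exact contract.

    // Rough sketch: create, fetch and then delete a user via the REST API.
    // 'alice' is a made-up user name; check MyResource for the exact contract.
    var base = 'http://localhost:8080/user/alice';

    fetch(base, { method: 'PUT' })                      // create the user
        .then(function () { return fetch(base); })      // GET it back
        .then(function (res) { return res.text(); })
        .then(function (body) {
            console.log('created:', body);
            return fetch(base, { method: 'DELETE' });   // and delete it again
        })
        .then(function () {
            console.log('deleted - the front page should have shown both events');
        });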

And here’s the nuts and bolts:

  1. BayeuxInitializer – initializes the Bayeux Service and the EventBroadcaster. Puts the EventBroadcaster in the servlet context from where the RESTful service can pick it up to broadcast.
  2. EventBroadcaster – creates a broadcast channel in the ctor. Provides APIs to publish messages on this channel.
  3. HelloService – basic echo service taken from Maven archetype
  4. MyResource – the RESTful resource which responds to GET/PUT/DELETE – nothing major here. If a user is added or deleted, then it pushes a message on the broadcast channel by getting the EventBroadcaster instance from the servlet context.

It’s about as simple as you can get (beyond a Hello world or a chat example). Specifically, I wanted a sample where back end changes can be pushed to clients.

Android WordHero – product lessons

So, yesterday I figured that I’m now an addict… fully and totally, to something called WordHero on my phone… it’s one of those games where you have a 4×4 grid of letters and you need to find as many words as you can within 2 minutes. Nothing special… and there are tons of look-alikes and also-rans on the Google Play store. I even installed some of them and then removed them…

So what’s different? Turns out there are quite a few things – and apart from one, they’re all at the detail level. The most significant one is that it’s online only and everyone’s solving the same grid at the same time – so you get to see your ranking at the end. No searching for opponents, no extra clicks – just game after game.
Apart from that, the main game idea is the same (form words on a 4×4 grid), so the details are the only place where one can innovate… reminds me of Jeff Atwood’s post that a product is nothing but a collection of details.

So what are these details?

  1. It’s online only. You can play only if you have an Internet connection… otherwise, scoot!
  2. The information level and detail is just right: tracing through the letters highlights the whole word; if you find a word, you see green; wrong word, red; dupe, yellow. At 10 seconds a warning beep kicks in and runs until 5 seconds – not all the way down to 0… so it warns, but doesn’t distract. Simple. Effective. Efficient. Brilliant!

Now sample the competition:

  1. Tracing – a line through the letters, shaky squiggly letters when you pass over them and other sorts of UI idiocy, a grid that’s too small, a grid that isn’t square, word-check indicators in some other place. Sure, some of this is debatable… especially the bells and whistles. They look great the first time, the second time and a few more times after that. By the time you hit the tenth time (if you do), you start hating it.
  2. Offline mode – this is counter-intuitive… in fact, after playing WordHero, I ran off to find a game with an offline mode. Once I found one though, surprisingly, I did not like it… Turns out there’s little thrill in forming words on a grid; the thrill is in seeing where you stand and whether you’re improving.
  3. Timed mode – pretenders to the throne have untimed modes, customizable timers and so on. Didn’t work for me – 2 minutes is the absolute sweet spot where you can grab a game anytime… and have that deadline adrenaline rush work for you… I thought I’d do great in the untimed games – but while I scored more, it wasn’t significantly more. More importantly, it was missing the fun. Turns out we want to see where we rank far more than we want to form words 😀

So after promising myself one last game at 11 last night and ending up playing until 12:30 AM, I tore myself away from this satanic game. I kept the phone far away to make sure I wouldn’t pick it up again in the middle of the night and started thinking about what makes WordHero tick. There’s nothing earth-shaking about the reasons – but the effect of getting them right is surprising:

  1. Figure out what will tickle the right pleasure centers – and optimize like hell for that: this is hard… in WordHero, it’s the global rankings per game and the stats… optimizing for this means you take away offline mode entirely. That isn’t a small decision – especially when an offline mode is easy to implement and feels like giving the user ‘more’. Tough to argue against, too – but as I’ve seen myself, something like that would kill the multiplier effect of seeing a large number of people play. Chances are your users don’t know that either – so there’s no point asking them. Apple seems to have figured this out very well.
  2. Keep the UI simple and efficient – and show me what I need when I need it: it should look good for the casual user. For power users, it should be efficient and not irritating… so keep all those nice bells and whistles under control.
  3. Keep the options simple – I like options… I like options more than your average Joe does… most of the time, I’m the one who finds the options you didn’t even know were there… but when you’re designing a game that’s 2:30 minutes from start to finish, I don’t want to think about options. More importantly, don’t ask me questions about them… just start the damn game…

So does that mean WordHero’s perfect? Far from it – but it’s successful by anyone’s measure. If you’re looking for perfection, you won’t ever launch :). Some of the stuff I’m sure they’ll get to at some point:

  1. Better explanation of the stats
  2. Charts/trends over the stats instead of only the current value
  3. Better explanation of some of the UI color coding on the results screen.