This blog now lives at http://blog.rraghur.in
If you’ve subscribed to feeds, please update the feed urls as well.
Well not really – but I have your attention now… So in my last post, I talked about moving my home computer from Win 7 to Linux Mint KDE. That went ok for the most part other than some minor issues.
Fast-forward a day and I hit my first user issue :)… wife’s workplace has some video content that is distributed as DRM protected swf files that will play only through a player called HaiHaiSoft player!
Went through the installation and got Windows up and running. Once the OS was installed, I also installed guest additions, and it runs surprisingly well. I’d only used VirtualBox for a Linux guest on a Windows host before, so it was a nice change to see how it worked the other way around.
Anyway, once the VM was installed, downloaded and installed the player and put a shortcut to virtualbox on the desktop. Problem solved!
We released the Scheduler service (cloud hosted cron that does webhooks) on the 18th of Jan. It was our first release (still in beta) and you can sign up for it via the Windows Azure store as an addon. Upcoming release will have a full portal and the ability to register without going via the Windows Azure portal.
We’ve been building the user portal to the Scheduler service as a Single Page app (SPA) and I wanted to share some background and insights we’ve gained.
To review, an SPA is a web app contained in a single page – where ‘pages’ are nothing but divs being shown/hidden based on the state of the app and user navigation.
The benefits are that you never have a full page refresh at all – essentially, page loads are instantaneous and data is retrieved and shown via AJAX calls. From a UX standpoint, this delivers a ‘speedier’ experience, since you never see the ‘static’ portions of your page reload when you navigate around.
All that speediness is great but the downsides are equally important:
Navigation – SPAs by nature break the normal navigation mechanism of the browser. Normally, you click a link, the browser launches a request and updates the URL in the address bar. The response is then fetched and painted. In an SPA however, a link click is trapped in JS, the state is changed and a different div is shown (with a background AJAX request being launched).
This breaks Back/Forward navigation and since the URL doesn’t change, bookmarkability is also broken to boot.
SEO – SEO also breaks because links are wired up in JS and most bots cannot follow such links.
Now, none of this is really new. Gmail was probably the first well known SPA implementation and that’s been around since 2004. What’s changed is that now there are better tools and frameworks for writing SPAs. So how do you get around the problems?
When we started out with the Portal, we needed to make some decisions around how to go about it.
We evaluated different frameworks for building the SPA. We wrote a thin slice of the portal – a few public pages, a Social login page and a couple of logged in pages for navigation and bookmarkability.
1. KO+ approach – I’m calling this KO+ since KO is just a library for MVVM binding and we needed a bunch of other libraries for managing the other aspects of the SPA.
– Knockout.js – for MVVM binding
– Sammy.js – Client side routing
– Require.js – script dependency management.
– Jquery – general DOM manipulation when we needed it.
2. Angular.js – Google’s Angular.js is a full-suite SPA framework that handles all aspects of an SPA
We chose the KO+ approach as there was existing knowledge and experience of KO on the team – which also mattered given our timelines. The learning curve is gentler too, since each library can be tackled one at a time. While Angular offers a full-fledged SPA framework, it also comes with more complexity to be grappled with and understood – essentially, the ‘Angular’ way of building apps. That said, once you get over the initial learning curve, Angular does offer a pleasant experience, and you don’t have to deal with the integration issues that come up when stitching different libraries together.
I’ll post an update once we have it out of the door and ready for public consumption.
While developing any significant piece of code, you end up making a lot of incremental advances. Ideally, you’d save your state at each increment with a commit and then proceed. This gives you the freedom to try out approaches, go one way or the other, and at each point have a safe harbor to return to. However, you end up with a messy history, and the folks you’re collaborating with have to follow your mental drivel as you slowly built up the feature.
Now imagine if you could do incremental commits but at the same time, before you share your epic with the rest of the world, were able to clean up your history of commits by reordering commits, dropping useless commits, squashing a few commits together (remove those ‘oops missed a change’ commits) and clean up your commit messages and so on and then let it loose on the world!
Git’s interactive rebase lets you do exactly this!!!
Git’s magic incantation to rewrite history is git rebase -i. It takes as an argument a commit or a branch on top of which the rewritten history will be applied.
Let’s see it in operation:
Let’s say you made two commits A and B. Then you realize that you’ve missed out something which should really have been a part of A, so you fix that with an ‘oops’ commit and call it C. So your history looks like A->B->C whereas you’d like it to look like AC->B.
Let’s say your history looks like this:
bbfd1f6 C                 # ------> HEAD
94d8c9c B                 # ------> HEAD~1
5ba6c52 A                 # ------> HEAD~2
26de234 Some other commit # ------> HEAD~3
....
You’d like to fix up all commits after ‘some other commit’ – that’s HEAD~3. Fire up
git rebase -i HEAD~3
The HEAD~3 needs some explaining – you made 3 commits A, B and C, and you’d like to rewrite history on top of the commit three steps before HEAD (HEAD~3), i.e. ‘Some other commit’. The commit you specify as the base of the rebase is not itself included. Alternatively, you could just pick the SHA1 for that commit from the log and use it in your rebase command.
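If the HEAD~n arithmetic feels error-prone, you can sanity-check it before rebasing. Here’s a throwaway sketch – a scratch repo with made-up commit messages mirroring the example above, not something to run in a real checkout:

```shell
# Build the example history in a scratch repo: base, then A, B, C.
cd "$(mktemp -d)" && git init -q
git config user.name demo && git config user.email demo@example.com
for m in "Some other commit" A B C; do
    git commit -q --allow-empty -m "$m"
done
git log -1 --format=%s HEAD~3     # prints: Some other commit
git log --format=%s HEAD~3..HEAD  # the commits 'rebase -i HEAD~3' touches: C, B, A
```

The second command is the useful one: the HEAD~3..HEAD range is exactly what the rebase will rewrite.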
Git will open your editor with something like this:
pick 5ba6c52 A
pick 94d8c9c B
pick bbfd1f6 C

# Rebase 7a0ff68..bbfd1f6 onto 7a0ff68
#
# Commands:
#  p, pick = use commit
#  r, reword = use commit, but edit the commit message
#  e, edit = use commit, but stop for amending
#  s, squash = use commit, but meld into previous commit
#  f, fixup = like "squash", but discard this commit's log message
#  x, exec = run command (the rest of the line) using shell
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out
Basically, git is showing you the list of commands it will use to operate on all commits since your starting point. It also gives instructions on how to pick (p), squash (s), fixup (f) or reword (r) each of your commits. To change the order of history, you can simply reorder the lines. If you delete a line altogether, that commit is skipped entirely (and if you delete all the lines, the rebase operation is aborted).
So, here we say that we want to pick A, squash commit C into it, and then pick commit B.
pick 5ba6c52 A
squash bbfd1f6 C
pick 94d8c9c B
Save the editor and Git will perform the rebase. It will then pop up another editor window allowing you to give a single commit message for AC (helpfully pre-filled with the two original messages for A and C). Once you provide that, git rebase proceeds and your history now looks like AC->B, as you’d like it to be.
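For illustration, the whole A->B->C to AC->B rewrite can also be scripted end to end. The sketch below rebuilds the example in a scratch repo and drives the todo list through GIT_SEQUENCE_EDITOR instead of a real editor (GNU sed assumed); it uses fixup rather than squash so no commit-message editor pops up:

```shell
# Recreate the example: base commit, then A, B and the 'oops' commit C.
cd "$(mktemp -d)" && git init -q
git config user.name demo && git config user.email demo@example.com
echo base > base.txt && git add . && git commit -qm "Some other commit"
echo a > a.txt       && git add . && git commit -qm "A"
echo b > b.txt       && git add . && git commit -qm "B"
echo fix > a-fix.txt && git add . && git commit -qm "C"

CSHA=$(git rev-parse --short HEAD)   # C's hash, needed for the todo line
# Rewrite the todo list: drop 'pick C' (line 3), add 'fixup C' after 'pick A'.
GIT_SEQUENCE_EDITOR="sed -i -e '3d' -e \"1a fixup $CSHA C\"" git rebase -i HEAD~3
git log --format=%s   # prints: B, then A (with C's change folded in), then Some other commit
```

squash would do the same but stop to let you merge the two commit messages; fixup silently keeps A’s message.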
If the editor window comes up blank then the likely cause is that you have both cygwin and msysgit installed and GitExtensions is using the cygwin version of git. Making sure that msysgit is used in GitExtensions will avoid any such problems.
Rewrite history only for what you have not pushed. Modifying history for something that’s shared with others is going to confuse the hell out of them and cause global meltdown. You’ve been warned.
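A simple way to see what’s safe to rewrite is to list the commits your remote hasn’t seen yet. The sketch below fakes the ‘pushed’ state in a scratch repo by setting the remote-tracking ref by hand; in a real repo origin/master already exists – substitute your own remote and branch names:

```shell
cd "$(mktemp -d)" && git init -q
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m "already pushed"
git update-ref refs/remotes/origin/master HEAD   # pretend this much was pushed
git commit -q --allow-empty -m "local only"
# The actual check: anything this lists exists only locally and is fair game.
git log --oneline origin/master..HEAD            # shows only 'local only'
```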
You could end up with a conflict – in which case you can simply continue the rebase after resolving the conflicts with a
git rebase --continue
Sometimes, you just want the parachute to safety in between a rebase. Here, the spell to use is
git rebase --abort
Being able to rewrite history is admittedly a powerful feature. It might even feel a little esoteric at first glance. However, embracing it gives you the best of both worlds – quick, small commits and a clean history.
Another and probably more important effect is that instead of ‘waiting to get things in shape’ before committing, commits happen all the time. Trying out that ingenious approach that’s still taking shape in your head isn’t a problem now since you always have a point in time to go back to in case things don’t work out.
Being able to work ‘messily’ and commit anytime, secure in the knowledge that you’ll be able to fix up stuff later, provides an incredible amount of freedom of expression and security. Avoiding the wasted mental cycles spent planning things carefully before you attack your codebase is worth its weight in gold!!!
So I got my Dad the 8GB Nexus 7. This is an awesome tablet – exactly what a good tablet should be. The UI is buttery smooth and things just fly. The hardware is not a compromise, excellent price point and overall a superb experience.
Of course, there are some things to deal with, like 8 GB of storage, lack of mobile data connectivity, lack of expandable storage and no rear camera. These aren’t issues at all as far as I’m concerned.
If I’m traveling with the tablet, then I always have the phone’s 3G data to tether to using WiFi tethering. The 8GB storage is only an issue if you’re playing the heavyweight games or want to carry all your videos or a ton of movies with you. Given the 8GB storage, I’m more than happy to load up a few movies/music before travel. Provided you have a good way to get files/data in and out of the computer and are OK with not carrying your complete library with you always, you don’t have to worry about the storage. A camera though would be nice – but then hey – you can’t have everything your way :).
Which brings us to the topic of file transfers to/from your PC. Now wifi is really the best way to go – and I couldn’t find a way to make WiFi direct work with Windows 7. So for now, Connectify seems to be the best option. It runs in the background on your PC and makes your PC’s wireless card publish its own Wireless network. You can connect to this network from your phone and if you share folders on your PC, you’re set to move data around.
Now, on the Android side, ES File Explorer is free and gets the job done from a file management/copying/moving perspective. I also tried File Expert but it’s more cumbersome. ES excels in multiple file selection and copying.
The one area where the N7 excels is reading books. The form factor and weight are just right for extended reading sessions. However, Google Play Books doesn’t work in India, so you need an alternative app. I tried out Moon+ Reader, FBReader and Reader+ – and of the lot, FBReader was the best. Moon+ has a nicer UI but choked on some of my ebooks. Reader+ didn’t get the tags right and felt a little clunky. FBReader provided the smoothest experience of the lot. I’m already halfway through my first book – without any issues. I have a decent collection of ebooks on my PC, but once I copied them to the N7, all the metadata was messed up. Editing metadata and grabbing covers is a pain on the tablet and best done on the PC.
This is where Calibre comes in – a full-blown ebook library management app. It does a great job of keeping your ebooks organized and editing their metadata. It can also fetch metadata and covers from Amazon and Google and update your collection. Once you’re done, transferring to the N7 is a little tricky. The first time, I just copied the library over to the N7 – but the N7 showed each book thrice. Some troubleshooting later, I found that the best way is to create an export folder and use the ‘Connect to Folder’ feature to mount it as a destination. Then you can select all the books you want and use ‘Send to destination in one format’ to publish EPUB format to the folder. This generates one epub file per book with the metadata and covers embedded, and you can then copy this folder over to the N7’s Books folder using ES File Explorer.
My movie collection is on XBMC – and XBMC is DLNA/uPnP compatible. Dive into XBMC system settings and turn on the uPnP/DLNA services. Then on the N7, you can use uPnPlay. For playing video, it relies on having a video player app installed. I like MXPlayer. Don’t forget to also install the HW codec for ARM V7 and to turn on HW decoding in the settings.
You won’t be doing much of this as there isn’t a rear camera – but if you do decide to take a video or pics with the N7’s FFC, then you can use uPnPlay to project them onto your TV (provided you have a DLNA/uPnP compatible TV or a compliant media center hooked up to your TV).
For XBMC, turn on uPnp in settings and you’re done. XBMC should be able to discover your tablet and you’ll be able to browse and play videos.
If you’d rather use the tablet to control what’s played on XBMC, then turn on the setting to allow control via uPnP in XBMC settings. Now, in uPnPlay you can select XBMC as the play-to device, and playing any video/song plays it on the TV.
That’s all for now… I’m loving this tablet and the stuff it can do… looks like I’d be buying a few more soon
So I just wrote up a Websocket server using CometD/Bayeux. It’s a ridiculously simple app – but went quite a long way in helping to understand the nitty gritties with putting up a Websocket server and CometD/Bayeux. Thought that I’ll put it up for reference – should help in getting a leg up on getting started with CometD.
The sample’s up on github at https://github.com/raghur/rest-websocket-sample
Here’s how to go about running it:
There are two parts to the app
And here’s the nuts and bolts:
It’s about as simple as you can get (beyond a Hello world or a chat example). Specifically, I wanted a sample where back end changes can be pushed to clients.
So, yesterday I figured that now I’m an addict.. fully and totally to something called wordhero on my phone… it’s one of those games where you have a 4×4 grid of letters and you need to find as many words as you can within 2 mins. Nothing special… and there are tons of look alikes and also rans on the Google Play store. Even installed some of them and then removed them…
So what’s different? Turns out quite a few things – and apart from one, they’re all at the detail level. The most significant one is that it’s online only and everyone’s solving the same grid at the same time – so you get to see your ranking at the end. No searching for opponents, no clicking around – you’re just in every game.
Apart from that, the main game idea is the same (form words on a 4×4 grid) so details are the only place where one can innovate… reminds me of Jeff Atwood’s post that a product is nothing but a collection of details.
So what are these details?
Now sample the competition:
So after promising myself one last game at 11 last night and ending up playing till 12:30 AM, I tore myself away from this satanic game. I kept the phone far away to ensure I didn’t pick it up again in the middle of the night, and started thinking about what makes WordHero tick. There’s nothing earth-shaking about the reasons – but the effect of getting it right is surprising:
So does it mean that WordHero’s perfect? Far from it – but it’s successful by anyone’s measure. If you’re looking for perfection, you won’t ever launch :). Some of the stuff that I’m sure they’ll get to at some point:
I finally got my nettop – AMD E-350 based barebones system. Installed 4G of RAM and the plan was to set it up with XBMCBuntu or XBMC-XvBA. Instead of installing the XBMC-XvBA version directly, I figured that I could start with XBMCBuntu, see how it does and then if necessary move to the XvBA enabled builds.
I don’t have a hard drive for the nettop – the plan was to have the system run off an 8 GB pen drive.
At this point, I had XBMCBuntu up and running however, there were a few problems:
Of these, the high CPU utilization was the biggest worry – so there are a few steps to try:
<advancedsettings>
  <useddsfanart>true</useddsfanart>
  <cputempcommand>cputemp</cputempcommand>
  <samba>
    <clienttimeout>30</clienttimeout>
  </samba>
  <network>
    <disableipv6>true</disableipv6>
  </network>
  <loglevel hide="false">1</loglevel>
  <gui>
    <algorithmdirtyregions>1</algorithmdirtyregions>
    <visualizedirtyregions>false</visualizedirtyregions>
    <nofliptimeout>0</nofliptimeout>
  </gui>
  <measurerefreshrate>true</measurerefreshrate>
  <videoextensions>
    <add>.dat|.DAT</add>
  </videoextensions>
  <tvshowmatching append="yes">
    <!-- matches title 01/04 episode title and similar.-->
    <regexp>[s]?([0-9]+)[/._ ][e]?([0-9]+)</regexp>
  </tvshowmatching>
  <gputempcommand>/usr/bin/aticonfig --od-gettemperature | grep Temperature | cut -f 2 -d "-" | cut -f 1 -d "." | sed -e "s, ,," | sed 's/$/ C/'</gputempcommand>
</advancedsettings>
Did those, and they dropped the CPU utilization to about 25%, which was quite good. During video playback however, CPU use was still high – that’s because even though official XBMCBuntu uses hardware acceleration through VAAPI, support is still spotty.
I went over to the XBMC-XvBA installation thread and followed the directions in the first post to add the XBMC-XvBA PPAs. The download took some time and the XvBA build got installed. I started XBMC and things were much, much better.
sudo apt-add-repository ppa:wsnipex/xbmc-xvba
sudo apt-get update
sudo apt-get install xbmc xbmc-bin
There are other tweaks that are listed on the XBMC-XvBA installation thread which I also went ahead and applied.
Installing on a pen drive/USB flash drive has its pain points. My boot time was painfully slow (~3.5 minutes). Opening Chromium took forever and even page loads were slow (it would be stuck with the status bar on ‘checking cache’…). Also, the incessant writing to disk is probably killing off my pen drive much faster. I ended up doing the following:
noatime and nodiratime flags for the USB drive
# /etc/fstab
UUID=39f52ccf-363b-4b6e-abdd-927809618d83 / ext4 noatime,nodiratime,errors=remount-ro 0 1
# /etc/fstab
tmpfs /tmp tmpfs defaults,noatime,nodiratime,mode=1777 0 0
Move .xbmc to a NAS/external drive along with your media. Makes a lot more sense to keep your .xbmc folder with your media on an external HDD.
# Assuming sda is your USB drive
# (sudo echo noop > ... won't work - the redirection runs unprivileged, so use tee)
echo noop | sudo tee /sys/block/sda/queue/scheduler
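The scheduler choice resets on reboot. One way to persist it is a line in rc.local – a sketch only, since it assumes the drive always enumerates as sda (a udev rule would be the more robust route):

```shell
# /etc/rc.local - runs as root at boot; add before the final 'exit 0'
echo noop > /sys/block/sda/queue/scheduler
```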
# /etc/sysctl.conf
vm.swappiness=1
I had the greatest trouble here – but was able to get pm-utils working eventually.
pm-utils is a framework of shell scripts around suspend/hibernation/wakeup that provides hooks to execute scripts before standby/hibernation and when the computer resumes from sleep/shutdown.
First test if basic suspend/hibernate works
# check suspend methods supported
cat /sys/power/state
# S3:
sudo sh -c "echo mem > /sys/power/state"
If your system goes into standby, then things are good. But it’s just a start. In my case, the system would go into standby only the first time after boot. After that, it would go into standby but then resume immediately. It’s been asked enough times online and I’ve probably tried all the fixes. The first one is to update a kernel param:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_enforce_resources=lax"
After that, make sure to run sudo update-grub so the new kernel parameter is picked up.
In my case, the magic incantation above failed to help (your mileage may vary). Nothing bad happened, so I kept it anyway. I then rebooted, suspended and resumed the first time (which works) and took a dump of dmesg with dmesg > dmesg.1.log. After that, I tried to suspend again, and when the system came back immediately, I took another dmesg output and scanned the entries after the first run. It turned out the log had entries related to xhci_hcd – so I decided to unload it first and then try to suspend:
sudo modprobe -r xhci_hcd
sudo sh -c "echo mem > /sys/power/state"
After this, the system was able to go into standby each and every time. Now it was time to get pm-utils working. Out of the box, pm-utils came with a config that had a bunch of things I didn’t understand (and I doubt they applied to this machine). If standby was working directly, it should have worked through pm-utils too. However, it needed some pushing around before it came to a functional state.
Getting pm-utils to play nice
So now that I had confirmed suspend working, it was time to see why pm-utils was misbehaving. First off, time to clean up the default configuration – so I made a backup copy of /etc/pm/config.d/config and then started editing it:
SLEEP_MODULE="kernel"

# These variables will be handled specially when we load files in
# /etc/pm/config.d.
# Multiple declarations of these environment variables will result in
# their contents being concatenated instead of being overwritten.

# If you need to unload any modules to suspend/resume, add them here.
SUSPEND_MODULES="xhci_hcd"

# If you want to keep hooks from running, add their names here.
HOOK_BLACKLIST="99_fglrx 99lirc-resume novatel_3g_suspend"
If you’d like to wake the machine up with a USB device (a USB keyboard, say), you need to find out which USB port your device is connected to. The easiest way is to check the dmesg output, which usually prints this. In my case, my wireless keyboard/trackball is connected on USB3.
echo USB3 | sudo tee /proc/acpi/wakeup
echo enabled | sudo tee /sys/bus/usb/devices/usb3/power/wakeup
After that, the HTPC could be woken up with a keypress. I haven’t found a way to do this with only the keyboard (so that the system doesn’t wake up any time someone picks up the keyboard) – so for now, I’ve turned this off. The above change won’t persist over a reboot – to make it persistent, put the two lines above in /etc/rc.local before the final exit 0 line.
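For reference, here’s what that rc.local fragment might look like. The USB3 device name and the usb3 path are from my machine – list yours with cat /proc/acpi/wakeup and adjust (note that writing a device name to /proc/acpi/wakeup toggles its wakeup state):

```shell
# /etc/rc.local - re-enable USB wakeup on every boot (before the final 'exit 0')
echo USB3 > /proc/acpi/wakeup
echo enabled > /sys/bus/usb/devices/usb3/power/wakeup
```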
fglrx annoyances (ATI binary driver)
Not much point in an HTPC if the video isn’t top quality. And there are a lot of variables involved – your computer hardware, software, drivers, type of connection (HDMI/DVI) and the telly itself. Also, video driver support on Linux for ATI leaves quite a bit to be desired. One of the reasons for going with XBMCBuntu was knowing that there’d be large community support available on ubuntuforums.
Right off the bat, things started at the mildly irritating level. Catalyst Control Center in root mode won’t start even though there’s a big fat menu item for it. A quick google says the easiest way out is to run gksu amdcccle in the run dialog (ALT-F2).
My telly (Panasonic 32″) came with a native resolution of 1360×768 – so I was pleasantly surprised to find that with an HDMI connection it would let me go up to 1920×1080. Interestingly, even the telly’s on-screen menu reports full HD resolution – so I’m not really sure if they put in a full HD panel and normally report it as 720p. In any case, I’m not complaining, and full HD videos do look better.
After all this, the overall experience is a sea change:
So this is a continuation to my last post on my effort to upgrade the media center at home. While I wait for hardware to come, I’ve been reading up through forums and blogs online and am finding it real hard to get some good advice. So, I thought it might help to list down concisely the situation as it stands currently, in the hope that it will serve other folks who’re trying to find similar answers.
Getting XBMC on Linux with AMD Fusion APUs to work nicely and render hardware-accelerated video. Also, while we’re at it, do it booting off a pen drive (i.e. an HDD-less system).
To get hardware accelerated video on ATI/AMD hardware on Linux, currently, there are two choices
OpenElec is covered in the earlier post – but essentially there are Fusion-optimized micro builds that can run off an SD card/flash drive. From a video perspective, this should be identical to XBMCBuntu. The upside is that everything is pre-configured, while the downside is that it’s pretty limited.
Also covered in my previous post – a lightweight Ubuntu-based distro/LiveCD. XBMC Eden implements VAAPI, and the Catalyst drivers for Fusion APUs can be used as a backend to provide hardware-accelerated video. There are some cases where this bridging doesn’t or may not work well. On the other hand, since this is the officially supported method, it’s going to be around and improved upon, and is likely to have more info available in the public domain.
So this is an unofficial build by the community. The promise is that instead of going the VAAPI route, this has direct support for the XvBA API and so offers better performance. The forum thread tracking this is available here. While the build is supposed to be quite usable, from the thread activity it seems it’s also heavily under development. The goal is to merge this back to the mainline once it stabilizes.
I plan to go the path of least resistance – OpenElec, then XBMC-XvBA and finally settle on XBMCBuntu – but things might change once I actually get down to it.
Time for the big fat disclaimer – nothing in this post is guaranteed to be correct. This is my read of stuff on the net and it could be wrong. You’re welcome to correct it in the comments and I’d be more than happy to fix the post.
Vim installed by Cygwin’s setup program does not have Ruby/Python/Perl support enabled by default. As my list of must-have Vim plugins has a few which use Ruby and Python, I thought it might be good to do my own Cygwin build of Vim. It turned out to be a little more work than I expected – mostly due to the misleading (at least for me) makefile in the Vim source tree called Make_cyg.mak.
Here’s how to compile:
./configure --enable-pythoninterp --enable-perlinterp --enable-rubyinterp \
    --enable-gui=no --without-x --enable-multibyte --prefix=/usr
make && make install