Mixing Generics, Inheritance and Chaining

In my last post on unit testing, I wrote about a technique I'd learnt for simplifying test setups with the builder pattern. It provides a higher-level, more readable API, resulting in DAMP tests.

Implementing it, though, presented a few interesting issues that were fun to solve and, hopefully, instructive as well. I for one will need to look this up if I spend a few months doing something else – so I'd better write it down :).

In the Scheduler user portal, some controllers derive from the MVC4 Controller class whereas others derive from a custom base controller. For instance, controllers that deal with logged-in interactions derive from TenantController, which provides TenantId and SubscriptionId properties. IOW, a pretty ordinary and commonplace setup.

    class EventsController : Controller
    {
        public ActionResult Post(MyModel model)
        {
            // access request, form and other HTTP things
        }
    }

    class TenantController : Controller
    {
        // virtual so a mocking framework can stub them
        public virtual Guid TenantId { get; set; }
        public virtual Guid SubscriptionId { get; set; }
    }

    class TaskController : TenantController
    {
        public ActionResult GetTasks()
        {
            // HTTP things, and most probably TenantId and SubscriptionId as well
        }
    }

So, tests for EventsController will require HTTP setup (request content, headers etc.), whereas for anything deriving from TenantController we also need to be able to set up things like TenantId.

Builder API

Let’s start from how we’d like our API to be. So, for something that just requires HTTP context, we’d like to say:

    controller = new EventsControllerBuilder()
                .WithConstructorParams(mockOpsRepo.Object)
                .Build();

And for something that derives from TenantController:

    controller = new TaskControllerBuilder()
                .WithConstructorParams(mockOpsRepo.Object)
                .WithTenantId(theTenantId)
                .WithSubscriptionId(theSubId)
                .Build();

The controller builder will keep track of the different options, always returning `this` to facilitate chaining. Apart from that, it has a Build method which builds a controller object according to the options and returns it. Something like this:

    class TaskControllerBuilder
    {
        private object[] args;
        private Guid tenantId;

        public TaskControllerBuilder WithConstructorParams(params object[] args)
        {
            this.args = args;
            return this;
        }

        public TaskControllerBuilder WithTenantId(Guid id)
        {
            this.tenantId = id;
            return this;
        }

        public TaskController Build()
        {
            // Partial mock: real controller methods run, while virtual
            // properties like TenantId are stubbed.
            var mock = new Mock<TaskController>(MockBehavior.Strict, args);
            mock.Setup(t => t.TenantId).Returns(tenantId);
            return mock.Object;
        }
    }

Generics

Writing an XXXControllerBuilder for every controller isn't even funny – that's where generics come in. Something like this might be easier:

    controller = new ControllerBuilder<EventsController>()
                .WithConstructorParams(mockOpsRepo.Object)
                .Build();

and the generic class as:

    class ControllerBuilder<T> where T : Controller
    {
        private object[] args;
        private Guid tenantId;
        protected Mock<T> mockController;

        public ControllerBuilder<T> WithConstructorParams(params object[] args)
        {
            this.args = args;
            return this;
        }

        public virtual T Build()
        {
            mockController = new Mock<T>(MockBehavior.Strict, args);
            // Won't compile: T is only known to be a Controller,
            // which has no TenantId property.
            mockController.Setup(t => t.TenantId).Returns(tenantId);
            return mockController.Object;
        }
    }

It takes about two seconds to realize that this won't work – since the constraint only specifies that T is a subclass of Controller, the TenantId and SubscriptionId properties aren't available in the Build method.

Hmm – so a little refactoring is in order: a base ControllerBuilder that is used for plain controllers, and a subclass for controllers deriving from TenantController. So let's move tenantId out of ControllerBuilder.

    class TenantControllerBuilder<T> : ControllerBuilder<T>
        where T : TenantController          // this constraint gives us access to
                                            // TenantId and SubscriptionId
    {
        private Guid tenantId;

        public TenantControllerBuilder<T> WithTenantId(Guid tenantId)
        {
            this.tenantId = tenantId;
            return this;
        }

        public override T Build()
        {
            // call the base
            var controller = base.Build();
            // do additional stuff specific to TenantController subclasses
            mockController.Setup(t => t.TenantId).Returns(this.tenantId);
            return controller;
        }
    }

Now, this will work as intended:

/// This will work:
controller = new TenantControllerBuilder<TaskController>()
            .WithTenantId(guid)                             // Returns TenantControllerBuilder<T>
            .WithConstructorParams(mockOpsRepo.Object)      // okay!
            .Build();

But this won’t compile: :(

controller = new TenantControllerBuilder<TaskController>()
            .WithConstructorParams(mockOpsRepo.Object)  // returns ControllerBuilder<T>
            .WithTenantId(guid)                         // compiler can't resolve a WithTenantId method here
            .Build();

This is basically return-type covariance, and it's not supported in C#. With good reason too – if the base class contract says that you'll get a ControllerBuilder, then a derived class cannot re-declare the method to promise something narrower, i.e. that it will return not just any ControllerBuilder but specifically a TenantControllerBuilder.

But this does muck up our builder API's chainability – and telling clients to call methods in some arbitrary sequence is a no-no. This is where extension methods provide a neat solution. It's in two parts:

  • Keep only state in TenantControllerBuilder.
  • Use an extension class to convert from ControllerBuilder to TenantControllerBuilder safely with the extension api.
    // Only state:
    class TenantControllerBuilder<T> : ControllerBuilder<T> where T : TenantController
    {
        public Guid TenantId { get; set; }

        public override T Build()
        {
            var mock = base.Build();
            this.mockController.SetupGet(t => t.TenantId).Returns(this.TenantId);
            return mock;
        }
    }

    // And extensions that restore chainability
    static class TenantControllerBuilderExtensions
    {
        public static TenantControllerBuilder<T> WithTenantId<T>(
                                            this ControllerBuilder<T> t,
                                            Guid guid)
                where T : TenantController
        {
            TenantControllerBuilder<T> c = (TenantControllerBuilder<T>)t;
            c.TenantId = guid;
            return c;
        }

        public static TenantControllerBuilder<T> WithoutTenant<T>(this ControllerBuilder<T> t)
                where T : TenantController
        {
            TenantControllerBuilder<T> c = (TenantControllerBuilder<T>)t;
            c.TenantId = Guid.Empty;
            return c;
        }
    }

So, going back to our API:

///This now works as intended
controller = new TenantControllerBuilder<TaskController>()
            .WithConstructorParams(mockOpsRepo.Object)  // returns ControllerBuilder<T>
            .WithTenantId(guid)                         // Resolves to the extension method
            .Build();

It’s nice sometimes to have your cake and eat it too :D.

Unit Tests: Simplifying test setup with Builders

Had some fun at work today. The web portal to Scheduler service is written in ASP.NET MVC4. As such we have a lot of controllers and of course there are unit tests that run on the controllers.

Now, while ASP.NET MVC4 apparently did have testability as a goal, it still requires quite a lot of orchestration to test controllers. All this orchestration and mock setup muddies the waters and gets in the way of test readability. By implication, tests are harder to understand and maintain, and it eventually becomes harder to trust them.

Let me give an example:

[TestFixture]
public class AppControllerTests
{
    // private setup fields elided

    [SetUp]
    public void Setup()
    {
        _mockRepo = new MockRepository(MockBehavior.Strict);
        _tenantRepoMock = _mockRepo.Create();
        _tenantMapRepoMock = _mockRepo.Create();
        _controller = MvcMockHelpers.CreatePartialMock(_tenantRepoMock.Object, _tenantMapRepoMock.Object);

        guid = Guid.NewGuid();

        // partial mock - we want to test controller methods but want to mock properties that depend on
        // the HTTP infra.
        _controllerMock = Mock.Get(_controller);
    }

    [Test]
    public void should_redirect_to_deeplink_when_valid_sub()
    {
        //Arrange
        _controllerMock.SetupGet(t => t.TenantId).Returns(guid);
        _controllerMock.SetupGet(t => t.SelectedSubscriptionId).Returns(guid);
        var formValues = new Dictionary<string,string>();
        formValues["wctx"] = "/some/deep/link";
        _controller.SetFakeControllerContext(formValues);

        // Act
        var result = _controller.Index() as ViewResult;

        //// Assert
        Assert.That(result.ViewName, Is.EqualTo(string.Empty));
        Assert.That(result.ViewBag.StartHash, Is.EqualTo("/some/deep/link"));
        //Assert.That(result.RouteValues["action"], Is.EqualTo("Register"));

        _mockRepo.VerifyAll();
    }
}

As you can see, we're setting up a couple of dependencies and then creating the SUT (_controller) as a partial mock in the setup. In the test, we set up the request value collection and then exercise the SUT to check that we get redirected to a deep link. This works – but the test setup is too complicated. Yes – we need to create a partial mock and then set up expectations that correspond to a valid user with a valid subscription – but all of that is lost in the details. As a result, the test setup is hard to understand and hence hard to trust.

I recently came across this Pluralsight course, and a few of its points hit home right away, namely:

  1. Tests should be DAMP (Descriptive And Meaningful Phrases)
  2. Tests should be easy to review

Test setups require various objects in different configurations – and that’s exactly what a Builder is good at. The icing on the cake is that if we can chain calls to the builder, then we move towards evolving a nice DSL for tests. This goes a long way towards improving test readability – tests have become DAMP.

So here’s what the Builder API looks like from the client (the test case):

[TestFixture]
public class AppControllerTests {
    [SetUp]
    public void Setup()
    {
        _mockRepo = new MockRepository(MockBehavior.Strict);
        _tenantRepoMock = _mockRepo.Create();
        _tenantMapRepoMock = _mockRepo.Create();
        guid = Guid.NewGuid();
    }

    [Test]
    public void should_redirect_to_deeplink_when_valid_sub()
    {
        var formValues = new Dictionary<string, string>();
        formValues["wctx"] = "/some/deep/link";

        var controller = new AppControllerBuilder()
            .WithFakeHttpContext()
            .WithSubscriptionId(guid)
            .WithFormValues(formValues)
            .Build();

        // Act
        var result = controller.Index() as ViewResult;

        //// Assert
        Assert.That(result.ViewName, Is.EqualTo(string.Empty));
        Assert.That(result.ViewBag.StartHash, Is.EqualTo("/some/deep/link"));
        //Assert.That(result.RouteValues["action"], Is.EqualTo("Register"));

        _mockRepo.VerifyAll();
    }
}

While I knew what to expect, it was still immensely satisfying to see that:

  1. We've abstracted away the details – that we're setting up mocks, that we're using a partial mock, even that we're using the MVC mock helper utility – behind the AppControllerBuilder, leading to simpler code.
  2. The builder helps the readability of the code – it makes it easy to understand what preconditions we'd like set on the controller. This is important if you'd like the test reviewed by someone else.

You might think that this is just sleight of hand – after all, haven't we simply moved all the complexity into the AppControllerBuilder? And since I haven't shown its code, surely something tricky is going on ;)?

Well, not really – the builder code is straightforward, since it does one thing (build AppControllers) and does it well. It has a few properties that track the different options, and its Build method uses essentially the same code as the first snippet to build the object.
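For the curious, here's roughly what such a builder might look like. This is a sketch, not the actual class – the helper names (MvcMockHelpers, SetFakeControllerContext) are carried over from the test code above, while the repository interface names and the exact wiring inside Build are assumptions:

```csharp
// Sketch only - mirrors the original Setup() code, hidden behind a builder.
// ITenantRepo and ITenantMapRepo are placeholder names for the real interfaces.
class AppControllerBuilder
{
    private readonly MockRepository mockRepo = new MockRepository(MockBehavior.Strict);
    private Guid subscriptionId;
    private Dictionary<string, string> formValues;
    private bool useFakeHttpContext;

    public AppControllerBuilder WithFakeHttpContext()
    {
        this.useFakeHttpContext = true;
        return this;
    }

    public AppControllerBuilder WithSubscriptionId(Guid id)
    {
        this.subscriptionId = id;
        return this;
    }

    public AppControllerBuilder WithFormValues(Dictionary<string, string> values)
    {
        this.formValues = values;
        return this;
    }

    public AppController Build()
    {
        // The same partial-mock dance as the original Setup(), just tucked away.
        var tenantRepoMock = mockRepo.Create<ITenantRepo>();
        var tenantMapRepoMock = mockRepo.Create<ITenantMapRepo>();
        var controller = MvcMockHelpers.CreatePartialMock(
            tenantRepoMock.Object, tenantMapRepoMock.Object);

        var controllerMock = Mock.Get(controller);
        controllerMock.SetupGet(t => t.SelectedSubscriptionId).Returns(subscriptionId);

        if (useFakeHttpContext)
            controller.SetFakeControllerContext(formValues);

        return controller;
    }
}
```

Each With* method records an option and returns `this`; only Build pays the cost of constructing the mocks.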

Was that all? Well, not really – as always, the devil's in the details. The code above isn't real – it's more pseudo-code. Also, an example in isolation is easier to tackle; IRL (in real life), things are more complicated. We have a controller hierarchy, and writing builders that work with that hierarchy had me wrangling with generics, inheritance and chainability all at once :). I'll post a follow-up covering that.

And we’re back to windows

And back to Win 7

Well not really – but I have your attention now… So in my last post, I talked about moving my home computer from Win 7 to Linux Mint KDE. That went ok for the most part other than some minor issues.

Fast-forward a day and I hit my first user issue :)… my wife's workplace has some video content that is distributed as DRM-protected swf files that will play only through a player called HaiHaiSoft Player!

Options

  1. Boot into Windows – painful and slow, and it kills everyone else's sessions.
  2. Wine – thought it'd be worth a try, so I installed Wine and its dependencies through Synaptic. As expected, it wouldn't run the HaiHaiSoft player – crashed at launch.
  3. Virtualization – so the final option was a VM through VirtualBox. Installed VirtualBox and its dependencies (dkms, guest additions etc.) and brought out my Win 7 install disk from cold storage.

Virtualbox and Windows VM installation

Went through installation and got Windows up and running. Once I got the OS installed, also installed guest additions and it runs surprisingly well. I’d only used Virtualbox for a linux guest from a Windows host before so it was a nice change to see how it worked the other way around.

Anyway, once the VM was installed, downloaded and installed the player and put a shortcut to virtualbox on the desktop. Problem solved!

Upgraded to Linux

So after suffering tons of crashes (likely due to AMD drivers) and general system lagginess, I finally decided to ditch windows and move to linux full time.
This is on my home desktop which is more a family computer than something that only I would use.
I was a little apprehensive with driver support as usual and tricky stuff like suspend to ram (s3) which always seems highly driver dependent and problematic on Linux (it is still a pain on my XBMCBuntu box). Anyway, nothing like trying it out.

After looking around a bit, I downloaded Linux Mint 15 (default and KDE). Booted with the Live CD and liked the experience – though GNOME seems a bit jaded and old. I liked KDE much better – especially since it seems more power-user friendly.

So after testing hardware stuff (suspend, video drivers and so on) – all of which worked flawlessly, I must say – I decided to go ahead and install it on one of my HDDs. Unfortunately, installation was a bit rocky. I don't know if it was just me, but the Mint installer would progress up to preparing disks and then hang there for 10+ minutes without any feedback. I'm assuming it was reading partition tables and so forth – but I have no idea why it took so long. I thought it had hung a couple of times, so I terminated it, and it was only by accident – when I left it on its own for some time and came back – that I found it was still working. It then presented me with the list of options (guided partitioning of the entire disk, co-locating with another OS etc.) – but things actually got worse after this.

What seems to have happened is that my pending clicks on the UI all got processed, and it proceeded to install on my media drive before I had a chance … wiped out my media drive. Thankfully, I had backed up the important stuff on that drive before installation, so it wasn't a biggie…
At this point, I had serious doubts about continuing with Mint and was ready to chuck it out of the window and go back to Kubuntu, or just back to Windows. However, I hung on – given that I'd already wiped a drive, I might as well install it properly and wipe it later if it wasn't good.

Anyway, long story short: I restarted the install, picked my 1TB drive and partitioned it as 20GB /, 10GB /var, 1GB /boot and the rest unpartitioned. Mint went through the installation and seemed to take quite some time – there were a couple of points where the progress bar was stuck at some percentage for multiple minutes and I wasn't sure whether things were proceeding or hung. In any case, after the partitioning window I was more inclined to wait. Good that I did, since the installation did eventually complete.

Feedback to the Mint devs – please make the installer more generous with feedback, especially when it goes into something that could take long.

First boot

Post installation, I rebooted, and GRUB shows my Windows boot partition as well, as expected. I still haven't tried booting into Windows, so that's one thing left to check. Booted into Mint and things looked good. Set up accounts for my dad and my wife. One thing I had to do was edit /etc/pam.d/common-password to remove password complexity (obscure) and set minlen=1:

     password   [success=1 default=ignore]  pam_unix.so minlen=1 sha512

Next was to set up the local disks (2 NTFS and 1 FAT32 partition) so that they are mounted at boot and everyone can read and write to them. I decided to go the easy route and just put entries in /etc/fstab:

UUID=7D64-XXX  /mnt/D_DRIVE    vfat      defaults,uid=1000,gid=100,umask=0007                   0       2
UUID="1CA4559CXXXXX" /mnt/E_DRIVE ntfs rw,auto,exec,nls=utf8,uid=1000,gid=100,umask=0007                0       2
UUID="82F006D7XXXX" /mnt/C_DRIVE ntfs rw,auto,exec,nls=utf8,uid=1000,gid=100,umask=0007                 0       2

That fixed the mount issue, but I still needed the drives to surface properly in the file manager (Dolphin). This was actually quite easy – I added them as Places and removed the device entries from the right-click menu. This worked for me – I'd have liked to make it the default for all users but didn't find a way, so I finally decided to just copy the ~/.local/share/user-places.xbel file to each local user and set the owner.
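The copy itself is just a short loop – something along these lines (a sketch; the account names are placeholders, and it needs to run as root for the chown to work):

```shell
# Sketch: push my Dolphin places file to the other local accounts.
# "dad" and "wife" are placeholder user names for the accounts on this box.
src="$HOME/.local/share/user-places.xbel"
for user in dad wife; do
    [ -d "/home/$user" ] || continue          # skip accounts that don't exist
    dest="/home/$user/.local/share/user-places.xbel"
    mkdir -p "$(dirname "$dest")"
    cp "$src" "$dest"
    chown "$user:" "$dest"                    # hand ownership to that user
done
```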

Android

Other than that, I also need to be able to connect my Nexus 4 and 7 as MTP devices. I had read that this doesn't work out of the box – but it looks like that's been addressed in Ubuntu 13.04 (and hence in Mint).
I also need adb and fastboot – so I just installed them through Synaptic. BTW, that was awesome, since it meant I didn't have to download the complete Android SDK just for two tools.

General impressions

Well, I'm still wondering why I didn't migrate full time to Linux all these years. Things have been very smooth – but I need to call out the key improvements I've seen so far:

  1. Boot – fast – less than a minute. Compare that to up to 3 minutes till the desktop is loaded on Win 7.
  2. Switching users – a huge speed-up. On Windows, it would take so long that most of the time we would just continue using the other person's login.
  3. Suspend/Resume – works reliably. Back on Windows, for some reason, if I had multiple users logged in, suspend would work but resume was hit and miss.
  4. The GPU seems to work much better. Note that I'm not playing any games. I have a Radeon 5670 – but somehow on Windows even Google Maps (the new one) would be slow and sluggish while panning and zooming. Given that on Linux I'm using the open source drivers instead of fglrx, I was expecting the same if not worse. Pleasantly surprised that Maps just works beautifully – panning and zooming in and out is smooth and fluid. Even the photospheres that I had posted to Maps seem to load a lot more quickly.

Well, that's it for now. I know that a lot of this might be 'new system build' syndrome, whereas on Windows gunk had built up over multiple years. However, note that my Windows install was fully patched and up to date. Being a power user, I was even going beyond the default levels of tweaking (page file on a separate disk from the system etc.) – but I just got tired of the issues. The biggest trigger was the GPU crashes, of course, and there even updating to the latest drivers didn't seem to help much. I fully realize that it's almost impossible to generalize. My work laptop has Win 7 x64 Enterprise and I couldn't be happier – it remains snappy and fast in spite of a ton of things being installed (actually, maybe not – the Linux boot is still faster) – but it is stable.
And of course, there might be a placebo effect at some places – but in the end what matters is that things work.

Vimgrep on steroids – even on Windows

So I was looking at this Vim tip for finding in files from within Vim – while it looks helpful, there are a number of possible improvements:

  1. Why a static binding? Being able to tweak the patterns or the files to search is quite common – so there's much more value if the command is printed in the command line, ready to be edited to your heart's content, or you can just go ahead and execute the search with [Enter].
  2. The tip won't work for files without extensions (say .vimrc) – in that case, expand("%:e") returns an empty string.
  3. lvimgrep is cross-platform but slow – let's use MinGW grep as well.
  4. And make that MinGW grep integration work across different machines.

It was more of an evening of scratching an itch (a painful one if you're a zero in vimscript :) ). Here's the gist for it – hope someone finds it useful.

Feel free to tweak the mappings – I use the following:

  1. leader-f: normal mode: vimgrep for current word, visual mode: search for current selection
  2. leader-fd: Similar – but look in the directory of the file and below
  3. leader-*: Similar to the above, but use internal grep

Save the file to your .vim folder and source it from .vimrc

    so ~/.vim/grephacks.vim

A few notes:

  1. GNUWIN is an env variable pointing to the folder where you've extracted MinGW findutils, grep and their dependencies.
  2. The searches by default work down from whatever Vim thinks is your present working directory. I highly recommend vim-rooter if you're using anything like Subversion, Mercurial or Git, as vim-rooter automatically looks for a parent folder that contains .git, .hg or .svn (and more – please look it up).
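To give a flavour of the external-grep wiring, the integration boils down to something like this – a sketch, not the actual gist contents; the grep flags and the `grep.exe` path layout under `$GNUWIN` are assumptions:

```vim
" Sketch: route Vim's :grep through MinGW grep when $GNUWIN is set.
" $* is where Vim substitutes the arguments given to :grep.
if !empty($GNUWIN)
    let &grepprg = $GNUWIN . '\grep.exe -rn $* .'
endif
```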

Here’s how it looks in action when I’ve selected some text and pressed leader-f:

[Screenshot: the pre-filled vimgrep command after pressing leader-f]

Happy vimming!

Downloading over an unreliable connection with Wget

Rant – BSNL!!!

This is part rant, part tip – so bear with me… My broadband connection has absolutely sucked over the past week. I upgraded from 2Mbps with a download limit to 4Mbps with unlimited downloads, and since then it has been nothing but trouble… Damn BSNL!! I've probably registered about 30-odd complaints with them to no avail. If there were a Nobel for bad customer service, BSNL would probably win it by a mile. Some examples:

  1. They'll call to find out what the complaint is, and even when I explain what's happening, they hardly hear me out at all.
  2. Or they call up and say 'We have fixed it at the Exchange' when nothing has changed.
  3. They automatically close the complaints :)

Guess they find it too troublesome that someone who's paying for broadband actually expects said broadband connection to work reliably!

Anyway, Airtel doesn’t seem to be any better – they need 10 days to set up a connection and when I was on the phone with them, they didn’t seem too interested in increasing their customer count by 1 :).

I also tried calling an ISP called YouBroadband after searching some of the Bangalore forums for good ISP providers. They promised a call in 24 hours to confirm if they have coverage in my area and it was feasible for them to set up the connection and that was 48 hours ago!

At work, I’ve heard good things about ACTBroadband and they have some ads in TOI as well, but they said they don’t have coverage in my area :(.

So how do you download

Today I needed to download something, and doing it from the browser failed every time since my DSL connection would blink out in between!

After ranting and raving, writing the first part above, and still mentally screaming at BSNL, I decided to do something about it… Time for trusty old wget – surely it'll have something?

Turns out that guess was 100% on the money… it took a few tries experimenting with different options, but this finally worked like a charm:

wget -t0 --waitretry=5 -c -T5 <url>
    where
    -t0           - unlimited retries
    --waitretry=5 - wait up to 5 seconds between retries
    -c            - resume partially downloaded files
    -T5           - set all timeouts (connect, read and DNS) to 5 seconds