Mixing Generics, Inheritance and Chaining

In my last post on unit testing, I wrote about a technique I’d learnt for simplifying test setups with the builder pattern. It provides a higher-level, more readable API, resulting in DAMP tests.

Implementing it, though, presented a few interesting issues that were fun to solve and, hopefully, instructive as well. I for one will need to look it up if I spend a few months doing something else – so I’d better write it down :).

In the Scheduler user portal, some controllers derive from the MVC4 Controller class whereas others derive from a custom base controller. For instance, controllers that deal with logged-in interactions derive from TenantController, which provides TenantId and SubscriptionId properties. IOW, a pretty ordinary and commonplace setup.

    class EventsController : Controller
    {
        public ActionResult Post(MyModel model)
        {
            // access request, form and other HTTP things
        }
    }

    class TenantController : Controller
    {
        // virtual, so they can be stubbed on a partial mock later
        public virtual Guid TenantId { get; set; }
        public virtual Guid SubscriptionId { get; set; }
    }

    class TaskController : TenantController
    {
        public ActionResult GetTasks()
        {
            // HTTP things, and most probably TenantId and SubscriptionId as well.
        }
    }

So, tests for EventsController will require HTTP setup (request content, headers, etc.), whereas for anything deriving from TenantController we also need to be able to set up things like TenantId.

Builder API

Let’s start with how we’d like our API to look. For something that just requires HTTP context, we’d like to say:

    controller = new EventsControllerBuilder()
                .WithConstructorParams(mockOpsRepo.Object)
                .Build();

And for something that derives from TenantController:

    controller = new TaskControllerBuilder()
                .WithConstructorParams(mockOpsRepo.Object)
                .WithTenantId(theTenantId)
                .WithSubscriptionId(theSubId)
                .Build();

The controller builder basically keeps track of the different options, and always returns this to facilitate chaining. Apart from that, it has a Build method which builds a controller object according to the options and returns it. Something like this:

    class TaskControllerBuilder
    {
        private object[] args;
        private Guid tenantId;

        public TaskControllerBuilder WithConstructorParams(params object[] args)
        {
            this.args = args;
            return this;
        }

        public TaskControllerBuilder WithTenantId(Guid id)
        {
            this.tenantId = id;
            return this;
        }

        public TaskController Build()
        {
            // Strict partial mock; TenantId is virtual, so Moq can stub it.
            var mock = new Mock<TaskController>(MockBehavior.Strict, args);
            mock.Setup(t => t.TenantId).Returns(tenantId);
            return mock.Object;
        }
    }

Generics

Writing an XXXControllerBuilder for every controller isn’t even funny – that’s where generics come in. Something like this would be easier:

    controller = new ControllerBuilder<EventsController>()
                .WithConstructorParams(mockOpsRepo.Object)
                .Build();

and the generic class as:

    class ControllerBuilder<T> where T : Controller
    {
        private object[] args;
        private Guid tenantId;
        protected Mock<T> mockController;

        public ControllerBuilder<T> WithConstructorParams(params object[] args)
        {
            this.args = args;
            return this;
        }

        public T Build()
        {
            mockController = new Mock<T>(MockBehavior.Strict, args);
            // won't compile: T is only known to be a Controller,
            // which has no TenantId property
            mockController.Setup(t => t.TenantId).Returns(tenantId);
            return mockController.Object;
        }
    }

It takes about two seconds to realize that this won’t work – since the constraint only specifies that T should be a subclass of Controller, we do not have the TenantId or SubscriptionId properties available in the Build method.

Hmm – so a little refactoring is in order: a base ControllerBuilder that handles plain controllers, and a subclass for controllers deriving from TenantController. So let’s move tenantId out of the way of ControllerBuilder.
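
With tenantId gone, the base builder slims down to something like this (a sketch of the refactored base, assuming Moq; note that Build becomes virtual and mockController stays protected, so the subclass can extend it):

    class ControllerBuilder<T> where T : Controller
    {
        private object[] args;
        protected Mock<T> mockController;

        public ControllerBuilder<T> WithConstructorParams(params object[] args)
        {
            this.args = args;
            return this;
        }

        public virtual T Build()
        {
            // Strict partial mock of the controller under test.
            mockController = new Mock<T>(MockBehavior.Strict, args);
            return mockController.Object;
        }
    }

And the subclass that layers the tenant state on top: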

    class TenantControllerBuilder<T> : ControllerBuilder<T>
         where T : TenantController         // and this will allow us to
                                            // access TenantId and SubscriptionId
    {
        private Guid tenantId;

        public TenantControllerBuilder<T> WithTenantId(Guid tenantId)
        {
            this.tenantId = tenantId;
            return this;
        }

        public override T Build()
        {
            // call the base
            var controller = base.Build();
            // do additional stuff specific to TenantController subclasses
            mockController.Setup(t => t.TenantId).Returns(this.tenantId);
            return controller;
        }
    }

Now, this will work as intended:

// This will work:
controller = new TenantControllerBuilder<TaskController>()
            .WithTenantId(guid)                             // Returns TenantControllerBuilder<T>
            .WithConstructorParams(mockOpsRepo.Object)      // okay!
            .Build();

But this won’t compile: 😦

controller = new TenantControllerBuilder<TaskController>()
            .WithConstructorParams(mockOpsRepo.Object)  // returns ControllerBuilder<T>
            .WithTenantId(guid)                         // Compiler can't resolve WithTenantId.
            .Build();

This is basically return-type covariance, which C# doesn’t support. The root of the problem is that WithConstructorParams is declared on the base class with ControllerBuilder<T> as its return type, so once it’s been called the compiler only knows it has a ControllerBuilder<T> in hand – the fact that we started out with a TenantControllerBuilder<T> is forgotten, and WithTenantId can’t be resolved. And with good reason: the base class’s contract says you get a ControllerBuilder<T> back, and a derived class can’t unilaterally narrow that promise to TenantControllerBuilder<T> on a method it merely inherits.
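
In a nutshell, the disallowed thing looks like this (on the C# of the time – C# 9 has since started allowing covariant returns on overrides):

    class Base
    {
        public virtual Base Self() { return this; }
    }

    class Derived : Base
    {
        // error CS0508: return type must be 'Base' to match overridden member
        public override Derived Self() { return this; }
    }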

But this does muck up our builder API’s chainability – telling clients to call methods in some arbitrary sequence is a no-no. And this is where extension methods provide a neat solution. It comes in two parts:

  • Keep only state in TenantControllerBuilder.
  • Use an extension class to safely get from a ControllerBuilder back to a TenantControllerBuilder.

// Only state:
class TenantControllerBuilder<T> : ControllerBuilder<T> where T : TenantController
{
    public Guid TenantId { get; set; }

    public override T Build()
    {
        var mock = base.Build();
        this.mockController.SetupGet(t => t.TenantId).Returns(this.TenantId);
        return mock;
    }
}

// And extensions that restore chainability
static class TenantControllerBuilderExtensions
{
    public static TenantControllerBuilder<T> WithTenantId<T>(
                                        this ControllerBuilder<T> t,
                                        Guid guid)
            where T : TenantController
    {
        // Safe in practice: when T derives from TenantController, the builder
        // we constructed is a TenantControllerBuilder<T>.
        TenantControllerBuilder<T> c = (TenantControllerBuilder<T>)t;
        c.TenantId = guid;
        return c;
    }

    public static TenantControllerBuilder<T> WithoutTenant<T>(this ControllerBuilder<T> t)
            where T : TenantController
    {
        TenantControllerBuilder<T> c = (TenantControllerBuilder<T>)t;
        c.TenantId = Guid.Empty;
        return c;
    }
}

So, going back to our API:

// This now works as intended
controller = new TenantControllerBuilder<TaskController>()
            .WithConstructorParams(mockOpsRepo.Object)  // returns ControllerBuilder<T>
            .WithTenantId(guid)                         // Resolves to the extension method
            .Build();

Why does the chain compile now? WithConstructorParams returns a ControllerBuilder<TaskController>, TaskController satisfies the where T : TenantController constraint, and so the compiler resolves WithTenantId to the extension method – which casts and hands us back the TenantControllerBuilder<TaskController> we started with. It’s nice sometimes to have your cake and eat it too :D.

Unit Tests: Simplifying test setup with Builders

Had some fun at work today. The web portal to the Scheduler service is written in ASP.NET MVC4. As such, we have a lot of controllers, and of course there are unit tests that run on the controllers.

Now, while ASP.NET MVC4 apparently did have testability as a goal, it still requires quite a lot of orchestration to test controllers. All this orchestration and mock setup muddies the waters and gets in the way of test readability. By implication, tests become harder to understand and maintain, and eventually harder to trust.

Let me give an example:

[TestFixture]
public class AppControllerTests
{
    // private mock and SUT fields elided

    [SetUp]
    public void Setup()
    {
        _mockRepo = new MockRepository(MockBehavior.Strict);
        _tenantRepoMock = _mockRepo.Create();
        _tenantMapRepoMock = _mockRepo.Create();
        _controller = MvcMockHelpers.CreatePartialMock(_tenantRepoMock.Object, _tenantMapRepoMock.Object);

        guid = Guid.NewGuid();

        // partial mock - we want to test controller methods but want to mock properties that depend on
        // the HTTP infra.
        _controllerMock = Mock.Get(_controller);
    }

    [Test]
    public void should_redirect_to_deeplink_when_valid_sub()
    {
        //Arrange
        _controllerMock.SetupGet(t => t.TenantId).Returns(guid);
        _controllerMock.SetupGet(t => t.SelectedSubscriptionId).Returns(guid);
        var formValues = new Dictionary<string,string>();
        formValues["wctx"] = "/some/deep/link";
        _controller.SetFakeControllerContext(formValues);

        // Act
        var result = _controller.Index() as ViewResult;

        //// Assert
        Assert.That(result.ViewName, Is.EqualTo(string.Empty));
        Assert.That(result.ViewBag.StartHash, Is.EqualTo("/some/deep/link"));
        //Assert.That(result.RouteValues["action"], Is.EqualTo("Register"));

        _mockRepo.VerifyAll();
    }
}

As you can see, we’re setting up a couple of dependencies, then creating the SUT (_controller) as a partial mock in the setup. In the test, we’re setting up the request value collection and then exercising the SUT to check whether we get redirected to a deep link. This works – but the setup is too complicated. Yes – we need to create a partial mock and then set up expectations that correspond to a valid user with a valid subscription – but all of this is lost in the details. As a result, the test setup is hard to understand and hence hard to trust.

I recently came across this Pluralsight course, and a few thoughts hit home right away, namely:

  1. Tests should be DAMP (Descriptive And Meaningful Phrases)
  2. Tests should be easy to review

Test setups require various objects in different configurations – and that’s exactly what a Builder is good at. The icing on the cake is that if we can chain calls to the builder, we move towards evolving a nice DSL for tests. This goes a long way towards improving readability – the tests become DAMP.

So here’s what the Builder API looks like from the client (the test case):

[TestFixture]
public class AppControllerTests {
    [SetUp]
    public void Setup()
    {
        _mockRepo = new MockRepository(MockBehavior.Strict);
        _tenantRepoMock = _mockRepo.Create();
        _tenantMapRepoMock = _mockRepo.Create();
        guid = Guid.NewGuid();
    }

    [Test]
    public void should_redirect_to_deeplink_when_valid_sub()
    {
        var formValues = new Dictionary<string, string>();
        formValues["wctx"] = "/some/deep/link";

        var controller = new AppControllerBuilder()
            .WithFakeHttpContext()
            .WithSubscriptionId(guid)
            .WithFormValues(formValues)
            .Build();

        // Act
        var result = controller.Index() as ViewResult;

        //// Assert
        Assert.That(result.ViewName, Is.EqualTo(string.Empty));
        Assert.That(result.ViewBag.StartHash, Is.EqualTo("/some/deep/link"));
        //Assert.That(result.RouteValues["action"], Is.EqualTo("Register"));

        _mockRepo.VerifyAll();
    }
}

While I knew what to expect, it was still immensely satisfying to see that:

  1. We’ve abstracted away the details – that we’re setting up mocks, that we’re using a partial mock, even that we’re using the MVC mock helper utility – all behind the AppControllerBuilder, leading to simpler code.
  2. The Builder improves the readability of the code – it makes it easy to see what preconditions we’d like set on the controller. This matters if you’d like to get the test reviewed by someone else.

You might think this is just sleight of hand – after all, haven’t we just moved all the complexity into the AppControllerBuilder? Also, I haven’t shown its code – so surely something tricky is going on ;)?

Well, not really – the Builder code is straightforward, since it does one thing (build AppControllers) and does it well. It has a few state properties that track the different options, and the Build method basically uses the same code as in the first snippet to build the object.
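
For the curious, here’s roughly what AppControllerBuilder could look like. The field and option names are illustrative, but the Build method is just the partial-mock recipe from the Setup() above, hidden behind the fluent surface (assuming Moq and the same SetFakeControllerContext helper):

class AppControllerBuilder
{
    private object[] ctorArgs = new object[0];
    private Guid subscriptionId;
    private bool fakeHttpContext;
    private Dictionary<string, string> formValues;

    public AppControllerBuilder WithConstructorParams(params object[] args)
    {
        ctorArgs = args;
        return this;
    }

    public AppControllerBuilder WithFakeHttpContext()
    {
        fakeHttpContext = true;
        return this;
    }

    public AppControllerBuilder WithSubscriptionId(Guid id)
    {
        subscriptionId = id;
        return this;
    }

    public AppControllerBuilder WithFormValues(Dictionary<string, string> values)
    {
        formValues = values;
        return this;
    }

    public AppController Build()
    {
        // Partial mock: real controller methods run (CallBase), while
        // HTTP-dependent virtual properties get stubbed.
        var mock = new Mock<AppController>(MockBehavior.Strict, ctorArgs) { CallBase = true };
        mock.SetupGet(c => c.SelectedSubscriptionId).Returns(subscriptionId);

        var controller = mock.Object;
        if (fakeHttpContext)
            controller.SetFakeControllerContext(formValues ?? new Dictionary<string, string>());
        return controller;
    }
}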

Was that all? Well, not really – as always, the devil’s in the details. The code above isn’t real – it’s more pseudo-code. Secondly, an example in isolation is easier to tackle; IRL (in real life), things are more complicated. We have a controller hierarchy, and writing builders that work with the hierarchy had me wrangling with generics, inheritance and chainability all at once :). I’ll post a follow-up covering that.

Google Maps Navigation enabled in India!!

Just came across an awesome piece of news – Google Maps now officially has turn-by-turn, voice-guided navigation in India!!

Until now, I used to get the Ownhere mod for Google Maps, which enabled world navigation – it used to be available on XDA-Forums but got taken down once Google frowned on it!

No more of that hassle – just go to the Play Store and install Maps.

Very cool! Thanks Google.

Coffeescript rocks!

I’ve been absent from the blog for a few weeks. Life got taken over by work – I’ve been deep in the JavaScript jungles, and CoffeeScript has been a lifesaver.

Based on my earlier peek at CoffeeScript, we went ahead full-on with it, and I have to say it has been a pleasant ride for the team. With over 4.7 KLoC of generated JavaScript (the CoffeeScript source weighing in at around 3.7 KLoC including comments), I can now confidently recommend it for any sort of JavaScript-heavy development.

I’m going to list the benefits we saw with CoffeeScript – hopefully someone else trying to evaluate it will find this useful:

  1. Developers who haven’t dived deep into JavaScript’s prototype-based model find it easier to get up to speed sooner. Yes – once in a while they do get tripped up and then have to look again at what’s going on under the covers – but that’s normal. The key point is that it’s much, much more productive and enjoyable to use CoffeeScript.
  2. CoffeeScript’s conciseness definitely goes a long way in improving readability. One of the algorithms we implemented applied a bunch of time-overlap rules. We also used Underscore.js – and between CoffeeScript and Underscore.js, the whole routine fit within 20 lines, was mostly bug-free and very easy for new folks to pick up and maintain over time. The generated JS was correspondingly more complicated (though Underscore helped hide some of the loop-iteration noise) – and it wouldn’t have been much different had we written the JS directly.
  3. Integrating with external frameworks – jQuery, jQuery UI, etc. – was again painless and simple.
  4. Another benefit was that the easy class-structure syntactic sugar helped us quickly prototype new ideas and then refine them to production quality. With developers who’re still shaky on JS, I doubt the same approach would have worked, since they’d have spent cycles trying to get their heads wrapped around JS’s prototype-based model.
  5. CoffeeScript also allows you to split the code across multiple source files and merge them all before compiling to JS – this let us keep each source file separate and reduce the merges required during commits (see the compile command just after this list).
  6. Finally, performance is a non-issue. You do have to be a little careful, or you might find yourself allocating and returning function objects when you don’t mean to – but this is easily caught in reviews.
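
For reference, the merge-then-compile in point 5 boils down to a single compiler invocation – something along these lines (file names illustrative; --join comes from the CoffeeScript 1.x CLI we were using):

    coffee --join scheduler.js --compile src/*.coffee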

One latent doubt I had going in was the number of times we’d have to drop down to the JS level to debug issues. With a larger CoffeeScript codebase spread across multiple files, this is a real concern, since error line numbers wouldn’t match the source and we might have to jump through hoops to fix issues. Luckily, this wasn’t a problem at all – whether it’s an error in the JS or just inspecting code in the browser, it’s easy to map back to the CoffeeScript class/function – so you just fix it there and regenerate the JS. Secondly, the generated JS is quite readable – so even when investigating issues, it’s quite trivial to drop breakpoints in Chrome and know what’s going on.

The one minor irritation: if there was a CoffeeScript compile error in the joined file, the reported line number was useless, and you had to compile each file independently to locate the error. Easily automated with a script – so that’s just being nitpicky.

Anyway, if you got here looking for advice on using CoffeeScript, you’ve reached the right place – and maybe this post has helped you make up your mind!

Coffeescript looks promising

I’ve just run across CoffeeScript… I can’t believe what sort of a hole I’ve been living in.

It’s a source-to-source compiler (i.e., when you ‘compile’ a CoffeeScript script, you get JavaScript source).

So why would you want a source-to-source compiler for JavaScript?
Well, as apps become more and more front-end heavy with DHTML/Ajax bling, the JavaScript that holds it all together also becomes more and more complex. Yeah, sure, you use jQuery (or insert your favourite JS framework here) – but that’s not even scratching the surface. You’re still writing tons of JS code, dealing with its idiosyncrasies and tearing your hair out.

Enter CoffeeScript – clean syntax, with elements of style borrowed from Ruby and Python. You write your code in CoffeeScript, which is neat and concise, and what it generates is clean, idiomatic JavaScript.

Let’s try something – take a guess at what the following does:

    var Animal, Mammal, animal, farm, _i, _len,
      __hasProp = Object.prototype.hasOwnProperty,
      __extends = function(child, parent) { for (var key in parent) { if (__hasProp.call(parent, key)) child[key] = parent[key]; } function ctor() { this.constructor = child; } ctor.prototype = parent.prototype; child.prototype = new ctor; child.__super__ = parent.prototype; return child; };
    Animal = (function() {
      function Animal(name) {
        this.name = name;
      }
      Animal.prototype.speak = function() {
        return console.log("I am a " + this.name);
      };
      return Animal;
    })();

    Mammal = (function(_super) {
      __extends(Mammal, _super);
      function Mammal() {
        Mammal.__super__.constructor.apply(this, arguments);
      }
      Mammal.prototype.speak = function() {
        Mammal.__super__.speak.apply(this, arguments);
        return console.log("and I'm a mammal");
      };
      return Mammal;
    })(Animal);

    farm = [new Animal("fish"), new Mammal("dog")];

    for (_i = 0, _len = farm.length; _i < _len; _i++) {
      animal = farm[_i];
      animal.speak();
    }

And now – see if you like this better:

    class Animal
       constructor: (@name)->
       speak: ->
          console.log "I am a #{@name}"

    class Mammal extends Animal
       speak:->
          super
          console.log ("and I'm a mammal")

    farm=[ (new Animal "fish"), (new Mammal "dog")]

    animal.speak() for animal in farm

The JavaScript version is generated from the CoffeeScript version above. Head over to the coffeescript.org page – there’s an online interpreter where you can try out CoffeeScript code and see the equivalent JavaScript it generates.

If you’re wowed by that (I am) – and just in case you’re saying goodbye to JavaScript – here’s the rub: since it’s a source-to-source compiler, unless you understand what’s going on under the covers, you’ll hit a problem soonish when you have to debug something.

So, JavaScript isn’t optional – but if you have that bit covered, there’s no reason to have to ‘live’ with the iffy side of JavaScript. Take a look at something like CoffeeScript and have a little fun along the way.

VIM macro super powers

So my affair with Vim continues – and I seem to have discovered Vim’s macro superpowers. The obvious next step is to shout from the rooftops, hence this blog post (there’s hardly anything original here – I’ve just had an ‘aha’ moment with macros and thought it might help other budding vimmers out there…).

A little primer – macros let you repeat a set of commands. Press q<macro_letter>, where <macro_letter> is a lowercase letter a–z. This starts recording a macro in Vim (you’ll see a ‘recording’ message at the bottom). Now hit the commands you want to repeat, and press q when done to finish recording. Vim records all the keystrokes you enter into the register you specified as the macro name. To execute the macro, position the cursor on the line you want and hit @<macro_letter>, and Vim will faithfully replay your commands.
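
A tiny worked example (the edit itself is made up, but the keystrokes are standard Vim) – say you want to wrap each line of a list in quotes and add a trailing comma:

    qa          " start recording into register a
    I"<Esc>     " insert a quote at the start of the line
    A",<Esc>    " append a quote and a comma at the end
    jq          " move down a line, stop recording
    @a          " replay the macro on the current line
    5@a         " or knock off the next five lines in one go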

It’s a great time saver – especially for complex editing tasks where search/replace doesn’t cut it. But if you’re feeling a bit disappointed after coming this far (after all, I promised an ‘aha’ moment), hang on.

Today’s discovery was that you can quite easily edit macros that you’ve recorded and save them back!!! THIS IS HUGE. Why so? Because when you record a macro, it’s quite normal to jump around a bit or get one or two keystrokes wrong. In fact, it’s for this reason that I could never use Emacs’s macro facility and failed to just ‘get it’. In Vim, however, you can just open a scratch pad editor and hit "<macro_letter>p – that’s double quote, the macro letter, then p – to paste the contents of the register containing your macro. You see your macro’s keystrokes – so go ahead and edit them, then use "<register>y<movement> to save your edits back to the register. You can now execute the macro with @<macro_letter> as if that’s the way it was recorded in the first place.
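
Concretely, the round trip looks something like this (using register a again):

    :new         " open a scratch window
    "ap          " paste the keystrokes stored in register a
                 " ...fix the stray keystrokes in place...
    0"ay$        " yank the corrected line back into register a
    @a           " replay the repaired macro

Note the y$ (rather than yy): it deliberately leaves out the trailing newline, which would otherwise sneak an extra Enter keystroke into the macro.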

Another obvious tip – you can execute the contents of any register as if it were a macro with @<register>. I’m not sure when that might come in handy – but knowing it’s possible is good.

Ubuntu, Console VIM – weird characters in insert mode

Now that I feel quite comfy with Vim: over the weekend I needed to quickly edit a config file on my Ubuntu 10.10 VirtualBox machine. Instead of GVim, I just opened the file in console Vim. As I hit i to get into insert mode, a bunch of weird character boxes got inserted. That was not good at all 😦 – just when you think you’re comfortable with something, it goes and does something totally weird. In any case, I was in too much of a hurry to bother and went back to editing my file with gVim. Also, backspace was wonky (same weird characters) – so I felt better. For some reason that I fail to understand, Linux makes proper backspace and delete handling such a pain! In any case, it’s something I’ve dealt with enough times to know that there’d be something on Google.

Later on, I tried to see what all the fuss was about. Googling around, I found :help :fixdel, and that seemed simple enough. Alas, when I tried it out, it didn’t fix the issue at all. Besides, I was getting the weird characters just by pressing i to enter insert mode – and the Vim wiki page didn’t have anything about that. Neither did Google turn up anything that seemed related.

So early this morning, on a whim, I read up a little on Vim’s terminal handling. I have the following in my .vimrc:

set t_Co=256

Maybe it was a colour escape code coming in – so I checked :echo &term, which returned xterm under gnome-terminal and builtin_gui under gvim. So I’ve put the following bit in my .vimrc, and it seems to have fixed things nicely:

if &term == "xterm"
    set term=xterm-256color
endif