Working at the right level of abstraction

At RemoteX we have been a bit unhappy with the code produced to create our REST-service using WCF.

The problem is the contracts you have to specify. These contracts build up a web-service endpoint using the WebGet and similar attributes to create a REST-style endpoint, but the contracts often end up looking like a SOAP contract, or like an interface wrapping behavior. A typical contract often includes the following operations for a specific type of document (a sketch of such a contract follows the list):

  • Get Single (WebGet on href: entity-type/id)
  • Get List (WebGet on href: entity-type)
  • Get List with modified since (WebGet on href: entity-type plus a query string)
  • and so on
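
To make that concrete, here is a rough sketch of what one of these contracts tends to look like; the entity and method names below are invented for illustration:

[ServiceContract]
public interface IWorkOrderService
{
    // One contract, several URIs and behaviors mixed together.
    [OperationContract]
    [WebGet(UriTemplate = "workorder/{id}")]
    WorkOrder GetSingle(string id);

    [OperationContract]
    [WebGet(UriTemplate = "workorder")]
    List<WorkOrder> GetList();

    [OperationContract]
    [WebGet(UriTemplate = "workorder?modifiedSince={modifiedSince}")]
    List<WorkOrder> GetListModifiedSince(string modifiedSince);

    // ...and so on for every operation on the entity.
}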

The trouble is the URIs. A single contract often specifies several endpoints and several behaviors. There is no single-responsibility principle at work, or rather the single responsibility is that a contract handles a specific entity. The end result is a lot of code for simple things and a REST-service that is unnecessarily hard to extend.

This setup also makes it tricky to see the structure of the REST-service: the URIs are spread out across the code contracts, forcing you to look at each method to figure out the different URIs.

After working with Grails and Django, and finally reading the documentation for the Kayak web server, I realized what we were doing wrong. A sentence in the Kayak documentation was the wake-up call:

The server component is an intuitive, unobtrusive, no-gimmicks implementation of HTTP, and the framework simply maps HTTP transactions to C# method invocations with minimal syntax and configuration.

We’re not working on entities as an abstraction; we’re building a REST-based service. Our level of abstraction needs to be HTTP.

So I discussed this with my colleague Johan the day after. Johan then set to work on a piece of the REST-server that needed rewriting, and after a week of TDDing he came back with a wonderful piece of infrastructure that worked alongside the “old style” infrastructure.

The idea is that we have one contract. This contract has 4 methods:

  • Get
  • Put
  • Post
  • Delete

Essentially, these are all the methods we want to implement. We then have a route configuration, similar to that of Grails or Django, that routes requests to controllers which take care of them.
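
Conceptually, the route configuration is nothing more than a mapping from URI patterns to controller types. A minimal sketch of the idea (the RouteConfiguration name and its API are my own, not the actual framework's):

using System;
using System.Collections.Generic;
using System.Linq;

public class RouteConfiguration
{
    readonly List<KeyValuePair<string, Type>> _routes = new List<KeyValuePair<string, Type>>();

    // Register a URI prefix and the controller type that handles it.
    public void Map(string uriPrefix, Type controllerType)
    {
        _routes.Add(new KeyValuePair<string, Type>(uriPrefix, controllerType));
    }

    // Find the controller type registered for an incoming path, or null.
    public Type Resolve(string path)
    {
        return _routes.Where(r => path.StartsWith(r.Key))
                      .Select(r => r.Value)
                      .FirstOrDefault();
    }
}

// Usage: routes.Map("log/", typeof(LogController));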

Johan set things up so there is a single server that looks at the route configuration when a request arrives and executes a command on the controller object. The controller object doesn’t have to add methods for the operations it doesn’t support. In fact, the controller object is a plain POCO with no inheritance or interface on it. As long as it has a method that matches the operation, it will be called.
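
Roughly, the dispatch can be pictured like this (my simplified reconstruction, not the actual RemoteX code): the server resolves a controller for the request path and then looks for a public method named after the HTTP verb.

using System;
using System.Linq;
using System.Reflection;

public static class Dispatcher
{
    // No base class or interface required: any public method whose name matches
    // the HTTP method of the request ("GET" -> Get, "POST" -> Post, ...) is invoked.
    public static object Dispatch(object controller, string httpMethod, Func<ParameterInfo, object> bindParameter)
    {
        var method = controller.GetType().GetMethods()
            .FirstOrDefault(m => string.Equals(m.Name, httpMethod, StringComparison.OrdinalIgnoreCase));

        if (method == null)
            return null; // the controller simply doesn't support this operation

        // Each parameter is filled in by convention (request body, authenticated user, ...).
        var args = method.GetParameters().Select(bindParameter).ToArray();
        return method.Invoke(controller, args);
    }
}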

Serialization is handled by the framework, and by following conventions when writing the controller you get the different kinds of input automatically processed by the infrastructure.

Let’s see an example:

public class LogController
{
    readonly DataContextFactory _contextFactory;

    public LogController(DataContextFactory contextFactory)
    {
        _contextFactory = contextFactory;
    }

    public void Post(LogEntry entry, IIdentity user)
    {
        PostMessage(user, entry.Message);
    }

    public void PostMessage(IIdentity user, string entry)
    {
        using (var context = _contextFactory.CreateContext())
        {
            // Look up the authenticated user and store the new log entry.
            var dbUser = context.GetTable<Applications.Data.User>()
                .Where(u => u.UserName == user.Name)
                .FirstOrDefault();

            var dbEntry = new Applications.Data.LogEntry
                {
                    Message = entry,
                    Date = DateTime.UtcNow,
                    User1 = dbUser
                };

            context.GetTable<Applications.Data.LogEntry>().InsertOnSubmit(dbEntry);
            context.SubmitChanges();
        }
    }

    [return: Xml]
    public Log Get()
    {
        Collection<LogEntry> entries;
        using (var context = _contextFactory.CreateContext())
        {
            // Map the 50 most recent database entries to the wire-format LogEntry.
            var e = (from l in context.GetTable<Applications.Data.LogEntry>()
                     orderby l.Date descending
                     select Mapper.Map<Applications.Data.LogEntry, LogEntry>(l)).Take(50);

            entries = new Collection<LogEntry>(new List<LogEntry>(e));
        }
        return new Log { Href = "log/", LogEntries = entries };
    }
}

It’s not the prettiest of examples (and I apologize if the code doesn’t look that great).

What you’re looking at is a simple log controller. It handles the log/ endpoint, which has a Get and a Post operation. Post is used to add LogEntries, and Get returns the latest 50 entries.

You will notice that the Get operation is marked with [return: Xml]; this means that the return value of the operation will be serialized as XML.
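
Since the attribute sits on the method’s return parameter, the infrastructure can pick it up with plain reflection when deciding how to serialize the response. Something along these lines (XmlAttribute here is a stand-in for the framework’s own attribute, and the check is my reconstruction):

using System;
using System.Reflection;

public class XmlAttribute : Attribute { } // stand-in for the framework's [return: Xml] attribute

public static class ResponseConventions
{
    // True when the operation asks for its return value to be serialized as XML.
    public static bool ReturnsXml(MethodInfo method)
    {
        return method.ReturnParameter != null
            && method.ReturnParameter.GetCustomAttributes(typeof(XmlAttribute), false).Length > 0;
    }
}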

The Post operation takes two inputs: a LogEntry and an IIdentity. Both of these parameters are set by convention.

IIdentity is the user as authenticated by the login module; the infrastructure simply passes the user to any IIdentity parameter on a web method.

The LogEntry is the deserialized body of the request. The infrastructure looks at any parameter that doesn’t fit a convention, checks whether it is serializable, and checks whether the body of the request can be deserialized into such an object.
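
A sketch of how such a convention-based binder could be written (this is my guess at the mechanism; the real framework may well use a different serializer):

using System;
using System.IO;
using System.Reflection;
using System.Runtime.Serialization;
using System.Security.Principal;

public static class ParameterConventions
{
    public static object Bind(ParameterInfo parameter, IIdentity authenticatedUser, Stream requestBody)
    {
        // Convention 1: an IIdentity parameter always receives the authenticated user.
        if (typeof(IIdentity).IsAssignableFrom(parameter.ParameterType))
            return authenticatedUser;

        // Convention 2: anything else that looks serializable is deserialized from the request body.
        if (parameter.ParameterType.IsSerializable ||
            parameter.ParameterType.IsDefined(typeof(DataContractAttribute), false))
        {
            var serializer = new DataContractSerializer(parameter.ParameterType);
            return serializer.ReadObject(requestBody);
        }

        return null; // no convention matched
    }
}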

A route configuration is another object that specifies which URLs each controller is registered to. These structures are set up using Castle and loaded from any assembly in the web directory, allowing us to extend the REST-service by dropping in assemblies that implement our structure.
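
I won’t show the real wiring, but with Castle Windsor that drop-in extensibility can be approximated by scanning the web application’s bin directory for controllers, roughly like this (the naming filter and lifestyle are assumptions on my part):

using System.Web;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public static class ContainerSetup
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();

        // Register every *Controller found in any assembly under the bin directory,
        // so new endpoints can be added by just dropping in another assembly.
        container.Register(
            Classes.FromAssemblyInDirectory(new AssemblyFilter(HttpRuntime.BinDirectory))
                   .Where(type => type.Name.EndsWith("Controller"))
                   .WithService.Self()
                   .LifestyleTransient());

        return container;
    }
}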

Since this was added, development of the REST-service has really taken off. We add roughly one or two endpoints per week, designed to solve specific pains and give views of the domain specific to the task at hand.

By working at the HTTP level of abstraction instead of the contract abstraction, we have reduced the cost of extending the REST-service tremendously. Even better, the controllers are easy to test, the code displays its intent clearly, and you can see what each specific URI in the REST-service does.

Special thanks to Johan, great work!