Friday, October 27, 2006

Finally getting around to using NMock

I don't like to admit it, but I really am quite conservative. I get set in my ways, and even if I hear about a cool new way to do stuff it often takes me a while to get around to trying it. A great example is NMock. I've known about NMock for probably a couple of years, but I've stubbornly stuck with coding my own mock objects rather than giving it a spin. That is, until today, when I finally gave in and downloaded NMock2. It's amazing when you consider that I've been a champion of TDD in at least three organisations, and I've given presentations and mentored people in TDD techniques, yet I hadn't investigated such a core tool for doing TDD. But then again, I only started using Test Driven this year, and now I couldn't imagine working without it.

NMock is a mock object framework. You use mock objects and dependency injection in unit tests so that you can test a single component rather than the whole stack; it's probably one of the most important core concepts behind TDD. NMock is a really neat OO framework that leverages the powerful .NET reflection API to create concrete instances of interfaces at runtime. Say you've got an interface like this:

public interface IFoo
{
	int DoSomething(int id, string name);
}

And you've got a client class that uses IFoo to do something:

public class Client
{
	IFoo _foo;

	public Client(IFoo foo)
	{
		_foo = foo;
	}
	
	public int DoSomething(int id, string name)
	{
		return _foo.DoSomething(id, name);
	}
}

You can use NMock to create a mock object like this (I love the name 'Mockery' :-):

Mockery mockery = new Mockery();
IFoo mockFoo = (IFoo)mockery.NewMock(typeof(IFoo));

And then set expectations in your unit test, so that when the test runs, an exception is raised if the mock isn't called with the expected parameters:

Expect.Once.On(mockFoo).Method("DoSomething").With(9, "the name").Will(Return.Value(4));
Client client = new Client(mockFoo);
int result = client.DoSomething(9, "the name");
Assert.AreEqual(4, result);
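
One nice touch is that the Mockery can also verify that every expectation you set was actually met at the end of the test:

mockery.VerifyAllExpectationsHaveBeenMet();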

One thing that got me for about half an hour, before I was saved by my colleague Preet, is that you can't mix bare arguments with 'Is' matchers in the 'With' clause. In the 'With' clause you pass the argument values that you expect to be passed to your mock method, as I've done above. But you can also use the very convenient 'Is' class, which returns a 'Matcher'. What I didn't realise was that 'With' is overloaded:

IMatchSyntax With(params Matcher[] otherArgumentMatchers);
IMatchSyntax With(params object[] equalArgumentValues);

So you can't mix bare values with 'Is' matchers, which means this won't work:

Expect.Once.On(mockFoo).Method("DoSomething").With(9, Is.Anything).Will(Return.Value(4));
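
The fix, as far as I can tell, is to make every argument a matcher so that the Matcher overload is chosen; a bare value can be wrapped with 'Is.EqualTo':

Expect.Once.On(mockFoo).Method("DoSomething").With(Is.EqualTo(9), Is.Anything).Will(Return.Value(4));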

The only other thing that disappointed me about an otherwise excellent tool is that you can't mock classes, only interfaces. The code I'm currently working on uses several abstract base classes, and it would be really neat if NMock could provide mocks for them. I see that it's already a feature request, and that it was available in NMock 1.0. Let's hope they add it soon.

Thursday, October 26, 2006

More than just code?

Jeff Atwood's Coding Horror is one of my favorite blogs. One of his recent posts argued that most of us are so involved with the detail of our applications that we don't step back often enough and ask why we're writing a specific piece of software. He suggests that we should become business people and lawyers and concentrate less on coding, or maybe even stop coding altogether. Sure, what's the point in writing great code if it never ships or nobody ever knows about it? I often read good arguments that coders should become better writers, salesmen or business people, but that argument misses two important points:

First, the reason people often become coders is that they are much stronger dealing with abstract ideas than with other humans. It's widely noted that many of the best coders are borderline autistic, and in a way it's great that there is such a job as 'programmer' that lets lots of these people earn a good living without having too much painful interaction with other humans :)

Second, the complexity of modern society demands deep specialisation. There's simply too much knowledge out there for anyone to be a renaissance man in the 21st century. I have enough trouble just keeping up with what's going on in the world of .NET development, let alone other languages and platforms. There's simply no way I've got enough mental bandwidth to be a great lawyer or salesman too. So sure, it's good to be at least dimly aware of the reason why you're writing that code, but I expect that for the vast majority of corporate developers, the reason they're writing that code is that their boss told them to.

Now, if you are one of those exceptional people who do have the mental bandwidth to be a great lawyer, salesman and programmer, then you'll probably end up being very successful anyway. One reads stories of Bill Gates out-legalling (is that a word?) his lawyers, and he's obviously a great salesman too, but I think for most of us lumpen programmeren, just keeping up with our own corner of the coding world is probably work enough.

Monday, October 23, 2006

Playing with Providers

'Inversion of control' (also known as 'dependency injection' or the 'dependency inversion principle') is a common OO pattern that allows you to decouple the layers of your application by removing the dependency of a client class on a server class. It is not only good practice, but essential for writing unit tests. Rather than hard-coding the server's concrete type in the client, you make the client refer to a server interface or abstract class and inject the concrete server at runtime. Sometimes you'll have the server injected by a co-ordinator or service class, but in many cases you want to be able to configure the server so that you can change its type without having to recompile the application. This becomes essential if you're writing a reusable framework where you want to allow your users to provide their own servers; indeed, the .NET base class library uses this pattern extensively. In previous versions of .NET you had to roll your own code to read the type information from a config file, load the correct assembly and instantiate the correct type, but in .NET 2.0 there's a Providers framework that makes it child's play to load servers at runtime. The classes live in the System.Configuration.Provider namespace.

OK, my client uses this interface with a single method 'DoIt()':

public interface IServer
{
	void DoIt();
}

I have to define a base provider that implements this interface and extends System.Configuration.Provider.ProviderBase:

public abstract class ServerProvider : ProviderBase, IServer
{
	public abstract void DoIt();
}

I also need a custom configuration section to place my providers in. Note that we need to provide a 'DefaultProvider' property and a 'Providers' property. The DefaultProvider tells us which provider to use by default, while allowing us to keep multiple providers on hand if, for example, we want to let the user select between them at runtime. The Providers property is of type ProviderSettingsCollection, which is supplied for us in the System.Configuration namespace. The new custom configuration section feature of .NET 2.0 is also really nice, but that's another post...

public class CustomConfigSection : ConfigurationSection
{
	[ConfigurationProperty("DefaultProvider")]
	public string DefaultProvider
	{
		get { return (string)this["DefaultProvider"]; }
	}

	[ConfigurationProperty("Providers")]
	public ProviderSettingsCollection Providers
	{
		get{ return (ProviderSettingsCollection)this["Providers"]; }
	}
}

Now, in our client we just grab our custom config section and use the System.Web.Configuration.ProvidersHelper class to load our providers; it's that easy. You can then just select the default provider, or maybe present the list for the user to select from. I've left out all the error handling code to keep things simple, but you really should check that everything you're expecting actually gets loaded:

Configuration configuration = ConfigurationManager.OpenExeConfiguration(
    ConfigurationUserLevel.None);
CustomConfigSection section = configuration.Sections["CustomConfigSection"] as CustomConfigSection;
ProviderCollection providers = new ProviderCollection();
ProvidersHelper.InstantiateProviders(section.Providers, providers, typeof(ServerProvider));
ServerProvider provider = (ServerProvider)providers[section.DefaultProvider];
...
provider.DoIt();

Here's a sample provider called XmlServerProvider. Note the Initialize method that you have to override. It takes the name of the provider and a name-value collection, 'config', containing any properties that your provider requires to be set. In this case, apart from the common name and description properties, the provider also requires a 'filePath' property. You should also check that there aren't any superfluous properties in the configuration:

public class XmlServerProvider : ServerProvider
{
	string _filePath;

	public override void DoIt()
	{
		// ...
	}
	
	public override void Initialize(string name, System.Collections.Specialized.NameValueCollection config)
    {
        if(config == null) throw new ArgumentNullException("config");
        if(string.IsNullOrEmpty(name))
        {
            name = "XmlServerProvider";
        }
        if(string.IsNullOrEmpty(config["description"]))
        {
            config.Remove("description");
            config.Add("description", "A xml based server");
        }
        base.Initialize(name, config);

        // test that each property exists
        _filePath = config["filePath"];
        if(string.IsNullOrEmpty(_filePath))
        {
            throw new ProviderException("filePath not found");
        }

        // throw an exception if any unexpected properties are present
        config.Remove("filePath");
        if(config.Count > 0)
        {
            string propertyName = config.GetKey(0);
            if(!string.IsNullOrEmpty(propertyName))
            {
                throw new ProviderException(string.Format("{0} unrecognised attribute: '{1}'",
                    Name, propertyName));
            }
        }
    }	
}

And last of all, here's a snippet from the App.config or Web.config file. You have to declare your custom config section in the configSections element; note that the section's element name has to match the name it was registered with ('CustomConfigSection' here). Then we configure the XmlServerProvider; note the name, type and filePath attributes.

<configSections>
	<section name="CustomConfigSection" type="MyAssembly.CustomConfigSection, MyAssembly" />
</configSections>
<CustomConfigSection DefaultProvider="XmlServerProvider">
	<Providers>
		<add
			name="XmlServerProvider"
			type="MyAssembly.XmlServerProvider, MyAssembly"
			filePath="c:/temp/myXmlFile.xml" />
	</Providers>
</CustomConfigSection>

Thursday, October 19, 2006

What is AzMan?

Does your application require a finer grained level of control than simply authorizing users to access a particular web directory or windows form? Do you have complex roles with overlapping tasks that consist of multiple operations? Do you want to be able to disable or enable individual user interface elements according to the user's role definitions? Are your roles complex and likely to change during the operational life span of your application?

The .NET Framework has a nice API for managing role-based security, which works on a simple but effective mapping of users to roles:

[User] -- has a --> [Role]

But with complex business requirements, where different roles have overlapping tasks within the application and you need to be able to modify roles without recompiling, it's often necessary to have a more complex model that maps operations (individual functions within the application, like 'Add order line' for example) to tasks (like 'Order product for user') and tasks to roles (like 'Sales advisor'):

[User] -- has a --> [Role] -- is allowed to execute --> [Tasks] -- are made up of --> [Operations]

This means that the application can simply ask if a given user has permission to execute a certain operation and it can be left to an administration function, with a nice GUI, to assign the operations to tasks and the tasks to roles rather than baking it into the application code.

It's quite common for people to spin their own security subsystems that implement this more complex model. I've seen some pretty involved home-made security frameworks out in the wild, and they create a considerable development overhead. What's needed is a built-in API for managing this more complex authorization model.

AzMan is a COM-based API for managing application security that originally shipped with Windows Server 2003, but is now also available for XP (with the Windows Server 2003 Administration Tools Pack). It allows you to define fine-grained operations that can be grouped into tasks, which can in turn be assigned to roles, as I explained above. The backing store can be either an XML file or Active Directory (you can also use ADAM, a stand-alone Active Directory instance that can be created for individual applications). AzMan also adds a nice MMC snap-in for user/group/role management.

Unfortunately it's a COM-based API, and as yet it isn't supplied with a convenient managed wrapper; you have to use COM interop, and there's a good MSDN article on how to do that (Use Role-Based Security in Your Middle Tier .NET Apps with Authorization Manager).
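
Based on my reading of that article, the shape of the interop code is roughly this. Treat it as an unverified sketch: it assumes a reference to the Microsoft.Interop.Security.AzRoles interop assembly, and the store path, application name and operation ID are all made up:

using System.Security.Principal;
using Microsoft.Interop.Security.AzRoles;

public class AzManCheck
{
	public static bool CanExecute(int operationId)
	{
		// open the authorization store (an xml file in this case)
		AzAuthorizationStoreClass store = new AzAuthorizationStoreClass();
		store.Initialize(0, "msxml://c:/azman/MyStore.xml", null);
		IAzApplication application = store.OpenApplication("MyApplication", null);

		// build a client context from the current windows token
		WindowsIdentity identity = WindowsIdentity.GetCurrent();
		IAzClientContext context = application.InitializeClientContextFromToken(
			(ulong)identity.Token.ToInt64(), null);

		// ask whether the user may perform the given operation
		object[] operations = new object[] { operationId };
		object[] scopes = new object[] { "" };
		object[] results = (object[])context.AccessCheck(
			"audit string", scopes, operations, null, null, null, null, null);

		return (int)results[0] == 0; // 0 (NO_ERROR) means access granted
	}
}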

AzMan can also be used without any extra coding in the ASP.NET 2.0 security model, but since that model is role-based you can't leverage any of the operation-based features; for those you need to write to the interop API. To use AzMan in ASP.NET 2.0, simply configure your role provider as the AuthorizationStoreRoleProvider class that's supplied with the framework (How To: Use Authorization Manager (AzMan) with ASP.NET 2.0).
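
From memory of that 'How To', the web.config wiring looks something like the following; the connection string points the provider at an XML policy store, and the exact paths and names here are assumptions:

<connectionStrings>
	<add name="AzManStore" connectionString="msxml://~/App_Data/AzManStore.xml" />
</connectionStrings>
<system.web>
	<roleManager enabled="true" defaultProvider="AzManRoleProvider">
		<providers>
			<add
				name="AzManRoleProvider"
				type="System.Web.Security.AuthorizationStoreRoleProvider"
				connectionStringName="AzManStore"
				applicationName="MyApplication" />
		</providers>
	</roleManager>
</system.web>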

Wednesday, October 11, 2006

log4net

Today I've been playing with a logging framework called log4net. It's a port of a really popular Java logging framework (log4j, would you believe). The nice thing about log4net is that it's really simple to use and configure, and it comes with a huge range of log sinks (known as 'appenders' in log4net speak) straight out of the box; you can even log to telnet. I think the sign of a good framework is one that lets you get up and running really quickly without having to study the full model, but has the extensibility to allow you to do more complex stuff if you need to, as well as being fully configurable without having to recompile your application, and log4net seems to be that kind of framework. I haven't used the Logging Application Block from the P&P group, so I can't compare it to that. For me, moving up from writing directly to a file or the event log is a big step forward.
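
As a taster, here's roughly what the minimal setup looks like. This is my own sketch rather than anything lifted from the log4net docs, so treat the details as assumptions. First a log4net section in App.config (or Web.config):

<configSections>
	<section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
</configSections>
<log4net>
	<appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
		<layout type="log4net.Layout.PatternLayout">
			<conversionPattern value="%date [%thread] %level %logger - %message%newline" />
		</layout>
	</appender>
	<root>
		<level value="INFO" />
		<appender-ref ref="ConsoleAppender" />
	</root>
</log4net>

And then in code:

using log4net;
using log4net.Config;

public class MyClass
{
	static readonly ILog log = LogManager.GetLogger(typeof(MyClass));

	public static void Main()
	{
		// read the log4net section from the config file
		XmlConfigurator.Configure();
		log.Info("application started");
	}
}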

Friday, October 06, 2006

No to #region!

The #region directive in C# was invented so that a code file can be split up into collapsible regions to aid navigation. I can't stand them, and here's my list of region irritations...

  1. They're like comments: they don't execute, so it's easy to have regions that tell you something completely wrong. How about a '#region public properties' that contains nothing but private methods? Yeah, I've seen that enough. Martin Fowler, in his excellent book Refactoring, says "When you feel the need to write a comment, first try to refactor the code so that any comment becomes superfluous". The same goes for regions: if you feel a region coming on, maybe you need to refactor, which brings me to...
  2. If your code file is so large that it needs regions to keep it organised, maybe your code file is too large. Visual Studio likes to have one class per file, and classes shouldn't be so large that they need to be split into regions. Maybe when you feel a region coming on you should try refactoring your class into several smaller classes. What about regions that split up a method into easier-to-understand segments? You have regions inside methods???? That's too far gone!
  3. What's wrong with the great tools that come with Visual Studio for helping you navigate around your code? I think one look at the Class View is worth a million stupid regions, and if what you see in the Class View doesn't make any sense, then you should really get a copy of 'Refactoring'. Well-named methods and classes in a well-designed object model should make your code easy to understand and navigate.

So just say no to regions, I'm sick of clicking on those stupid little + signs!

Wednesday, October 04, 2006

Nullability Voodoo

We had a good discussion in the office today about nulls. People often use null in a database to mean something in business terms. For example, if 'EndDate' is null, it means that the task hasn't ended yet. But this kind of 'nullability voodoo' is bad: you're not being explicit about meaning, and someone looking at your database schema has to know implicit rules beyond what the schema itself can provide. Of course that's always going to be the case to some extent, but keeping explicitness (is that a word?) to a maximum will save you lots of time and money later. Nullability usually means something in a business sense that is better represented in some other way. Rather than using the nullability of EndDate to mean that the task hasn't completed, consider giving the task a status instead. I've maintained systems where complex rules about various attributes had to be interpreted to mean some kind of status, so I know how painful this can be.
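
To make that concrete, here's the kind of thing I mean (just an illustrative sketch; the names are made up):

using System;

public enum TaskStatus
{
	Pending,
	InProgress,
	Completed,
	Cancelled
}

public class Task
{
	TaskStatus _status = TaskStatus.Pending;
	DateTime _endDate;

	public TaskStatus Status
	{
		get { return _status; }
	}

	public void Complete(DateTime endDate)
	{
		// EndDate is only meaningful once Status is Completed,
		// so nobody has to infer the task's state from a null column
		_status = TaskStatus.Completed;
		_endDate = endDate;
	}
}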

If you must represent nullable values in managed code, avoid using the SqlTypes. I've found numerous problems with them: they don't implicitly cast or behave like the basic value types, and in any case, who wants to drag a reference to System.Data up into their domain layer? I haven't used the new nullable types in .NET 2.0 so I can't really comment on them, but effectively they're a way of giving nullability to value types, and they have a nasty hackish smell about them. In any case, you should be very careful of equating TSQL null (an empty value) with C# null (a zero pointer); they mean different things, and it can make code very tricky when you constantly have to test for null everywhere.
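
The mismatch bites as soon as you read from a data reader: a database null arrives as DBNull.Value, not as a C# null reference. A quick sketch (assuming a data reader called 'reader' with a nullable EndDate column):

object value = reader["EndDate"];
if (value == DBNull.Value)
{
	// the database column was null; a C# null check would not catch this
}
else
{
	DateTime endDate = (DateTime)value;
}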

It's worth checking out the 'null object' pattern if you've got a business case for an entity that has to represent itself as a null value. It means that you can factor all your null processing into one class.
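
For example, here's my sketch of the pattern, loosely after Fowler (the Customer class is hypothetical):

public abstract class Customer
{
	public static readonly Customer Null = new NullCustomer();

	public abstract string Name { get; }

	class NullCustomer : Customer
	{
		public override string Name
		{
			// a sensible default, so callers never have to test for null
			get { return "occupant"; }
		}
	}
}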