
Archive for the ‘dotnet 4’ Category

Quartz.Net

November 29, 2013

In this post I am going to talk about yet another open source project, Quartz.NET, which is a simple but powerful way of scheduling jobs in .NET. In any enterprise-wide development I am sure you have come across situations where you need to run jobs at a specific interval, at a specific frequency, on a specific day of the week, etc.

The Quartz.NET API is pretty simple: all it cares about is a class that implements the IJob interface. Once you wire that class up to the Quartz scheduler, it takes care of the scheduling.

Let's start cutting some code. I am going to create a simple ToDoJob class which implements the IJob interface.

ToDoJob.cs

    using System;
    using Quartz;
    
    public class ToDoJob : IJob
    {
        // Quartz calls Execute each time the job's trigger fires.
        public void Execute(IJobExecutionContext context)
        {
            Console.WriteLine("Job is executing - {0}.", DateTime.Now);
        }
    }

Now I am going to create my own interface which will encapsulate the Quartz scheduler and decouple my code from the Quartz component.

IJobScheduler.cs

using System;
using Quartz;

public interface IJobScheduler
{
    void ScheduleJob<T>(string groupName, string jobName,
                        string triggerName, TimeSpan repeatInterval)
                        where T : IJob;
}

The implementation of this interface is pretty simple. As you can see, I am using the standard scheduler factory provided by Quartz to get the scheduler object. After that you just need to create a simple trigger object and wire up a few settings, like when it needs to start, how long the job needs to run for, etc.

JobScheduler.cs

using System;
using Quartz;
using Quartz.Impl;
using Quartz.Impl.Triggers;


public class JobScheduler : IJobScheduler
{
    private readonly IScheduler scheduler;
    private readonly ISchedulerFactory schedFact;

    public JobScheduler()
    {
        this.schedFact = new StdSchedulerFactory();

        // get a scheduler
        this.scheduler = this.schedFact.GetScheduler();
        this.scheduler.Start();
    }

    public void ScheduleJob<T>(string groupName, string jobName,
                               string triggerName, TimeSpan repeatInterval)
                                where T : IJob
    {
        // Start immediately, never end, and repeat indefinitely
        // at the given interval.
        var trigger = new SimpleTriggerImpl
        (
            triggerName,
            groupName,
            DateTime.UtcNow,
            null,
            SimpleTriggerImpl.RepeatIndefinitely,
            repeatInterval
        );

        var jobDetail = new JobDetailImpl(jobName, groupName, typeof(T));
        this.scheduler.ScheduleJob(jobDetail, trigger);
    }
}
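As an aside, Quartz.NET 2.x also ships a fluent builder API which avoids the *Impl classes entirely. This is only a sketch of what the body of ScheduleJob could look like with the builders (I am using jobName for the job-name parameter and the triggerName/groupName/repeatInterval parameters from the method above):

```csharp
// Fluent-builder equivalent of the SimpleTriggerImpl/JobDetailImpl code.
var jobDetail = JobBuilder.Create<T>()
    .WithIdentity(jobName, groupName)
    .Build();

var trigger = TriggerBuilder.Create()
    .WithIdentity(triggerName, groupName)
    .StartNow()
    .WithSimpleSchedule(s => s.WithInterval(repeatInterval)
                              .RepeatForever())
    .Build();

this.scheduler.ScheduleJob(jobDetail, trigger);
```

Both versions schedule the same job; the builders just read a little better and are harder to get wrong.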

As you can see from the above code, I am configuring a trigger which starts immediately and runs indefinitely at the interval specified in the parameter.

So our job scheduler class is ready, and we have defined the ScheduleJob method in such a way that it takes any IJob implementation along with the necessary scheduling parameters. Now the only thing left is to use the class and schedule a job.

Below is the main program, where I am asking the ToDoJob class to execute every 10 seconds. This is just for illustration; in a real-life enterprise application this interval could come from a config file or whatever your requirements dictate.

Program.cs

using System;

public class Program
{
    public static void Main(string[] args)
    {
        IJobScheduler scheduler = new JobScheduler();
        scheduler.ScheduleJob<ToDoJob>("FileGroup", 
                                        "FilePollingJob", 
                                        "FilePollingTrigger", 
                                        TimeSpan.FromSeconds(10));

        // Keep the process alive so the scheduler's background
        // threads get a chance to fire the job.
        Console.ReadKey();
    }
}

And here is the output of the ToDoJob.

Quartz Scheduler Running

As you can see this is pretty neat, and if you guys remember, a long time ago I blogged about TopShelf; if you haven't read that post or heard of TopShelf, then I highly recommend reading about it here. If you wire up the Quartz scheduler with the TopShelf component, you get a pretty robust scheduling platform for your enterprise.

I just love doing these small projects where I integrate various components, watch these lego blocks come together, and have fun with it. I hope you guys will enjoy playing with these lego blocks as much as I do.


Using Output Cache Profile in ASP.NET MVC

September 29, 2013

Caching is a quintessential part of any web application, as it improves performance and reduces load on the web server. The simplest way to add caching to your ASP.NET MVC application is to decorate your action method with the OutputCache attribute as follows:

[OutputCache(Duration=60,VaryByParam="none")]
public ActionResult Index()
{
    var employees = db.Employees;
    return View(employees.ToList());
}

However, it is not good practice to hard-code such parameter values, especially when you have to apply different values for different sets of caching; for example, 60 seconds for long, 30 seconds for medium, and 10 seconds for short caching. The first problem with this approach is that you have to be very careful where you make these changes, and the second is that if you ever have to change these values you have to do a search and replace, which is a very bad thing and totally against the DRY (Don't Repeat Yourself) principle.

So overall it is good practice to use an output cache profile, so let's see how we can declare one in our web.config file.

Web.config

<caching>
   <outputCacheSettings>
     <outputCacheProfiles>
       <add name="Long" duration="60" varyByParam="none"/>
       <add name="Medium" duration="30" varyByParam="none"/>
       <add name="Short" duration="10" varyByParam="none"/>
     </outputCacheProfiles>
   </outputCacheSettings>
 </caching>

And here is how we use the cache profile in our action method.

[OutputCache(CacheProfile="Long")]
public ActionResult Index()
{
    var employees = db.Employees;
    return View(employees.ToList());
}

 

This is very straightforward and nothing unusual, but what's the point of blogging about something which hasn't got any gotchas? And the gotcha is that you can't use a cache profile on a child action; MVC will throw this strange exception: "Duration must be a positive number."

The above exception is actually misleading: output cache profiles are simply not supported for child actions, partial views, etc. However, the fix for this problem is very easy; all we have to do is extend the OutputCacheAttribute class. So we will write our own partial cache attribute class.

PartialCacheAttribute.cs

public class PartialCacheAttribute : OutputCacheAttribute
{
    public PartialCacheAttribute(string cacheProfileName)
    {
        var cacheSection = (OutputCacheSettingsSection)WebConfigurationManager
                            .GetSection("system.web/caching/outputCacheSettings");

        var cacheProfile = cacheSection.OutputCacheProfiles[cacheProfileName];

        Duration = cacheProfile.Duration;
        VaryByParam = cacheProfile.VaryByParam;
    }
}

As you can see, nothing special: we use the GetSection method of the WebConfigurationManager class (from System.Web.Configuration) and from that section we look up the cache profile by its name. Now our child action method is all set to be decorated with this custom attribute, and this is how the action method looks:

[ChildActionOnly]
[PartialCache("Short")]
public string GetEmployeesCount()
{
    return db.Employees.Count().ToString();
}
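For completeness, a parent view would typically render this child action with Html.Action; a minimal Razor sketch (the parent view and controller routing are assumptions, not part of the original sample):

```
@* Rendered output is cached for 10 seconds via the "Short" profile *@
@Html.Action("GetEmployeesCount")
```

The parent page can itself stay uncached while this one fragment is served from the output cache.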

This was an issue with MVC 3 and it's still an issue with MVC 4, but I hope it will get fixed in MVC 5 😉

Refactoring with LINQ

January 25, 2013

In this post I am going to talk about a simple refactoring technique using LINQ and I hope it will give you some insight into the power of LINQ.

Recently I came across a very specific requirement: as soon as we persist our aggregate root (a Code First POCO) to the database, the entire object needs to be stored in an audit table as key-value pairs (property names and values).

I know it may sound a bit strange but hey requirements are requirements and if it adds value to the business then why not.

I thought it would be easy, as I could use the JavaScriptSerializer and store the object as JSON in the SQL Server database, but that was not the case: the requirement specifically said it has to be just key-value pairs.

The JavaScriptSerializer solution would not have worked anyway, as I had one-to-many and many-to-many relations in the EF Code First POCO objects, and the serializer was throwing exceptions due to circular references. I could have tried fixing the serializer problem, but I didn't pursue it too hard as that technique was not a fool-proof solution.

So I googled it and found this reflection helper class

ReflectionHelper.cs

using System;
using System.Reflection;
using System.Text;

public static class ReflectionHelper
{
    public static string DisplayObjectProperties(Object o)
    {
        var sb = new StringBuilder();
        Type type = o.GetType();
        foreach (PropertyInfo p in type.GetProperties())
        {
            if (p.CanRead)
            {
                object obj = p.GetValue(o, null);
                if (obj != null)
                {
                    sb.AppendLine(String.Concat("-Property name: ", p.Name));
                    sb.AppendLine(String.Concat("-Property value:", obj.ToString()));
                    sb.AppendLine();
                }
                else sb.Append(String.Concat(p.Name, " # Value is null"));
            }
        }
        return sb.ToString();
    }
}

The code is pretty slick and does what it is supposed to do and worked in my scenario, no problem at all.

I then asked myself how I could make this code even better and leverage some of the newer language features. For me, a foreach loop with lots of if/else conditions is a code smell, and I had to do something about it. So here is the refactored code using LINQ.

ReflectionHelper.cs

using System;
using System.Linq;
using System.Text;

public static class ReflectionHelper
{
    public static string DisplayObjectProperties(this Object o)
    {
        var sb = new StringBuilder();
        var type = o.GetType();
        var query = from property in type.GetProperties()
                    where property.CanRead &&
                          property.GetValue(o, null) != null
                    select property;

        foreach (var propertyInfo in query)
        {
            sb.AppendFormat("{0}:{1}\n", propertyInfo.Name,
                                propertyInfo.GetValue(o, null));
        }
        return sb.ToString();
    }
}
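Since DisplayObjectProperties is now an extension method, usage is a one-liner; a quick illustrative example (the anonymous object here is made up):

```csharp
var audit = new { Id = 42, Name = "Joe Blogg" }.DisplayObjectProperties();
Console.WriteLine(audit);
// Prints one "Name:Value" line per readable, non-null property, e.g.
// Id:42
// Name:Joe Blogg
```

That string is exactly the key-value shape the audit table requirement asked for.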

As you can see, nothing fancy: all the if conditions have been converted into a LINQ query. A simple technique, but it reads well.

Happy Clean Coding !!! 🙂

Getting Started with TopShelf

December 31, 2012 3 comments

I am a huge fan and follower of open source projects, and there have been so many great open source projects which solve our complex problems with such ease. In fact, I love the way open source is becoming mainstream, and even Microsoft has come to the party. However, I have noticed not every .NET developer is across it, at least in Canberra.

Today I am going to talk about an underrated open source project called TopShelf, which in my opinion deserves a lot more recognition in the community, so I decided to blog about it.

So TopShelf is a Windows service framework which allows you to build Windows services without the pain of the plumbing .NET/Visual Studio provides. The project is hosted on GitHub and can be found here.

In fact, if you remember, in April of this year I blogged “NServiceBus: The Magic of Generic Host” and showed you how NServiceBus installs your handler as a Windows service. The real magic behind the generic host is TopShelf, and enterprise service buses like NServiceBus and MassTransit both use TopShelf internally to install handlers as Windows services.

Let's get started and write some code to use TopShelf. Create a new console application and type the following NuGet command.

PM> Install-Package TopShelf

NuGet will install ‘TopShelf 3.1.0’. Let's create an interface which will encapsulate the Windows service start and stop methods; TopShelf itself requires a start method and a stop method.

IService.cs

public interface IService
{
    void Start();
    void Stop();
}

For a real-life example, let's pretend we are writing an email service which polls a database every 10 seconds and, based on some domain logic, processes and sends emails. I am not going to go into the details of polling and threading, etc.; this is just demo code for the email service.


EmailService.cs

public class EmailService : IService
{
    public void Start()
    {
        Console.WriteLine("Starting Service ...");
    }

    public void Stop()
    {
        Console.WriteLine("Stopping the service ...");
    }
}

Now in our main program we just have to wire up the TopShelf host factory.

Program.cs

using System;
using Topshelf;

class Program
{
    static void Main(string[] args)
    {
        HostFactory.Run(x =>
        {
            x.Service<IService>(s =>
            {
                s.ConstructUsing(name => new EmailService());
                s.WhenStarted(tc => tc.Start());
                s.WhenStopped(tc => tc.Stop());
            });
            x.RunAsLocalSystem();

            // Note: a string literal cannot span lines, so the
            // description must stay on one line.
            x.SetDescription("Email Service to send emails and proudly hosted by TopShelf");
            x.SetDisplayName("Email Service to send emails");
            x.SetServiceName("EmailService");
        });
    }
}

 

And that's it; there are no other moving parts or special types of projects or files to include. There are many other useful settings, like which account the Windows service should run under and what other services it depends on, all easily configured using the fluent API. The best thing about TopShelf is that during development and debugging you can run it as a normal console application, and when you are ready you can just install it as a Windows service.

Let's look into installing it as a Windows service. Launch a command prompt as Administrator

and type the following command (TopShelfDemo.exe being our compiled console application) :-

TopShelfDemo.exe install

Topshelf will install the service and output the following result, which means you have successfully installed your application as a Windows service.

Now, to verify it, open up the Windows services snap-in (services.msc) and you will see your service installed with the service name and description. Remember that the display name and description can be anything you want; however, the Windows service name cannot have any spaces, as you can see in the above code.

Now you can go ahead and start the Windows service. To uninstall, it's again the name of your executable with the uninstall option, so for our demo app it will be:

TopShelfDemo.exe uninstall

That’s it folks and have a happy coding new year !!!

Entity Framework: Viewing SQL

July 16, 2012

In this post I am going to show you a cool trick I learned regarding the ObjectQuery class and how it can be useful when working with Entity Framework.

In order to see what SQL statements Entity Framework is executing, we generally tend to run SQL Profiler and set custom filters, like the database name or ID and the current user under which our LINQ query is going to execute, so that we can isolate our query's execution.

I find this approach very ugly, as you have to start and stop your profiler trace to capture the exact time the SQL is executed, and in a multi-developer environment where everyone is developing against the same database it becomes quite challenging.

Well, it's not painful if we use a better approach: the ObjectQuery class and an extension method which extends the IQueryable interface. So here is the code for the ObjectQuery extension class.

ObjectQueryExtension.cs

using System;
using System.Data.Objects;
using System.Linq;

public static class ObjectQueryExtension
{
    public static string ToTraceString(this IQueryable t)
    {
        var objectQuery = t as ObjectQuery;
        return objectQuery != null ? objectQuery.ToTraceString()
                                   : String.Empty;
    }
}
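As an aside, if you are on the newer DbContext API (EF 4.1+), LINQ queries there are DbQuery instances rather than ObjectQuery, and simply calling ToString() on the query returns the generated SQL, so no extension method is needed. A minimal sketch, reusing the context and authorId from the sample below:

```csharp
// With the DbContext API, ToString() on a DbSet query
// returns the store command text.
var sql = context.Author.Where(x => x.au_id == authorId).ToString();
Console.WriteLine(sql);
```

The ObjectQuery version above is still handy when you are on the classic ObjectContext model, as in this post.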

So now I'll generate the EF model from the publisher database; the entities I am interested in are Author and Title, as shown below.

EF Publisher Data Model

and the following code to display the SQL statement in the console window.

Program.cs

class Program
{
    static void Main(string[] args)
    {
        using (var context = new PublisherContext())
        {
            var authorId = "998-72-3567";
            var query = context.Author
                            .Where(x => x.au_id == authorId)
                            .SelectMany(a => a.titleauthors, (au, ti)
                            => new {
                                       au.au_id,
                                       au.au_fname,
                                       au.au_lname,
                                       ti.title
                                   });

            var sql = query.ToTraceString();
            Console.WriteLine("Generated SQL is :");
            Console.WriteLine(sql);
            Console.ReadKey();
        }
    }
}

And this is how the output looks.

SQL Output to Console Window

As you can see, this small extension method has lots of potential, as you can use it along with logging frameworks like log4net and output the SQL to a file.

Just Decompile

In this post I am going to talk about a free tool called JustDecompile by Telerik, which I have been using for some time. I know a lot of people were upset when the .NET Reflector tool stopped being free, and in my opinion JustDecompile is a good free replacement for .NET Reflector.

The download link for JustDecompile is here.

Setup is a very easy four-step process.

Just Decompile Setup

You can load different versions of the .NET framework as shown below.

Just Decompile UI

This is how the view looks when you open a .NET assembly.

Viewing an assembly

It has nice search functionality.

Searching assemblies using Just Decompile

The thing I use the most is language conversion between C# and VB.NET. I often have to work with legacy systems or other old projects where I have to either read VB.NET code to understand the logic or write some code in VB.NET. At times it is hard to switch gears between the two languages, especially with each version of .NET adding complex language features like lambda expressions.

I don't know about you guys, but this is how my brain works: I am not good at instantly switching between different languages, and I find JustDecompile so productive as it allows you to view the code in C#, VB.NET, or IL.

Switching between VB.NET and C# Code

That's it folks, and happy decompiling !!!

Layer Supertype Pattern

October 25, 2011

In this post I am going to show another of my favorite patterns, the Layer Supertype pattern. It's again a very simple but very useful pattern, described by Martin Fowler in his famous book Patterns of Enterprise Application Architecture. If you haven't read this book, I strongly recommend it, as it will help you build better enterprise applications.

Anyway, this is the definition from the book :-

It’s not uncommon for all the objects in a layer to have methods you don’t want to have duplicated throughout the system. You can move all of this behavior into a common Layer Supertype.

 

In the past I have written various implementations of this pattern, but the common idea is that for a layer or suite of components with specialized behaviour, we can pull some behaviour or attributes up into a base class to remove duplication.

For example, say we are designing a business layer with a set of business objects, and the purpose of this layer is to validate domain objects before they can be persisted to a data store. These business objects will encapsulate this functionality from other layers, and their main job is to indicate whether a business object is valid and which business rules it violates.

Well, that's a good starting point, and the things we have identified using Object Oriented Analysis and Design (OOAD) are :-

  • Business rule (holds the rule definition)
  • A base entity (to state whether the object is valid and what the violations are)
  • Business object (will implement the entity base)

Let's start writing some code :-

BusinessRule.cs

public class BusinessRule
{
   public string Property { get; set; }

   public string Rule { get; set; }

   public BusinessRule(string property, string rule)
   {
      Property = property;
      Rule = rule;
   }
}

EntityBase.cs

using System.Collections.Generic;

public abstract class EntityBase<T>
{
   private readonly List<BusinessRule> _brokenRules = new List<BusinessRule>();

   // Template method: subclasses report violations via AddBrokenRule.
   protected abstract void Validate();

   public IEnumerable<BusinessRule> GetBrokenRules()
   {
      _brokenRules.Clear();
      Validate();
      return _brokenRules;
   }

   protected void AddBrokenRule(BusinessRule businessRule)
   {
      _brokenRules.Add(businessRule);
   }
}

Let's implement a Customer class which inherits from the EntityBase class.

Customer.cs

public class Customer : EntityBase<Int32>
{
   public Int32 Id { get; set; }
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public DateTime DOB { get; set; }

   protected override void Validate()
   {
      if (string.IsNullOrEmpty(FirstName))
      {
         AddBrokenRule(new BusinessRule("FirstName","First name cannot be empty"));
      }

      if (string.IsNullOrEmpty(LastName))
      {
         AddBrokenRule(new BusinessRule("LastName", "Last name cannot be empty"));
      }

   }
}
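To make the flow concrete, here is a quick illustrative console usage of the Customer class; an empty customer trips both rules:

```csharp
var customer = new Customer();   // FirstName and LastName left empty

foreach (var rule in customer.GetBrokenRules())
{
    Console.WriteLine("{0}: {1}", rule.Property, rule.Rule);
}
// FirstName: First name cannot be empty
// LastName: Last name cannot be empty
```

Calling GetBrokenRules clears any previous violations, runs the subclass's Validate, and hands back whatever rules were broken.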

And to complete the task, let's write a unit test to check whether we get the desired result.

UnitTest.cs

[TestClass]
public class When_using_customer
{
   [TestMethod]
   public void Should_be_able_to_validate_customer()
   {
      //Arrange
      var customer = new Customer();

      //Act
      var brokenRules = customer.GetBrokenRules().ToList();

      //Assert
      Assert.IsTrue(brokenRules.Count > 0);
   }
}

And now when we run our unit test we get the two broken rules for the customer. Also, if you look at the EntityBase class closely, we are using the Template Method pattern in the GetBrokenRules method, which I blogged about last year; that post can be found here.

Getting started with NServiceBus

September 3, 2011

In this post I am going to give a quick introduction to NServiceBus and how it can change the way you think about or design distributed applications for an enterprise.

Before we get into the specifics of NServiceBus, let's see what an enterprise service bus is and how it fits into the realm of an enterprise. The Wikipedia definition is:

   An enterprise service bus (ESB) is a software architecture model used for designing and implementing the interaction and communication between mutually interacting software applications in Service Oriented Architecture. As a software architecture model for distributed computing it is a specialty variant of the more general client-server software architecture model and promotes strictly asynchronous message oriented design for communication and interaction between applications. Its primary use is in Enterprise Application Integration of heterogeneous and complex landscapes.

So here are some of the key features of NServiceBus :-

  • It's not a centralized broker like BizTalk or IBM WebSphere Message Broker.
  • It's not a service communication framework like WCF.
  • It's built on top of MSMQ.
  • It focuses on messaging and the publish/subscribe model.
  • It's very robust and reliable.
  • There is no synchronous communication.
  • The community edition of NServiceBus is absolutely free and can handle up to 30 messages/sec, which comes out to around 2.59 million messages a day on a decent quad-core server machine.
  • The licensed version of NServiceBus is merely $2,000 per license and can handle 100 messages/sec, or 8.64 million messages a day.
  • Highly extensible.
  • Written by "the Software Simplist", the great Udi Dahan.

Okay, that was like the product overview, but if you are like me, you might be itching to go ahead and use this stuff. So here is how you can jump on the NServiceBus wagon (pardon the pun):

  • Go to http://nservicebus.com
  • Click on “Download v2.5 SP1”
  • Unzip the file and run RunMeFirst.bat as an admin
  • Install or configure MSMQ
  • Go to the samples folder and open the Full Duplex project

Just to get a feel for what the project structure looks like, here it is.

Full Duplex Project Structure

As you can see, the project structure is self-explanatory: there is a client and a server; the client is going to send a message, and the server will process the message and return it to the client. Let's run the project and see what happens.

Full Duplex Console Output

And as you can see in the above figure, the client and server are launched and ready to exchange messages. Let's hit enter in the client console window: you will see a message get sent to the server with a GUID, received by the server, and the server then sends the message back to the client.

Full Duplex message send and received

So far nothing special, but it's time to go deeper and start tinkering with NServiceBus to understand some more. What I'll do now is run only the client from the solution, and not the server, to mimic a "real-life distributed application with reliable messaging" kind of scenario. So let's run the client project in isolation.

Running the client in isolation

We see that the client console window gets launched; let's hit enter to send a message to the service. This is how it looks when we do that.

Full Duplex Sending message from client

Now we will look at how this message gets routed by NServiceBus. Let's open MSMQ and see where the message is routed and stored.

msmq client and server message queues

If you remember, the client console window showed a message with the GUID 3e8eeb98-0130-429a-862c-a56f5a02d95b, but the message id in the server queue is something different, and you must be wondering about this mismatch.

Let's open the message and see what's in there.

MSMQ message body

So here in the message you can see the data id, which matches the GUID; this is how the server can correlate the incoming message with the outgoing message to the client.

Now let's run the full solution again and see how NServiceBus picks up the message, processes it, and sends it back to the client, which displays the returned message from the server.

Full Duplex - Offline processing

Voila !!! We see that NServiceBus picks up the message, processes it, and returns it to the client, and the client receives the processed message. If at this point you open MSMQ, you will see that the message has been processed from the server queue, and since the client has received the message, it disappears from the client queue. This is a good project for understanding the core of NServiceBus and getting started with it.

One thing I like about NServiceBus is that the documentation is superb and all the sample projects are really good for understanding it. Hopefully I will be able to bring more real-life and complex implementations of NServiceBus as I go about implementing it in my current project. Till then …..

Enjoy !!!

Why program against an interface ?

July 19, 2011

I have done a lot of posts on interface-based programming, and we have seen some good benefits of programming against an interface as compared to its concrete implementation. One of the best things it does is promote the SOLID design principles and allow us a level of abstraction and separation of concerns.

If you haven't read my previous posts about the benefits of interfaces, I recommend reading them; here are some of those posts.

In this post I am going to show another benefit of using interfaces: how easy it is to add additional behavior (decorators) without changing existing code. With interfaces you are able to provide a nice extensibility point while keeping your code SOLID (yes, this is the big 'O' in SOLID), and as usual Dependency Injection plays a key role in accomplishing the task with minimal effort.

Let’s start with a simple interface like this one.

ICustomerRepository.cs


public interface ICustomerRepository
{
   List<Customer> GetCustomers();
}

And let’s write the CustomerRepository class which implements this interface.

public class CustomerRepository : ICustomerRepository
{
   public List<Customer> GetCustomers()
   {
      return new List<Customer>
                {
                   new Customer(1, "Joe", "Blogg"),
                   new Customer(2, "John", "Doe")
                };
   }
}

And this is our client code that calls the CustomerRepository class. It's the standard example I have used in most of my posts, depicting a real-world scenario of a business layer calling a repository layer.

CustomerBl.cs

public class CustomerBl : ICustomerBl
{
   private readonly ICustomerRepository _customerRepository;
   public CustomerBl(ICustomerRepository customerRepository)
   {
      _customerRepository = customerRepository;
   }

   public List<Customer> GetCustomers()
   {
      return _customerRepository.GetCustomers();
   }
}

I will add a simple class diagram for people who love to see class diagrams, as I personally favor class diagrams over code.

Customer Repository Injected into Customer Business Layer

As you can see, nothing special: we are just using constructor injection in CustomerBl so that through IoC I can decide which class will be injected into CustomerBl, as long as that class implements the ICustomerRepository interface (contract).

So far so good: the code is running fine, it's in production, your customer is happy, and you feel good about it.

After some time the customer says they want logging when this function is called, and the first thing which comes to mind is: okay, I will write logging code in the CustomerRepository GetCustomers() method.

But wait, let's put some thought into this. First, as soon as you modify the CustomerRepository class you are breaking the Single Responsibility Principle, as this class now has two reasons to change:

  • If the GetCustomers() method logic changes
  • If the logging mechanism changes

Let's think: is this logging really the CustomerRepository's responsibility? At the same time, we don't want to put the logging code into the CustomerBl class, as it's not its responsibility either. And what about the logging itself? It could change in the future.

So let’s abstract the logging responsibility into its own interface like this.

ILogger.cs

public interface ILogger
{
   void Log(string message);
}

And the class which implements this interface is

DebugLogger.cs

public class DebugLogger : ILogger
{
   public void Log(string message)
   {
      Debug.WriteLine(message);
   }
}

Here I have chosen a simple class which writes to the output window, to keep the example as simple as possible; since we are programming against an interface, we can easily swap in a file-based or database-based logger using Dependency Injection.

Here is the interesting part: as you can see, the CustomerBl class only cares about the ICustomerRepository interface and has no knowledge of what the actual class does.

This gives us a good opportunity to implement a new class with the same interface which adds the logging and then calls our actual CustomerRepository class as is. From the client's point of view nothing has changed, and the new class acts as a proxy.

Hmmm, let me show you the code and see if I can make more sense. I will create a new class called LogCustomerRepository which implements the ICustomerRepository interface and depends on it as well as on the ILogger interface.

LogCustomerRepository.cs

public class LogCustomerRepository : ICustomerRepository
{
   private readonly ICustomerRepository _customerRepository;
   private readonly ILogger _logger;
   
  public LogCustomerRepository(ICustomerRepository customerRepository,
         ILogger logger)
   {
      _customerRepository = customerRepository;
      _logger = logger;
   }

   public List<Customer> GetCustomers()
   {
      _logger.Log("Before the get customer method");
      var result = _customerRepository.GetCustomers();
      _logger.Log(string.Format("Total results found {0}", result.Count));
      return result;
   }
}
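Even before any IoC wiring, the decoration can be composed by hand. This sketch (using the classes above) shows that the client still sees only ICustomerRepository:

    // Wrap the real repository in the logging decorator by hand.
    ICustomerRepository repository =
        new LogCustomerRepository(new CustomerRepository(), new DebugLogger());

    // The caller is unaware of the decoration.
    var customers = repository.GetCustomers();
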

And an elaborated class diagram.

Log Customer Repository Injected into Customer Business Layer

Customer Repository Injected into Log Customer Repository
So as you can see, this class just decorates the CustomerRepository, and it shouldn't make any difference to the client code (CustomerBl) as both classes implement the same interface (contract).

Now the only thing left is to wire this up in our IoC container, and I am going to use a Castle Windsor installer to keep the configuration nicely separated from the container itself.

If you are not familiar with Castle Windsor installer class please read my previous post about it.

CustomerInstaller.cs

public class CustomerInstaller : IWindsorInstaller
{
   public void Install(IWindsorContainer container,
                       IConfigurationStore store)
   {
      container.Register(
         Component.For<ILogger>()
            .ImplementedBy<DebugLogger>(),

         Component.For<ICustomerRepository>()
            .ImplementedBy<CustomerRepository>(),

         Component.For<ICustomerRepository>()
            .ImplementedBy<LogCustomerRepository>()
            .Named("LogCustomerRepository"),

         Component.For<ICustomerBl>()
            .Named("CustomerBl")
            .ImplementedBy<CustomerBl>()
            .ServiceOverrides
            (
               ServiceOverride
                  .ForKey("customerRepository")
                  .Eq("LogCustomerRepository")
            )
      );
   }
}

This is the core of how registration and components work in Castle Windsor. The important thing to notice here is that I am using named component registration, because I want to make sure that when my CustomerBl class asks Castle Windsor to resolve an implementation of ICustomerRepository, it returns the LogCustomerRepository class instead of CustomerRepository. This way I ensure the decorators are invoked in the right order.

So in the ServiceOverrides method I am telling Castle Windsor that CustomerBl has a dependency on ICustomerRepository, that the constructor parameter is named "customerRepository", and that when it comes across this parameter it should invoke the component registered as "LogCustomerRepository" rather than the default registered component, CustomerRepository.

Although this syntax is very specific to Castle Windsor, the concept is quite uniform across all IoC containers.
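For what it's worth, later Castle Windsor versions also expose a Dependency.OnComponent overload that expresses the same override. A rough equivalent (a sketch, assuming Windsor 3 or later) would be:

    Component.For<ICustomerBl>()
       .ImplementedBy<CustomerBl>()
       .DependsOn(Dependency.OnComponent("customerRepository",
                                         "LogCustomerRepository"))
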

Let’s write a unit test to see how this all comes together.

UnitTest.cs

[TestClass]
public class When_using_CustomerBl
{
   private IWindsorContainer _container;
   [TestInitialize]
   public void Setup()
   {
      _container = new WindsorContainer();
      _container.Install(new CustomerInstaller());
   }

   [TestMethod]
   public void Should_be_able_to_get_customers()
   {
      var customerBl = _container
                        .Resolve<ICustomerBl>
                        ("CustomerBl");

      var result = customerBl.GetCustomers();
      Assert.IsTrue(result.Count > 0);
   }
}

And when I run the test it passes with the desired result, and in the output window I see the messages I expect.

Unit Test

As you can see from the Watch window, these interfaces are resolved by Castle Windsor and I get the correct type when I need it.

Decorating interfaces this way is a nice and easy pattern, and it also gives us some insight into Aspect Oriented Programming, as these decorators are really cross-cutting aspects.

For example, we can easily create logging, security, caching, instrumentation, etc. decorators and inject them with Castle Windsor.
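For instance, a caching decorator stacks onto the same contract. This is only a sketch (the CachingCustomerRepository class is hypothetical and not part of the sample project):

    using System.Collections.Generic;

    // Hypothetical caching decorator over the same contract.
    public class CachingCustomerRepository : ICustomerRepository
    {
       private readonly ICustomerRepository _inner;
       private List<Customer> _cache;

       public CachingCustomerRepository(ICustomerRepository inner)
       {
          _inner = inner;
       }

       public List<Customer> GetCustomers()
       {
          // Only hit the decorated repository on the first call.
          return _cache ?? (_cache = _inner.GetCustomers());
       }
    }
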

Writing Rule Specification for the Composite Pattern

In the last post we built a nice fluent interface for using the composite pattern with LINQ expressions. The idea was to use the composite pattern to chain these conditions (specifications), and LINQ helped us a lot in accomplishing that.

We also unit tested some of these expressions to see how the LINQ expressions are built for And, Or, and Not conditions, and had the flexibility to apply them using lambda expressions.

So in this post I will show you how to write these specifications as rule specifications, to mimic a real-life application.
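As a reminder, the CompositionWithExpression<T> class used throughout this post was built in the previous post; what follows is only a rough reconstruction under my own assumptions (the actual implementation lives there), so the details may differ:

    using System;
    using System.Linq.Expressions;

    public interface ISpecificationWithExpression<T>
    {
       bool IsSatisfiedBy(T candidate);
    }

    // Rough reconstruction of the previous post's composite specification.
    public class CompositionWithExpression<T> : ISpecificationWithExpression<T>
    {
       public Expression<Func<T, bool>> Predicate { get; private set; }

       public CompositionWithExpression(Expression<Func<T, bool>> predicate)
       {
          Predicate = predicate;
       }

       public CompositionWithExpression<T> And(CompositionWithExpression<T> other)
       {
          // Combine both predicates with a logical AND over a shared parameter.
          var parameter = Expression.Parameter(typeof(T), "x");
          var body = Expression.AndAlso(
             Expression.Invoke(Predicate, parameter),
             Expression.Invoke(other.Predicate, parameter));
          return new CompositionWithExpression<T>(
             Expression.Lambda<Func<T, bool>>(body, parameter));
       }

       public bool IsSatisfiedBy(T candidate)
       {
          return Predicate.Compile()(candidate);
       }
    }
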

As good designers of any framework or API, our main goal is to hide the inner complexity and workings from the consuming code, expose it through easy interfaces, and write unit tests to show how these APIs will be consumed.

Let's look at the Employee class we created in our previous post and build some rule specifications.

Employee.cs

public class Employee
{
   public int Id { get; set; }
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public virtual IList<Address> Address { get; set; }
}

And say the rule specifications for granting leave are as follows:

  • If an employee has an address then he/she is preferred for taking a leave
  • If an employee’s First Name starts with the letter ‘T’ then they are considered highly experienced
  • Only an employee who satisfies both of the above conditions can take leave

I know these rules sound really ridiculous, but I just wanted to show the gravity of such rules, as I have often seen developers complain that they couldn't write clean or SOLID code because the business rules were way too complex and weird, or because the client couldn't make up their mind and kept chopping and changing.

Yes, I have faced the same situation before, and in my opinion these confusing, complex rules and inadequate unit tests are the main culprits behind bad design and the failure of a system. Anyway, that's my personal opinion; let's get back to writing these specifications.

Employee Rule Specification

public class EmployeeRuleSpecification
{
   public CompositionWithExpression<Employee> IsPreferred
   {
      get
      {
         return new CompositionWithExpression<Employee>
            (
               e => e.Address.Count > 0
            );
      }
   }

   public CompositionWithExpression<Employee> IsExperienced
   {
      get
      {
         return new CompositionWithExpression<Employee>
            (
               e => e.FirstName.StartsWith("T")
            );
      }
   }

   public CompositionWithExpression<Employee> And(
                  CompositionWithExpression<Employee> other)
   {
      return
         new CompositionWithExpression<Employee>
            (
               other.Predicate
            );
   }

   public ISpecificationWithExpression<Employee> PreferredAndExperienced
   {
      get
      {
         return (
                  this.IsPreferred.And(this.IsExperienced)
                );
      }
   }
}

[Pardon the indentation; I wanted to fit the long names and parameters within the code box.]

So what we have done here is move the responsibility of specifying the rules into its own class, which is good design practice (Single Responsibility Principle), and the code looks very neat.

First, if I am working on a huge project, with the above code I don't have to crawl through 1000 lines to find out what some if/else condition is trying to do.

Second, if the rules change or more rules need to be added, I only have to change this class; and thanks to the specification interface, that change amounts to adding a new specification and, using composition, composing complex (compound) business rules.

Third, since I can test each expression as well as each compound condition in isolation, I don't have to step through 1000 lines in the debugger to find out which condition is being invoked by the current state of the employee object. We have all heard "why does it work in the dev environment but fail in production?"

Let's plug this into our original Employee class by adding a new method, CanTakeLeave.

public class Employee
{
   public int Id { get; set; }
   public string FirstName { get; set; }
   public string LastName { get; set; }

   public virtual IList<Address> Address { get; set; }

   public bool CanTakeLeave()
   {
      var specification = new EmployeeRuleSpecification();
      return specification.IsExperienced
                          .And(specification.IsPreferred)
                          .IsSatisfiedBy(this);
   }
}

As you can see, the class still looks neat and compact and there is no ugly if/else/switch. Let's write a unit test to see how this method is called and what results are expected under what circumstances.

UnitTest.cs

[TestClass]
public class When_using_Employee_Rule_Specification
{
   [TestMethod]
   public void Should_be_able_to_take_leave()
   {
      var employee = BuildData.GetEmployeeWithAddress();
      var result = employee.CanTakeLeave();

      Assert.AreEqual(true, result);
   }
}

As you can see, for this test to verify correctly I am calling the static GetEmployeeWithAddress method on the BuildData class, which returns an employee with his/her address, and in this case my test passes with flying colors.

I hope this post helps you write complex specifications and rules for a real-life project, as you can define any number of these rule specifications and compose them together to build complex rules.

Last but not least, if you like you can take this further and decouple the rule specification object from the business object, i.e. pass an interface to your business object, something like this.

public class Employee
{
   public int Id { get; set; }
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public virtual IList<Address> Address { get; set; }

   private readonly IEmployeeRuleSpecification _employeeRuleSpecification;

   public Employee(IEmployeeRuleSpecification employeeRuleSpecification)
   {
      _employeeRuleSpecification = employeeRuleSpecification;
   }
}

and then you can use dependency injection to inject the rule specification, which you might have bootstrapped in your project. (Maybe I should leave that up to you guys 😉)
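If you go down that route, the registration could be as simple as this sketch (assuming you extract an IEmployeeRuleSpecification interface from the EmployeeRuleSpecification class above and have it implement that interface):

    container.Register(
       Component.For<IEmployeeRuleSpecification>()
          .ImplementedBy<EmployeeRuleSpecification>());
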