Refactoring to CQRS

Introduction

The purpose of this post is to document my recent journey from a typical web based (layered) architecture to an architecture based around CQRS and Event Sourcing. I am not going to describe CQRS and Event Sourcing themselves, as there are already great blog posts on the web that do that job perfectly well. Instead, this post is about the difficulties I faced planning the transition and keeping the project on track during it.

Background

I hope to give you an understanding of the project, its existing architecture and the reasons why we moved to CQRS.

The product is a platform for driving networks of digital signage players, broadcasting video and generating playlists from content that is also managed by the platform. Many distributed components make up the platform, controlled by workflows. We have developed a UI, plus a REST/SOAP based API to allow for integration (SaaS). The product has been designed as an enterprise-level product and requires many servers, so it needs to tick all the non-functional requirement boxes (scalability, availability, reliability and so on).

Technology

Being a C# guy, it’s written in .NET 4.0. The UI has been developed using Silverlight 4 (for our sins) and we plan to move to HTML5 in the very near future (that’s another story). On the server side, as mentioned above, we provide a REST API and we host our web services on IIS 7.5 through WCF. The database is SQL Server 2008 R2. We have a number of Windows services and lots of MSMQ.

Original architecture

When I started on this project, a prototype which had served its purpose was already developed using Silverlight 4, RIA Services, Entity Framework and SQL Server. When we started on the production version, I chose to start from scratch. Mainly because we are agile and wanted to drive out the code through TDD, we wanted to drive out our build and deployment processes as well. At the time it was believed that the biggest networks we would support were around 10,000 digital signage players. Taking other things into consideration, like the skills of the team and cost of ownership, it seemed best to “keep it simple” and go for a typical web based layered architecture (CRUD) which suited the technology stack. Our layers were:

  1. From the top, Silverlight 4 using Prism 4.
    1. MVVM pattern working with presentation models and service proxies.
    2. Proxies that communicate with web services asynchronously using WCF channel factory (binary encoding over http).
  2. Web services exposed over IIS using WCF.
  3. Stateless services that interact with the domain and map WCF data contracts using AutoMapper.
  4. Domain (POCOs with logic)
  5. Persistence layer using NHibernate 3 and Fluent NHibernate.
  6. SQL Server 2008 R2.

The Windows services communicated with the web services and listened on MSMQ queues.

All in all, this was a typical architecture: it was nicely decoupled, the ORM was abstracted from the domain, and we had a high level of code coverage from over 600-odd unit tests and many integration tests. It was a code base to be proud of.

Why Change?

Why change? We had a valid, well-understood architecture in place. But our non-functional requirements changed: we were originally talking about our biggest network being 10,000 players, which grew to 200,000+ and up to 1 million. We needed to scale, be available 99.99% of the time and be auditable. We could be dealing with thousands of web requests per second. Amongst these changes, we were also starting to feel the pain with the current architecture. Now, I am not going to speak ill of this architecture, but we did find parts of the code base that started to smell.

  • Fat view models. It’s common to implement CRUD behaviours for the entities in your model and expose them through your web services. This is fine, but what we were finding was that our application was being created in our view models. A view model would be injected with many proxies to get the data from the various web services, which was then mashed up to form the UI.
  • Fat services. With all the will in the world, logic would find its way out of the domain and into a service, even with peer code reviews.
  • Multiple data mappings. This is where we read data from the database using an ORM into the domain (transform 1), then the requesting web service maps the domain object into a message object (DTO) (transform 2), and finally the UI takes the message object and transforms it into a presentation object (transform 3). Although this might not seem a lot, and AutoMapper made it easier, it is mundane code that needs testing because it can go wrong.
  • Interacting with the database. I have been pro-ORM, and most of the projects I have worked on over the last 4 years used ORMs; I have been happiest when it’s NHibernate or Castle ActiveRecord. But dealing with these volumes of data, with performance being paramount, we needed finer control over the SQL. So it was back to crafting our own sprocs, mainly to respect the database and have the ability to tune it accordingly. Also, we have a complex domain model, and we could easily fetch half the database back through one query without knowing it. Yes, we could lazy load, but then we could end up with a “select n + 1” issue, and getting this balance right is a problem that we could do without.
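The three transforms can be sketched like this (a minimal illustration with invented class names; the real code base used AutoMapper rather than hand-written mapping methods):

```csharp
// Illustration only: the three mapping hops described above.
// All names here are hypothetical.

public class Customer          { public string Name { get; set; } }  // domain object (transform 1: ORM -> domain)
public class CustomerDto       { public string Name { get; set; } }  // WCF data contract (transform 2)
public class CustomerViewModel { public string Name { get; set; } }  // presentation model (transform 3)

public static class Mappings
{
    public static CustomerDto ToDto(Customer source)
    {
        return new CustomerDto { Name = source.Name };
    }

    public static CustomerViewModel ToViewModel(CustomerDto source)
    {
        return new CustomerViewModel { Name = source.Name };
    }
}
```

Each hop is trivial on its own; the cost is that every new field has to be carried through all three, and each mapping is a place for a bug to hide.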

Why CQRS

I first came across CQRS about 2 years ago. At first, I was sceptical. About 6 months later it just clicked, and now I am completely sold on the concept.

More importantly, implementing an architecture based on CQRS and Event Sourcing rectified our current issues and satisfied all the non-functional requirements. It also gave us a lot of benefits.

Solving the current issues:

  • Fat view models: We return the data needed for a view in one query, straight from a stored procedure. This means a view model need only know one proxy to get its data, and we no longer need to shape the data in the view model.
  • Fat services: Our services are just facades over our command handlers. Each command handler deals with a single behaviour.
  • Multiple data mapping: This is reduced. The results from our query store are mapped straight into a message object (DTO). We still need to map in the UI, though.
  • Specific queries: We bring back only the data we need. Even before changing the query store to denormalised tables, we were seeing the benefits while still querying our relational database.
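To illustrate the “facades over command handlers” point, a service method can reduce to dispatching a single command. This is a sketch with hypothetical names, not our actual contracts:

```csharp
using System;

// A sketch of a command, its handler and a thin service facade.
public class RenameCustomerCommand
{
    public Guid CustomerId { get; set; }
    public string NewName { get; set; }
}

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public class RenameCustomerHandler : ICommandHandler<RenameCustomerCommand>
{
    // In the real system this would load the aggregate, apply the behaviour
    // and persist the resulting events; here we just record the command.
    public RenameCustomerCommand LastHandled { get; private set; }

    public void Handle(RenameCustomerCommand command)
    {
        LastHandled = command;
    }
}

// The service becomes a facade that only dispatches, so logic has
// nowhere to accumulate except in the handler (one behaviour each).
public class CustomerCommandService
{
    private readonly ICommandHandler<RenameCustomerCommand> handler;

    public CustomerCommandService(ICommandHandler<RenameCustomerCommand> handler)
    {
        this.handler = handler;
    }

    public void RenameCustomer(Guid customerId, string newName)
    {
        handler.Handle(new RenameCustomerCommand { CustomerId = customerId, NewName = newName });
    }
}
```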

Beyond solving our current issues, the concept gave us so much more.

Planning the change

Before starting the move, I had to plan out many things. First I had to present the idea to my team and get them to buy into it. I also needed the nod from management, so I had to plan how the change was going to happen. I had to present the architecture and how we would move to it, while still delivering new features without knocking timescales into another year.

Most of the refactoring would be on the server side, so we could change the implementation behind each service call. The calls you start with are important. We chose to tackle the lookup data and administration service calls first, knowing that converting those parts of the domain and database would give us a good foundation moving forward.

So intercepting service calls one by one seemed the best approach, because the risk was small and easy to revert. Doing the intercepting would be easy, but driving out the CQRS-based architecture was not, so we allowed a block of time to create the new databases, domain and so on. We had to logically group the features to try and do a feature per sprint.

Although we were moving to a new architecture, we still had to ensure that the old database was updated and that the existing features still worked. This was not a big issue, as we could listen to events coming through the event bus and update the old database accordingly. This felt good; we were using a clear benefit of the new architecture to keep the old stuff in sync. Once we had the whole system moved over, we would remove this bridge and the old database.
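The bridge amounts to an event handler that replays each published event into the legacy schema. A minimal sketch, assuming invented event and handler names (a dictionary stands in for the old database):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical event published on the event bus.
public class CustomerRenamed
{
    public Guid CustomerId { get; set; }
    public string NewName { get; set; }
}

public interface IEventHandler<TEvent>
{
    void Handle(TEvent @event);
}

// The bridge: subscribes to events and updates the old database so the
// existing features keep working during the transition. Once the whole
// system was moved over, this handler and the old database were removed.
public class LegacyDatabaseBridge : IEventHandler<CustomerRenamed>
{
    // Stand-in for the legacy store; the real bridge would run an UPDATE
    // (or a sproc) against the old relational schema.
    public readonly Dictionary<Guid, string> LegacyCustomerNames = new Dictionary<Guid, string>();

    public void Handle(CustomerRenamed @event)
    {
        LegacyCustomerNames[@event.CustomerId] = @event.NewName;
    }
}
```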

Hurdles that got in the way

Although I planned this, things happened that I didn’t see coming.

At the beginning, I worked in isolation to craft out the implementation on a different branch, while the rest of the team carried on with normal daily development. At various points I tried to walk the team through the new architecture to bring up their level of understanding. I say I tried: CQRS was new to them and they needed more exposure to really get it. My mistake was not getting the team involved more frequently.

I got to a point where I had to refactor the core domain and could not pussyfoot around it any longer, so I had to just go for it. I was blessed that the team got a fairly stable build out to relieve the pressure, so I could merge my branch back into the main trunk. This was painful: I had been working on the branch for 3 months, and although I had planned out what parts of the code base to work on, changes and conflicts still happened.

Once the code was merged, we tackled the core domain, which took another 3 months. Progress was slow because I had to educate the team, and the team had to make mistakes as part of that learning. Our deadlines slipped, the board needed to be updated, and this affected the marketing and product launch. This was caused by not educating the team in the beginning and not enough stakeholder management.

Standing in the light at the end of the tunnel

After 6 months of hard work, the team and I managed to complete the refactoring. If I had to do it again (which I hope I do), I would produce the software architecture description document and other documentation sooner, and do better stakeholder management.

Our product has massively benefitted from CQRS. Was it worth it? Absolutely.

Implementing an architecture with ASP.NET MVC (Part 4) – data access

Continuing on from part 3 – the business layer. In this post the focus is on data access. In part 3, I created an interface called “ICustomerRepository” and a class “CustomerRepository”, which will now change to actually do something. Using the repository interface, our business code can interact with the data store without being coupled to the technology used to communicate with it.

This post is part of a series about creating an architecture for a line of business application with ASP.NET MVC. The business and persistence layers have nothing to do with ASP.NET MVC or any other UI technology, so this approach is relevant to any .NET application. The next and last post of the series will place the business and persistence layers behind a WCF endpoint.

What technology?

We have many choices in the data access area. You could use either an ODBMS or an RDBMS. If you are using an RDBMS, then it is a pretty good assumption that you are using MS SQL Server. So what choices do you have?

ADO.NET – Simple to understand, but as your application grows you will end up writing the same code over and over unless you create your own abstraction over the top. You have to write the SQL yourself, which means you will end up with many stored procedures (sprocs). On a positive note, you have control of the SQL and you can use sprocs as an API to your database. You can shape the result set in your sproc so it’s easier to handle in the application. This is the traditional approach; personally, I had been using it since SQL 7.0, before .NET, and back in the .NET 1.0 and 1.1 days. From experience, you do write more code mapping the results of the sproc to your classes. Other ways of using ADO.NET would be to provide the SQL inline with the .NET code or to use typed DataSets (a poor man’s attempt at an ORM), but these are just wrong on many counts.
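This is the kind of repetitive mapping code ADO.NET leads to. A sketch with invented sproc, table and class names, assuming a reachable SQL Server database:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public class CustomerData
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CustomerSprocs
{
    // Calls a hypothetical sproc and maps each row by hand. Every entity
    // needs a variation of this boilerplate unless you build an abstraction.
    public static List<CustomerData> FetchAll(string connectionString)
    {
        var customers = new List<CustomerData>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.Customer_FetchAll", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();
            using (IDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    customers.Add(new CustomerData
                    {
                        Id = (int)reader["Id"],
                        Name = (string)reader["Name"]
                    });
                }
            }
        }
        return customers;
    }
}
```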

Object Relational Mappers (ORMs)

ORMs have been around for years and have become more popular over the last 2-3 years in the .NET world. There are two types of ORM: the first is based on the Active Record pattern and the second on the Data Mapper pattern. The Active Record pattern, in short, is where your database table and domain class have a 1-to-1 mapping, so your domain classes mirror the database; it works well when you have a good database schema. The Data Mapper is where your domain classes are different from the data tables. There are lots of ORMs available; here is a summary of the ones I know well.

NHibernate – IMHO the most powerful ORM to date. It supports the Data Mapper pattern, has been around for a while and has a big community around it, plus a number of tools like NHProf and Fluent NHibernate that improve the experience. Being as powerful as it is, it has a bigger learning curve. It has its own methods for querying data using HQL and detached criteria, as well as LINQ (not fully implemented). NHibernate also supports many different RDBMSs. Negatives for me are session management and that it requires initialising when you start your application. By default it uses XML mapping files (I hate writing XML, but enjoy writing XAML; work that one out), and there are a handful of DLLs that you need to ship with your application.
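Fluent NHibernate replaces those XML mapping files with code. A sketch of a class map, with an invented entity and table (note NHibernate wants the mapped members virtual so it can proxy them):

```csharp
using FluentNHibernate.Mapping;

public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

// The mapping lives in code instead of an .hbm.xml file, so it is
// refactoring-friendly and checked by the compiler.
public class CustomerMap : ClassMap<Customer>
{
    public CustomerMap()
    {
        Table("Customers");
        Id(x => x.Id);
        Map(x => x.Name);
    }
}
```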

Castle ActiveRecord – Personal favourite. Built on top of NHibernate and supports the Active Record pattern. You add attributes to your classes and properties to set up your mapping. It’s simple, but you have the power of NHibernate under the hood if you need it. The session management is simplified, though it still requires initialisation when your application starts. The ActiveRecordMediator is so simple to use. Negatives are that it requires shipping the same DLLs that NHibernate needs, plus the Castle ones. It borrows its querying functionality from NHibernate.
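The attribute-based mapping looks roughly like this (a sketch with invented names; the framework still needs initialising via ActiveRecordStarter at application start):

```csharp
using Castle.ActiveRecord;

// The class maps 1-to-1 to the "Customers" table, per the Active Record pattern.
[ActiveRecord("Customers")]
public class Customer : ActiveRecordBase<Customer>
{
    [PrimaryKey]
    public virtual int Id { get; set; }

    [Property]
    public virtual string Name { get; set; }
}

// Usage: Customer.FindAll(); or, keeping persistence concerns out of the
// entity, ActiveRecordMediator<Customer>.FindAll();
```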

LINQ to SQL (DLINQ) – Very simple to use and supports the Active Record pattern. You can use attributes or XML to define the mapping. It does come with a designer/SqlMetal. Personally, I hand craft my domain classes, as the designer gives you a lot of boilerplate code that in most cases is not needed, plus I model the domain first and not the database. It’s built into the .NET Framework, so there are no external DLLs to manage. There is no session management; the data context only needs a connection string, and it uses transactions by default. The only querying mechanism is LINQ, which is fully implemented. Negatives: limited to SQL Server and SQL CE, and you have to include properties in your classes that relate to foreign keys in the db.
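Hand-crafting the classes with the mapping attributes looks like this (a sketch; the table and column names are invented):

```csharp
using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)]
    public int Id { get; set; }

    [Column]
    public string Name { get; set; }
}

// Usage (assumes a reachable SQL Server database and System.Linq):
// using (var db = new DataContext(connectionString))
// {
//     var customers = db.GetTable<Customer>()
//                       .Where(c => c.Name.StartsWith("A"))
//                       .ToList();
// }
```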

LINQ to Entities (ADO.NET Entity Framework) – I am waiting for the next version of this, as the current version is data driven instead of domain driven. So for me it’s not an option at the moment.

Object Database Management Systems (ODBMS)

I only have experience of using one ODBMS, being db4o. Using an object database is a change in mindset and is not that common across the .NET developer community, although products like db4o do have a massive community across .NET and Java developers. I think the reason for the slow uptake is that RDBMSs have been around for 30-odd years, they have got better, and vendors have changed their products to keep supporting current market trends like XML.

Db4o – Really easy to use and requires a tiny amount of code. It uses a file, either locally or on a remote server. No mapping is required: you use OO in your domain, so why not store it as OO in the database? It supports LINQ and also has other ways to query the database. Great documentation and support. Negatives… I wish it was more mainstream so I wouldn’t have to use RDBMSs any more.

While developing a project with db4o, you realise that you don’t think in rows in a table or association tables. And what is even better: when you already have a database in place that contains data and you make changes to your classes, like adding properties, do you need a database migration script? No. Some smart stuff inside db4o knows that the type has changed and handles it. You don’t lose data; it just works. This is a big positive for me. When using an RDBMS, tracking database changes in development is a pain, and you need a process in place not only for development but also for when you deploy to other environments. If you have a bug in a script, it stops you from deploying your release, as you need to keep your scripts in sync with your code. Needless to say, when managing SQL scripts you need to make changes within a transaction so they can roll back in the event of failure, and you must also write your scripts so they can be run more than once. With db4o it’s a non-issue: no scripts, no process, no problems.

So what technology?

OK, enough rambling. With my agile head on, I will choose the simplest option, which I believe is db4o.

Implementing the Repository

At the end of part 3, I created a domain class called “Customer”, the ICustomerRepository interface and a concrete implementation called CustomerRepository.

Using db4o

  1. If you haven’t got db4o, you can download the MSI from here. I am using version 7.4 for .NET 3.5.
  2. In the Web.Business project, add references to:
    • Db4objects.Db4o.dll
    • Db4objects.Db4o.Linq.dll
    • System.Configuration
  3. One difference between using SQL Server and db4o is the connection lifetime. In SQL Server, connections are opened and closed as quickly as possible and the connection is returned to a pool (if configured to). In db4o this works differently: when you start your application you open the connection and keep it open until the application ends. There are a few ways to do this. One of the common approaches I see in web applications that require initialising a database component, like NHibernate and Castle ActiveRecord, is to configure it in the Application_Start() method of the Global.asax. I think this stinks; why does the UI application need to know about persistence? My preferred way is to make it happen where it’s needed. Because I need to ensure the lifetime is the same as the application’s, I use a singleton to hold the reference. Here is that class.
    using System;
    using System.Configuration;
    using Db4objects.Db4o;
    
    namespace Web.Business.Persistence.Db4o
    {
        internal class DatabaseContext : IDisposable
        {
            private static DatabaseContext context;
    
            private IObjectContainer database;
    
            static DatabaseContext()
            {
                context = new DatabaseContext();
            }
    
            private DatabaseContext()
            {
                database = Db4oFactory.OpenFile(Db4oFactory.NewConfiguration(),
                    ConfigurationManager.AppSettings["DatabaseFileName"]);
            }
    
            public void Dispose()
            {
                database.Close();
            }
    
            public static DatabaseContext Current
            {
                get { return context; }
            }
    
            public IObjectContainer Client
            {
                get { return database; }
            }
        }
    }
  4. I have added an application setting into the web.config called “DatabaseFileName”, which as you might have guessed is the path to the db4o database file.
    <appSettings>
      <add key="DatabaseFileName" value="C:\Web\WebDb.yap"/>
    </appSettings>

  5. Now to make the CustomerRepository use the DatabaseContext to fetch the data. The finished code looks like this.
    using System.Collections.Generic;
    using System.Linq;
    using Db4objects.Db4o.Linq;
    using Web.Business.Domain;
    
    namespace Web.Business.Persistence
    {
        internal class CustomerRepository : ICustomerRepository
        {
            public List<Customer> FindAll()
            {
                return (from Customer customer in DatabaseContext.Current.Client select customer).ToList();
            }
        }
    }
  6. That’s it, done. Only the database is empty. Ideally, in the real world you would have screens in your application that you could use to populate the database. As we don’t, I have created a test that can be run to insert data into the database.
    using NUnit.Framework;
    using Web.Business.Domain;
    using Web.Business.Persistence;
    
    namespace Web.Business.Specs
    {
        [TestFixture]
        [Category("Integration")]
        public class DataIntegrationFixture
        {
            [Test]
            [Explicit]
            public void PopulateCustomers()
            {
                Customer customer = new Customer
                {
                    AccountManagerName = "Mr A Manager",
                    AccountNumber = "ABC123",
                    City = "Some big city",
                    Country = "UK",
                    Name = "Big city customer"
                };
    
                DatabaseContext.Current.Client.Store(customer);
            }
        }
    }

If you run the application, the data will be pulled from the database and displayed in the view. If this was a real application, I would create a generic abstract EntityRepository class that took a domain class as its generic type. I would make this base class use the DatabaseContext, and that way I would not be repeating code.
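That generic base repository might look like this. A sketch of my own, assuming the DatabaseContext and domain types from this post, not code from the series:

```csharp
using System.Collections.Generic;
using System.Linq;
using Db4objects.Db4o.Linq;
using Web.Business.Domain;

namespace Web.Business.Persistence
{
    // Hypothetical generic base: db4o's LINQ provider queries by type,
    // so the generic parameter is all we need to fetch every instance of T.
    internal abstract class EntityRepository<T>
    {
        public List<T> FindAll()
        {
            return (from T entity in DatabaseContext.Current.Client
                    select entity).ToList();
        }

        public void Save(T entity)
        {
            DatabaseContext.Current.Client.Store(entity);
        }
    }

    // Concrete repositories then shrink to a type declaration; the inherited
    // FindAll satisfies the ICustomerRepository contract.
    internal class CustomerRepository : EntityRepository<Customer>, ICustomerRepository
    {
    }
}
```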

Implementing an architecture with ASP.NET MVC (Part 3) – The business layer

Introduction

The focus of this post is to describe how I go about developing the business layer. This post follows on from my previous post, ASP.NET MVC – Creating an application with a defined architecture. In that post, I was fulfilling a requirement to fetch a list of customers and display them on a page with ASP.NET MVC, so I will continue on with that as an example.

The Plan

  • At the end of the previous post, I had an object called “CustomerAgent” that just created two instances of the “Customer” object. This is going to be replaced with a call to the business layer to fetch a list of customers. The business layer will return the customers as a message type. The CustomerAgent will map the message type to the Customer object that is already defined in the “PresentationProcesses” assembly. We will drive this out with a test.
  • In the business layer, we will need to respond to the call to fetch a list of customers. Our business layer will ask a “repository” to fetch customers from a data store. The business layer will take the list of customers and map them into a message that will be returned to the caller.
  • In the next post to continue on from this one, the repository will need to get the customers from somewhere and map them into instances of objects that represent a customer in the domain model. An ORM tool will simplify this process.

While implementing this plan, we will be testing the interactions between layers. We will also be registering more types in our IoC container.

Putting the plan into action

The CustomerAgent is not under test and currently returns fake instances, so we will create a new test assembly, place the “customer agent” under test and start driving out the interaction with the business layer.

  1. Create a new class library in your solution called “Web.PresentationProcesses.Specs”.
  2. Add a project reference to the “Web.PresentationProcesses” project.
  3. Add file references to “NUnit.Framework”, “Rhino.Mocks” and “NSpec” (or whatever unit testing framework and mocking tool you want to use).
  4. Add a new class (test fixture) to this new assembly called “Fetching_a_list_of_customers” and decorate it with the “[TestFixture]” attribute.
  5. Add a test called “Should_fetch_and_return_customers”. This test will fetch a list of “customers” and assert that the result is not null. At this point the test will pass, as the “customer agent” is still just returning two made-up instances. Here is the test (it’s not the final test; it’s going to change).
    [TestFixture]
    public class Fetching_a_list_of_customers
    {
        private ICustomerAgent agent;
    
        [SetUp]
        public void SetUp()
        {
            agent = new CustomerAgent();
        }
    
        [Test]
        public void Should_fetch_and_return_customers()
        {
            List<Customer> customers = agent.GetCustomerList();
    
            customers.ShouldNotBeNull();
        }
    }
  6. Currently the “CustomerAgent” is a public class; I don’t want my implementations to be public. The interface will be the only way that the layer above can communicate with this one. But we still want our tests to be able to work with the concrete implementation, and so will our mocking tool. In the “Web.PresentationProcesses” assembly, open up “AssemblyInfo.cs”, add the following lines, then save and close the file.
    [assembly: InternalsVisibleTo("Web.PresentationProcesses.Specs")]
    
    [assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")]
  7. Next step: change the “CustomerAgent” class to be “internal” instead of “public”. Run the tests to verify that it all still works.

Driving out the business layer

As mentioned earlier, the service is going to return data as a message type. This uses a very common message pattern, “Request/Response”, which is also known as the “request/reply” pattern.

The business layer is going to be in its own assembly, and it will run in-process with the MVC application. We could in the future, without too much effort, place the business layer behind a WCF endpoint and host it in another process. I personally would not do this by default; the reasons for making the business layer remote must be either that you have more than one application interacting with the business tier, or scalability. Choosing scalability over performance comes down to your application’s needs, availability and the size of the user base. Moving the business layer to run out of process is another blog post, which I will write as the final post of this series as an optional approach, although it’s not that different.

At the moment we have no business layer, so from the test above we start defining the interface (contract) in the business layer.

I am a big fan of ReSharper; it makes my world a better place. It saddens me to think that there are developers out there coding without the fruits that ReSharper gives.

  1. Back in the unit test, to cut a long story short, I am going to set up an expectation on an interface and return a response object. I have also driven out the properties that are in the response object. The test also asserts that the “customer” UI object is populated with the values from the response. Here is the test fixture. I have created the new types within the same code file as the fixture; I do this sometimes while I am cutting new code and creating new types, then (with the help of ReSharper) I move the classes into their own files and into the correct assemblies, which I do as the next step.
    using System.Collections.Generic;
    using NBehave.Spec.NUnit;
    using NUnit.Framework;
    using Rhino.Mocks;
    using Web.PresentationProcesses.Customers;
    
    namespace Web.PresentationProcesses.Specs
    {
        [TestFixture]
        public class Fetching_a_list_of_customers
        {
            private ICustomerAgent agent;
            private ICustomerService service;
    
            [SetUp]
            public void SetUp()
            {
                service = MockRepository.GenerateMock<ICustomerService>();
    
                agent = new CustomerAgent();
            }
    
            [Test]
            public void Should_fetch_and_return_customers()
            {
                FetchCustomerResponse response = GetResponse();
    
                service.Expect(x => x.FetchCustomers()).Return(response);
    
                List<Customer> customers = agent.GetCustomerList();
    
                customers.ShouldNotBeNull();
                customers.Count.ShouldEqual(response.Customers.Count);
                IsMappingCorrect(response.Customers[0], customers[0]).ShouldBeTrue();
    
                service.AssertWasCalled(x => x.FetchCustomers());
            }
    
            private FetchCustomerResponse GetResponse()
            {
                return new FetchCustomerResponse
                {
                    Customers = new List<CustomerInfo>
                    {
                        new CustomerInfo
                        {
                            AccountManagerName = "Mr Account Manager",
                            AccountNumber = "ABC 123",
                            City = "Some Town",
                            Country = "Some Country",
                            Name = "Happy Customer"
                        }
                    }
                };
            }
    
            private bool IsMappingCorrect(CustomerInfo customerInfo, Customer customer)
            {
                return customerInfo.AccountManagerName == customer.AccountManagerName &&
                       customerInfo.AccountNumber == customer.AccountNumber &&
                       customerInfo.City == customer.City &&
                       customerInfo.Country == customer.Country &&
                       customerInfo.Name == customer.Name;
            }
        }
    
        public class FetchCustomerResponse
        {
            public List<CustomerInfo> Customers { get; set; }
        }
    
        public class CustomerInfo
        {
            public string Name { get; set; }
            public string AccountNumber { get; set; }
            public string AccountManagerName { get; set; }
            public string City { get; set; }
            public string Country { get; set; }
        }
    
        public interface ICustomerService
        {
            FetchCustomerResponse FetchCustomers();
        }
    }
  2. At the moment the code will compile, but the test will fail. As mentioned in the last step, I am going to move “FetchCustomerResponse”, “CustomerInfo” and “ICustomerService” into another assembly.
    1. Add a new class library to the solution called “Web.Business”
    2. Create a folder called “Contracts”, move the “ICustomerService” into this folder and change the namespace to match its new location.
    3. In the Contracts folder, add a new folder called “Messages”. Move the CustomerInfo and FetchCustomerResponse types into this new folder and change the namespaces.
  3. In both the “Web.PresentationProcesses” and “Web.PresentationProcesses.Specs”, add a project reference to “Web.Business”.
  4. The CustomerAgent needs to be able to talk to the “ICustomerService”. Change the constructor of the “CustomerAgent” to be passed a reference to “ICustomerService” and hold the reference in a field in the “CustomerAgent”. The “SetUp” in the unit test will change to pass the service into the constructor of the “CustomerAgent”.
  5. Now to make the test pass: the code in the “GetCustomerList” method of the “CustomerAgent” has been replaced. Here is the code for the modified “CustomerAgent”, as well as the changes to the SetUp method in the test.
    // Unit test
    
    [SetUp]
    public void SetUp()
    {
        service = MockRepository.GenerateMock<ICustomerService>();
    
        agent = new CustomerAgent(service);
    }
    
    // Customer agent
    
    using System.Collections.Generic;
    using System.Linq;
    using Web.Business.Contracts;
    using Web.Business.Contracts.Messages;
    
    namespace Web.PresentationProcesses.Customers
    {
        internal class CustomerAgent : ICustomerAgent
        {
            private readonly ICustomerService service;
    
            public CustomerAgent(ICustomerService service)
            {
                this.service = service;
            }
    
            public List<Customer> GetCustomerList()
            {
                List<Customer> result = new List<Customer>();
    
                FetchCustomerResponse response = service.FetchCustomers();
    
                result.AddRange((from custInfo in response.Customers
                    select new Customer
                    {
                        AccountManagerName = custInfo.AccountManagerName,
                        AccountNumber = custInfo.AccountNumber,
                        City = custInfo.City,
                        Country = custInfo.Country,
                        Name = custInfo.Name
                    }).ToList());
    
                return result;
            }
        }
    }
  6. All the tests now pass. Now to create a concrete “CustomerService”. Create a new folder in the “Web.Business” assembly called “Customers” and add a new internal class called “CustomerService”.
  7. Make the “CustomerService” implement the “ICustomerService” interface. We will come back to the “FetchCustomers” method later, so just throw a NotImplementedException for the minute.
  8. We need to register the types in the IoC container, so we pass the container to the “Web.Business” assembly and let it register its own types.
    1. Add a reference to Unity, or whatever IoC container you are using.
    2. Create a public class called “BusinessModule” and add a public method below.
      public class BusinessModule
      {
          public void Configure(IUnityContainer container)
          {
              container.RegisterType<ICustomerService, CustomerService>();
          }
      }
    3. In the “Web.PresentationProcesses” assembly, add a reference to the “Web.Business” assembly.
    4. In the “PresentationProcessesModule”, in the configure method, create a new instance of the “BusinessModule” and call the “Configure” method passing in the container.

The Customer Service

Now to implement the CustomerService. The service itself is just a facade and brings its internals together to provide a simple API. The service will delegate to objects that have the responsibility to carry out required actions. In the case of the CustomerService, it will ask a repository to return a list of customers. The customers will be instances of a domain entity called “Customer”. The service will map the domain type to the message type.

I keep the domain isolated from the outside world; the only way to interact with the domain from the outside world is through a service. The service does not contain much logic, because if it did, that logic would not be in the domain and the domain would not be rich. A thin domain with rich services is the “anemic domain model” anti-pattern. In this example I am just pulling data out of a repository, so there is no business logic.

  1. Firstly, create a new class library assembly called “Web.Business.Specs” which, as you may have guessed, is going to hold the tests for the business assembly. Add references to NUnit and Moq/Rhino Mocks or whatever your preferred mocking tool is.
  2. We are going to be testing internal objects within the Web.Business project, as described earlier in this post. Add these two lines to the AssemblyInfo.cs in the “Web.Business” project.
    [assembly: InternalsVisibleTo("Web.Business.Specs")]
    [assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")]
  3. Create a new test fixture called “Fetching_customers”. Our test is going to ask the service to provide a list of customers. This information will be provided as a list of “CustomerInfo” objects contained in the “FetchCustomerResponse”. Here is the test.
    using System.Collections.Generic;
    using NBehave.Spec.NUnit;
    using NUnit.Framework;
    using Rhino.Mocks;
    using Web.Business.Contracts;
    using Web.Business.Contracts.Messages;
    using Web.Business.Domain;
    using Web.Business.Persistence;
    using Web.Business.Services;
    
    namespace Web.Business.Specs
    {
        [TestFixture]
        public class Fetching_customers
        {
            private ICustomerService customerService;
            private ICustomerRepository customerRepository;
    
            [SetUp]
            public void SetUp()
            {
                customerRepository = MockRepository.GenerateMock<ICustomerRepository>();
    
                customerService = new CustomerService(customerRepository);
            }
    
            [Test]
            public void Should_return_a_list_containing_customer_information()
            {
                List<Customer> customers = new List<Customer>
                {
                    new Customer
                    {
                        AccountManagerName = "Mr A Manager",
                        AccountNumber = "ABC 123",
                        City = "Some Place",
                        Country = "Some Island",
                        Name = "Some Customer"
                    }
                };
    
                customerRepository.Expect(x => x.FindAll()).Return(customers);
    
                FetchCustomerResponse response = customerService.FetchCustomers();
    
                customerRepository.AssertWasCalled(x => x.FindAll());
    
                response.Customers.Count.ShouldEqual(customers.Count);
    
                IsMappingCorrect(response.Customers[0], customers[0]).ShouldBeTrue();
            }
    
            private bool IsMappingCorrect(CustomerInfo customerInfo, Customer customer)
            {
                return customerInfo.AccountManagerName == customer.AccountManagerName &&
                       customerInfo.AccountNumber == customer.AccountNumber &&
                       customerInfo.City == customer.City &&
                       customerInfo.Country == customer.Country &&
                       customerInfo.Name == customer.Name;
            }
        }
    }
  4. The above test drove out the customer repository interface and a domain object called Customer. For the moment I have added two new folders, “Persistence” and “Domain”, to the “Web.Business” project and placed the Customer object in the Domain folder and the repository interface in the “Persistence” folder. Getting a step ahead, I have created a concrete implementation of CustomerRepository. Here is the code for the new types:
    namespace Web.Business.Domain
    {
        internal class Customer
        {
            public string Name { get; set; }
            public string AccountNumber { get; set; }
            public string AccountManagerName { get; set; }
            public string City { get; set; }
            public string Country { get; set; }
        }
    }
    
    //
    
    using System.Collections.Generic;
    using Web.Business.Domain;
    
    namespace Web.Business.Persistence
    {
        internal interface ICustomerRepository
        {
            List<Customer> FindAll();
        }
    }
    
    //
    
    using System.Collections.Generic;
    using Web.Business.Domain;
    
    namespace Web.Business.Persistence
    {
        internal class CustomerRepository : ICustomerRepository
        {
            public List<Customer> FindAll()
            {
                return new List<Customer>();
            }
        }
    }
  5. Lastly, wire up the repository in the IoC container; the Configure method in the “BusinessModule” class should look like this.
    public void Configure(IUnityContainer container)
    {
        container.RegisterType<ICustomerService, CustomerService>();
        container.RegisterType<ICustomerRepository, CustomerRepository>();
    }
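The post never shows the finished FetchCustomers body, so here is one way it could satisfy the test in step 3. This is a sketch only: the dependent types driven out earlier are collapsed into a single file and made public so the example stands alone (in the real solution they are internal and live in separate folders), and StubCustomerRepository is a made-up stand-in for the real repository.

```csharp
using System.Collections.Generic;
using System.Linq;

// Minimal copies of the types driven out earlier in the post, collapsed
// into one file so this sketch is self-contained.
public class Customer
{
    public string Name { get; set; }
    public string AccountNumber { get; set; }
    public string AccountManagerName { get; set; }
    public string City { get; set; }
    public string Country { get; set; }
}

public class CustomerInfo
{
    public string Name { get; set; }
    public string AccountNumber { get; set; }
    public string AccountManagerName { get; set; }
    public string City { get; set; }
    public string Country { get; set; }
}

public class FetchCustomerResponse
{
    public List<CustomerInfo> Customers { get; set; }
}

public interface ICustomerRepository
{
    List<Customer> FindAll();
}

public interface ICustomerService
{
    FetchCustomerResponse FetchCustomers();
}

// The service is a facade: it delegates to the repository for domain
// Customers and maps them to CustomerInfo message objects, keeping the
// domain isolated from the outside world.
public class CustomerService : ICustomerService
{
    private readonly ICustomerRepository repository;

    public CustomerService(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    public FetchCustomerResponse FetchCustomers()
    {
        return new FetchCustomerResponse
        {
            Customers = repository.FindAll()
                .Select(customer => new CustomerInfo
                {
                    Name = customer.Name,
                    AccountNumber = customer.AccountNumber,
                    AccountManagerName = customer.AccountManagerName,
                    City = customer.City,
                    Country = customer.Country
                })
                .ToList()
        };
    }
}

// Hard-coded repository, here only so the sketch can be exercised
// without a database.
public class StubCustomerRepository : ICustomerRepository
{
    public List<Customer> FindAll()
    {
        return new List<Customer>
        {
            new Customer { Name = "Some Customer", AccountNumber = "ABC 123" }
        };
    }
}
```

In the real solution the service stays internal and Unity injects the repository through the constructor, as registered in step 5.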

That’s it for this post. The next step is to use an ORM to fetch the data from the database, which will be the responsibility of the repository.

Simplicity equals elegance

Introduction

I have recently had to solve a massive problem to do with architecting a next generation platform for a commercial product, one that will steer the development team now and into the future. This task has a history of failure and now I have stepped up to try to make things right. Some of the problems I needed to address were:

  • Architecture (to SOA or not to SOA)
  • Technology
  • Migration plan
  • Deployment

Retrospective

While brainstorming ideas about how to solve my particular problems, I went through a period of reflection. I questioned myself on all the practices that I would promote and the ones I would discard. I found this to be very healthy. Any good developer/architect should regularly question their motives and approaches.

A lot of what we do is based on opinions, and sometimes on other people’s opinions. I went about eliminating the opinions that were not my own. For example, take “persistence ignorance”:

  • Is this just idealism?
  • What benefit does it give me?
  • What do I lose if I don’t have it?
  • Is this something that sounds great in theory but is not achievable in practice?
  • Is this something that someone has promoted to make themselves sound clever?

I did this over and over till I was happy that the opinions I held were my own. I reviewed programming and the industry in general and reflected on my own previous experience. One of the lessons I learnt was that it does not matter how clever someone is, what technologies they know or what pretty pieces of paper they have. Simple is best.

Just because it’s a dirty job doesn’t mean you need to get dirty!

I also find that top class developers generally make things complex and perceive them to be simple. It’s worth having more junior developers around: present your code to them and get their feedback. Another trait of senior developers is to try to design and develop an elegant system; this leads to complexity, complexity leads to confusion, and confusion leads to failure. The right thing to do is design and develop a system to be simple, and then you get elegance.

What is Simple

Simple is relative, so how do you measure it? It can be measured at many levels; the most common place would be at code level.

Source code

Many books have been written about writing “good” (again, relative) code and writing “quality” (relative again) code. There are many things that make code “better”, which in turn makes it simpler to understand and maintain. For example (I have thought of many more points, but a couple will do to illustrate the point):

  • Name classes, methods and variables correctly and meaningfully.
  • Write methods that are small and concise.
  • Follow the OO principles (SOLID).
  • Use TDD with Test first design.
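To illustrate the first two bullets with a small, entirely hypothetical example, here is the same invoice calculation written twice; both versions behave identically, but only one reveals its intent:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class InvoiceMaths
{
    // Before: cryptic names, a magic number, and two jobs in one method.
    public static decimal Calc(List<decimal> l, bool f)
    {
        decimal t = 0;
        foreach (decimal x in l) t += x;
        if (f) t = t - (t * 0.1m);
        return t;
    }

    // After: intention-revealing names and one small job per method.
    private const decimal LoyaltyDiscountRate = 0.10m;

    public static decimal TotalOf(IEnumerable<decimal> lineAmounts)
    {
        return lineAmounts.Sum();
    }

    public static decimal ApplyLoyaltyDiscount(decimal total)
    {
        return total - (total * LoyaltyDiscountRate);
    }
}
```

Neither version is clever, but the second one can be read, named in conversation and tested a piece at a time.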

The next view would be how your code is structured and layered. Perhaps you have a complex system that is all tangled up and depends on things that it shouldn’t, maybe with a few god objects: a big sticky ball of mud. Then stretch it like a rubber band, pull the code apart and separate it by the context in which it is used. Of course, if you have a big ball of mud then I am pretty certain that you won’t have any unit tests to give you the feedback needed while you refactor. If you, like me, have been unlucky and come to work on a code base that is a mess, then master refactorings and design patterns.

Technology

  • Is the technology the right tool for the job?
  • Are you using this tool because someone above you tells you to?
  • Do you have alternatives?
  • How easy is it to recruit people with this skill?
  • Is the technology you are using just geeky and good looking on your CV?

Technology has a big part to play in the simple/complex contest. It’s worth reviewing what you are using.

Architecture

Does the architecture guide you or does it constrain you? Do you have to read a book to understand the architecture that you are using? If an architecture is not flexible and you need to read a book about it, then it is not simple and not right.

Trends and fashion

This is linked to technology and the retrospective stated above. Developers by nature enjoy learning new technologies. Developers also try to find smarter ways to work. This is all great, but it does need to be managed, otherwise your code base will be littered with spikes and random technologies. Before long your developers will not know what the standard tools or the right processes are. Technologies and processes go in and out of fashion very quickly. A big dollop of common sense is required: review technologies and processes in isolation, and actually use the technology and try out a process rather than reading some blog and believing it.

It’s easier to understand what is not simple.

Make it simple

“Simple” does have some concrete measures, but there are other measures that are unique to you and your team. You need to identify what is important to you and analyse your code to see how it scores. Making things simple is, in my opinion, the most important design goal.

A point about Design patterns

I have heard people slate design patterns; personally I love them, but a lot of people misuse them and that’s what has started the negative press. The rule with design patterns is to identify a pattern in the code, not to force a pattern into code where it doesn’t fit. I find the power of patterns is in the vocabulary. It’s great to describe a class’s purpose in life when stating its responsibility, for example “it’s an adapter” or “it’s a factory”. All things in moderation.

Conclusion

Keep it Simple. An easy principle to understand, but often the hardest to implement, because our instinct is to learn bigger, more complex technologies and processes that keep pushing our careers to supposed new heights. It doesn’t matter how big someone’s brain is; can you develop code that is simple?

ASP.NET MVC – Creating an application with a defined architecture

Introduction

This post continues from my previous post. In this post I will be creating an ASP.NET MVC application using the architecture described there. I will start by laying out the requirements, driving out the behaviour through tests, implementing Unity and creating the layers for the UI, controller and presentation processes.

Since writing my previous post, I have come across a blog which contains some really good tips / best practices for ASP.NET MVC, which I recommend you read.

http://weblogs.asp.net/rashid/archive/2009/04/01/asp-net-mvc-best-practices-part-1.aspx

http://weblogs.asp.net/rashid/archive/2009/04/03/asp-net-mvc-best-practices-part-2.aspx

Requirements

So to start, we need requirements, as development should be driven by requirements. In business analyst language, this is the requirement that I will be working against.

  1. As a user, when browsing the customer feature within the application, I expect to see a list of customers; each customer will contain the following information:
    • Customer name
    • Account number
    • Account Manager
    • City
    • Country

Where to start

ASP.NET MVC is focused around controllers and actions, so that is a natural place to start describing how to implement this architecture. On a day to day basis, I write tests first to drive out the behaviour. As my application grows, my layers become cemented. The names of classes denote their responsibility and context.

Creating the Project

I am starting with a new “ASP.NET MVC Web Application” project called “Web” and a test project called “Web.Specs”. My references are:

Web application

Test Project

  • NUnit – “nunit.framework.dll”.
  • NBehave (using NSpec only) – “NBehave.Spec.NUnit.dll”.
  • Rhino Mocks – “Rhino.Mocks.dll”.
  • MVCContrib – “MvcContrib.TestHelper.dll” and “MvcContrib.dll”.

Test first

Here is the test for the requirement defined above, putting the controller under test.

using System.Collections.Generic;
using System.Web.Mvc;
using MvcContrib.TestHelper;
using NBehave.Spec.NUnit;
using NUnit.Framework;
using Rhino.Mocks;
using Web.Controllers;
using Web.PresentationProcesses.Customers;

namespace Web.Specs
{
    [TestFixture]
    public class Browsing_a_list_of_customers
    {
        private CustomerController controller;
        private ICustomerAgent customerAgent;

        [SetUp]
        public void Setup()
        {
            customerAgent = MockRepository.GenerateStub<ICustomerAgent>();
            controller = new TestControllerBuilder()
                        .CreateController<CustomerController>(new object[] { customerAgent });
        }

        [Test]
        public void Should_pass_a_list_of_customers_to_the_view()
        {
            var customers = new List<Customer>();

            customerAgent.Expect(x => x.GetCustomerList()).Return(customers);

            ViewResult result = controller.List();

            result.ShouldNotBeNull();
            result.AssertViewRendered().ForView(CustomerController.ListViewName);

            customerAgent.AssertWasCalled(x => x.GetCustomerList());
        }
    }
}

This test has driven out the “CustomerController”, an interface called “ICustomerAgent” and two objects called “Customer” and “CustomerListViewModel”.

I have created another class library called “Web.PresentationProcesses” which doesn’t have any additional references. I have placed the Customer, CustomerListViewModel and the ICustomerAgent interface under a new folder called “Customers” in the Web.PresentationProcesses assembly.

The code created so far is:

//Customer.cs

namespace Web.PresentationProcesses.Customers
{
    public class Customer
    {
        public string Name { get; set; }
        public string AccountNumber { get; set; }
        public string AccountManagerName { get; set; }
        public string City { get; set; }
        public string Country { get; set;}
    }
}

//CustomerListViewModel.cs

using System.Collections.Generic;

namespace Web.PresentationProcesses.Customers
{
    public class CustomerListViewModel
    {
        public List<Customer> Customers { get; set; }
    }
}

//ICustomerAgent.cs
using System.Collections.Generic;

namespace Web.PresentationProcesses.Customers
{
    public interface ICustomerAgent
    {
        List<Customer> GetCustomerList();
    }
}

//CustomerController.cs
using System.Web.Mvc;
using Web.PresentationProcesses.Customers;

namespace Web.Controllers
{
    public class CustomerController : Controller
    {
        private readonly ICustomerAgent customerAgent;

        public const string ListViewName = "List";

        public CustomerController(ICustomerAgent customerAgent)
        {
            this.customerAgent = customerAgent;
        }

        public ViewResult List()
        {
            var viewModel = new CustomerListViewModel
            {
                Customers = customerAgent.GetCustomerList()
            };

            return View(ListViewName, viewModel);
        }
    }
}

At this point the solution looks like this:

AspNetMvcBlogSolution

At the moment, the unit test will pass but if you run the application it will be broken because we don’t have a view and the customer controller doesn’t have a default constructor.

Creating the View

  1. Create a new folder called “Customer” inside the “Views” folder
  2. In the newly created folder, create a strongly typed view called “List” using the type “CustomerListViewModel”.
  3. Open up the site.master view. In the “menucontainer/menu” div, add the following under the link to the “Home/About” page.
    <li><%= Html.ActionLink("List", "List", "Customer")%></li>
  4. Add the following to the List view.
<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<Web.PresentationProcesses.Customers.CustomerListViewModel>" %>

<%@ Import Namespace="MvcContrib" %>

<asp:Content ID="Content1" ContentPlaceHolderID="head" runat="server">
	<title>Customer List</title>
</asp:Content>

<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">

    <h2>Customer List</h2>

    <table cellpadding="1" cellspacing="0" border="1">
        <thead>
            <tr>
                <th>Name:</th>
                <th>Account Number:</th>
                <th>Account Manager:</th>
                <th>City:</th>
                <th>Country:</th>
            </tr>
        </thead>

        <tbody>
            <% Model.Customers.ForEach(customer => { %>
		<tr>
                <td><%= customer.Name %></td>
                <td><%= customer.AccountNumber %></td>
                <td><%= customer.AccountManagerName %></td>
                <td><%= customer.City %></td>
                <td><%= customer.Country%></td>
		</tr>
            <% }); %>
        </tbody>
    </table>
</asp:Content>

The next stage is to have an IoC container resolve the dependencies for us at runtime.

Implementing Inversion of Control

Inversion of control / dependency injection has become very popular over the last couple of years and is a great practice to use. There are quite a few containers on the market at the moment and all are very good; apart from the common functionality that they all share, some of the containers have unique qualities. I have mainly used Castle Windsor, Unity and Ninject. My personal favourite is Ninject because it has a simple fluent interface and contextual binding. At work we are using Unity, mainly because it’s from Microsoft, but we do have applications that use Castle Windsor and Spring.NET. I find that once you know one container it’s really easy to use another. Some of my fellow developers and I do experiment with different containers, and although it doesn’t take long to swap them, using the Common Service Locator will make things easier.

But before I start registering types: a common anti-pattern that I have seen is that the container is defined in the web application, which sits at the top of the layer stack, and the web application then references all of the layers below it so it can add their types to the container. The solution is to pass the container reference, via an interface, to each layer and allow each layer to register its own types.

  1. Starting from the top in the web application, new up the container and register the types that are in the web application only like the controllers.
  2. Then from the web application, pass the container reference to the PresentationProcesses layer via an interface. The PresentationProcesses assembly will expose a module object that accepts the container reference.
  3. The module in the PresentationProcesses layer will register its types and then could pass the container reference to the layer below it. If you are using WCF services to expose your service layer then the chain stops here. If your application doesn’t have external services and runs in-process then this pattern continues going down the layer stack.

Some IoC containers allow you to register types in config, code or both. Although I don’t like config anyway, using config to register types can cause problems. Typically, when you are refactoring code, renaming types or moving them into other namespaces, it is easy to miss changing the config files. This results in exceptions at runtime.

I am going to stick with using Unity here, referencing the Unity assemblies and the MvcContrib Unity assembly:

  • Microsoft.Practices.Unity.dll
  • Microsoft.Practices.ObjectBuilder2.dll
  • MvcContrib.Unity.dll

The best place to get the container created and configured is within the application object in the Global.asax. If you have used MonoRail and Castle Windsor together then this follows the same usage pattern.

using System.Web;
using System.Web.Mvc;
using System.Web.Routing;
using Microsoft.Practices.Unity;
using MvcContrib.Unity;

namespace Web
{
    public class MvcApplication : HttpApplication, IUnityContainerAccessor
    {
        private static UnityContainer container;

        public static IUnityContainer Container
        {
            get { return container; }
        }

        IUnityContainer IUnityContainerAccessor.Container
        {
            get { return Container; }
        }

        protected void Application_Start()
        {
            RegisterRoutes(RouteTable.Routes);

            ConfigureContainer();
        }

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute("Default", "{controller}/{action}/{id}", new { controller = "Home", action = "Index", id = "" });
        }

        private void ConfigureContainer()
        {
            if (container == null)
            {
                container = new UnityContainer();

                new PresentationProcesses.PresentationProcessesModule().Configure(container);

                ControllerBuilder.Current.SetControllerFactory(typeof(UnityControllerFactory));
            }
        }
    }
}

Creating the Agent

The next step is to create a concrete class that implements the ICustomerAgent interface, called “CustomerAgent”. We don’t need it to do anything at the moment as we are still trying to get the application working at runtime; when we do start making the “CustomerAgent” do something, we will drive it out test first. We will create this class next to where the interface lives.

using System;
using System.Collections.Generic;

namespace Web.PresentationProcesses.Customers
{
    public class CustomerAgent : ICustomerAgent
    {
        public List<Customer> GetCustomerList()
        {
            throw new NotImplementedException();
        }
    }
}

I have created another class library called “Web.Container.Interfaces” which contains just one interface called “IModule”, which looks like this.

using Microsoft.Practices.Unity;

namespace Web.Container.Interfaces
{
    public interface IModule
    {
        void Configure(IUnityContainer container);
    }
}

This assembly only references Unity and will be referenced by other assemblies like “Web.PresentationProcesses”.

Now to create the PresentationProcessesModule.

using Microsoft.Practices.Unity;
using Web.Container.Interfaces;
using Web.PresentationProcesses.Customers;

namespace Web.PresentationProcesses
{
    public class PresentationProcessesModule : IModule
    {
        public void Configure(IUnityContainer container)
        {
            container.RegisterType<ICustomerAgent, CustomerAgent>();
        }
    }
}

Now run the app. When you click the “List” link on the home page, you should get a “The method or operation is not implemented.” exception page, which is expected at this time. What this does prove is that the IoC is working correctly.

The next stage in this process would be to drive out getting some real data from somewhere to be returned by the CustomerAgent. That is going to be in my follow-on post, but for now we can simply new up a collection with some new’d up customer objects, as shown below.

using System.Collections.Generic;

namespace Web.PresentationProcesses.Customers
{
    public class CustomerAgent : ICustomerAgent
    {
        public List<Customer> GetCustomerList()
        {
            return new List<Customer>
            {
                new Customer
                {
                    Name = "company1",
                    AccountNumber = "12345",
                    AccountManagerName = "mr account manager1",
                    City = "Some Town",
                    Country = "England"
                },
                new Customer
                {
                    Name = "company2",
                    AccountNumber = "54321",
                    AccountManagerName = "mr account manager2",
                    City = "Some other place",
                    Country = "England"
                },
            };
        }
    }
}

Run the app

Now when you run the application, click on the “List” link in the menu. You should now get the following page.

listView

Moving on

What we have got is an ASP.NET MVC application that has Unity in place to resolve types at runtime. We have types (Customer and the customer view model) defined in the presentation processes layer that the views are bound to. The customer agent returns instances of these types.

The next step will be to change the customer agent to get the data from somewhere. This will be driven out via tests and is going to be the focus of my next post.

ASP.NET MVC – Defining an application architecture

Introduction

ASP.NET MVC is creating some buzz at the moment and has been long awaited. Being a Microsoft product, MVC will be launched into the mainstream development arena and, like webforms, could and hopefully will be a popular web development platform that will be around for years to come. Microsoft and some of the key players in this technology have already got a good community going. The official ASP.NET MVC site contains some very light and quick code tutorials to illustrate the features of MVC.

Although this is all good, I do have some concerns. It starts with what developers learn: the tutorials on the official ASP.NET MVC site are aimed at developers at different levels and are for illustration only. They are not samples of production quality code. What is missing at the moment are the better practices and guidelines for developing line of business applications, which I am sure is what 99% of applications developed with ASP.NET MVC are going to be.

Separating concerns

On the official ASP.NET MVC site, you will find code examples that fetch data using LINQ to Entities, with LINQ queries directly in the controllers. It is not the responsibility of the controller to fetch data in this manner; there is at least a layer or two missing between the controller and data access. My concern is that we will end up with the same situation that is present in webforms, where applications get developed with most of the application logic ending up in the controllers (like code-behind in webforms). ASP.NET MVC already enforces the separation of concerns for the view, but not for the controller and model. This is where the design focus is needed.

Layers and tiers

Layering software is probably one of the most basic concepts in software development, and one that I think is underestimated. I have seen on various projects in the past that it is easy to get wrong, with either too few or too many layers. I find that logical layers can be defined by separating out the concerns at a high level.

A layer is a logical grouping of code with a common concern. A tier is typically a physical boundary like a server or client PC. A tier will contain one or more layers. Layers that are designed to run in the same tier typically communicate in a fine-grained manner (a chatty interface), whereas communication across tiers should be coarse-grained (a chunky interface).

You must either be new to software development or have lived in a cave if you have not heard of n-tier architecture. I mention tiers here because one of the principles that is usually forgotten is that, when communicating across tiers, you should do so in a coarse-grained manner.
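To make the chatty/chunky distinction concrete, here is a hypothetical customer details feature expressed both ways (all of the names here are invented for illustration):

```csharp
// Chatty interface: fine for in-process calls between layers in the same
// tier, but three round trips if this crosses a tier boundary.
public interface ICustomerDetails
{
    string GetName(int customerId);
    string GetCity(int customerId);
    string GetCountry(int customerId);
}

// Chunky interface: one coarse-grained call returning a message object,
// so crossing the tier costs a single round trip.
public class CustomerDetailsResponse
{
    public string Name { get; set; }
    public string City { get; set; }
    public string Country { get; set; }
}

public interface ICustomerDetailsService
{
    CustomerDetailsResponse FetchDetails(int customerId);
}

// In-memory implementation of the chunky interface, purely to show usage.
public class InMemoryCustomerDetailsService : ICustomerDetailsService
{
    public CustomerDetailsResponse FetchDetails(int customerId)
    {
        return new CustomerDetailsResponse
        {
            Name = "Some Customer",
            City = "Some Place",
            Country = "Some Island"
        };
    }
}
```

If ICustomerDetails lived on another tier, displaying one customer would cost three network round trips; the message-based ICustomerDetailsService does the same work in one.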

High above the ground

The architecture that I am defining is nothing new; it is a very common scenario using some Domain Driven Design concepts. An alternative variant of this architecture is an SOA implementation. The SOA variant would be the right choice for an enterprise level application, and the simpler variant of this architecture can be migrated to the SOA variant.

Conceptual logical layers (simple variant)

conceptual logically layers

UI tier – This is the end user’s computer, which will access your application via an internet browser. The user interface layer will contain (X)HTML and CSS, plus this layer could contain JavaScript, Silverlight, Flex etc.

Presentation tier – In this tier you have two layers, the controllers and the presentation processes. The controller layer will be invoked by the ASP.NET MVC framework in response to an HTTP request. Your controller should do nothing more than delegate any processing to logic in the presentation processes layer. The presentation processes layer will typically request information from the service layer and process the service response by mapping the information to an object that both the presentation processes and controller layers know about. One of the benefits this gives is that you can develop your application decoupled from the service layer.

Business tier – This tier is very flexible in its implementation. In its simplest form it could run in-process on the web server, which actually removes the physical boundary between the business and presentation tiers. For enterprise level implementations, this tier could be made up of many separate applications distributed across many application servers. Regardless of the physical boundary, the service layer should be thin as far as logic goes; it provides an API to consumers. Beneath the service layer you have the domain layer, which contains the business logic and represents the business model. The repository layer is the bridge between your application logic and data persistence.

Persistence tier – Does what it says on the tin; this would typically be your database server(s).

SOA variant

 

Here are the conceptual logical layers using an SOA approach.

image

The key difference here is that the service layer is broken up into multiple services that each provide a distinct function. SOA is a massive subject that I would not do any justice trying to sell, but using this approach gives you many benefits that make the development investment worth it. If you have your business tier running out of process or on different servers then WCF is the right technology for the job.

Although I have used ASP.NET MVC in the title of this post, this architecture is relevant for most technology implementations. Everything below the controller layer is plain .NET code, so you could put any .NET view technology at the top of this architecture.

Architectural Rules

Rules are needed for any architecture, these are the common rules that I like to stick to.

  • A layer can only see what is beneath it, never above it.
  • A layer must only see the layer directly below it and not the layers further down the layer stack.
  • Use message objects to communicate between layers that are across tiers (although, using message objects between any layer is also good).
  • Limit your cross-cutting concerns (logging and security are typical candidates; loads of utility classes/assemblies are not).
  • Strive for low coupling – enforcing a layer policy where a layer can only see the layer directly below it helps here. Using interfaces that your concrete classes implement gives you a recipe for low coupling, along with so many other benefits.
  • No leaky abstractions.

Next steps

At the point of writing this post, I started to create an ASP.NET MVC application using this architecture. This post was starting to get too big, so I have split it up. Here is the next post to continue on this topic.