.NET and me: Coding dreams since 1998!

8 Jan 2010

Asking the right questions while interviewing developers

Just another fizzbuzz interview question

I really hate interviews, regardless of which side of the table I am sitting on (OK, it is a bit easier when you are the one interviewing the candidates :)).

One of the main reasons I hate them is the stupidity of the programming trivia usually being asked: all sorts of binary tree traversals, converting numbers from one base to another, number sequences etc… I mean, who really cares about that stuff in everyday work? Am I applying for a mathematician’s job or a developer’s one?
I have a feeling those questions were created for fresh grads, because they don’t have much real-world experience, but that’s just me guessing…

But not all interview questions are stupid. The most famous example of what I find to be a good type of interview question is the famous “fizzbuzz problem” described by Jeff Atwood and Scott Hanselman: a very simple task whose solution depends purely on a person’s real-world thinking skills and not at all on any other type of knowledge.

A couple of years ago I stumbled upon a video recording of a session by Brad Abrams about his excellent Framework Design Guidelines book, in which he mentioned an example that looked so trivial to me that I quickly discarded it as obvious to literally everyone. (I couldn’t find that link for this blog post :()

A couple of months after that I was interviewing a candidate for a senior position. He gave fuzzy answers that made me wonder whether he understood the heap and the stack, so I took Brad’s example idea and came up with a simple question looking something like this:

namespace InterviewTrivia
{    
	public class ClassA    
	{        
		public override string ToString()        
		{            
			return "Hello from class A";        
		}    
	}    
	
	public class ClassB : ClassA    
	{        
		public override string ToString()        
		{            
			return "Hello from class B";        
		}    
	}    
	
	public class ClassC : ClassB    
	{        
		public override string ToString()        
		{            
			return "Hello from class C";        
		}    
	}
}

Nothing fancy here: just three classes inheriting from each other, each one overriding the object ToString() method.

Now the console application’s Program class can look something like this:

using System;
namespace InterviewTrivia
{
	class Program
	{        
		static void Main(string[] args)        
		{            
			ClassA first = new ClassC();                        
			ClassC second = new ClassC();            
			ClassB third = (ClassB)second;            
			ClassA fourth = (ClassA)third;            
			object fifth = (object)fourth;            
			
			Console.WriteLine("1: " + first.ToString());            
			Console.WriteLine("2: " + second.ToString());            
			Console.WriteLine("3: " + third.ToString());            
			Console.WriteLine("4: " + fourth.ToString());            
			Console.WriteLine("5: " + fifth.ToString());            
			Console.ReadLine();        
		}    
	}
}

As you can see, the first five lines perform different variants of casting a ClassC instance, and then the results of the ToString() calls are printed out.

I wrote this sample on a whiteboard (to avoid having R# help him with the answer) and asked the candidate what the resulting output of the app would be.

Why is this a fizzbuzz question?

Because it is:

  • a very simple problem and code sample
  • focused on a concrete programming scenario, pinpointing (IMHO) important OOP concepts which need to be understood by senior developers
  • solvable without any knowledge other than programming skills
  • whiteboard friendly – a very important attribute for dev interviews

So, what happened?

To my surprise, the candidate flatly failed the test, reporting as the result:

  1. “Hello from ClassA”
  2. “Hello from ClassC”
  3. “Hello from ClassB”
  4. “Hello from ClassA”
  5. “InterviewTrivia.ClassA”
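
For the record, the correct answer follows from the fact that ToString() is virtual: a cast only changes the compile-time type of the reference, while virtual dispatch always goes to the runtime type, which here is ClassC in all five cases. The actual output is therefore:

1: Hello from class C
2: Hello from class C
3: Hello from class C
4: Hello from class C
5: Hello from class C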

I finished the interview convinced that this candidate was exceptionally bad, and I stayed convinced of that until I accidentally ran the same sample by a couple of other developers and in most cases got the same answer the candidate gave. Maybe that is just a result of us .NET developers being really spoiled and allowed to be blissfully ignorant about how the stack and heap work in .NET (something I strongly disagree with, btw), but that is not the point of this blog post.

The point is that with properly chosen trivia and programming questions in our interviews, we can avoid losing good candidates just because they don’t know for sure how to convert –1234.56 to hexadecimal (even on paper), and instead focus on learning about the really important qualities a new team member would (or would not) be bringing to the team once joining the company.

(The sample code for today’s post can be found here.)



22 Nov 2009

A couple of PDC 2009 thoughts too big to fit in 140 characters – praise to Microsoft

PDC 2009 is over, and what a conference it was… For the folks there: a free tablet, a ton of excitement experienced first hand, and a lot of valuable social networking… For the rest of us who couldn’t attend, there’s a ton of prime-time learning material I am really looking forward to seeing.

In general, the amount of positive vibes from the community was so high that it even surprised the usual MS haters, so it took a couple of days for their choir attacks on Microsoft to start. I don’t work for Microsoft, I am not an MVP, in fact I don’t have any relationship with Microsoft at all, but I still find a bunch of the things I hear unfair considering how much investment we see from MS in the development space, so I wanted to speak in Microsoft’s favor in a very opinionated manner… I am not claiming the things in this post are universal truths; they just reflect my take on various subjects related to the Microsoft development ecosystem.

The glass is half empty…

“Microsoft Silverlight 4 sucks because it is not cross-platform oriented any more”

Where it all started: http://www.theregister.co.uk/2009/11/20/silverlight_4_windows_bias/.

One opinion I agree with in general: http://blogs.silverarcade.com/silverlight-games-101/21/silverlight-is-silverlight-4-moving-away-from-cross-platform/

Here are a couple of thoughts of my own on that subject:

  • I never heard anyone from MS say that the SL 4 goodies won’t be supported on the Mac.
    Judging by the size of the Windows (4 MB) and Mac OS X (20+ MB) Silverlight installers, Silverlight on those two platforms is probably a different code base with the same API. Even if not, I can imagine that a custom Mac implementation of clipboard access, web cam support etc. can be done with custom coding against the Mac OS API.
  • Does Microsoft really need to spend its own money on Mac users?
    In my personal opinion, not really, for a couple of reasons:
    • The number of Mac OS users is insignificant from a market-share perspective, so the ROI is not as high as for investments made for Windows users.
    • Even with that low market share, investing in Mac OS makes sense only if the Mac zealots are not so brainwashed that they hate everything from Microsoft. Just check out their comments on how they despise the sites asking them to install the “Microsoft bloatware called Silverlight”, and how they would sooner cancel their Netflix subscription than install Silverlight to watch HD.
    • Before Windows 7, Microsoft maybe had to play the “works on Mac too” card to reduce the impact surface for the Apple ads, but now, with Windows 7 as a premier OS, they don’t have to. The tables have turned, and IMHO it is now in Apple’s interest to support Silverlight on Macs, because in a couple of years it will be everywhere and their users will simply have to have it in order not to be cut off from most internet sites. I really believe that.
  • What about Linux?
    Well, Moonlight development is in Novell’s hands anyway, so it is up to them to determine how fast that will come. Considering that MonoTouch looks like their primary interest, I don’t think it will happen soon, which, for reasons similar to the Mac case (low market share, blind hate toward anything coming from Microsoft), is (IMHO) not a big deal.

"Microsoft Silverlight 4 would break the security model of Windows 7”

So, the pitch of this attack is that because Silverlight now provides a “full access” sandbox mode, we are basically back to the ActiveX era, where in order to run a web site (which you want) you have to grant the app elevated rights. In other words, they say that everyone will just click “OK” and the gates of hell will be opened.

Here’s a good blog post summarizing what the SL 4 full trust mode really means: (http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2009/11/18/silverlight-4-rough-notes-trusted-applications.aspx)

And here are a couple of thoughts of mine on this subject:

  • The biggest difference between COM and Silverlight is that a Silverlight application running in the browser is ALWAYS sandboxed. In order for a user to elevate the sandbox rights, the Silverlight application has to be INSTALLED on the desktop (the first safety switch, which not a lot of users will pass), and during installation there is a clear warning informing users about the implications.
  • It is “elevated”, not “full”, trust. One example (as Mike explained on his blog) is that even with elevated trust you still cannot access just any folder/file on the user’s hard drive.
  • I have a feeling that the guys who praised Adobe AIR for its Twitter clients, never questioning the attack surface AIR has due to ITS own full trust, and who mocked Silverlight for a sandbox so secure it prevents developers from doing cool apps, are the same ones now bragging about the broken sandbox.
  • Microsoft had to listen to loud community demands for cross-domain web service calls and enable a premier full-screen experience for media players and kiosk applications.
  • Enabling access to COM in this trusted mode enables integration with a bunch of software and opens up some really interesting and productive implementation ideas. If you had that many applications of your own, including the prime-time Office applications, would you seriously pass on the card of enabling their integration? I wouldn’t skip that chance, for sure :)

“The Microsoft Developer/Designer workflow is just a myth, and thus XAML instead of a code-based approach is the wrong way to go”

Well, in a way I agree that right now most of the creative folks work with Photoshop, Illustrator, Flash, CSS and HTML, and that quite a lot of them would laugh if Blend/SL were mentioned to them as a career option. But that is understandable only short term.

There are a couple of reasons why I think that won’t be the case any more in a year or two, when the demand for MS designers will grow:

  • Silverlight reached a 45% adoption rate in a very short time, without real commitment and investment from the industry. That number will only grow with the continued investments and alliances Microsoft is making, so sooner rather than later the tipping point will be reached.
  • Silverlight code is the same .NET, which enables reusing the skills .NET developers already have. In other words, I bet a good part of the millions of .NET developers will step into the RIA world in the upcoming years, which will improve the desirability of Silverlight for big companies.
  • Blend 1 sucked, Blend 2 was OK, Blend 3 was fine (with Photoshop/Illustrator import)… Watching the speed and the trend with which RIA tools are improving in the MS space, one could expect Blend to become a very capable tool in a year or two. If not by then, then in the next version. The key point is that behind Blend there is MS, with an endless supply of cash to pump into it on demand.

“Microsoft Silverlight is just the next Web Forms”

I agree with this one, because I too can see MS attempting to bring RIA web development to the masses, the same way they did with desktop developers in 2002 when ASP.NET brought them to the web world. I agree, and I don’t see anything wrong with it. The whole stateless request/response pipeline was never meant to handle the RIA scenarios we have now, so abstracting it away with some API (which is what Web Forms did) is not possible. Regardless of how many layers you put on top of the web, it is still the web down below, and sooner or later it pops up.

That’s why I think MS did the right thing and broke with the web completely. Now the web browser is just used to host the sandboxed plug-in, and the web just brings the plug-in the data required for it to work. After that, it is a full desktop-model application having nothing to do with the web. Yes, you heard me well: to me, Silverlight is a desktop application deployed through the browser, and that is good…

“There’s nothing done by Microsoft Silverlight which I cannot do with MVC/HTMLx/CSS/jQuery while respecting web standards at the same time”

Well, there are two types of web properties: web sites and web applications. I guess there is no need to spend a lot of time explaining how cost-ineffective it would be to build a web application using the JScript RIA approach. A Silverlight web application utilizing some of the concepts Prism and other frameworks offer allows multiple development teams to work effectively on the same site, with clean separation of the concerns different teams and team members have in that process, utilizing server-side .NET skills and XAML designer skills. With a JScript implementation, development is lengthy, cross-browser incompatibilities rear their ugly head sooner or later, there are performance problems with DOM size and different browsers, standard .NET developers are not very useful in client-side scripting, etc. It can be done, but I strongly believe performance, implementation time, maintenance and cost effectiveness would all be on Silverlight’s side.

As for standard web sites, until Silverlight solves the SEO problem, I agree that jQuery is the better SEO solution for pages visible to prospective users. If your site has a portion requiring authorization, then SEO is not so important there, because crawlers won’t be able to access it anyhow.

“There’s too much magic in RIA Services – it is just another demoware framework”

I looked at RIA Services in its early preview version at the beginning of 2009, and sure, there is something in the code-gen based approach that a lot of us would find “not pure” (things might have changed since then). So what? I can see a lot of normal-sized web sites benefiting a lot from it. Even my beloved NHibernate works with RIA Services. The way I see it, they gave us, as an option, a bunch of prebuilt code we would need to build anyhow in almost every web site/application we do. For a lot of folks, trading total control over your code base for a productivity boost is a perfectly valid option. At the end of the day, if you don’t like it: don’t use it, do it yourself, or buy some other framework like IdeaBlade etc. :)

What could be wrong with a free lunch?

The glass is half full…

“Microsoft listens to us”

In case you haven’t seen it already, go check out http://silverlight.uservoice.com/pages/4325-feature-suggestions where users voted for the features they wanted to be included in Silverlight 4. MOST OF THEM ARE INCLUDED. We asked, Microsoft responded. Thank you Microsoft!

“Silverlight won’t die.”

Most of the people I follow, or whose blogs I read, kind of gravitate toward the feeling that Silverlight is destined to fail, that it doesn’t have a chance against Google-backed HTML 5, that JScript is more than sufficient, that compared to Flash SL is just a toy…

I admit that recently I started to doubt my decision to bet my career on Silverlight/WPF based presentation layers and even started questioning that decision (I bought a couple of jQuery books :)).

After PDC 2009 I don’t have any doubts, and that is totally unrelated to the fabulous presentation Scott Gu did; we all expected that from him anyhow, so it doesn’t count. Much more important to me is the fact that almost every slide in Ray Ozzie’s day 1 keynote had something related to Silverlight. As soon as I heard him talk about the three-screens vision, I was sure that Microsoft is betting its future on Silverlight and that they won’t step back in the future and ditch it. And that makes all the sense in the world, and makes me smile.

Another important consequence of Ray’s vision of three screens driven by Silverlight is that I won’t be spending time with MonoTouch and/or other iPhone-related technologies. What I am going to do instead is just wait for Windows Mobile 7, which I’m sure (based on the recent great designs coming from the MS kitchen; the Zune HD being the latest example) will be cool looking and HW rocking (based on Win 7), so I expect to see it alive and well beside the Android and iPhone platforms. If that happens, I’ll just reuse my WPF/Silverlight skills and start developing for WinMo 7. Another thing I hope to finally be able to do on the WinMo 7 ship date is to replace my iPhone with a WinMo phone and say goodbye to the Apple proprietary things.

(I’ll still read the jQuery books, but I am not going to follow/read jQuery-related RSS feeds.)

“WPF is not dead.”

A lot of folks (me included) questioned the commitment Microsoft has to WPF after seeing the state of SL 4.

I don’t think WPF is going anywhere, for the next couple of reasons:

  • Much better development experience in WPF than in Silverlight (debugging, tools etc.)
  • Visual Studio 2010 is a WPF application, which shows the strong commitment Microsoft has toward further enhancing WPF.
  • This will sound weird: WPF has a bigger user base/adoption than Silverlight.
    Silverlight is at 45%, WPF is at 90% (every Windows XP SP2, Vista, Win 7 and Server 2008 box has WPF).
    In case you care about the Windows platform only (like me), WPF is the perfect platform to build upon.
  • WPF integrates with OS features (jump lists, progress bars etc.), can use the Sync Framework, has direct DB access (for corporate intranets) etc.
  • The size of the .NET Framework 4.0 Client Profile is just 30 MB (for the 32-bit version), which is really not a big deal to download in 2009. In other words, on a PC without .NET at all, downloading 30 MB makes the computer fully capable of executing your app.
    More details here: http://blogs.msdn.com/jgoldb/archive/2009/10/19/what-s-new-in-net-framework-4-client-profile-beta-2.aspx

“Entity Framework is a very viable option”

We all remember the EF vote of no confidence, which pinpointed the reasons why Entity Framework 1 was not a good option for developers: lack of POCO support, DB-oriented modeling only, etc., etc. Just a year later we have EF 4 which (judging by the PDC 09 Entity Framework session) looks like it fixes them all. They are even working on the Code Only feature (“Fluent NHibernate for EF”).

I’m pretty sure that gurus like Ayende would find, in less than 10 seconds, 27 important NHibernate features missing in Entity Framework, but for me, picking a technology to invest my time in is (among other things) based on the highest perceived ROI on my income. I had a similar dilemma in 1995 when I had to pick between Microsoft Visual Basic 3.0 and Borland Delphi 1.0, and I picked Microsoft VB 3.0. Why? Not because I thought it was better, but because of three things:

  • I estimated that demand for VB in the upcoming period would be much higher than for Delphi
  • I estimated that Borland was more likely to lose that fight with Microsoft (who has endless cash reserves)
  • I estimated that the amount of knowledge (books, articles, dev community etc.) would be much bigger and easier to get in the VB case

I can ask almost the same questions today: NHibernate vs. Microsoft EF4. And with a lot of sadness (I really love NHibernate) I have to conclude that EF is the way to go for me, because:

  • EF4 is at least a “good enough” alternative: POCO and model-first are possible.
  • It looks like this is Microsoft’s long-term data strategy, which means knowledge of it will be very valuable in MS-oriented development shops.
  • The tooling is OK (I don’t buy the “designer crawls when you have hundreds of entities” argument, because thanks to bounded contexts I never have them all in one model, and it is nice to do all of your development without leaving VS 2010).
  • There are a lot of technologies building on top of EF (ADO.NET Data Services, RIA Services, Azure).
  • There are already some really good books/blogs/video materials for EF, while for NHibernate there are not so many sources of knowledge. Microsoft will for sure outperform NHibernate in this area many times over and directly impact the level of adoption.
  • Most important: it will get better and better with every release. (If they got this far in just a year, where will they be next year, and the year after that?)

I’ve been working for the last couple of months on my own personal pet project based on the FluentNHibernate <–> NHibernate combo, but I am probably going to migrate to EF4 now. If nothing else, I will at least get a better understanding of how it really stands today against NHibernate-based development.

Conclusion

I am really happy these days to be a developer in the Microsoft world, seeing so many initiatives, so much innovation and energy on Microsoft’s side. I have a feeling that Microsoft was somehow asleep until a year ago, when it woke up and started doing great things again (Bing, Natal, Courier, VS 2010, Windows 7, Silverlight 4, Zune HD etc.). No more easy points for the Apple ads, and no more easy points for the haters on things Microsoft was honestly doing badly, mainly due to ignoring community feedback.

Good work Microsoft!

4 Nov 2009

Fluent NHibernate samples – Auto mapping (Part 1/2)

In my previous blog post, I announced the sample solution with which I try to provide code samples for the very comprehensive documentation which can be found at http://fluentnhibernate.org/.

The project is hosted on CodePlex (http://fnhsamples.codeplex.com/). Right now it contains just the small sample I will use in this blog post, but in the future I intend to grow it until it covers most of the interesting features FNH offers.

The purpose of the project is just to demo how Fluent NHibernate mappings work so you can quickly “get them”, and NOT:

  • to teach you how NHibernate works (buy this book for that)
  • to teach you in detail about Fluent NHibernate (go to http://www.fluentnhibernate.org for that)
  • to teach best practices in abstracting NHibernate dependencies (you can find that here)
  • to teach best practices in domain modeling etc.

I am also by no means an expert in either NHibernate or Fluent NHibernate. In fact, I am just a grunt like many of you reading this blog post, one who was searching for a similar blog post while banging his head against the wall trying to learn how to use it (1+ year ago there were very few docs), so take EVERYTHING I say with a grain of salt.

I am also aware of the state of my English, but I am really doing my best, and if anyone wants to help edit the blog post I will put that version on CodePlex :)

So, let’s start…

An example of Fluent NHibernate auto mapping in action

So, here’s the domain model I’ll be using in this blog post to show a couple of FNH auto mapping aspects:

[Image: domain model diagram]

And here’s the database model which would be created after the sample is executed, based on the Fluent NHibernate auto mapping conventions:

[Image: database model diagram]

(For more details about the domain design, check the use case description in the previous blog post.)

Convention over configuration

FNH auto mapping works by applying convention over configuration, which I usually try to explain like this…

“There is a set of rules which will be applied to your domain while mapping it to the database model.

Here are a couple of example rules/conventions:

  • The database table will be named using the plural form of the entity name.
  • The primary key of every table will be named following the rule “entity name” + “ID”.

Fluent NHibernate comes with a default set of rules which you can customize to your own preferences in a very cool and easy way.

Sounds simple? It is THAT simple.”

Let’s see quickly how the Fluent NHibernate magic happens.

How does the Fluent NHibernate auto mapping magic work (an explanation for the rest of us)?

I won’t go deep into the details (for that, go to http://fluentnhibernate.org or “How FNH works?”), but in general it is based on two design patterns: Proxy and Visitor.

If you are not familiar with those patterns, here’s a simple way to get them in the context of the Fluent NHibernate magic.

The role of proxy design pattern in fluent nhibernate

If you check out some of the entities in the Vuscode.FNHSamples.Domain project, you will see that all of them are non-sealed classes with all of their members virtual… So, the code looks like this:

namespace Vuscode.FNHSamples.Domain
{
    public class Address
    {
        public virtual string Email { get; set; }

        public virtual string City { get; set; }
        
        public virtual string Country { get; set; }
    }
}

The reason why we HAVE TO respect these rules when designing our domain classes is that FNH uses the Castle DynamicProxy functionality to (in an overly simplified version) create an “in memory” proxy child class by inheriting from the real one and overriding the properties in the proxy child class, in order to enable intercepting their calls.

In this example, therefore, the FNH engine would create a “ProxyAddress” class inheriting from the Address class, one which implements additional interfaces and overrides members, effectively adding behaviors and shapes to the original Address class at run time. That’s how POCO can be achieved: we don’t need any attributes, no special base class or base interface etc.
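
To make that concrete, here is a hand-written sketch of what such a generated proxy conceptually looks like (the class name and the interception hook are illustrative; the real proxy types are emitted by Castle DynamicProxy at run time):

namespace Vuscode.FNHSamples.Domain
{
    using System;

    // Hypothetical hand-written equivalent of a proxy class that
    // DynamicProxy would emit at run time.
    public class ProxyAddress : Address
    {
        public override string City
        {
            get
            {
                // Interception point: the engine can observe the call
                // before delegating to the real property.
                Console.WriteLine("City getter intercepted");
                return base.City;
            }

            set
            {
                Console.WriteLine("City setter intercepted");
                base.City = value;
            }
        }
    }
}

This is exactly why the properties have to be virtual: without the virtual keyword there would be nothing for the proxy subclass to override.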

Now, in order to understand why FNH needs this, we need to take a quick peek at how the FNH “mapping rules” (properly called conventions) are implemented.

How are the Fluent NHibernate conventions implemented?

Always in the same way (thanks to the brilliant work of the FNH authors): you implement a special interface which has a method accepting a parameter of a certain interface type.

    public class ClassConvention : IClassConvention 
    {
        public void Apply(IClassInstance instance)
        {
            // do something with instance
        }
    }

The reason why our convention has to implement a certain interface is that during run time Fluent NHibernate iterates over all of the types of a given assembly and collects “all of the types implementing IClassConvention”, thus “collecting the rules”.

The reason why an IClassInstance parameter is passed is that the Apply method gets SOMETHING implementing that interface, without knowing/caring what that something really is.
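
Conceptually, the “collecting the rules” step described above could be done with a reflection scan like this (a minimal sketch of the idea, not FNH’s actual implementation):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public static class ConventionScanner
{
    // Collects an instance of every concrete type in the assembly
    // that implements the given convention interface.
    public static IEnumerable<TConvention> Collect<TConvention>(Assembly assembly)
    {
        return assembly.GetTypes()
            .Where(t => typeof(TConvention).IsAssignableFrom(t)
                        && !t.IsAbstract
                        && t.GetConstructor(Type.EmptyTypes) != null)
            .Select(t => (TConvention)Activator.CreateInstance(t));
    }
}

Calling ConventionScanner.Collect<IClassConvention>(someAssembly) would then yield every class convention defined in that assembly.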

This design approach, where you enable your class to work with various entities without coupling to their concrete implementations, can roughly be called the visitor pattern.

If I use again the same example of the Address and “ProxyAddress” classes, imagine that the proxy class created on the fly implements the IClassInstance interface. Wouldn’t that enable us to pass an instance of the proxy class to the ClassConvention Apply method, so it could perform its functionality on it? :)

The end result of those two patterns is that an instance of the Address class ends up inside of the convention’s Apply method without adding anything to the domain other than the virtual keyword on properties.

Iterating, iterating, iterating…

During the runtime mapping process, FNH (overly simplified) iterates over all of the types and creates their proxies.

Every proxy gets certain interfaces added which define methods allowing the “outer world” to alter the state of the proxy.
Fluent NHibernate (or the user) then creates a collection of conventions defining the rules for how certain aspects of the proxies should be altered.

Now the code iterates over every proxy, and for every proxy it iterates over all of the conventions and applies the ones matching the proxy.

As the end result of that iteration, we get all of the proxies with their state set up properly.

Then FNH iterates over all of the proxies again and translates the state of each one of them into the XML representation NHibernate requires.

Brilliant, isn’t it?

Back to reality

Now that I have (hopefully) explained how Fluent NHibernate works in layman’s terms, we can go over the conventions I have in my sample and explain the relationship between each of them and certain pieces of the sample.

Class Convention

This is the convention which tells Fluent NHibernate how to map entities to database tables.

I have only one rule related to that: “A table name should be the plural form of the entity name”, so the code doing that is pretty simple:

namespace Vuscode.Framework.NHibernate.Conventions
{
    using FluentNHibernate.Conventions;
    using FluentNHibernate.Conventions.Instances;

    public class ClassConvention : IClassConvention 
    {
        public void Apply(IClassInstance instance)
        {
            instance.Table(Inflector.Pluralize(instance.EntityType.Name));
        }
    }
}

IClassInstance has two key members (key in the sense of this sample):

  • EntityType – provides access to the type being mapped to a database table
  • Table(name) – a method which sets the table name

(To explore capabilities beyond my sample, use IntelliSense, which in the case of fluent interfaces is your best friend.)

Inflector is just a helper class I took from the Castle.ActiveRecord project (a long time ago), whose purpose is to produce the plural form of a given string. I put it in the sample project too, so you can check it out if you wish.
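
If you are curious what such a helper boils down to, here is a naive sketch (the real Inflector handles many irregular plural forms; this stripped-down version covers only the regular cases used in this sample):

public static class Inflector
{
    // Naive pluralization: good enough for regular nouns like
    // Blog -> Blogs or Category -> Categories.
    public static string Pluralize(string word)
    {
        if (word.EndsWith("y"))
        {
            return word.Substring(0, word.Length - 1) + "ies";
        }

        if (word.EndsWith("s") || word.EndsWith("x") || word.EndsWith("ch"))
        {
            return word + "es";
        }

        return word + "s";
    }
}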

So, now that we know all of the pieces, we can read the implementation inside of the Apply method like this:

“Make sure that whatever proxy is sent in gets mapped to a data table whose name is the plural form of the original class the proxy was created from.”

(Every convention is implemented in the same manner: implement an interface and get something injected. I’ll therefore skip repeating that for the other conventions.)


DefaultStringPropertyConvention

This is the convention which tells how string properties of a class should be mapped to database columns.

My rules are simple: a default length of 100, and every string can be null:

namespace Vuscode.Framework.NHibernate.Conventions
{
    using FluentNHibernate.Conventions;
    using FluentNHibernate.Conventions.Instances;

    public class DefaultStringPropertyConvention : IPropertyConvention
    {
        public void Apply(IPropertyInstance instance)
        {
            instance.Length(100);
            instance.Nullable();
        }
    }
}

Foreign Key Convention

This is the convention which defines how Fluent NHibernate should behave while mapping association properties to foreign keys.

My rule is simple: “The name of the foreign key is the name of the table being referenced + an ID suffix.”

Here’s the code from the sample:

using System;

using FluentNHibernate.Conventions;
using System.Reflection;

namespace Vuscode.Framework.NHibernate.Conventions
{
    public class CustomForeignKeyConvention : ForeignKeyConvention
    {
        protected override string GetKeyName(PropertyInfo property, Type type)
        {
            return property == null 
                    ? type.Name + "ID" 
                    : property.Name + "ID";
        }
    }
}

As you can see from the code, this is not an exact translation of my rule, so it needs some additional clarification…

The GetKeyName method accepts two parameters:

  • property (holding a pointer to the property in the entity referencing the “parent” entity)
  • type (holding a pointer to the type of the “parent” table)

If we check out the resulting DB diagram:

[Image: Blogs table DB diagram]

and how the domain implementation of Blog looks:

using System.Collections.Generic;

namespace Vuscode.FNHSamples.Domain
{
    public class Blog : Entity
    {
        public virtual Author Author { get; set;} // component

        public virtual BlogRoll Roll { get; set; } // references
        
        public virtual IList<Post> Posts { get; set; } // has many

        public virtual string BlogTitle { get; set; }

    }
}

We can clearly see that, because Blog has a Roll property pointing to the parent BlogRoll, FNH took that property name + “ID” and created the RollID foreign key in the Blogs data table. So that explains how the FK convention works when the property being passed in is non-null, and leaves us with the question “how come that value can be null?”

To answer that, let’s check out the other part of the resulting diagram, representing the DB interpretation of the Author class inheritance:

[Image: Author inheritance tables DB diagram]

As you can see, GuestAuthor and RegularAuthor have their foreign keys named “AuthorID”, even though neither of those two has an explicit “parent” property like in the previous case.

In other words, there are no GuestAuthor.Author or RegularAuthor.Author properties, but we still need to define FKs in their tables.

In these cases, the ForeignKeyConvention gets a null PropertyInfo value (because there is no property) and a pointer to the entity type to be used for defining the database FK (in this example, Author). That’s how the foreign key of those tables in my sample became AuthorID.

Many to Many convention

This is the convention which defines how Fluent NHibernate should behave while mapping N – N relationships, where class A has a collection property of class B’s type and class B has a collection property of class A’s type.

Here’s the code sample from Vuscode.FNHSamples.Domain – the Post and Category classes:

namespace Vuscode.FNHSamples.Domain
{
    using System.Collections.Generic;

    public class Post : Entity
    {
        public virtual IList<Category> Categories { get; set; }  // has many to many

        public virtual string Title { get; set; }

        public virtual PostStatus Status { get; set; }

    }
}

A Post can have many Categories.

namespace Vuscode.FNHSamples.Domain
{
    using System.Collections.Generic;

    public class Category : Entity
    {
        public virtual IList<Post> Posts { get; set; } // has many to many

        public virtual string Name { get; set; }
    }
}

One Category can be used in many Posts.

I prefer mapping many-to-many relationships using the Association Table Mapping pattern, where an additional table is created with two foreign key columns matching the primary keys of the associated tables.

Here’s how it looks in the resulting database model, with PostsToCategories being the association table:

[Image: PostsToCategories association table DB diagram]

I guess after seeing the diagram it is quite obvious what my many-to-many convention is:

The name of the association table should be “TableNameA” + “To” + “TableNameB”.

How NOT to implement a many-to-many relationship

As you can read in great detail here, many-to-many is in general a combination of two 1 – N relationships, which, if implemented like this:

namespace Vuscode.Framework.NHibernate.Conventions
{
    using FluentNHibernate.Conventions;
    using FluentNHibernate.Conventions.Instances;

    public class ManyToManyConvention : IHasManyToManyConvention
    {
        public void Apply(IManyToManyCollectionInstance instance)
        {
            instance.Table(
                string.Format(
                    "{0}To{1}",
                    Inflector.Pluralize(instance.EntityType.Name),
                    Inflector.Pluralize(instance.ChildType.Name)));
        }
    }
}

would result in the following database diagram state:

[Image: DB diagram with two association tables]

So we get 2 association tables, where each one of them covers its own path.

Clearly this is overkill from the DB perspective, because the same table can be used for both paths.

In NHibernate parlance, that is achieved using the Inverse relation type (again, you can read about it in great detail here), which in layman’s terms can be explained as: “when you need to map the Post –> Categories relation, please use the already defined Categories –> Posts relation and just revert it”.

And that’s exactly what the actual implementation in my code sample does. Let’s see how it was implemented:

namespace Vuscode.Framework.NHibernate.Conventions
{
    using FluentNHibernate.Conventions;
    using FluentNHibernate.Conventions.Instances;

    public class ManyToManyConvention : IHasManyToManyConvention
    {
        public void Apply(IManyToManyCollectionInstance instance)
        {
            if (instance.OtherSide == null)
            {
                instance.Table(
                    string.Format(
                        "{0}To{1}",
                        Inflector.Pluralize(instance.EntityType.Name),
                        Inflector.Pluralize(instance.ChildType.Name)));
            }
            else
            {
                instance.Inverse();
            }
        }
    }
}

instance.OtherSide is null when there is no already-defined relationship between EntityType and ChildType. In that case, the code uses the Table method to set the name of the association table, respecting my naming convention: TableA name + “To” + TableB name. That first branch results in the PostsToCategories association table being created.

FNH then keeps iterating (as described above), and eventually the other side of the relationship gets processed. When this happens, instance.OtherSide is NOT null, so instead of creating a new association table, FNH maps that relation as the inverse of the original one.

Conclusion

This blog post has become really long, so I have decided to split it into two posts. In the next post I’ll show a few more conventions and the fluent configuration process, with the switches I’ve been using in my sample.

Stay tuned :)

3 Nov 2009

Fluent NHibernate Samples on CodePlex

I’ve been using Fluent NHibernate for more than a year now and I am a big fan of it.

There were only two things bothering me in FNH for all that time:

  1. frequent API changes (which pretty quickly made my fluent mapping and auto mapping blog posts completely obsolete – not to mention my code :)). But when I saw how polished FNH got in version 1.0, I stopped minding – simply beautiful code.
  2. lack of documentation. Don’t get me wrong – http://groups.google.com/group/fluent-nhibernate was REALLY useful most of the time, but I was always dreaming about something like the current Fluent NHibernate wiki, which is simply an awesome concentrated amount of useful data.

So, for me there are no more problems left with FNH (besides a few minor bugs no one cares to answer/fix), but in the last couple of days I accidentally found out (here on the FNH mailing list and here on Stack Overflow) that a lot of folks would like to see sample code working with Fluent NHibernate 1.0 (besides the wiki etc.).

That’s why I decided to chip in some of my time today and create a sample project illustrating both fluent and auto mapping on the same example, and that’s how I came up with the

CodePlex project - Fluent NHibernate samples

The intention of the project is to initially focus on a single sample illustrating all of the major use cases in a “real world” manner, and then to slowly grow so the sample starts covering more and more corner cases. Hopefully, in the long term, it will become a C# solution illustrating in one place all of the major FNH usage aspects.

The project can be found at http://fnhsamples.codeplex.com/ together with the source code, downloadable as a zip or via SVN checkout.

Starting solution

So, in order to start the project, and based on the questions I usually hear regarding auto mapping, I’ve come up with this simple domain model:

[Image: domain model diagram]

The domain covers an imaginary blog engine where:

  • a blog roll is a group of blogs (e.g. codebetter.com),
  • a blog contains one or many blog posts, where every blog post can have one or many categories,
  • a blog is owned by a single author, who can be a regular author (that blog is his primary blog) or a guest (who cross-posts to this blog).

As you can see, in this very simple domain I have cases of:

  • References (N – 1 relation), where many Blogs belong to one BlogRoll (in this sample, Blog mimics an aggregate root)
  • HasOne (1 – 1 relation), where an author can have only one blog in a blog roll and all blog posts of a blog are written by a single author only
  • Component, where an author has a complex Address value property of the Address type which (because it is not an entity) we would like to map to the same DB table as Author
  • Subclass (inheritance), where GuestAuthor and RegularAuthor are children of the abstract Author, each of them having its own custom properties
  • HasMany (1 – N relation), where a blog has one or many blog posts
  • ManyToMany (N – N relation), where a post can be tagged with many categories and every category can be used in many posts
  • Enumeration, where a post has a status column holding an enumerated value
  • All entities share the same Entity base class, which (contrary to the Author entity) is not supposed to be mapped to a separate table (a sketch of that base class follows below)
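
The Entity base class itself is not shown in this post; a minimal sketch of what such a shared base class presumably looks like (the Id property is an assumption, in line with the “entity name + ID” primary key convention) is:

namespace Vuscode.FNHSamples.Domain
{
    // Shared base class for all entities; not mapped to a table of its own.
    // The Id is virtual so the generated proxies can override it.
    public abstract class Entity
    {
        public virtual int Id { get; set; }
    }
}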

Note: I know that this domain model is far from perfect in the modeling sense, but that is not the point here. The point is having a meaningful sample which can be used to showcase all the Fluent NHibernate magic.

End result

Here’s how the DB model looks when created using both fluent and auto mappings:

[Image: resulting database model diagram]

As you can see, the DB diagram matches the domain model pretty well, and all that (in the case of auto mapping) just by utilizing conventions, without any manually defined mappings.

What next?

For the folks curious to check it out now, go to http://fnhsamples.codeplex.com/ where you can download the source, currently containing only the auto mapping based solution. I guess it is so simple that most of us could get it just by looking at the small code base there.

For the rest of the people, my next blog post will present the fluent mapping based solution for getting from the model to the DB diagram, and the one after that the auto mapping based solution. In those two blog posts, I will comment “line by line”, so even people new to Fluent NHibernate will (I hope) get it.

After those two blog posts, I will have to switch gears and (finally) spit out a couple of things I came up with doing Prism development.

I hope that by then there will be some suggestions for what needs to be added to the sample, so it starts to grow from its initial trivial size.

2 Nov 2009

Me, myself and design patterns

It looks like the Silverlight related posts I announced will have to wait one more blog post, due to an event which happened to me today and which got me thinking about a few architecture related things, which in the end resulted in a few (at least to me) interesting thoughts I felt like sharing with the community.

The event

So, a fellow blogger, @RadenkoZec, wrote a nice blog post about the Facade design pattern, in whose comments we had a discussion about whether the example is appropriate or not, and whether the implementation is the Facade design pattern or some other pattern. I won’t repeat here in detail what we discussed there (go to the blog post to read the comments), but there were a couple of things in that discussion I kept thinking about during the evening…

Patterns should be explored in 3D

It is very interesting that the sites on the net (like dofactory.com) seem to be primarily focused on the GoF patterns, while I was not able to find sites covering at the same time the PoEAA patterns and/or the DDD patterns.

That fact might look irrelevant, but if you check out the content of today’s discussion you will see that the same example code used in the blog post

looked to Radenko like a Facade (GoF):

[Image: Facade design pattern diagram]

and to me more like a Gateway (PoEAA):

[Image: Gateway sketch]


If you just take a look at the above pictures, you will surely find at least some similarity, and that’s why one should look at all of the patterns as a whole and not be exclusive in picking “The book”. Another example of this interpretation conflict can be found between PoEAA and DDD, where patterns such as repository, factory etc. sometimes have different usages, if not different meanings.

In other words, IMHO every developer should build pattern knowledge in three dimensions:

  • X – DDD, PoEAA, GoF
  • Y – specific patterns and their implementations and implications; anti-patterns too
  • Z – time

For those of you asking what time has to do with this subject, here’s an answer: the software landscape has changed significantly in the last 20 years, so some of the “scriptures” should be taken with a grain of salt.

Here are a couple of examples illustrating my point (see the sketch after this list):

  • The Observer pattern doesn’t make a lot of sense implemented by hand when .NET has events out of the box,
  • Singletons (btw, I will dedicate a post to showing how evil they are) are kind of obsolete with the usage of IoC containers, etc.
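
To illustrate the first point, here is a minimal sketch (with illustrative names) showing that the event keyword already gives you the subject/observer wiring the GoF book implements by hand:

using System;

public class StockTicker
{
    // The event replaces the hand-rolled observer list and the
    // Attach/Detach/Notify plumbing from the classic implementation.
    public event Action<decimal> PriceChanged;

    public void Update(decimal price)
    {
        var handler = PriceChanged;
        if (handler != null)
        {
            handler(price);
        }
    }
}

public static class Demo
{
    public static void Main()
    {
        var ticker = new StockTicker();

        // Subscribing is the observer "attach" step.
        ticker.PriceChanged += price => Console.WriteLine("New price: " + price);
        ticker.Update(42.5m);
    }
}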

Another aspect of the need for considering the time dimension in design pattern exploration is the fact that in the last few years we have all witnessed both the rise of dynamic languages and the enhancement of static languages, which are getting features that did not exist at the time the “pattern Bibles” were written (lambdas, generics, C# 4.0 dynamic etc.). Every pattern should therefore be heretically challenged in light of the current state of technology.

Context is the king

By merely looking at the above diagrams (which, for my point, can both be treated as pure UML diagrams), it would be really hard to tell what the difference is.

In both cases we have multiple classes encapsulated by one class which orchestrates their calls and exposes a simple API to be consumed.

But if you take a look at the context, you can see from the provided code sample that the packages in the facade case are interconnected: each one of their classes depends in its own way on the other classes, and none of them can exist without the others. I could imagine a refactoring merging those 3 classes into one. In other words, even though physically we have multiple classes carrying their own implementations, they are so connected that on the architectural level there is just one entity hiding behind the facade.

That’s why, to me, Facade feels like a “1 – 1” relationship type of pattern, where a facade hides the complexity of a single class (similar in a way to adapter, but let’s not digress into that here).

The second case has a clearly different context, where all of the packages behind the “facade” (Gateway) are quite independent. They do not depend on each other at all; they cooperate as partners.

The purpose of the PricingGateway is also to expose a simple API for the pricing package, but this time, when the API is used, the Gateway performs the role of a conductor, orchestrating the calls to the “hidden” elements.

That’s why, to me, Gateway feels more like a “1 – n” relationship type of design pattern.

To summarize: in order to distinguish patterns, one has to understand the problem/implementation context before picking the right flavor – the appropriate pattern.

Patterns as a matter of feelings

As you have probably noticed, in the text above I keep using “feels like to me”, which I’m pretty sure looks quite unrelated to something as explicit and scientific as design patterns, so let me clarify that for you :)

Like many others, I’ve stepped through a couple of phases in my pattern life:

  1. Discovery of patterns (we hear about them from some of the cool kids, and all we do at this stage is try to pretend, as best we can, that we know what they are talking about)
  2. Trying to get it (AKA “I found/read the GoF book”)
  3. Becoming a believer (including giving presentations to the people in phase #1)
  4. Getting it for real (reading every patterns book, blog post etc. you can find and memorizing the UML diagrams, use cases etc.)
  5. Seeing patterns everywhere and applying them as much as we can
  6. Paying the price of a highly complex code base
  7. Starting to understand the price of pattern application, and applying patterns only when a real design pain is identified which cannot be solved with a simpler (but still clean) solution
  8. Instead of pattern catalogs, starting to focus on basic principles

As far as I can tell, I am in phase #8, where all of the patterns have somehow melted into each other and all of them “look similar”. I have forgotten half of their UML diagrams and reference implementations. The only things left in my head are the name of a pattern, a notion of the use case where it is useful, and (very blurry for most of them) how it works.

Reading my last statement, someone could conclude that I just got more stupid (a possible option, I admit :)) and too lazy to remember things, but I would disagree, because every one of the patterns I have faced is based on the same set of plain logic principles. If you know and feel those principles, you are good to go with rolling “your own” solution, which will end up matching some of the patterns anyhow.

Here are some examples of the design principles I have in mind: SOLID, open/closed, DRY, KISS, orthogonal code, OOP principles (encapsulation, abstraction), TDD, dependency injection… and the list could go on for pages.

To summarize: I strongly believe that if a developer has a natural feel for the design principles (spits out code following those principles without even thinking), the knowledge of design patterns can stay at a much more abstract level (in a way, like knowing the page index of a book) without memorizing every bit of their reference implementations (that’s what we have Bing for, after all), and the created code will still comply with all of those patterns.

Conclusion

I hope all of the ramblings I shared with you, my dear reader, have helped me make the point of this blog post, which is to explain how, when Radenko (completely rightfully) pulled up on his blog post a couple of reference examples from the sites and books of folks far smarter than I am, proving that his example implementation is the same as the ones they provided, the only thing I could tell him was that his example “doesn’t feel like a case for Facade to me”.

I was not claiming that the guy who wrote a book got it wrong, nor that the dofactory.com site got it wrong in their sample.

It was just the gut feeling of a pragmatic “duct tape architect”, the one I would have about that code if it were mine, which I shared with him and the community.

1 Nov 2009

Design for testability – WCF proxies

Recently I spent some time participating in projects involving Silverlight, Prism etc., and there are a couple of interesting things I came up with during that period which I would like to share with the community in the form of a couple of blog posts showing the “enhancements” I made in our Prism based implementation.

I was thinking a lot about where to start, and as a result I have decided to start with something small but useful. I’ll focus in this blog post on Silverlight, but this simple trick is usable in any other client technology making WCF calls. So, here we go…

How to have testable Silverlight code depending on WCF calls

Why reinvent the wheel?

In order to make sure that the trivial thing I’ll be writing about today was not already covered by someone, I did my Bing homework and came up with two approaches you might want to check out.

While I find both of them to be perfectly acceptable solutions, I think they are suboptimal, for different reasons.

In the case of the first solution (which, based on the screencast comments, is the way a lot of people do it), there is one more layer of abstraction to be maintained.
In a typical DDD style application, we have DB tables mapped to domain entities, which are (usually) flattened at the web server layer, where the WCF service contracts behave as application level services with client-centric shapes and behaviors. With this approach we need yet another interface (with either its own behaviors or a copy-pasted service contract) plus an additional adapter class which delegates the calls to the proxy. IMHO: one more layer of abstraction –> maintainability level down :)

In the case of the second solution, my objections are similar to the ones I have regarding the service locator. Abstracting the “locator” (service host) leads to opaque dependencies at the consumer level, which are much harder to unit test and understand. On top of that, I find that this solution also introduces additional complexity by defining factories, providers etc., which is IMHO overkill for simply “TDD-enabling WCF proxy dependent code”.

Duct tape programmer solution

Regardless of how much I disagree with Joel on the value of the duct tape programmer, I couldn’t get away from the fact that my solution, compared with the other two (full of big patterns and cool code), looks exactly like it was done by a duct tape programmer – me. :) That’s OK; simplicity is the #1 design criterion for me.

In this solution I therefore won’t be using any patterns; instead I will just rely on one hack and one small convention to quickly get testable code which is easier to maintain and understand (compared to the other approaches).

Demoware “WCF proxy being called directly” sample

So, here’s the setup for today’s post: we have an application whose goal is to show the names of the users having a salary greater than $1000.

The client is implemented in Silverlight, accessing the user data on the server side by making WCF service calls.

To do that I used a vanilla Silverlight solution (I didn’t want to use Prism here) consisting of two projects:

  • a web application containing a UserService.svc WCF service and hosting the Silverlight app,
  • a Silverlight application which has a UserService service proxy and a MainPage showing the names of the users in a ListBox.


UserService.svc

using System.Collections.ObjectModel;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Activation;

namespace SilverlightApplication.Web
{
    [ServiceContract(Namespace = "http://blog.vuscode.com/200911")]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class UserService
    {
        [OperationContract]
        public Collection<User> GetUsers()
        {
            return new Collection<User>()
            { 
                new User { Id = 1, Name = "Nikola", Salary = 1000 },
                new User { Id = 2, Name = "John", Salary = 2000 },
                new User { Id = 3, Name = "Jane", Salary = 3000 },

            };
        }
    }

    [DataContract]
    public class User
    {
        [DataMember]
        public int Id { get; set; }

        [DataMember]
        public string Name { get; set; }

        [DataMember]
        public decimal Salary { get; set; }
    }
}

Nothing important here: in one file, an OperationContract defining a method GetUsers returning a collection of users (with the User DataContract defined in the same file).

Let’s move on…

MainPagePresenter? Where’s the MainPageViewModel?

Well, I was surprised to learn that many people I spoke with recently think MVVM is the only “right” way to do Silverlight/WPF development; I tend to disagree with that. While MVVM has its own values (it is the presentation pattern of my choice too), you can do MVC or MVP (both passive view and supervising controller) equally successfully, just like you can in desktop applications (speaking of which, Silverlight for me is a desktop app deployed through the browser). If you don’t trust me, ask Jeremy.

So, in order to spice up this blog post a bit, I went with an MVP-like implementation, with MainPagePresenter implemented like this:

using System.Windows;
using SilverlightApplication.UserServiceProxy;
using System.Linq;

namespace SilverlightApplication
{
    public class MainPagePresenter
    {
        private FrameworkElement view;
        
        public MainPagePresenter(FrameworkElement view)
        {
            this.view = view;
        }

        public void Init() 
        {
            UserServiceClient proxy = new UserServiceClient();
            proxy.GetUsersCompleted += (sender, e) =>
                                       {
                                           this.view.DataContext = e.Result.Where(p => p.Salary > 1000);
                                       };
            proxy.GetUsersAsync();
        }
    }
}

The constructor here is more interesting, because it accepts an instance of a type inheriting from FrameworkElement (which covers pretty much every control in Silverlight) and stores its pointer in the view field. In other words, an abstraction of the view is injected into the presenter; instead of an IView interface, I am being smart here and using FrameworkElement as the abstraction every view (user control) already implements.

It is the good old demoware type of code you’ve seen in a lot of places, with the proxy being instantiated in the method body, spiced up with a cool lambda implementation of the async event handler (which any better presenter would use to scare the session attendees) and with a simple LINQ statement filtering the result set to exclude the rows with a salary of $1000 or lower.

The magic in this code starts when the view’s FrameworkElement.DataContext gets set to the filtered result set. The presenter doesn’t have a clue about which specific control the view is, but it is still able to set its common property to a value producing the desired view behavior.
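To make that concrete, here’s a small hypothetical snippet (not part of the sample; PresenterWiringExample is made up for illustration) showing that the same presenter can drive completely different controls, since DataContext is declared on FrameworkElement itself:

using System.Windows.Controls;

namespace SilverlightApplication
{
    public static class PresenterWiringExample
    {
        public static void Wire()
        {
            // both controls derive from FrameworkElement, so both satisfy the
            // presenter's view abstraction without any IView interface
            var listBoxPresenter = new MainPagePresenter(new ListBox());
            var itemsControlPresenter = new MainPagePresenter(new ItemsControl());

            listBoxPresenter.Init();      // each control renders the same filtered
            itemsControlPresenter.Init(); // users via its own DataContext binding
        }
    }
}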

Wiring  up the view and the presenter

There’s a really big religious war raging in the IT sky regarding who is created first (view or view model/presenter) and the allowed level of coupling between them. I have my own take on that, to which I will dedicate a separate blog post (it is an important subject), but for the sake of this blog post let’s say that it is perfectly fine that the view is created first and that it has a direct reference to the presenter.

That’s how the view (the MainPage.xaml code-behind) ended up being implemented like this:

using System.Windows;
using System.Windows.Controls;

namespace SilverlightApplication
{
    public partial class MainPage : UserControl
    {
        MainPagePresenter presenter;

        public MainPage()
        {
            InitializeComponent();
            this.presenter = new MainPagePresenter(this);
            this.Loaded += new RoutedEventHandler(MainPage_Loaded);
        }

        void MainPage_Loaded(object sender, RoutedEventArgs e)
        {
            this.presenter.Init();
        }
    }
}

As you can see, in the page constructor an instance of the presenter is created, with a reference to the page itself being passed to the presenter (which, if you look up, is held as a presenter field and used in the Init method).

Then a Loaded event handler is wired up (don’t put any UI related code in the constructor, because it is not guaranteed that the UI will be ready when that line executes) and in it the presenter.Init() method is invoked. In other words, the view said to the presenter: “Please set me up!”

Markup code

Did I tell you how much I like WPF/Silverlight? Check the simplicity of the markup code!

    <Grid x:Name="LayoutRoot" Background="White">
        <ListBox
                 Name="listBox1"
                 ItemsSource="{Binding}"
                 DisplayMemberPath="Name" />
    </Grid>

Setting ItemsSource to {Binding} effectively says to the control “bind to your own DataContext” (a key moment specific to WPF/SL), and the DisplayMemberPath value of Name can be read as “whatever collection ends up in the DataContext, its items will have a property called Name”.

Simple, elegant and powerful!

Runtime experience

[screenshot of the running application]

Wow, so much talk about such a simple thing. OK, I proved it works.

My duct tape based solution

is based on the simplest possible thing I expected WCF to support out of the box: an IXYZServiceClient interface contracting the proxy behavior in an abstract way.

To my surprise I realized that the WCF-generated proxy does not create that interface (IUserServiceClient in this example), so I decided to create it manually.

Doing things WCF team should have already done

I clicked the Show All Files icon (to see the hidden proxy files) and opened the Reference.cs file containing the proxy C# code.


Then I found the UserServiceClient class definition,


right-clicked it and picked Extract Interface (R# can help with this too).


I picked only the members I care about being abstracted (leaving all of the WCF infrastructure members unchecked)


and ended up with this.

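Here is a sketch of roughly what the extracted interface looks like (the GetUsersCompletedEventArgs type comes from the generated proxy, and the exact member list depends on what you check in the Extract Interface dialog, so yours may differ slightly):

using System;

namespace SilverlightApplication.UserServiceProxy
{
    public interface IUserServiceClient
    {
        // the completion event carrying the returned users
        event EventHandler<GetUsersCompletedEventArgs> GetUsersCompleted;

        // the async call the presenter issues
        void GetUsersAsync();
    }
}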

This is obviously not a good solution, because on the next proxy refresh all of this is gone, so there are two problems:

  1. How to preserve the interface.
  2. How to avoid having to manually add the interface implementation to the generated ServiceClient class.

In order to solve problem #1, I created a folder with the same name as the proxy and dragged and dropped the interface into that folder. First problem solved!

In order to solve problem #2, I am utilizing the fact that the ServiceClient is generated as a partial class, so I created in the new folder another partial UserServiceClient class implementing the IUserServiceClient interface. Here’s how that class looks:

namespace SilverlightApplication.UserServiceProxy
{
    public partial class UserServiceClient : IUserServiceClient
    {
    }
}

I hope now you get the reason why I created a folder with the same name as the proxy –> the namespaces of the types and interfaces in that folder match, out of the box, the ones in the proxy class.

And here's the solution (for visual learners like me)

[screenshot of the solution structure]

To summarize the solution:

  • I’ve created the IUserServiceClient by simply extracting the interface (facade) of the proxy client
  • I’ve created a partial UserServiceClient class which only attaches the interface to the proxy
  • Neither of the files is destroyed by proxy regeneration.
  • If the WCF service contract changes over time, regenerating the IUserServiceClient is a trivial task taking less than 15 seconds.

Duct tape solution at its best! :)

Modifying the presenter

using System.Windows;
using SilverlightApplication.UserServiceProxy;
using System.Linq;

namespace SilverlightApplication
{
    public class MainPagePresenter
    {
        private FrameworkElement view;
        private IUserServiceClient userServiceClient;
        
        public MainPagePresenter(FrameworkElement view, IUserServiceClient userServiceClient)
        {
            this.view = view;
            this.userServiceClient = userServiceClient;
        }

        public void Init() 
        {
            this.userServiceClient.GetUsersCompleted += (sender, e) =>
                                       {
                                           this.view.DataContext = e.Result.Where(p => p.Salary > 1000);
                                       };
            this.userServiceClient.GetUsersAsync();
        }
    }
}

As you can tell I’ve made two changes:

  • added another parameter to the constructor, injecting the newly created IUserServiceClient interface
  • modified the Init method to replace the proxy instantiation with the usage of the injected client proxy abstraction.

Modifying the view

Usually I wouldn’t modify the view but would instead rely on an IoC container to inject the UserServiceClient instance, but in order to keep this post focused I won’t be using IoC here (you’ll see it in one of my future posts showing my Prism enhancements), so I needed to make a simple change in the page constructor

        public MainPage()
        {
            InitializeComponent();
            this.presenter = new MainPagePresenter(this, new UserServiceClient());
            this.Loaded += new RoutedEventHandler(MainPage_Loaded);
        }

Nothing special, just added new UserServiceClient() to the parameters being passed to the presenter.

Running the app again

[screenshot of the running application]

Yep, still works :)

Where’s the unit test?

Well, this blog post is long enough (and it is late enough :)) so I will skip the full unit test example this time; it is fairly trivial to write now that we have an interface for the service proxy.
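Just to sketch the idea, a minimal test could look like this (a hypothetical sketch assuming Rhino Mocks is available to the test project; a fuller test would also raise GetUsersCompleted to verify the DataContext filtering):

using System.Windows.Controls;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Rhino.Mocks;
using SilverlightApplication.UserServiceProxy;

namespace SilverlightApplication.Tests
{
    [TestClass]
    public class MainPagePresenterTest
    {
        [TestMethod]
        public void Init_WouldRequestUsersFromProxy()
        {
            // arrange - mock the proxy abstraction we just extracted
            var proxyMock = MockRepository.GenerateMock<IUserServiceClient>();
            var view = new ContentControl(); // any FrameworkElement works as the view
            var presenter = new MainPagePresenter(view, proxyMock);

            // act
            presenter.Init();

            // assert - the presenter asked the proxy for the users
            proxyMock.AssertWasCalled(p => p.GetUsersAsync());
        }
    }
}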

Summary

In this blog post I showed another way to easily introduce a simple layer of abstraction between the WCF service proxy and the code consuming it.

I find the advantage of my approach vs. the other two to be its simplicity and maintainability, but then again, considering I “invented” it, that comes as no surprise to me :)

Source code of the end solution can be downloaded from here.

20Oct/09

Say no to ServiceLocator

(Disclaimer: As I stated here, while I have over time found ServiceLocator-based code to be a bad practice, I do understand the need for its usage in certain brown-field scenarios as a way of reducing the risk while introducing IoC.)

In my previous Nikola’s laws of dependency injection blog post I stated a couple of IoC laws based on my experience:

  1. Store in IoC container only services. Do not store any entities.
  2. Any class having more than 3 dependencies should be questioned for SRP violation
  3. Every dependency of the class has to be presented in a transparent manner in a class constructor.
  4. Every constructor of a class being resolved should not have any implementation other than accepting a set of its own dependencies.
  5. IoC container should be explicitly used only in Bootstrapper.
    Any other “IoC enabled” code (including the unit tests) should be completely agnostic about the existence of IoC container.

Law #5 (“the IoC container should be a ghost, even in unit tests”) resulted in a couple of blog post comments and emails asking for clarification of what I mean by that.

In short:

Say no to Service Locator-style opaque dependencies!

Radenko Zec posted in a comment an example which, with very slight modifications (to better reflect my Law #1), looks like this…

There are 4 projects in the solution used in today’s blog post:

  1. ClassLibrary1 containing the two classes:
    - UserRepository, which has a dependency on the
    - LoggingService (implementing ILoggingService)
  2. ConsoleApplication1 simulating the production usage and containing:
    - the Bootstraper class (defining the IoC mappings) and
    - Program.cs (the main console file which executes our functionality)
  3. Infrastructure containing my naive implementation of a Unity container adapter, with:
    - the IContainer abstraction of the Unity container,
    - the UnityContainerAdapter class implementing the adapter design pattern, adapting the UnityContainer to IContainer, and
    - the UnityContainer service locator class (a sketch of how this might look follows below)
  4. TestProject1 containing the unit test for ClassLibrary1 – UserRepositoryTest
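Since the post doesn’t show the Infrastructure code, here is a minimal sketch of how that naive adapter and locator could look (my assumption, reconstructed from the calls used below – not necessarily Radenko’s exact code):

using Microsoft.Practices.Unity;

namespace Infrastructure
{
    public interface IContainer
    {
        void Register<TFrom, TTo>() where TTo : TFrom;
        void Register<T>(T instance);
        T Resolve<T>();
    }

    public class UnityContainerAdapter : IContainer
    {
        // fully qualified so the name doesn't clash with the locator class below
        private readonly IUnityContainer container =
            new Microsoft.Practices.Unity.UnityContainer();

        public void Register<TFrom, TTo>() where TTo : TFrom
        {
            this.container.RegisterType<TFrom, TTo>();
        }

        public void Register<T>(T instance)
        {
            this.container.RegisterInstance(instance);
        }

        public T Resolve<T>()
        {
            return this.container.Resolve<T>();
        }
    }

    // the naive singleton service locator over the adapter
    public static class UnityContainer
    {
        private static readonly IContainer container = new UnityContainerAdapter();

        public static IContainer Instance()
        {
            return container;
        }
    }
}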

Before diving into Radenko’s sample implementation, let me restate my point by saying that:

  • none of the classes using IoC (in this simple example, UserRepository) should be aware of the IoC container at all, and
  • UserRepositoryTest should NOT have test fixture setup methods filling the container

Service locator based implementation

As given in Radenko’s comment example, the Bootstraper class registers the mappings

namespace ConsoleApplication1
{
    using ClassLibrary1;
    using Infrastructure;

    public static class Bootstraper
    {
        public static void SetUp()
        {
            IContainer cont = UnityContainer.Instance();
            cont.Register<ILoggingService, LoggingService>();
        }
    }
}

Here’s the simple implementation of that LoggingService (not very important for this blog post, but included to be complete)

namespace ClassLibrary1
{
    using System;
    using System.Diagnostics;

    public class LoggingService : ILoggingService
    {
        public void Log(string message)
        {
            Debug.WriteLine(string.Format("LOGGED: {0}.", message));
        }
    }
}

And here’s the class that depends on that LoggingService

namespace ClassLibrary1
{
    using Infrastructure;

    public class UserRepository
    {
        public void Delete (int userId)
        {
            IContainer cont = UnityContainer.Instance();
            ILoggingService loggingService = cont.Resolve<ILoggingService>();
            loggingService.Log(string.Format("User with id:{0} deleted", userId));
        }
    }
}

So, as you can see in the given example, the Delete method is implemented like this:

  • get the service locator (the singleton instance of the UnityContainerAdapter)
  • use the service locator to retrieve from the IoC container the component mapped to ILoggingService
  • use that component to log a message.

What is wrong with this code?

  • It violates the Separation of Concerns/Single Responsibility Principle – UserRepository takes care of IoC, instance resolution, etc.
  • It is a good example of opaque dependency injection, which hides the set of dependencies a component has.

    Looks like “not a big deal”, but when you face a class with a couple of thousand lines and you have to read ALL of them just to get that list – and repeat that a couple of times – it is a big deal. (Yes, I do agree with you that having classes that long is an atrocity of its own kind, but I see it all the time in my world.)

(There are a couple more reasons, but let’s just continue our journey…)

Here’s how the production usage would look in this example

namespace ConsoleApplication1
{
    using ClassLibrary1;

    class Program
    {
        static void Main(string[] args)
        {
            Bootstraper.SetUp();
            UserRepository userRepository = new UserRepository();
            userRepository.Delete(3);
        }
    }
}

No magic there: the code issues a command to the bootstrapper to initialize IoC, creates an instance of UserRepository and invokes its Delete method.

The last thing left is an example of the unit test one might write with this code

namespace TestProject1
{
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using ClassLibrary1;
    using Infrastructure;
    using Rhino.Mocks;

    [TestClass]
    public class UserRepositoryTest
    {
        private ILoggingService loggingService;

        [TestInitialize]
        public void Would_Be_Executed_Before_Every_Test()
        {
            var serviceLocator = UnityContainer.Instance();
            loggingService = MockRepository.GenerateMock<ILoggingService>();
            serviceLocator.Register(loggingService);
        }


        [TestMethod]
        public void Delete_InCaseOfValidUserID_WouldLoggAMessage()
        {
            // arrange
            var userId = 3;
            loggingService.Expect(p => p.Log(string.Empty)).IgnoreArguments();

            // act
            UserRepository userRepository = new UserRepository();
            userRepository.Delete(userId);

            // assert
            this.loggingService.VerifyAllExpectations();
        }
    }
}

This is not a blog post on unit testing, so just a short discussion of what the test does:

In the test initialization, the singleton instance of the IoC container is retrieved. Then a Rhino Mocks dynamic mock of ILoggingService is created and then injected into the container. Summarized: we stored something in the container before the test execution, because we know from the source code that it will be used in the SUT.

The test itself is very simple:

  • Arrange defines the expected behavior of the logging service which is supposed to be caused as a result of the SUT activity
  • Act creates an instance of the UserRepository and invokes the Delete method
  • Assert verifies that the expected behavior of the logging service occurred.

Nothing much –> just another example of behavior unit testing.

What is wrong with this unit test?

Well, I wrote “a few” tests like this and it works well, but with certain pain points which become more painful the more tests you write:

  1. It prevents black box unit testing.

    Lately I tend to think about unit tests more as functional specifications, and due to that I always try to test the class based on the defined functional requirements WITHOUT looking at the implementation (to stay as objective as I can). Only once I cover ALL functional cases do I do a quick check of the test coverage and focus on removing the code not being tested (all requirements satisfied and code not used –> potential overkill).
  2. It prevents effective unit testing.

    Sooner or later with this code you end up writing tests following the “run the test, see what is missing, add a mapping, run it again, see what is missing, add a line…” loop, which IMHO ends up being a very slow process.
  3. It makes the fixture SetUp very big and clunky.

    Here we set up one dependency, but imagine hundreds of unit tests, with their own many dependencies dug up from the code, and how big this section would become…

    In the past I tried to tackle that by creating special bootstrapper classes filling the IoC container with test infrastructure stubs and by using an AutoMockingContainer (still not completely sure whether I should do that or not – another blog post), but the end result is that I got more code to maintain which gets broken every time the production interfaces change, etc.

  4. It decreases test readability.

    If you want to take tests as a sort of functional specification (and you should), pretty often you will want to “read the tests” in order to get what and how the SUT works. Having a bunch of setup code outside of the method being read makes that much harder.

All the right solutions are always as simple as possible

In order to comply with law #5 I need to refactor the UserRepository implementation by removing the IoC container from it, and in order to comply with law #3 I need to explicitly enlist all of its dependencies in the constructor.

So, following those two rules, I end up with this code:

namespace ClassLibrary1
{
    public class UserRepository
    {
        private readonly ILoggingService loggingService;

        public UserRepository(ILoggingService loggingService)
        {
            this.loggingService = loggingService;
        }

        public void Delete (int userId)
        {
            this.loggingService.Log(string.Format("User with id:{0} deleted", userId));
        }
    }
}

As you can see, all I did was transform the ServiceLocator type of code into dependency injection type of code, where there is no more SoC/SRP violation and where the UserRepository is unaware of IoC (not a single line related to it).

For the folks with concerns that this could result in some very long constructors with too many dependencies, I suggest checking out the Auto Mocking Container and/or:

Law #2 Any class having more than 3 dependencies should be questioned for SRP violation

Let’s see the unit test for this class

namespace TestProject1
{
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using ClassLibrary1;
    using Rhino.Mocks;

    [TestClass]
    public class UserRepositoryTest
    {
        [TestMethod]
        public void Delete_InCaseOfValidUserID_WouldLoggAMessage()
        {
            // arrange
            var userId = 3;
            
            var loggingService = MockRepository.GenerateMock<ILoggingService>();
            loggingService.Expect(p => p.Log(string.Empty)).IgnoreArguments();
            // act
            UserRepository userRepository = new UserRepository(loggingService);
            userRepository.Delete(userId);
            
            // assert
            loggingService.VerifyAllExpectations();
        }
    }
}

As you can see above:

  • There is no more initialization routine – all of the code related to this test is in one place.
  • Every test initializes only what it needs (no big chunk initializing the union of mappings all the tests need)
  • While writing the test, the C# compiler informs me that UserRepository needs a logging service. If I don’t provide it, the test won’t compile.

    That’s how I can avoid looking at the source code in order to see what dependencies a class has.
  • As for law #5, notice that in my unit test there is NO IoC code at all (even the infrastructure component is not referenced anymore).

    IoC has to be there for us when needed, but it also has to stay away from our work. Full orthogonality. Period.

How does the production code look with the new approach?

Almost the same

namespace ConsoleApplication1
{
    using ClassLibrary1;

    using Infrastructure;

    class Program
    {
        static void Main(string[] args)
        {
            Bootstraper.SetUp();
            var unityContainer = UnityContainer.Instance();

            var userRepository = unityContainer.Resolve<UserRepository>();
            userRepository.Delete(3);
        }
    }
}

The main difference is that I’ve used Unity to resolve UserRepository and thereby perform the automatic object graph wire-up. It may look like we ended up with the same thing, just in another class – and that is (partially) correct!

It is correct in the sense that, if you think for a second about real world systems, you have many classes chained in cross dependencies, in which case it is in your interest to push the IoC resolution out of all of them and let the IoC container of your choice do its auto wire-up magic.

It is not correct in the sense that in most parts of your system (usually matching the area you want to test) you would have this injection implicit and not explicit. For example, if your CompanyRepository needed a UserRepository (not claiming it makes real sense to do), the CompanyRepository would just have an IUserRepository/UserRepository in its constructor and that’s it.
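A sketch of that implicit chaining (CompanyRepository and its method are purely illustrative): neither class touches the container, yet unityContainer.Resolve<CompanyRepository>() would wire up the whole graph:

namespace ClassLibrary1
{
    public class CompanyRepository
    {
        private readonly UserRepository userRepository;

        // the dependency is simply declared; the container supplies it
        public CompanyRepository(UserRepository userRepository)
        {
            this.userRepository = userRepository;
        }

        public void DeleteCompanyUser(int userId)
        {
            // illustrative only - delegate to the chained dependency
            this.userRepository.Delete(userId);
        }
    }
}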

Conclusion

Refactoring from service locator code to real dependency injection code is very easy, if not trivial, but it does require a small “bing!” in your head to realize it.

After switching to this practice, my tests got much lighter, faster, and easier to read and maintain.

Radenko, I hope I answered your question :)

Here’s the sample code I used in today’s blog post.

16Oct/09

Inversion Of Control, Single Responsibility Principle and Nikola’s laws of dependency injection

Today I stumbled upon a Stack Overflow question regarding using DI frameworks for classes with many dependencies, where the given example and a couple of the answers reminded me of a subject I have wanted to post about for a long time.

So the example from the question basically goes like this:

MyClass(ILog log, IAudit audit, IPermissions permissions, IApplicationSettings settings)

// ... versus ...

ILog log = DIContainer.Get<ILog>();

And the 3 questions related to this example are:

  1. How to avoid passing those common but uninteresting dependency interfaces to every class?
  2. How do you approach dependencies that might be used, but may be expensive to create?
  3. How to build the object graph of MyClass, considering the fact that any of those dependencies can have their own dependencies, etc.?

So, let’s start and keep it short…

What is fundamentally wrong with this example?

Being a big proponent of DDD principles as ones leading to clean and maintainable design, I find the design of the question’s example wrong in the sense that an entity (which MyClass is) should NOT have dependencies on infrastructure (or any other) services.

Related to that is Nikola’s 1st law of IoC

Store in IoC container only services. Do not store any entities.

I strongly believe that if the author of the example had been following that principle, he wouldn’t have been in the position to ask the question he did.

The second thing I have learned over the past years to detect as wrong in code like the example is the high number of dependencies in the constructor, which in my experience usually points to SRP (Single Responsibility Principle) violations and low SoC (Separation of Concerns) in the code. The example doesn’t show the code of MyClass, but (based on my experience) I am pretty sure it could be broken into a couple of coherent, SRP-compliant classes where each one of them would have very few (if any) dependencies.

Related to that is Nikola’s 2nd law of IoC

"Any class having more then 3 dependencies should be questioned for SRP violation"

Basically all the answers to the question agree with me that the second approach is worse, because burying the dependencies a class has prevents effective black box unit testing and makes refactoring harder. I have already blogged about transparent vs. opaque DI, so I’ll skip beating the dead horse here.

The thing I do want to discuss here are the answers recommending, in general, replacing the first case with a single dependency on IServiceLocator/IContainer.

IServiceLocator/IContainer injection doesn’t make a lot of sense in general

There is no real advantage to using the injected IServiceLocator vs. the DIContainer from the second solution. I mean, we would be decoupled from the specific IoC container, but the problems of low testability and hard refactoring would stay, due to the same opaque nature of the dependencies the class has. In other words, from my point of view, using a singleton service locator in this sense is even better than an injected IServiceLocator (same set of problems, one parameter less in the constructor).

The only exception to this rule is the case when we have multiple components mapped to a service with different keys (the IoC version of a Strategy pattern implementation); there is no way to inject the one with a given key (as long as they share the same interface), so injecting IContainer in that case is acceptable.

Setter type of dependency injection shouldn’t be used

The second type of answer proposes not using constructor dependency injection but the setter type of DI instead (I blogged about the differences a long time ago). A couple of years ago I was a fan of the setter type for the same reasons (it removes the constructor noise), but the set of problems I faced in the real world related to it convinced me that it is a much worse choice than the constructor type. The main reasons behind that opinion are that the setter type of DI makes dependencies opaque again and (in this case even more importantly) results in the creation of public API members whose only purpose is to satisfy infrastructure needs, which (I believe) is deeply wrong. No API members should be created just to satisfy infrastructure and/or unit testing needs.
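A quick illustration with hypothetical classes (the AuditService names are made up; ILog stands in for any dependency from the question’s example):

public interface ILog
{
    void Write(string message);
}

// setter injection: the dependency is opaque, and the Log property becomes
// part of the public API only to satisfy the infrastructure
public class AuditServiceWithSetter
{
    public ILog Log { get; set; }
}

// constructor injection: the dependency is transparent at construction time
// and stays hidden from the public API afterwards
public class AuditServiceWithCtor
{
    private readonly ILog log;

    public AuditServiceWithCtor(ILog log)
    {
        this.log = log;
    }
}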

Nikola’s 3rd law of IoC

Every dependency of the class has to be presented in a transparent manner in a class constructor.

Factories and IoC

The question contains the thought of injecting a factory interface, which also reminded me of a discussion I had with one of my colleagues regarding the usage of factories in certain scenarios (as proposed in The Art of Unit Testing – go buy that book in case you haven’t done so already).

IMO, using factories together with IoC doesn’t make a lot of sense, because the IoC container is, in a sense, a “universal abstract factory”.

In other words, in my experience, any time I thought about adding a factory I ended up with a much simpler IoC based solution, so that’s why I would dare to say that “IoC killed the Factory star”.

Q2: How do you approach dependencies that might be used, but may be expensive to create?

Very simple: don’t create them to be like that. A constructor of a class being resolved from an IoC container should be as light as possible, just defining (if any) its own dependencies. Any class initialization or implementation shouldn’t be implicitly triggered from the constructor, but instead explicitly, by invoking a specific member on the instance resolved from the container. That’s how resolving all of them can be done without any significant performance issues.
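In code, the guideline could look like this (a hypothetical ReportingService, reusing the ILog interface from the snippet above):

public class ReportingService
{
    private readonly ILog log;

    // cheap constructor: only stores its dependencies (law #4)
    public ReportingService(ILog log)
    {
        this.log = log;
    }

    // the expensive work is explicit, invoked by whoever resolved the instance
    public void Initialize()
    {
        this.log.Write("Loading report templates...");
        // ... open connections, load templates, etc.
    }
}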

Nikola’s 4th law of IoC

Every constructor of a class being resolved should not have any implementation other than accepting a set of its own dependencies.

Q3: How to build the object graph of MyClass?

Without going into the details of how to do this with a given framework (which others did in the SO answers, and which I covered for Unity here), I would like to emphasize again the need for moving the mapping definition and resolution from the UI elements (as proposed as an option in the question) to a dedicated application bootstrapper class/component whose sole responsibility (implementing the Builder design pattern) would be to define the mappings in a single place, or to orchestrate other component bootstrappers.

At the beginning of the application life cycle, the bootstrapper builds up the dependencies, removing (ideally) the need for other parts of the code base to be aware of the IoC container.

(More about the Builder pattern in Jeremy’s dusty but still good blog post)

Nikola’s 5th law of IoC

IoC container should be explicitly used only in Bootstrapper. Any other “IoC enabled” code (including the unit tests) should be completely agnostic about the existence of the IoC container.

Appendix

So here we go, my dear reader – my first blog post in a while. I decided to fight my Twitter addiction, which sucked up all of my blogging energy, and to start writing down (in my awful English) the thoughts and experiences I have collected in the last year, so stay tuned for a bunch of very diverse and (hopefully) amusing blog posts.

26Aug/09

And the winner is…

Bloggers: Win a free place for Roy Osherove’s TDD Masterclass (worth £2395!)

Roy Osherove is giving a hands-on TDD Masterclass in the UK, September 21-25. Roy is the author of "The Art of Unit Testing" (http://www.artofunittesting.com/), a leading TDD & unit testing book; he maintains a blog at http://iserializable.com (which among other things has critiqued tests written by Microsoft for asp.net MVC - check out the testreviews category) and has recently been on the Scott Hanselman podcast (http://bit.ly/psgYO), where he educated Scott on best practices in unit testing techniques. For further insight into Roy's style, be sure to also check out Roy's talk at the recent Norwegian Developer's Conference (http://bit.ly/NuJVa).

Full Details here: http://bbits.co.uk/tddmasterclass

bbits are holding a raffle for a free ticket for the event. To be eligible to win the ticket (worth £2395!) you MUST paste this text, including all links, into your blog and email Ian@bbits.co.uk with the url to the blog entry. The draw will be made on September 1st and the winner informed by email and on bbits.co.uk/blog

18Apr/09

Prism (CAL) unit testing – How to test Prism (CAL) Event Aggregator using Rhino Mocks

I spent some time recently working with the Microsoft Composite Application Guidance (a.k.a. “Prism”, “CAL”) and I think it is a very good platform for building composite UIs using either WPF or Silverlight.

One of its greatest advantages is that it was developed in an open source manner, which resulted in most of the community feedback being incorporated into a lightweight, testing friendly framework. The reference implementation and samples are also good, but they show only static stub based testing, which is OK but not as powerful as mocking with a mocking framework.

My mocking framework of choice is Rhino Mocks, and I am going to write a couple of simple blog posts showing how to test Prism code using Rhino Mocks in a couple of typical everyday scenarios.

And that leads us to today’s blog post…

How to test Prism (CAL) EventAggregator based code using Rhino Mocks?

The SUT I’ll be using today will be as simple as possible, to deliver the message.

LoggingService is a service responsible for handling the logging of system errors in such a way that it:

  • subscribes to system wide events which are requested to be logged, without referencing the event sources
  • decides if an event needs to be published (based on severity)
  • if it does, formats the system event, adding the time stamp etc.
  • calls the publishing service, which handles the publishing of the formatted event

Considering the fact that in my sample there would be various publishing services in the system, publishing to different targets (event log, flat text file), the LoggingService gets dependency-injected only a component implementing IPublishingService.

Subscribing to system wide events in a decoupled manner is possible through the Event Aggregator design pattern, whose implementation in Prism is provided through the IEventAggregator service.

LogErrorEvent is the event to which LoggingService subscribes, and it carries an event argument of the EventData type containing the ErrorLevel and ErrorMessage data.
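For completeness, here is a hypothetical publisher side (OrderModule is made up for illustration; LogErrorEvent and EventData are defined below), showing how decoupled the two ends are:

using Microsoft.Practices.Composite.Events;

namespace Example
{
    public class OrderModule
    {
        private readonly IEventAggregator eventAggregator;

        public OrderModule(IEventAggregator eventAggregator)
        {
            this.eventAggregator = eventAggregator;
        }

        public void ReportFailure()
        {
            // the publisher knows only the event type, not the subscribers
            this.eventAggregator
                .GetEvent<LogErrorEvent>()
                .Publish(new EventData { ErrorLevel = 200, ErrorMessage = "Order processing failed" });
        }
    }
}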

Show me the code

Enough of my blabbering (and my English), the code will speak for itself much better :)

The sample used in today's blog post can be downloaded here.

LoggingService

using System;
using System.Text;
using Microsoft.Practices.Composite.Events;

namespace Example
{
    public class LoggingService
    {
        private readonly IEventAggregator eventAggregator;
        private readonly IPublishingService publishingService;

        public LoggingService(IEventAggregator eventAggregator, IPublishingService publishingService)
        {
            this.eventAggregator = eventAggregator;
            this.publishingService = publishingService;
            this.eventAggregator
                .GetEvent<LogErrorEvent>()
                .Subscribe(this.LogIt);
        }

        private void LogIt(EventData eventData)
        {
            if (eventData.ErrorLevel<=100)
                return;

            var stringBuilder = new StringBuilder();
            stringBuilder
                .AppendFormat("Date:{0}", DateTime.Now)
                .AppendLine()
                .Append(eventData.ErrorMessage)
                .AppendLine();
            this.publishingService.PublishError(stringBuilder.ToString());
        }
    }
}

Nothing fancy there:

  • the constructor accepts two parameters which provide the LoggingService access to the event aggregator and the publishing service through inversion of control.
  • the event aggregator injected in the constructor is used to subscribe the LogIt method to the LogErrorEvent.
  • the LogIt method is private (this wouldn’t work in the case of Silverlight – but that is a separate blog post) and does the following things:
    • makes sure that only events with a level greater than 100 get published
    • formats the given error message into the appropriate format
    • passes the formatted message to the publishing service

 

IPublishingService, LogErrorEvent and EventData

namespace Example
{
    public interface IPublishingService
    {
        void PublishError(string errorMessage);
    }
}

using Microsoft.Practices.Composite.Presentation.Events;

namespace Example
{
    public class LogErrorEvent : CompositePresentationEvent<EventData>
    {
        
    }
}

namespace Example
{
    public class EventData
    {
        public int ErrorLevel { get; set; }
        public string ErrorMessage { get; set; }
    }
}

No need to waste time commenting this…

Testing the code

I could have done this test-first etc., but I believe that would obfuscate the point of this blog post, which, now that we have seen the code being tested, is going to be just showing the tests.

Test 1 – How to make sure that the event gets subscribed

        /// <summary>
        /// Shows how to verify that event aggregator subscription occurred.
        /// </summary>
        [TestMethod()]
        public void Ctor_Default_WouldSubscribeToLogErrorEvent()
        {
            // arrange
            var publishingServiceStub = MockRepository.GenerateStub<IPublishingService>();
            var eventAggregatorMock = MockRepository.GenerateStub<IEventAggregator>();
            var logErrorEvent = MockRepository.GenerateMock<LogErrorEvent>();

            // event aggregator get event would return mocked log error event 
            eventAggregatorMock.Stub(p => p.GetEvent<LogErrorEvent>()).Return(logErrorEvent);
            
            // expect that LogErrorEvent would be subscribed in constructor
            logErrorEvent
                .Expect(p => p.Subscribe(null))
                .Return(null)
                .IgnoreArguments() // we don't care which exact method or action subscribed, just that there was some.
                .Repeat.Once();

            // act
            var loggingService = new LoggingService(eventAggregatorMock, publishingServiceStub);

            // assert
            logErrorEvent.VerifyAllExpectations();
        }

The test uses the Rhino Mocks AAA syntax introduced in version 3.5 (if you don’t know it, read the excellent Rhino Mocks Documentation Wiki).

In the Arrange section the test:

  • defines two stubs for the services being used (stubs, because I don’t care to set any expectations related to them in this test)
  • defines the mock of the event whose subscription I am about to check
  • stubs the event aggregator behavior so that the GetEvent<LogErrorEvent>() method call returns the event mock I created
  • defines an expectation on that event mock that subscription will occur once (IgnoreArguments() is there because this test doesn’t really care which method exactly subscribes to the event; the test cares only that a subscription occurred)

In the Act section the test just constructs the service.

In the Assert section the test triggers the verification of the event mock expectations (which in this test were: someone subscribed to this event).

(Note that writing a test verifying that Publish occurred during the test would be pretty much the same as this one, with a change in the mocked expectations only.)

Test 2a – How to invoke the event aggregator in the Act test section

Sometimes there is a behavior we want to unit test which occurs upon an event being published through IEventAggregator, and because we may be using an anonymous delegate or a private method handling the event (the case in this blog post), there is no easy way to invoke the functionality we want to test.

This test shows how to invoke the event aggregator to publish the desired event, which triggers the code being tested.

In the case of this example, we want to test that non-severe errors (error level <= 100) do not get published.

        /// <summary>
        /// An example of how to trigger the event aggregator in the act section.
        /// </summary>
        [TestMethod()]
        public void LogIt_ErrorLevel100_WouldNotBePublished()
        {
            // arrange
            var logErrorEvent = new LogErrorEvent();
            var publishingServiceMock = MockRepository.GenerateMock<IPublishingService>();
            var eventAggregatorStub = MockRepository.GenerateStub<IEventAggregator>();

            eventAggregatorStub.Stub(p => p.GetEvent<LogErrorEvent>()).Return(logErrorEvent);
            
            // expect that publishing service would never be called
            publishingServiceMock
                .Expect(p => p.PublishError(Arg<string>.Is.Anything))
                .Repeat.Never();
            
            // act
            var loggingService = new LoggingService(eventAggregatorStub, publishingServiceMock);
            
            // invoke the event aggregator
            logErrorEvent.Publish(new EventData()
            {
                ErrorLevel = 100,
                ErrorMessage = "Some error message"
            });

           // assert
            publishingServiceMock.VerifyAllExpectations();
        }

In the Arrange section this test has:

  • an instance of the log error event (not a mock of that event like in the previous test example)
  • a mock of the publishing service (in the previous test we had a stub), because that is the service we need to check won’t be called
  • a stub of the event aggregator with a stubbed GetEvent<LogErrorEvent> method
  • an expectation that the PublishError method of the publishing service will never be called (regardless of the parameter being sent to that method)

In the Act section the test is:

  • constructing the logging service, injecting the event aggregator stub and the mock of the publishing service
  • invoking the Publish method on the log error event instance, passing the test data (error level == 100 in this test)

In the Assert section, we just trigger the check of the expectation defined on the mock of the publishing service (no call made to the PublishError method).

Test 2b – How to invoke the event aggregator in the Act test section

Although from the perspective of this blog post it doesn’t add a lot of value, here’s the test verifying that the publishing service will be called in case the event is published with an error level greater than 100. (The more Rhino Mocks examples we have on the web, the better the adoption rate will be :))

        /// <summary>
        /// An example of how to trigger the event aggregator in the act section.
        /// </summary>
        [TestMethod()]
        public void LogIt_ErrorLevelGreaterThen100_WouldBePublished()
        {
            // arrange
            var logErrorEvent = new LogErrorEvent();
            var publishingServiceMock = MockRepository.GenerateMock<IPublishingService>();
            var eventAggregatorStub = MockRepository.GenerateStub<IEventAggregator>();

            eventAggregatorStub.Stub(p => p.GetEvent<LogErrorEvent>()).Return(logErrorEvent);

            publishingServiceMock
                .Expect(p => p.PublishError(Arg<string>.Matches(param => param.Contains("Some error message"))))
                .Repeat.Once();

            // act
            var loggingService = new LoggingService(eventAggregatorStub, publishingServiceMock);
            logErrorEvent.Publish(new EventData()
            {
                ErrorLevel = 101,
                ErrorMessage = "Some error message"
            });
            // assert
            publishingServiceMock.VerifyAllExpectations();
        }

Almost the same sample as the previous one, just with two small differences:

  • In the arrange section, the expectation is that the PublishError method will be called once, with a method parameter containing the “Some error message” string (inline constraints)
  • In the act section, the event is published with an error level > 100 to trigger the positive case where the publishing service is triggered

Green is nice :)

[screenshot of the passing tests]

Conclusion

Thanks to the P&P team and the community feedback, the CAL/Prism event aggregator is implemented in such a way that mocking it is very easy (if not trivial), and every framework enabling easy testing is a good framework for me :) Good work P&P!
