.NET and me – Coding dreams since 1998!

22 Nov 2009

A couple of PDC 2009 thoughts too big to fit in 140 characters – praise to Microsoft

PDC 2009 is over, and what a conference that was… For the folks who were there: a free tablet, a ton of excitement experienced first hand and a lot of valuable social networking… For the rest of us who couldn’t attend, there’s a ton of prime-time learning material I am really looking forward to seeing.

In general, the amount of positive vibes from the community was so high that it even surprised the usual MS haters, so we had to wait a couple of days for their choir of attacks on Microsoft to start. I don’t work for Microsoft, I am not an MVP, in fact I don’t have any relationship with Microsoft, but I still find a bunch of the things I hear unfair considering how much investment we see from MS in the development space, so I wanted to speak in Microsoft’s favor in a very opinionated manner… I am not claiming the things in this post are universal truths – they just reflect my takes on various subjects related to the Microsoft development ecosystem.

The glass is half empty…

“Microsoft Silverlight 4 sucks because it is no longer cross-platform oriented”

Where it all started: http://www.theregister.co.uk/2009/11/20/silverlight_4_windows_bias/.

One opinion I agree with in general: http://blogs.silverarcade.com/silverlight-games-101/21/silverlight-is-silverlight-4-moving-away-from-cross-platform/

Here are a couple of thoughts of my own on that subject:

  • I never heard anyone from MS say that the SL 4 goodies won’t be supported on the Mac.
    Judging by the size of the Windows (4 MB) and Mac OS X (20+ MB) Silverlight installers, Silverlight on those two platforms is probably a different code base with the same API. Even if not, I can imagine that a custom Mac implementation of clipboard access, web cam support etc. could be done by coding against the Mac OS APIs.
  • Does Microsoft really need to spend its own money on Mac users?
    In my personal opinion not really, because of a couple of things:
    • The number of MacOS users is insignificant from a market share perspective, so the ROI is not as high as for investments made for Windows users.
    • Even with that low market share, investing in MacOS would only make sense if Mac zealots weren’t so brainwashed into hating everything coming from Microsoft. Just check out their comments on how they despise sites asking them to install “the Microsoft bloatware called Silverlight”, and how they would sooner cancel their Netflix subscription than install Silverlight to watch HD.
    • Before Windows 7, Microsoft maybe had to play the “works on Mac too” card to reduce the impact surface of the Apple ads, but now, with Windows 7 as a premier OS, they don’t have to. The tables have turned and IMHO it is now in Apple’s interest to support Silverlight on Macs, because in a couple of years it will be everywhere and their users will simply have to have it in order not to be cut off from most internet sites. I really believe that.
  • What about Linux?
    Well, Moonlight development is in Novell’s hands anyway, so it would be up to them to determine how quickly this comes. Considering that MonoTouch looks like their primary interest, I don’t think it will happen soon, which (for reasons similar to the Mac case – low market share, blind hate toward anything coming from Microsoft) is IMHO not a big deal.

"Microsoft Silverlight 4 would break the security model of Windows 7”

So, the pitch of this attack is that because Silverlight now provides a “full access” sandbox mode, we are basically back in the ActiveX era, where in order to run a web site (which you want) you have to grant elevated rights to that app. In other words, they say that everyone will just click “OK” and the gates of hell will be opened.

Here’s a good blog post summarizing what SL 4 full trust mode really means: http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2009/11/18/silverlight-4-rough-notes-trusted-applications.aspx

And here are a couple of thoughts of mine on this subject:

  • The biggest difference between COM and Silverlight is that a Silverlight application running in the browser is ALWAYS sandboxed. In order for the user to elevate the sandbox rights, the Silverlight application has to be INSTALLED on the desktop (which is the first safety switch not a lot of users will pass), and during installation there is a clear warning informing users about the implications.
  • It is “elevated”, not “full” trust. One example (as Mike explained in his blog) is that even in elevated trust you still cannot access just any folder/file on the user’s hard drive.
  • I have a feeling that the same guys who praised Adobe AIR for its Twitter clients (never questioning the attack surface AIR has due to ITS own full trust) and mocked Silverlight for a sandbox so secure it prevented developers from building cool apps are the same ones now complaining about the weakened sandbox.
  • Microsoft had to listen to loud community demands for cross-domain web service calls and to enable a premier full-screen experience for media players and kiosk applications.
  • Enabling access to COM in this trusted mode allows integration with a bunch of software and opens up some really interesting and productive implementation ideas. If you had that many applications of your own, including the prime-time Office applications, would you seriously pass on the chance of enabling their integration? I wouldn’t skip that chance for sure :)

“The Microsoft Developer/Designer workflow is just a myth and thus the XAML-instead-of-code approach is the wrong way to go”

Well, in a way I agree that right now most of the creative folks are working with Photoshop, Illustrator, Flash, CSS and HTML, and that quite a few of them would laugh if Blend/SL were mentioned to them as a career option. But that is understandable only in the short term.

There are a couple of reasons why I think that won’t be the case anymore in a year or two, when the demand for MS designers will grow:

  • Silverlight reached a 45% adoption rate in a very short time without real commitment and investment from the industry. That number will only grow with the continued investments and alliances Microsoft is making, so sooner rather than later the tipping point will be reached.
  • Silverlight code is the same .NET, which enables reuse of the skills .NET developers already have. In other words, I bet a good part of the millions of .NET developers will step into the RIA world in the coming years, which will improve the desirability of Silverlight for big companies.
  • Blend 1 sucked, Blend 2 was OK, Blend 3 was fine (with Photoshop/Illustrator import)… Watching the speed and the trend of how the RIA tools are improving in the MS space, one can expect that in a year or two Blend will become a very capable tool. If not by then, then in the next version. The key point is that behind Blend there is MS, with an endless supply of cash to pump into it on demand.

“Microsoft Silverlight is just the next Web Forms”

I agree with this one, because I too can see MS attempting to bring RIA web development to the masses the same way they did with desktop developers in 2002, when ASP.NET brought them into the web world. I agree, and I don’t see anything wrong with it. The whole stateless request/response pipeline was never meant to handle the RIA scenarios we have now, so abstracting it away with some API (which is what Web Forms did) is not really possible. Regardless of how many layers you put on top of the web, it is still the web down below, and sooner or later it will pop up.

That’s why I think MS did the right thing and broke up with the web completely. Now the web browser is just used to host a sandboxed plug-in, and the web is just used to bring the plug-in the data required for it to work. After that, it is a full desktop-model application that has nothing to do with the web. Yes, you heard me well – Silverlight for me is a desktop application deployed through the browser, and that is a good thing…

“There’s nothing done by Microsoft Silverlight which I cannot do with MVC/HTMLx/CSS/jQuery while respecting web standards at the same time”

Well, there are two types of web properties: web sites and web applications. I guess there is no need to spend a lot of time explaining how cost-ineffective it would be to build a web application using a JavaScript RIA approach. A Silverlight web application utilizing some of the concepts Prism and other frameworks offer allows multiple development teams to work effectively on the same site, with a clean separation of the concerns different teams and team members have in that process, utilizing server-side .NET skills and XAML designer skills. With a JavaScript implementation things take longer, cross-browser incompatibilities rear their ugly head sooner or later, there are performance problems with DOM size and different browsers, standard .NET developers are not very useful in client-side scripting, etc. It can be done, but I strongly believe performance, implementation time, maintenance and cost effectiveness would all be on Silverlight’s side.

As for standard web sites, until Silverlight solves the SEO problem I agree that jQuery is the better SEO solution for pages visible to prospective users. If your site has a portion requiring authorization, then SEO there is not so important, because crawlers won’t be able to access it anyway.

“There’s too much magic in RIA Services – it is just another demoware framework”

I looked at RIA Services in its early preview version at the beginning of 2009, and sure, there is something in the code-gen based approach a lot of us would find “not pure” (things might have changed since then). So what? I can see a lot of normal-sized web sites benefiting a lot from it. Even my beloved NHibernate works with RIA Services. The way I see it, they gave us as an option a bunch of prebuilt code we would need to build anyhow in almost every web site/application we do. For a lot of folks, trading total control over your code base for a productivity boost is a perfectly valid option. At the end of the day, if you don’t like it – don’t use it: do it yourself or buy some other framework like IdeaBlade etc. :)

What could be wrong with a free lunch?

The glass is half full…

“Microsoft listens to us”

In case you haven’t seen it already, go check out http://silverlight.uservoice.com/pages/4325-feature-suggestions where users voted for the features they wanted included in Silverlight 4. MOST OF THEM ARE INCLUDED. We asked, Microsoft responded. Thank you Microsoft!

“Silverlight won’t die.”

Most of the people I follow or whose blogs I read kind of gravitate toward the feeling that Silverlight is destined to fail, that it doesn’t have a chance against Google-backed HTML 5, that JavaScript is more than sufficient, that compared to Flash SL is just a toy…

I admit that recently I started having doubts about my decision to bet my career on Silverlight/WPF based presentation layers, and even started questioning that decision (I bought a couple of jQuery books :))

After PDC 2009 I don’t have any doubts anymore, and that is totally unrelated to the fabulous presentation Scott Gu did – we all expected that anyhow, so it doesn’t count. Much more important to me is the fact that almost every slide in Ray Ozzie’s day 1 keynote had something related to Silverlight. As soon as I heard him talk about the “three screens” vision, I was sure that Microsoft is betting its future on Silverlight and that they won’t step back and ditch it down the road. And that makes all the sense in the world, and makes me smile.

Another important thing coming out of Ray’s vision of three screens driven by Silverlight is that I won’t be spending time on MonoTouch and/or other iPhone related technologies. What I am going to do instead is simply wait for Windows Mobile 7, which I’m sure (based on the recent great designs coming out of the MS kitchen – Zune HD being the latest example) will be cool looking and rocking on the hardware side (based on Win 7), so I expect to see it alive and well alongside the Android and iPhone platforms. If that happens, I’ll just reuse my WPF/Silverlight skills and start developing for WiMo 7. Another thing I hope to finally be able to do on the WiMo 7 ship date is to replace my iPhone with a WiMo phone and say goodbye to the Apple proprietary things.

(I’ll still read the jQuery books, but I am not going to follow/read jQuery related RSS feeds.)

“WPF is not dead.”

A lot of folks (me included) questioned Microsoft’s commitment to WPF after seeing the state of SL 4.

I don’t think WPF is going anywhere, for the following reasons:

  • Much better development experience in WPF than in Silverlight (debugging, tools etc.)
  • Visual Studio 2010 is a WPF application, which shows the strong commitment Microsoft has toward further enhancing WPF.
  • This might sound weird: WPF has a bigger user base/adoption than Silverlight.
    Silverlight is at 45%, WPF is at 90% (every Windows XP SP2, Vista, Win 7 and Server 2008 machine has WPF).
    In case you care about the Windows platform only (like me), WPF is the perfect platform to build upon.
  • WPF integrates with OS features (jump lists, progress bars etc.), can use the Sync Framework, has direct DB access (for corporate intranets) etc.
  • The size of the .NET Framework 4.0 Client Profile is just 30 MB (for the 32-bit version), which is really not a big deal to download in 2009. In other words, on a PC without .NET at all, a 30 MB download makes the computer fully capable of executing your app.
    More details here: http://blogs.msdn.com/jgoldb/archive/2009/10/19/what-s-new-in-net-framework-4-client-profile-beta-2.aspx

“Entity Framework is a very viable option”

We all remember the EF vote of no confidence, which pinpointed the reasons why Entity Framework 1 was not a good option for developers: lack of POCO support, DB-oriented modeling only, etc., etc… Just a year later we have EF 4, which (judging by the PDC 09 Entity Framework session) looks like it fixes them all. They are even working on the Code Only (“Fluent NHibernate for EF”) feature.

I’m pretty sure that gurus like Ayende would find, in less than 10 seconds, 27 important NHibernate features missing in Entity Framework, but for me picking a technology to invest my time in is (among other things) based on the highest perceived ROI on my income. I had a similar dilemma in 1995 when I had to pick between Microsoft Visual Basic 3.0 and Borland Delphi 1.0, and I picked Microsoft VB 3.0. Why? Not because I thought it was better, but because of three things:

  • I estimated that demand for VB in the upcoming period would be much higher than for Delphi
  • I estimated that Borland was more likely to lose that fight with Microsoft (who has endless cash reserves)
  • I estimated that the amount of knowledge (books, articles, dev community etc.) would be much bigger and easier to get in VB’s case

I can ask almost the same questions today about NHibernate vs. Microsoft EF4, and with a lot of sadness (I really love NHibernate) I have to conclude that EF is the way to go for me, because:

  • In version 4, EF is at least a “good enough” alternative: POCO and model-first are possible.
  • It looks like EF is Microsoft’s long-term data strategy, which means knowledge of it will be very valuable in MS-oriented development shops.
  • The tooling is OK (I don’t buy the “designer crawls when you have hundreds of entities” argument, because thanks to bounded contexts I never have them all in one model, and it is nice to do all of your development without leaving VS 2010).
  • There are a lot of technologies building on top of EF (ADO.NET Data Services, RIA Services, Azure).
  • There is already some really good book/blog/video material for EF, while for NHibernate there are not that many sources of knowledge. Microsoft will for sure outperform NHibernate in this area many times over and directly impact the level of adoption.
  • Most important: it will get better and better with every release. (If they got this far in just a year, where will they be next year, and the year after that?)

For the last couple of months I’ve been working on my own personal pet project based on the Fluent NHibernate <–> NHibernate combo, but I am probably going to migrate it to EF4 now. If nothing else, I will at least get a better understanding of how it really stands today against NHibernate based development.

Conclusion

These days I am really happy being a developer in the Microsoft world, seeing so many initiatives, so much innovation and energy on Microsoft’s side. I have a feeling that Microsoft was somehow sleeping until a year ago, when it woke up and started doing great things again (Bing, Natal, Courier, VS 2010, Windows 7, Silverlight 4, Zune HD etc.). No more easy points for Apple ads, and no more easy points for the haters on the things Microsoft was honestly doing badly, mainly due to ignoring community feedback.

Good work Microsoft!

4 Nov 2009

Fluent NHibernate samples – Auto mapping (Part 1/2)

In my previous blog post I announced the sample solution with which I am trying to provide code samples for the very comprehensive documentation which can be found at http://fluentnhibernate.org/.

The project is hosted on CodePlex (http://fnhsamples.codeplex.com/). Right now it contains just the small sample I will use in this blog post, but in the future I intend to grow it until it covers most of the interesting features FNH offers.

The purpose of the project is just to demo how Fluent NHibernate mappings work so you can quickly “get them”, and NOT:

  • to teach you how NHibernate works (buy this book for that)
  • to teach you in detail about Fluent NHibernate (go to http://www.fluentnhibernate.org for that)
  • to teach best practices in abstracting NHibernate dependencies (you can find that here)
  • to teach best practices in domain modeling etc.

I am also by no means an expert in either NHibernate or Fluent NHibernate. In fact I am just a grunt like many of you reading this blog post, someone who was searching for a similar blog post while banging his head against the wall trying to learn how to use it (1+ year ago there were very few docs), so take EVERYTHING I say with a grain of salt.

I am also aware of the state of my English, but I am really doing my best, and if anyone wants to help edit the blog post I will put that version on CodePlex :)

So, let’s start…

An example of Fluent NHibernate auto mapping in action

So, here’s the domain model I’ll be using in this blog post to show a couple of FNH auto mapping aspects:

(image: domain model class diagram)

And here’s the database model which will be created once the sample is executed, based on the Fluent NHibernate auto mapping conventions:

(image: resulting database model diagram)

(For more details about the domain design, check the use case description in the previous blog post.)

Convention over configuration

FNH auto mapping works by applying convention over configuration, which I usually try to explain like this…

“There is a set of rules which will be applied to your domain while it is being mapped to the database model.

Here are a couple of examples of such rules/conventions (see the sketch right after this quote):

  • A database table will be named using the plural form of the entity name.
  • The primary key of every table will be named following the rule “entity name” + “ID”.

Fluent NHibernate comes with a default set of rules which you can customize to your own preferences in a very cool and easy way.

Sounds simple? It is THAT simple.”
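For example, the second rule above can be expressed as a convention roughly like this. This is just a sketch of mine (it is not part of the sample project), assuming FNH 1.0’s IIdConvention / IIdentityInstance API:

namespace Vuscode.Framework.NHibernate.Conventions
{
    using FluentNHibernate.Conventions;
    using FluentNHibernate.Conventions.Instances;

    // Sketch: name every primary key column "entity name" + "ID"
    public class PrimaryKeyConvention : IIdConvention
    {
        public void Apply(IIdentityInstance instance)
        {
            instance.Column(instance.EntityType.Name + "ID");
        }
    }
}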

Let’s see quickly how the Fluent NHibernate magic happens.

How does the Fluent NHibernate auto mapping magic work (an explanation for the rest of us)?

I won’t go deep into the details (for that go to http://fluentnhibernate.org or “How FNH works?”), but in general it is based on two design patterns: Proxy and Visitor.

If you are not familiar with those patterns, here’s a simple way to get them in the context of the Fluent NHibernate magic.

The role of the Proxy design pattern in Fluent NHibernate

If you check out some of the entities in the Vuscode.FNHSamples.Domain project, you will see that all of them are non-sealed classes with all of their members virtual… So, code like this:

namespace Vuscode.FNHSamples.Domain
{
    public class Address
    {
        public virtual string Email { get; set; }

        public virtual string City { get; set; }
        
        public virtual string Country { get; set; }
    }
}

The reason we HAVE TO respect these rules when designing our domain classes is that FNH uses Castle DynamicProxy functionality to (in an overly simplified version) create an “in memory” proxy child class, inheriting from the real one and overriding its properties in order to intercept their calls.

In this example, therefore, inside the FNH engine an “AddressProxy” class would be created which inherits from the Address class but also implements additional interfaces and overrides members, effectively adding behaviors and shape to the original Address class at run time. That’s how POCO can be achieved: we don’t need any attributes, no special base class or base interface etc.
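To make that a bit more concrete, here is a rough hand-written approximation of what such a generated child class conceptually looks like. This is illustration only – the real proxy is emitted at run time and looks nothing like this:

namespace Vuscode.FNHSamples.Domain
{
    // Conceptual sketch: the proxy can only add behavior because Address is
    // not sealed and its members are virtual.
    public class AddressProxy : Address // plus additional framework interfaces
    {
        public override string City
        {
            get { /* interception hook runs here */ return base.City; }
            set { /* interception hook runs here */ base.City = value; }
        }

        // Email and Country are overridden the same way
    }
}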

Now, in order to understand why FNH needs this, we need to take a quick peek at how the FNH “mapping rules” (properly called conventions) are implemented.

How are the Fluent NHibernate conventions implemented?

Always in the same way (thanks to the brilliant work of the FNH authors): you implement a special interface which has a method accepting a parameter of a certain interface type.

    public class ClassConvention : IClassConvention 
    {
        public void Apply(IClassInstance instance)
        {
            // do something with instance
        }
    }

The reason our convention has to implement a certain interface is that at run time Fluent NHibernate iterates over all of the types of a given assembly and collects “all of the types implementing IClassConvention”, thus “collecting the rules”.
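A rough illustration of what that “collecting the rules” step boils down to – this is my own sketch using plain reflection (it assumes the System and System.Linq namespaces), not FNH’s actual code:

// scan the assembly for every non-abstract type implementing IClassConvention
// and instantiate each one - the resulting list is the "collection of rules"
var conventions = typeof(ClassConvention).Assembly
    .GetTypes()
    .Where(t => typeof(IClassConvention).IsAssignableFrom(t)
                && !t.IsInterface && !t.IsAbstract)
    .Select(t => (IClassConvention)Activator.CreateInstance(t))
    .ToList();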

The reason an IClassInstance parameter is passed is that the Apply method gets SOMETHING implementing that interface, without knowing/caring what that something really is.

This design approach, where you enable your class to work with various entities without coupling to their concrete implementations, can roughly be called the Visitor pattern.

If I again use the same example of the Address and “AddressProxy” classes, imagine that the AddressProxy class created on the fly implements the IClassInstance interface. Wouldn’t that enable us to pass an instance of the AddressProxy class to the ClassConvention Apply method so it could perform its functionality on it? :)

The end result of those two patterns is that an instance representing the Address class ends up inside the convention’s Apply method without us adding anything to the domain other than the virtual keywords on the properties.

Iterating, iterating, iterating…

During the runtime mapping process, FNH (overly simplified) iterates over all of the types and creates their proxies.

Every proxy gets certain interfaces added which define methods allowing the “outer world” to alter the state of the proxy.
Fluent NHibernate (or the user) then creates a collection of conventions defining the rules for how certain aspects of the proxies should be altered.

Now the code iterates over every proxy, and for every proxy it iterates over all of the conventions and applies the ones matching that proxy.

As the end result of that iteration, we get all of the proxies with their state set up properly.

Then FNH iterates over all of the proxies again and translates the state of each one of them into the XML representation NHibernate requires.
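In pseudo-C#, the flow described above looks roughly like this (my own sketch, not FNH internals – CanApplyTo is a hypothetical matching check):

foreach (var proxy in proxies)                // one proxy per mapped domain type
{
    foreach (var convention in conventions)   // the rules collected earlier
    {
        if (convention.CanApplyTo(proxy))     // hypothetical matching check
        {
            convention.Apply(proxy);          // alter the proxy's mapping state
        }
    }
}

// finally, each proxy's accumulated state is written out as the XML mapping
// that NHibernate itself understands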

Brilliant, isn’t it?

Back to reality

Now that I have (hopefully) explained how FNH works in layman’s terms, we can go over the conventions I have in my sample and explain how each of them relates to certain pieces of the sample.

Class Convention

This is the convention which tells Fluent NHibernate how to map entities to database tables.

I have only one rule related to that: “A table name should be the plural form of the entity name”, so the code doing that is pretty simple:

namespace Vuscode.Framework.NHibernate.Conventions
{
    using FluentNHibernate.Conventions;
    using FluentNHibernate.Conventions.Instances;

    public class ClassConvention : IClassConvention 
    {
        public void Apply(IClassInstance instance)
        {
            instance.Table(Inflector.Pluralize(instance.EntityType.Name));
        }
    }
}

IClassInstance has two key members (key in the sense of this sample):

  • EntityType – provides access to the type being mapped to a database table
  • Table(name) – a method which sets the table name.

(To explore the capabilities outside of my sample, use IntelliSense, which in the case of fluent interfaces is your best friend.)

Inflector is just a helper class I took from the Castle.ActiveRecord project (a long time ago) whose purpose is to get the plural form of a given string. I also put it in the sample project so you can check it out if you wish.

So, now that we know all of the pieces, we can read the implementation inside of the Apply method like this:

“Make sure that whatever proxy is sent in, it gets mapped to a data table whose name is the plural form of the original class the proxy was created from.”

(Since every convention is implemented in the same manner – implement an interface and get something injected – I’ll skip repeating that for the other conventions.)


DefaultStringPropertyConvention

This is the convention which tells how string properties of a class should be mapped to database columns.

My rules are simple: a default length of 100, and every string can be null.

namespace Vuscode.Framework.NHibernate.Conventions
{
    using FluentNHibernate.Conventions;
    using FluentNHibernate.Conventions.Instances;

    public class DefaultStringPropertyConvention : IPropertyConvention
    {
        public void Apply(IPropertyInstance instance)
        {
            instance.Length(100);
            instance.Nullable();
        }
    }
}

Foreign Key Convention

This is the convention which defines how Fluent NHibernate should behave while mapping association properties to foreign keys.

My rule is simple: “The name of the foreign key is the name of the table being referenced + an ID suffix.”

Here’s the code from the sample:

using System;

using FluentNHibernate.Conventions;
using System.Reflection;

namespace Vuscode.Framework.NHibernate.Conventions
{
    public class CustomForeignKeyConvention : ForeignKeyConvention
    {
        protected override string GetKeyName(PropertyInfo property, Type type)
        {
            return property == null 
                    ? type.Name + "ID" 
                    : property.Name + "ID";
        }
    }
}

As you can see, the code is not an exact translation of my rule, so it needs some additional clarification…

The GetKeyName method accepts two parameters:

  • property (holding a pointer to the property in the entity which references the “parent” entity)
  • type (holding a pointer to the “parent” entity type)

If we check out the resulting DB diagram

(image: DB diagram – Blogs referencing BlogRolls)

and how the domain implementation of Blog looks:

using System.Collections.Generic;

namespace Vuscode.FNHSamples.Domain
{
    public class Blog : Entity
    {
        public virtual Author Author { get; set;} // component

        public virtual BlogRoll Roll { get; set; } // references
        
        public virtual IList<Post> Posts { get; set; } // has many

        public virtual string BlogTitle { get; set; }

    }
}

We can clearly see that because Blog has a Roll property pointing to the parent BlogRoll, FNH took that property name + “ID” and created the RollID foreign key in the Blogs data table. So, that explains how the FK convention works when the property passed in is non-null, and it leaves us with the question “how come that value can be null?”

To answer that, let’s check out the other part of the resulting diagram, representing the DB interpretation of the Author class inheritance:

(image: DB diagram – GuestAuthors and RegularAuthors tables referencing Authors)

As you can see, GuestAuthor and RegularAuthor have their foreign keys named “AuthorID” even though neither of them has an explicit “parent” property like in the previous case.

In other words, there are no GuestAuthor.Author or RegularAuthor.Author properties, but we still need to define FKs in their tables.

In this type of case, the ForeignKeyConvention gets a null PropertyInfo value (because there is no property) and a pointer to the entity type to be used for defining the database FK (in this example Author). That’s how in my sample the foreign keys of those tables become AuthorID.

Many to Many convention

This is the convention which defines how Fluent NHibernate should behave while mapping N – N relationships, where class A has a collection property of class B’s type and class B has a collection property of class A’s type.

Here are the Post and Category classes from Vuscode.FNHSamples.Domain:

namespace Vuscode.FNHSamples.Domain
{
    using System.Collections.Generic;

    public class Post : Entity
    {
        public virtual IList<Category> Categories { get; set; }  // has many to many

        public virtual string Title { get; set; }

        public virtual PostStatus Status { get; set; }

    }
}

A Post can have many Categories

namespace Vuscode.FNHSamples.Domain
{
    using System.Collections.Generic;

    public class Category : Entity
    {
        public virtual IList<Post> Posts { get; set; } // has many to many

        public virtual string Name { get; set; }
    }
}

One Category can be used in many Posts.

I prefer mapping many-to-many relationships using the Association Table Mapping pattern, where an additional table is created with two foreign key columns matching the primary keys of the associated tables.

Here’s how it looks in the resulting database model, with the PostsToCategories table being the association table:

(image: DB diagram with the PostsToCategories association table)

I guess after seeing the diagram, the many-to-many convention I have is quite obvious:

The name of the association table should be “TableNameA” + “To” + “TableNameB”.

How NOT to implement the many-to-many relationship

As you can read about in great detail here, many-to-many is in general a combination of two 1 – N relationships, which, if it were implemented like this:

namespace Vuscode.Framework.NHibernate.Conventions
{
    using FluentNHibernate.Conventions;
    using FluentNHibernate.Conventions.Instances;

    public class ManyToManyConvention : IHasManyToManyConvention
    {
        public void Apply(IManyToManyCollectionInstance instance)
        {
            instance.Table(
            	string.Format("{0}To{1}",
                        Inflector.Pluralize(instance.EntityType.Name),
                        Inflector.Pluralize(instance.ChildType.Name))
		);
        }
    }
}

would result in the following database diagram:

(image: DB diagram with two separate association tables, one per direction)

So, we have two association tables, each of them covering its own direction of the relationship.

Clearly this is overkill from a DB perspective, because the same table can be used for both directions.

In NHibernate parlance that is achieved using the Inverse relation type (again, you can read about it in great detail here), which in layman’s terms can be explained as: “when you need to map the Post –> Categories relation, please use the already defined Categories –> Posts relation and just invert it”.

And that’s exactly what the actual implementation in my code sample does.

Let’s see how this is implemented in the sample code:

namespace Vuscode.Framework.NHibernate.Conventions
{
    using FluentNHibernate.Conventions;
    using FluentNHibernate.Conventions.Instances;

    public class ManyToManyConvention : IHasManyToManyConvention
    {
        public void Apply(IManyToManyCollectionInstance instance)
        {
            if (instance.OtherSide == null)
            {
                instance.Table(
                    string.Format(
                        "{0}To{1}",
                        Inflector.Pluralize(instance.EntityType.Name),
                        Inflector.Pluralize(instance.ChildType.Name)));
            }
            else
            {
                instance.Inverse();
            }
        }
    }
}

instance.OtherSide is null when there is no already defined relationship between EntityType and ChildType. In that case the code uses the Table method to set the name of the association table, respecting my naming convention of TableA name + “To” + TableB name. That first branch results in the PostsToCategories association table being created.

FNH then keeps iterating (as described above), and eventually the same relationship gets processed from the other side. When that happens, instance.OtherSide is NOT null, so instead of creating a new association table FNH maps that relation as the inverse of the original one.

Conclusion

This blog post became really long, so I have decided to split it into two posts. In the next post I’ll show a few more conventions and the fluent configuration process with the switches I’ve been using in my sample.

Stay tuned :)

3 Nov 2009

Fluent NHibernate Samples on CodePlex

I’ve been using Fluent NHibernate for more than a year now and I am a big fan of it.

There were only two things bothering me about FNH during that 1+ year:

  1. frequent API changes (which pretty quickly made my fluent mapping and auto mapping blog posts completely obsolete – not to mention my code :)), but when I saw how polished FNH got in version 1.0 I stopped minding – simply beautiful code.
  2. lack of documentation. Don’t get me wrong – http://groups.google.com/group/fluent-nhibernate was REALLY useful most of the time, but I was always dreaming about something like the current Fluent NHibernate wiki, which is simply an awesome concentrated amount of useful data.

So, to me there are no more problems left with FNH (besides a few minor bugs no one cares to answer/fix), but in the last couple of days I accidentally found out (here on the FNH mailing list and here on Stack Overflow) that a lot of folks would like to see sample code working with Fluent NHibernate 1.0 (besides the wiki etc.).

That’s why I decided to carve out some of my time today and create a sample project illustrating both fluent and auto mapping on the same example, and that’s how I came up with the

CodePlex project – Fluent NHibernate samples

The intention of the project is to initially focus on a single sample illustrating all of the major use cases in a “real world” manner, and then to slowly grow so the sample starts covering more and more corner cases. Hopefully in the long term it will become a C# solution illustrating in one place all of the major FNH usage aspects.

The project can be found at http://fnhsamples.codeplex.com/, together with the source code downloadable as a zip or via SVN checkout.

Starting solution

So, in order to start the project, and based on the questions I usually hear regarding auto mapping, I’ve come up with this simple domain model:

(image: domain model class diagram)

So the domain covers an imaginary blog engine where:

  • a blog roll is a group of blogs (e.g. codebetter.com),
  • a blog contains one or many blog posts, where every blog post can have one or many categories,
  • a blog is owned by a single author, who can be a regular author (that blog is his primary blog) or a guest (who cross-posts to this blog).

As you can see, in this very simple domain I have cases of:

  • References (N – 1 relation) where many Blogs belong to one BlogRoll (in this sample Blog mimics an aggregate root)
  • HasOne (1 – 1 relation) where an author can have only one blog in a blog roll and all blog posts of a blog are written by that single author only
  • Component where an author has a complex Address value property of Address type which (since it is not an entity) we would like to map to the same DB table as Author
  • Subclass (inheritance) where GuestAuthor and RegularAuthor are children of the abstract Author, each of them having its own custom properties
  • HasMany (1 – N relation) where a blog has one or many blog posts
  • ManyToMany (N – N relation) where a post can be tagged with many categories and every category can be used in many posts
  • Enumeration – where a post has a status column with an enumerated value
  • All entities share the same Entity base class which (contrary to the Author entity) is not supposed to be mapped to a separate table

Note: I know that this domain model is far from perfect in a modeling sense, but that is not the point here. The point is having a meaningful sample which can be used to showcase all the Fluent NHibernate magic.

End result

Here’s how the DB model looks when created using both fluent and auto mappings:

(image: resulting DB model diagram)

As you can see, the DB diagram pretty much matches the domain model, and all that (in the case of auto mapping) just by utilizing conventions, without any manually defined mappings.

What next?

For the folks curious to check it out now, go to http://fnhsamples.codeplex.com/ where you can download the source, currently containing only the auto mapping based solution. I guess it is so simple that most of us can get it just by looking at the small code base there.

For the rest of you, my next blog post will present the fluent mapping based solution for getting from the model to the DB diagram, and after that the auto mapping based solution. In those two blog posts I will comment “line by line”, so even people new to Fluent NHibernate will (I hope) get it.

After those two blog posts, I will have to switch gears and (finally) spit out a couple of things I came up with while doing Prism development.

I hope by that time there will be some suggestions about what needs to be added to the sample, so it can start growing from its initially trivial size.

2 Nov 2009

Me, myself and design patterns

Looks like the Silverlight related posts I’ve announced will have to wait one more blog post, due to an event which happened to me today and got me thinking about a few architecture related things, which in the end resulted in a few (at least for me) interesting thoughts I felt like sharing with the community.

The event

So, a fellow blogger, @RadenkoZec, made a nice blog post about the Facade design pattern, in whose comments we had a discussion about whether the example is appropriate or not, and whether the implementation is the Facade design pattern or some other pattern. I won’t repeat in detail what we discussed there (go to the blog post to read the comments), but there were a couple of things in that discussion I kept thinking about during the evening…

Patterns should be explored in 3D

It is very interesting that the sites on the net (like dotfactory.com) seem to be primarily focused on the GoF patterns, while I was not able to find sites covering PoEAA patterns and/or DDD patterns at the same time.

That fact might look irrelevant, but if you check out the content of today’s discussion you will see that the same example code used in the blog post

looked to Radenko like a Facade (GoF)

(image: FacadeDesignPattern.png)

and to me more like a Gateway (PoEAA)

(image: gatewaySketch)

If you just take a look at the above pictures, you will surely find at least some similarity, and that’s why one should look at all of the patterns as a whole and not be exclusive in picking “The Book”. Another example of this interpretation conflict can be found between PoEAA and DDD, where patterns such as repository, factory etc. sometimes have different usages, if not different meanings.

In other words, IMHO every developer should build up pattern knowledge in three dimensions:

  • X – DDD, PoEAA, GoF
  • Y – specific patterns, their implementations and implications; anti-patterns too
  • Z – time

For those of you probably asking what time has to do with this subject – here’s an answer. The software landscape has changed significantly in the last 20 years, so some of the “scriptures” should be taken with a grain of salt.

Here are a couple of examples illustrating my point:

  • The Observer pattern doesn’t make a lot of sense to hand-roll when .NET events give it to you out of the box (see the sketch right after this list),
  • Singletons (btw, I’ll have a dedicated post showing how evil they are) are kind of obsolete with the usage of IoC containers, etc.
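A tiny illustration of that first point – StockTicker is a made-up example class, nothing more:

// the "subject" exposes an event instead of keeping its own observer list
public class StockTicker
{
    public event Action<decimal> PriceChanged;

    public void Publish(decimal price)
    {
        var handlers = PriceChanged;
        if (handlers != null)
        {
            handlers(price);   // notify every attached observer
        }
    }
}

// an "observer" is just an attached handler – no hand-rolled IObserver plumbing:
// ticker.PriceChanged += price => Console.WriteLine(price);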

Another reason to consider the time dimension when exploring design patterns is that in the last few years we are all witnessing both the rise of dynamic languages and the enhancement of static languages, which are getting features that didn’t exist at the time the “pattern Bibles” were written (lambdas, generics, C# 4.0 dynamic etc.). Every pattern should therefore be critically challenged in light of the current state of technology.

Context is the king

By merely looking at the above diagrams (which for my point could both be pure UML diagrams), it would be really hard to tell what the difference is.

In both cases we have multiple classes encapsulated behind one class which orchestrates their calls and exposes a simple API to be consumed.

But if you take a look at the context, you can see from the provided code sample that the packages in the Facade case are interconnected: each of their classes depends in its own way on the other classes, and none of them can exist without the others. I could imagine a refactoring merging those three classes into one. In other words, even though physically we have multiple classes carrying their own implementations, they are so connected that on an architectural level there’s just one entity hiding behind the facade.

That’s why for me Facade feels like a “1 – 1” relationship type of pattern, where a facade hides the complexity of a single thing (similar in a way to Adapter, but let’s not digress here).

The second case has a clearly different context, where all of the packages behind the “facade” (the Gateway) are quite independent. They do not depend on each other at all; they cooperate as partners.

The purpose of the PricingGateway is also to expose a simple API for the pricing package, but this time, when the API is used, the Gateway performs the role of a conductor, orchestrating the calls to the “hidden” elements.

That’s why for me Gateway feels more like a “1 – n” relationship type of design pattern.

To summarize: in order to distinguish between patterns, one has to understand the problem/implementation context before picking the right flavor – the appropriate pattern.

Patterns as a matter of feelings

As you have probably noticed, in the text above I keep saying “feels like to me”, which I’m pretty sure looks quite out of place next to something as explicit and scientific as design patterns, so let me clarify that for you :)

Like many others, I’ve stepped through a couple of phases in my pattern life:

  1. Discovery of patterns (we hear about them from some of the cool kids, and all we do at this stage is try to pretend as best we can that we know what they are talking about)
  2. Trying to get it (AKA “I found/read the GoF book”)
  3. Becoming a believer (including giving presentations to people in phase #1)
  4. Getting it for real (reading every patterns book, blog post etc. you can find and memorizing the UML diagrams, use cases etc.)
  5. Seeing patterns everywhere and applying them as much as we can
  6. Paying the price of a highly complex code base
  7. Starting to understand the cost of applying patterns and using them only when a real design pain is identified which cannot be solved with a simpler (but still clean) solution
  8. Instead of pattern catalogs, starting to focus on basic principles

So, as far as I can tell I am in phase #8, where all of the patterns have somehow melted into each other and all of them “look similar”. I have forgotten half of their UML diagrams and reference implementations. The only things left in my head are the name of each pattern, a notion of the use case where it is useful and (very blurry for most of them) how it works.

Reading my last statement, someone could conclude that I just got more stupid (a possible option, I admit :)) and too lazy to remember things, but I would disagree, because every one of the patterns I have faced is based on the same set of plain logic principles, and if you know and feel those principles you are good to go with doing “your own” solution, which will somehow end up matching some of the patterns.

Here are some examples of the design principles I have in mind: SOLID, open-closed, DRY, KISS, orthogonal code, OOP principles (encapsulation, abstraction), TDD, dependency injection – and the list could go on for pages.

To summarize: I strongly believe that if a developer feels natural about the design principles (spits out code following those principles without even thinking), the knowledge of design patterns can stay at a much more abstract level (a bit like knowing the index of a book) without memorizing every bit of their reference implementations (that’s why we have Bing, isn’t it?), and the code created will still comply with all of those patterns.

Conclusion

I hope all of the ramblings I shared with you, my dear reader, help me now make the point of this blog post, which is to explain how come, when Radenko (completely rightfully) pulled into his blog post a couple of reference examples from sites and books of folks far smarter than I am, proving that his example implementation is the same as the ones they provided, the only thing I could tell him was that his example “doesn’t feel like a case for Facade to me”.

I was not claiming that the guy who wrote the book did it wrong, nor that the dotfactory.com site got it wrong in their sample.

It was just the gut feeling of a pragmatic “duct tape architect” – the feeling I would have about that code if it were mine – which I shared with him and the community.

1 Nov 2009

Design for testability – WCF proxies

Recently I spent some time participating in projects involving Silverlight, Prism etc., and there are a couple of interesting things I came up with during that period which I would like to share with the community in the form of a couple of blog posts showing the “enhancements” I made in our Prism based implementation.

I was thinking a lot about where to start, and as a result I have decided to start with something small but useful. In this blog post I’ll focus on Silverlight, but this simple trick is usable in any other client technology making WCF calls. So, here we go…

How to have testable Silverlight code depending on WCF calls

Why reinventing the wheel?

In order to make sure the trivial thing I’ll be writing about today wasn’t already covered by someone else, I did my Bing homework and came up with two approaches you might want to check out.

While I find both of them to be perfectly acceptable solutions, I think they are suboptimal, for different reasons.

In the case of the first solution (which, based on the screencast comments, is the way a lot of people do it), there is one more layer of abstraction to be maintained.
In a typical DDD style application we have DB tables mapped to domain entities, which are (usually) flattened in the web server layer, where the WCF service contracts behave as application level services with client centric shapes and behaviors. With this approach we need yet another interface (either with its own behaviors or just a copy-paste of the service contract) plus an additional adapter class which delegates the calls to the proxy. IMHO, one layer of abstraction down –> maintainability level up :)

In the case of the second solution, my objections are similar to the ones I have regarding the service locator. Abstracting the “locator” (service host) leads to opaque dependencies at the consumer level, which are much harder to unit test and understand. On top of that, I find this solution also introduces additional complexity with defining factories, providers etc., which IMHO is overkill for the simple goal of “TDD-enabling WCF-proxy-dependent code”.

Duct tape programmer solution

Regardless of how much I disagree with Joel on the value of the duct tape programmer, I couldn’t get away from the fact that my solution, compared with the other two (full of big patterns and cool code), looks exactly like it was done by a duct tape programmer – me. :) That’s OK, simplicity is the #1 design criterion for me.

In this solution I therefore won’t be using any patterns; instead I will just rely on one hack and one small convention to quickly get testable code which is easier to maintain and understand (compared to the other approaches).

Demoware “WCF proxy being called directly” sample

So, here’s the setup for today’s post. We have an application whose goal is to show the names of the users having a salary greater than $1000.

The client is implemented in Silverlight, accessing the user data on the server side by making WCF service calls.

To do that I used a vanilla Silverlight solution (I didn’t want to use Prism here) consisting of two projects:

  • a web application containing a UserService.svc WCF service and hosting the Silverlight app,
  • a Silverlight application which has a UserService service proxy and a MainPage showing the names of the users in a ListBox.

 

 

UserService.svc

using System.Collections.ObjectModel;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Activation;

namespace SilverlightApplication.Web
{
    [ServiceContract(Namespace = "http://blog.vuscode.com/200911")]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class UserService
    {
        [OperationContract]
        public Collection<User> GetUsers()
        {
            return new Collection<User>() 
            { 
                new User { Id = 1, Name = "Nikola", Salary = 1000 },
                new User { Id = 2, Name = "John", Salary = 2000 },
                new User { Id = 3, Name = "Jane", Salary = 3000 },

            };
        }
    }

    [DataContract]
    public class User
    {
        [DataMember]
        public int Id { get; set; }

        [DataMember]
        public string Name { get; set; }

        [DataMember]
        public decimal Salary { get; set; }
    }
}

Nothing important here: in one file, an OperationContract defining a GetUsers method which returns the collection of users (with the User DataContract defined in the same file).

Let’s move on…

MainPagePresenter? Where’s the MainPageViewModel?

Well, I was surprised to learn that many people I spoke with recently think MVVM is the only “right” way to do Silverlight/WPF development, and I tend to disagree with that. While MVVM has its values (it is the presentation pattern of my choice too), you can do MVC or MVP (both passive view and supervising controller) equally successfully, just like you can in desktop applications (speaking of which, Silverlight for me is a desktop app deployed through the browser). If you don’t trust me, ask Jeremy.

So, in order to spice up this blog post a bit, I went with an MVP-like implementation, with MainPagePresenter implemented like this:

using System.Windows;
using SilverlightApplication.UserServiceProxy;
using System.Linq;

namespace SilverlightApplication
{
    public class MainPagePresenter
    {
        private FrameworkElement view;
        
        public MainPagePresenter(FrameworkElement view)
        {
            this.view = view;
        }

        public void Init() 
        {
            UserServiceClient proxy = new UserServiceClient();
            proxy.GetUsersCompleted += (sender, e) =>
                                       {
                                           this.view.DataContext = e.Result.Where(p => p.Salary > 1000);
                                       };
            proxy.GetUsersAsync();
        }
    }
}

The constructor here is more interesting, because it accepts an instance of a type inheriting from FrameworkElement (which covers pretty much most controls in Silverlight) and stores its pointer in the view field. In other words, an abstraction of the view is being injected into the presenter; only, instead of an IView interface, I am being smart here and using FrameworkElement as the abstraction every view (user control) already implements.

The Init method is the good old demoware type of code you’ve seen in a lot of places, with the proxy being instantiated in the method body, spiced up with a cool lambda implementation of the async event handler (which any better presenter would use to scare the session attendees) and with a simple LINQ statement filtering the result set to exclude the rows with a salary of 1000 or lower.

The magic in this code starts when the view’s FrameworkElement.DataContext gets set to the filtered result set. The presenter doesn’t have a clue what specific control the view is, but it is still able to set a common property to a value producing the desired view behavior.

Wiring  up the view and the presenter

There’s a really big religious war in the IT sky regarding which is created first (the view or the viewmodel/presenter) and the allowed level of coupling between them. I have my own take on that, to which I will dedicate a separate blog post (it is an important subject), but for the sake of this blog post let’s say that it is perfectly fine that the view is created first and that it has a direct reference to the presenter.

That’s how the view (the MainPage.xaml code-behind) ended up being implemented like this:

using System.Windows;
using System.Windows.Controls;

namespace SilverlightApplication
{
    public partial class MainPage : UserControl
    {
        MainPagePresenter presenter;

        public MainPage()
        {
            InitializeComponent();
            this.presenter = new MainPagePresenter(this);
            this.Loaded += new RoutedEventHandler(MainPage_Loaded);
        }

        void MainPage_Loaded(object sender, RoutedEventArgs e)
        {
            this.presenter.Init();
        }
    }
}

As you can see, in the page constructor an instance of the presenter is created, with a pointer to the page itself passed to the presenter (which, if you look back up, is held in a presenter field and used in the Init method).

Then a Loaded event handler is attached (don’t put any UI related code in the constructor, because it is not guaranteed that the UI will be ready when that line executes), and in it the presenter.Init() method is invoked. In other words, the view says to the presenter: “Please set me up!”

Markup code

Did I tell you how much I like WPF/Silverlight? Check out the simplicity of the markup code!

    <Grid x:Name="LayoutRoot" Background="White">
        <ListBox
                 Name="listBox1"
                 ItemsSource="{Binding}"
                 DisplayMemberPath="Name" />
    </Grid>

Setting ItemsSource to {Binding} effectively tells the control “bind to your own DataContext” (a key moment specific to WPF/SL), and the DisplayMemberPath value of Name can be read as “whatever collection ends up in the DataContext, each collection item will have a property called Name”.

Simple, elegant and powerful!

Runtime experience

Wow, so much talk about such a simple thing. OK, I proved it works.

My duct tape based solution

It is based on the simplest possible thing I was expecting WCF to support out of the box: an IXYZServiceClient interface contracting the proxy behavior in an abstract way.

To my surprise, I realized that the WCF generated proxy does not create that interface (IUserServiceClient in this example), so I decided to create it manually.

Doing the things the WCF team should have already done

I clicked the Show All Files icon (to see the hidden proxy files) and opened the Reference.cs file containing the generated proxy C# code.

Then I found the UserServiceClient class definition, right-clicked it and picked Extract Interface (R# can help with this too).

I picked only the members I care to have abstracted (leaving all of the WCF infrastructure members unchecked) and ended up with the interface extracted.
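Roughly, the extracted interface looks like this – a sketch only; the exact member and event-args type names come from whatever the proxy generator produced and which members you ticked:

namespace SilverlightApplication.UserServiceProxy
{
    using System;

    public interface IUserServiceClient
    {
        // completion event raised by the generated async proxy
        event EventHandler<GetUsersCompletedEventArgs> GetUsersCompleted;

        // fire-and-forget async call, results arrive through the event above
        void GetUsersAsync();
    }
}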

Obviously this is not a good solution on its own, because on the next proxy refresh all of this is gone, so there are two problems:

  1. How to preserve the interface
  2. How to avoid having to manually add the interface implementation to the generated ServiceClient class.

In order to solve problem #1, I created a folder with the same name as the proxy and dragged and dropped the interface into that folder. First problem solved!

In order to solve problem #2, I utilized the fact that the ServiceClient is generated as a partial class, so in the new folder I created another partial UserServiceClient class implementing the IUserServiceClient interface. Here’s how that class looks:

namespace SilverlightApplication.UserServiceProxy
{
    public partial class UserServiceClient : IUserServiceClient
    {
    }
}

I hope now you get the reason why I used a folder with the same name as the proxy –> the namespaces of the types and interfaces in that folder match, out of the box, the ones in the proxy class.

And here’s the resulting solution structure (for visual learners like me):

(image: the resulting solution structure)

To summarize the solution:

  • I created the IUserServiceClient interface by simply extracting it from the proxy client class.
  • I created a partial UserServiceClient class which only attaches the interface to the proxy.
  • Neither of those files gets destroyed by proxy regeneration.
  • If the WCF service contract changes over time, regenerating a new IUserServiceClient is a trivial task taking less than 15 seconds.

Duct tape solution at its best! :)

Modifying presenter

using System.Windows;
using SilverlightApplication.UserServiceProxy;
using System.Linq;

namespace SilverlightApplication
{
    public class MainPagePresenter
    {
        private FrameworkElement view;
        private IUserServiceClient userServiceClient;
        
        public MainPagePresenter(FrameworkElement view, IUserServiceClient userServiceClient)
        {
            this.view = view;
            this.userServiceClient = userServiceClient;
        }

        public void Init() 
        {
            this.userServiceClient.GetUsersCompleted += (sender, e) =>
                                       {
                                           this.view.DataContext = e.Result.Where(p => p.Salary > 1000);
                                       };
            this.userServiceClient.GetUsersAsync();
        }
    }
}

As you can tell I’ve made two changes:

  • added another constructor parameter, injecting the newly created IUserServiceClient interface
  • modified the Init method to replace the proxy instantiation with the injected client proxy abstraction.

Modifying the view

Usually I wouldn’t modify the view but would instead rely on an IoC container to inject the UserServiceClient instance; however, in order to keep this post focused I won’t be using IoC here (you’ll see it in one of my future posts showing my Prism enhancements), so I needed to make a simple change in the page constructor:

        public MainPage()
        {
            InitializeComponent();
            this.presenter = new MainPagePresenter(this, new UserServiceClient());
            this.Loaded += new RoutedEventHandler(MainPage_Loaded);
        }

Nothing special, I just added new UserServiceClient() to the parameters being passed to the presenter.

Running the app again

Yep, still works :)

Where’s the unit test?

Well, this blog post is long enough (and it is late enough :)), so I will skip a full unit test walkthrough this time – it is fairly trivial to write now that we have an interface for the service proxy.
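Just to give a feeling for it, here is a minimal sketch of such a test. The assumptions here are mine: a hand-rolled fake instead of a mocking framework, MSTest-style asserts from the Silverlight unit test framework, the standard generated constructor of GetUsersCompletedEventArgs taking (object[] results, Exception error, bool cancelled, object userState), and the usual usings (System, System.Collections.Generic, System.Collections.ObjectModel, System.Linq, System.Windows.Controls) omitted for brevity:

// a fake proxy that completes synchronously with canned data
public class FakeUserServiceClient : IUserServiceClient
{
    public event EventHandler<GetUsersCompletedEventArgs> GetUsersCompleted;

    public void GetUsersAsync()
    {
        var users = new Collection<User>
        {
            new User { Id = 1, Name = "Nikola", Salary = 1000 },
            new User { Id = 2, Name = "John", Salary = 2000 }
        };

        var handler = GetUsersCompleted;
        if (handler != null)
        {
            handler(this, new GetUsersCompletedEventArgs(
                new object[] { users }, null, false, null));
        }
    }
}

[TestMethod]
public void Init_ShowsOnlyUsersWithSalaryGreaterThan1000()
{
    var view = new ContentControl();   // any FrameworkElement can play the view role
    var presenter = new MainPagePresenter(view, new FakeUserServiceClient());

    presenter.Init();

    var shownUsers = ((IEnumerable<User>)view.DataContext).ToList();
    Assert.AreEqual(1, shownUsers.Count);
    Assert.AreEqual("John", shownUsers[0].Name);
}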

Summary

In this blog post I showed another way to easily introduce a simple layer of abstraction between a WCF service proxy and the code consuming it.

I find the advantage of my approach vs. the other two to be its simplicity and maintainability, but then again, considering I “invented” it, that comes as no surprise to me :)

Source code of the end solution can be downloaded from here.
