.NET and me – Coding dreams since 1998!

2 Jun 2010

5 reasons why Silverlight sucks in LOB (compared to WPF)

Recently, Brian Noyes and Rob Relyea touched on the "WPF vs Silverlight" subject, and since I have also been thinking about it lately, I wanted to share my thoughts on that topic too.

As I said in a previous post, I've started blogging at home about accounting/LOB applications in Serbia, and one of the questions I got challenged with by one of my readers (who knows how BIG a Silverlight fan I am) is:

"Would you be using Silverlight for your own accounting/LOB application?"

Initially the answer looked very clear to me: with all the improvements Silverlight 4 brought to the LOB game, a desktop-like programming model with web deployment looks like a perfect fit for a public facing application (outside of intranets).

But, after doing some more thinking on the subject, to my surprise I came to the opposite conclusion:
WPF is a better choice for serious LOB applications.

And here are the 5 most important reasons why I think so:

Silverlight 4 is not a cross-platform environment any more

The biggest advantage SL had over WPF (in my mind at least) was the ability to be deployed to non-Windows machines (MacOS and Linux powered machines).
Having Silverlight 4 with a whole slew of COM-dependent features virtually prevents creating a Silverlight 4 site/application which would run on Mac and Linux. At least, that is the state of things as far as I am aware today – somebody please correct me if I am wrong about this.

The way I see this change is that Silverlight 4 is shifting toward being a unique "cross-screen" (desktop, mobile and TV) platform, which is perfectly fine with me – it just doesn't have any particular value in the context of LOB applications.

UPDATE: I did find a couple of folks with Macs who were kind enough to tell me that on the Silverlight.net site there is a Silverlight 4 plug-in for Mac which (as long as the COM features are not used) works fine.

Silverlight adoption rate is not good enough

According to the latest RIA Stats, the adoption rate of Silverlight is around 60%. I'll put aside the fact that I am not seeing that number around me in the Czech Republic and accept it as correct, with a slightly different interpretation: 40% of PCs do not have Silverlight installed.

The funniest thing is that WPF has a 99% adoption rate, because every PC with Windows newer than Windows XP SP2 (including Vista and Windows 7) has WPF installed on it. I am not sure how many Windows 2000 and Windows 98 machines are out there, but whatever that number is, I personally don't think anyone should care about targeting that segment, as it is very unlikely to invest any money in purchasing your LOB product.

Even if a PC doesn't have the .NET Framework at all, the download size to get it on the PC is just 28 MB, which is bigger than the 9 MB MacOS Silverlight 3 plug-in, but who cares (with any non dial-up connection it is a matter of seconds). In my personal opinion, this is one of the most important WPF features in .NET 4 :)

Silverlight tooling is good enough. WPF tooling is better

Starting with VS 2010 and Blend 4 we can work with SL4, and Silverlight is getting much more attention (just look at the pace of the Silverlight and WPF toolkits and everything becomes clear there), but using WPF allows me to use all of the memory profilers, debug viewers, any framework I want etc. If you are in doubt about what exactly I mean by this, here's an example: Silverlight does support printing, but in the case of serious LOB applications you need all the muscle WPF offers. Think something like Crystal Reports, for example.

Silverlight programming model is more constrained than the WPF one

Doing Silverlight applications, one is forced to adopt the "make an async web service call, get a chunk of data and do something with it" model, which in my personal experience limits the productivity of a LOB developer compared to the speed he has developing with WPF. There's no direct access to the DB (which is actually great), but that ignores the fact that some LOB applications might need just that. For example, an application can be written to target a local SQL Compact/Express database which is then set up to replicate/merge deltas with the central enterprise server. Anything like that (and we know how these things can get crazy in enterprises) is not possible in Silverlight.
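
To make the contrast concrete, here is a rough sketch of the two models side by side. This is purely an illustration: CustomerServiceClient/GetCustomersAsync stand in for a hypothetical generated WCF proxy, the Customers.sdf database and its table are made-up names, and the two methods obviously belong to different runtimes, so treat this as a sketch rather than a single compilable class.

using System.Data;
using System.Data.SqlServerCe;
using System.Windows.Controls;

public class CustomerScreen
{
    private readonly DataGrid customersGrid = new DataGrid();

    // Silverlight flavor: data can only arrive through an asynchronous service call.
    // CustomerServiceClient / GetCustomersAsync are hypothetical names for a generated WCF proxy.
    public void LoadCustomersSilverlightStyle()
    {
        var proxy = new CustomerServiceClient();
        proxy.GetCustomersCompleted += (s, e) =>
        {
            customersGrid.ItemsSource = e.Result; // work continues only once the response arrives
        };
        proxy.GetCustomersAsync();
    }

    // WPF flavor: the same screen can also talk to a local SQL Server Compact file directly.
    public void LoadCustomersWpfStyle()
    {
        using (var connection = new SqlCeConnection(@"Data Source=|DataDirectory|\Customers.sdf"))
        using (var command = new SqlCeCommand("SELECT Id, Name FROM Customers", connection))
        {
            connection.Open();

            var table = new DataTable();
            table.Load(command.ExecuteReader());
            customersGrid.ItemsSource = table.DefaultView;
        }
    }
}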

Another thing related to this is the aspect of offline access. I am aware that Silverlight 4 does have isolated storage, and yes, it has a bunch of open source DBs sitting on top of it, but it is just single user storage. In reality, quite often in serious LOB applications we see office and/or P2P network topologies where it is essential to have a "proxy per office" or the ability to sync the data of "user X" directly. I know that the Sync Framework is coming to Silverlight in 2010, but it is not there now and I am not sure whether it will support topologies other than client <-> (Azure) server.

Silverlight is still technically inferior to WPF in some areas.

Read Brian's post to see what this point is about.

Conclusion

Now you've heard the 5 most important reasons why I choose to stick with WPF on this. Am I missing the point? Making a false statement? Do you have more reasons in favor of WPF or Silverlight?

Looking forward to hearing your comments :)

12 May 2010

I am starting two blogs about accounting and eCom in Serbia

Shameless SEO plug for my new blogs :)

(This probably means that you don't want to read the rest of this blog post)

Recently I have had more free time than I usually do, so I've decided to spend some of it blogging about a theme I have always been passionate about: accounting applications and internet business in Serbia. I will be doing it in Serbian because I doubt that anyone outside of the Balkans would be interested in those topics.

I have also started researching WPF programming much more, which (together with the Silverlight I've been practicing for some time) is a topic I think most people are not very interested in, in today's web world – so that's where the silence on this blog comes from. But I have collected over time a decent backlog of post ideas (mainly around CAG/Prism and the way I adapt it to my needs) which I plan to spit out on this blog in the upcoming days.

Just in the 0.01% case that you would be interested in checking out my Serbian blogs (they do provide Google Translate translations), here are the links:

17 Feb 2010

Using the entity framework POCO template with VS2010 RC (easier way)

A while ago Julie blogged a tutorial on how to use the VS 2010 Beta POCO T4 templates in VS 2010 RC. While I found that post very helpful, I had a hard time following it, and after a bit of playing I think I have found an easier way to get the POCO template working.

Here are the simple steps:

  1. Go to "Announcing the Entity Framework POCO Template update for Visual Studio 2010 Beta 2" and download the attached zip file (or click here)
  2. Unpack it anywhere
  3. Run the C# and/or VB installer
  4. Open Visual Studio 2010 RC
  5. Add a POCO template (New Item > Code > ADO.NET POCO Entity Generator)
  6. Open the model *.tt file generated by T4 (there is no need to do anything in the *.Context.tt file)
  7. Update line 15 of the generated file to be:

	EntityFrameworkTemplateFileManager fileManager = EntityFrameworkTemplateFileManager.Create(this);

Original content: TemplateFileManager fileManager = TemplateFileManager.Create(this);

  8. Update line 678 of the generated file to be:

	fileManager.Process();

(It was originally fileManager.WriteFiles();)

And voila – your T4 template is operational without the need to copy anything, no need for VS2010 b2 image files.

Please let me know (here on the blog or on twitter @malovicn) if you have any problems with these steps.

13 Feb 2010

Knetlik conference was a lot of fun

The Knetlik .NET conference is over – today we had several interesting presentations about different .NET aspects (ranging from Silverlight for Facebook to AOP PostSharp programming).

I was presenting "Quick introduction to DDD", and while I was trying to speak as fast as I could and to stay away from details and implementation, I went well over 10 minutes. Not sure why not being able to present 37 slides in 10 minutes surprised me so much :) Other than breaking the time slot, I am satisfied with the presentation – I remembered most of the things I wanted to highlight and I guess the slides were fun enough (I was using the Roman empire as a base for analogies) to be remembered, so I hope that when some of the attendees start reading the DDD books they will have a much easier job understanding the basic concepts.

I wasn't smart enough to start Camtasia on my laptop, so I've uploaded my deck here just in case someone needs it until Andrew publishes the recording.

Prague Silverlight User Group (SLUG)

On a side note, but equally important: I met several folks and the local DPE who shared my enthusiasm regarding a Prague SLUG (SiLverlight User Group), so that is definitely something which will happen in 2010.

So, if you are interested in Silverlight/WPF development and living in Prague, please join the group I have created here: http://pragueslug.groups.live.com/ so you will be informed when the SLUG meetings start, and we might use it for group discussions too. See you there :)

20 Jan 2010

Do you want to learn DDD in 10 minutes? :)

I've been doing a lot of presentations in the last couple of years, and people attending them know how hard it was to fit most of them into a 2-3 hour time slot (big subjects, trying to get into details).

So, when Andrew asked me if I would like to be one of the presenters and pitch a subject in 10 (ten) minutes max, I accepted immediately because it sounded so challenging (read: fun).

So, the subject I will be presenting is "Quick introduction to Domain Driven Design", which, in case you are a newbie, definitely won't "teach you DDD" but (I hope) will be of some help in getting a grip on some key concepts whose understanding I find VERY helpful while reading "the blue book".

I'll also do my best to present it in a way which won't bore you to death, so it should be a fun experience even if you are "not a newbie".

In case you want to find out more details about the conference and register for 2 hours of pure information injection, click here.

8 Jan 2010

Asking the right questions while interviewing developers

Just another fizzbuzz interview question

I really hate interviews, regardless of which side of the table I am sitting on during them (OK, it is a bit easier when you are the one interviewing candidates :))

One of the main reasons why I hate them is the stupidity of the programming trivia and questions usually being asked: all sorts of binary tree traversals, converting numbers from one base to another, number sequences etc… I mean, who really cares about that stuff in everyday work? Am I applying for a mathematician job or for a developer one?
I have a feeling those questions are created for fresh grads because they don't have much real world experience, but that's just me guessing…

But not all interview questions are stupid, and the most famous example of what I find to be a good type of interview question is the famous "fizzbuzz problem" described by Jeff Atwood and Scott Hanselman, which is an example of a very simple task whose resolution relies purely on a person's real-world thinking skills and is not related at all to any other type of knowledge.

A couple of years ago I stumbled on a video recording of a session by Brad Abrams about his excellent Framework Design Guidelines book, where he mentioned an example which looked so trivial to me that I quickly discarded it as obvious to literally everyone. (I couldn't find that link for this blog post :()

A couple of months after that I was interviewing a candidate for a senior position, and because he gave fuzzy answers that made me wonder if he gets the heap and the stack, I took Brad's example idea and came up with a simple question looking something like this:

namespace InterviewTrivia
{    
	public class ClassA    
	{        
		public override string ToString()        
		{            
			return "Hello from class A";        
		}    
	}    
	
	public class ClassB : ClassA    
	{        
		public override string ToString()        
		{            
			return "Hello from class B";        
		}    
	}    
	
	public class ClassC : ClassB    
	{        
		public override string ToString()        
		{            
			return "Hello from class C";        
		}    
	}
}

Nothing fancy here, just 3 classes inheriting from each other, with each one overriding the object ToString() method.

Now the console application Program class can look something like this:

using System;
namespace InterviewTrivia
{
	class Program
	{        
		static void Main(string[] args)        
		{            
			ClassA first = new ClassC();                        
			ClassC second = new ClassC();            
			ClassB third = (ClassB)second;            
			ClassA fourth = (ClassA)third;            
			object fifth = (object)fourth;            
			
			Console.WriteLine("1: " + first.ToString());            
			Console.WriteLine("2: " + second.ToString());            
			Console.WriteLine("3: " + third.ToString());            
			Console.WriteLine("4: " + fourth.ToString());            
			Console.WriteLine("5: " + fifth.ToString());            
			Console.ReadLine();        
		}    
	}
}

As you can see, the first 5 lines do different variants of casting a ClassC instance, and then the result of each reference's ToString() method is printed out.

I wrote this sample on a whiteboard (to avoid having R# help him with the answer) and asked the candidate what the resulting output of the app would be.

Why is this a fizzbuzz question?

Because it is:

  • a very simple problem and code sample
  • focused on a concrete programming scenario, pinpointing (IMHO) important OOP concepts which need to be understood by senior developers
  • in order to solve it one doesn't need any knowledge other than programming skills
  • it is whiteboard friendly – a very important attribute for DEV interviews

So, what happened?

To my surprise, the candidate flatly failed the test, reporting as the result:

  1. “Hello from ClassA”
  2. “Hello from ClassC”
  3. “Hello from ClassB”
  4. “Hello from ClassA”
  5. “InterviewTrivia.ClassA”

I was done with the interview convinced this candidate was exceptionally bad, and I stayed convinced of that until I accidentally ran the same sample by a couple of other developers and in most cases got the same answer the candidate gave. (The correct answer, of course, is that all five lines print "Hello from class C": casting only changes the compile-time type of the reference, while virtual dispatch always resolves ToString() against the runtime ClassC instance.) Maybe that is just a sign that we .NET developers are really spoiled and allowed to be blissfully ignorant about how the stack and heap in .NET work (which is btw a thing I strongly disagree with), but that is not the point of this blog post.
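
To make that point concrete, here is a tiny standalone illustration (reusing the ClassA/ClassC types from the sample above) showing that a cast changes only the compile-time type of the reference, not the object itself:

using System;

namespace InterviewTrivia
{
    public static class CastingDemo
    {
        public static void Run()
        {
            // The reference is typed as ClassA, but the object on the heap is still a ClassC,
            // so both GetType() and the virtual ToString() resolve against the ClassC instance.
            ClassA upcast = new ClassC();

            Console.WriteLine(upcast.GetType().Name); // prints "ClassC"
            Console.WriteLine(upcast.ToString());     // prints "Hello from class C"
        }
    }
}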

The point is that with properly chosen trivia and programming questions in our interviews we can avoid losing good candidates just because they don't know for sure how to convert –1234.56 to a hexadecimal number (even on paper), and focus instead on getting information about the really important attributes a new member would (or would not) bring to the team once joining the company.

(Sample code of today’s post can be found here)

22 Nov 2009

Couple of PDC 2009 thoughts too big to fit in 140 characters – praise to Microsoft

PDC 2009 is over, and what a conference that was… For the folks there: a free tablet, a ton of excitement experienced first hand and a lot of valuable social networking… For the rest of us not able to attend, there's a ton of prime time learning material I am really looking forward to seeing.

In general, the amount of positive vibes from the community was so high that it even surprised the usual MS haters, so we had to wait a couple of days for them to start their choir attacks on Microsoft. I don't work for Microsoft, I am not an MVP, in fact I don't have any relationship with Microsoft at all, but I still find unfair a bunch of the things I hear, considering how much investment we see from MS in the development space, and thus I wanted to speak in Microsoft's favor in a very opinionated manner… I am not claiming the things in this post are universal truths – they just reflect my takes on various subjects related to the Microsoft development ecosystem.

The glass is half empty…

"Microsoft Silverlight 4 sucks because it is not cross-platform oriented any more"

Where it all started: http://www.theregister.co.uk/2009/11/20/silverlight_4_windows_bias/.

One opinion I agree with in general: http://blogs.silverarcade.com/silverlight-games-101/21/silverlight-is-silverlight-4-moving-away-from-cross-platform/

Here are a couple of thoughts of my own on that subject:

  • I never heard anyone from MS say that the SL 4 goodies won't be supported on the Mac.
    Judging by the size of the Windows (4 MB) and MacOS X (20+ MB) Silverlight installers, Silverlight on those two platforms is probably a different code base with the same API. Even if not, I can imagine that a custom Mac implementation of clipboard access, web cam support etc can be done using custom coding against the Mac OS API.
  • Does Microsoft really need to spend its own money on Mac users?
    In my personal opinion, not really, because of a couple of things:
    • The number of users with MacOS is insignificant from a market share perspective, so the ROI is not as high as for the investments made for Windows users.
    • Even with that low market share, investing in MacOS would make sense only if Mac zealots were not so brainwashed into hating everything from Microsoft. Just check out their comments on how they despise the sites asking them to install the "Microsoft bloatware called Silverlight", and how they would sooner cancel their Netflix subscription than install Silverlight to watch HD.
    • Before Windows 7, Microsoft maybe had to play the "works on Mac too" card to reduce the impact surface of the Apple ads, but now with Windows 7 as a premier OS they don't have to. The tables have turned, and IMHO it is now in Apple's interest to support Silverlight on Macs, because in a couple of years it will be everywhere and their users will simply have to have it in order not to be cut off from most internet sites. I really believe that.
  • What about Linux?
    Well, Moonlight development is in Novell's hands anyway, so it is up to them to determine how quickly this comes. Considering the fact that MonoTouch looks like their primary interest, I don't think it will happen soon, which, for reasons similar to the Mac case (low market share, blind hate toward anything coming from Microsoft), is (IMHO) not a big deal.

"Microsoft Silverlight 4 would break the security model of Windows 7”

So, the pitch of this attack is that, because Silverlight now provides a "full access" mode of the sandbox, we are basically back to the ActiveX era where, in order to run a web site (which you want), you have to grant elevated rights to that app. In other words, they say that everyone will just click "OK" and the gates of hell will be opened by that.

Here’s a good blog post summarizing what SL 4 full trust mode means for real (http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2009/11/18/silverlight-4-rough-notes-trusted-applications.aspx)

And here are a couple of thoughts of mine related to this subject:

  • The biggest difference between COM and Silverlight is that a Silverlight application running in the browser is ALWAYS sandboxed. In order for the user to elevate sandbox rights, the Silverlight application has to be INSTALLED on the desktop (which is the first safety switch not a lot of users will pass), and during installation there is a clear warning informing users about the implications.
  • It is "elevated", not "full" trust. One example (as Mike explained on his blog) is that even in elevated trust you still cannot access just any folder/file on the user's hard drive.
  • I have a feeling that the same guys praising Adobe AIR for its twitter clients, never questioning the attack surface AIR has due to ITS own full trust, and mocking Silverlight for its too-secure sandbox preventing developers from doing cool apps, are the same ones now bragging about the sandbox.
  • Microsoft had to listen to the loud community demands for cross domain web service calls, and to enable a premier full screen experience for media players and kiosk applications.
  • Enabling access to COM in this trusted mode enables integration with a bunch of software and opens up some really interesting and productive implementation ideas (a rough sketch of what that looks like follows after this list). If you had so many applications of your own, including the prime time Office applications, would you seriously pass on the card of enabling their integration? I wouldn't skip that chance for sure :)
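
For illustration, here is a minimal sketch of what that COM integration looks like from an elevated-trust, out-of-browser Silverlight 4 application. Everything past CreateObject is late-bound Office automation, so treat the Excel member names as the usual Office automation surface rather than something Silverlight itself defines:

using System.Runtime.InteropServices.Automation;
using System.Windows;

public static class OfficeIntegration
{
    public static void ExportToExcel()
    {
        // COM automation is available only when the app is installed out-of-browser
        // and running with elevated permissions on Windows.
        if (!Application.Current.HasElevatedPermissions || !AutomationFactory.IsAvailable)
        {
            MessageBox.Show("Excel integration requires an elevated-trust, out-of-browser install.");
            return;
        }

        dynamic excel = AutomationFactory.CreateObject("Excel.Application");
        excel.Visible = true;

        dynamic workbook = excel.Workbooks.Add();
        dynamic sheet = workbook.Worksheets[1];
        sheet.Cells[1, 1].Value = "Hello from Silverlight 4";
    }
}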

"The Microsoft Developer/Designer workflow is just a myth and thus the XAML-instead-of-code approach is the wrong way to go"

Well, in a way I agree that right now most of the creative folks are working with Photoshop, Illustrator, Flash, CSS and HTML, and that a lot of them would laugh if Blend/SL were mentioned to them as a career option. But that is understandable only short term.

There are a couple of reasons why I think that won't be the case anymore in a year or two, when the demand for MS designers will grow due to the following things:

  • Silverlight reached a 45% adoption rate in a very short time without real commitment and investment from the industry. That number will only grow with the continued investment and alliances Microsoft is making, so the tipping point will be reached sooner rather than later.
  • Silverlight code is the same .NET, which enables reusing the skills .NET developers already have. In other words, I bet a good part of the millions of .NET developers will step into the RIA world in the upcoming years, which will improve the desirability of Silverlight for big companies.
  • Blend 1 sucked, Blend 2 was OK, Blend 3 was fine (with Photoshop/Illustrator import)… Watching the speed and the trend of how the RIA tools are improving in the MS space, one could expect that in a year or two Blend will become a very capable tool. If not then, then in the next version. The key point is that behind Blend there is MS, with an endless supply of cash to pump into it on demand.

“Microsoft Silverlight is just next Web Forms”

I agree with this one, because I too can see the attempt of MS to bring RIA web development to the masses the same way they did with desktop developers in 2002, when ASP.NET brought them into the web world. I agree, and I don't see anything wrong with it. The whole stateless request/response pipeline was never supposed to handle the RIA scenarios we have now, so abstracting it away with some API (what Web Forms did) is not possible. Regardless of how many layers you put on top of the web, it is still the web down below, and sooner or later it will pop up.

That's why I think MS did the right thing and broke up with the web completely. Now the web browser is just used to host the sandboxed plug-in, and the web just to bring the plug-in the data required for it to work. After that, it is a full desktop model application having nothing to do with the web. Yes, you heard me well – Silverlight is for me a desktop application deployed through the browser, and that is a good thing…

"There's nothing done by Microsoft Silverlight which I cannot do with MVC/HTMLx/CSS/jQuery while respecting web standards at the same time"

Well, there are two types of web properties: web sites and web applications. I guess there is no need to spend a lot of time explaining how cost-ineffective it would be to build a web application using the JavaScript RIA approach. A Silverlight web application utilizing some of the concepts Prism and other frameworks have allows effective work of multiple development teams on the same site, with clean separation of the concerns different teams and team members have in that process, utilizing server-side .NET skills and XAML designer skills. With a JavaScript implementation, development is lengthy, cross-browser incompatibilities rear their ugly head sooner or later, there are performance problems with DOM size and different browsers, standard .NET developers are not very usable in client-side scripting, etc. It can be done, but I strongly believe performance, implementation time, maintenance and cost effectiveness would all be on Silverlight's side.

As far as standard web sites go, until Silverlight solves the SEO problem, I agree that jQuery is the better SEO solution for pages visible to prospective users. If your site has a portion requiring authorization, then SEO is not so important there, because crawlers won't be able to access it anyhow.

"There's too much magic in RIA Services – it is just another demoware framework"

I looked at RIA Services in its early preview version at the beginning of 2009, and sure, there is something in the code-gen based approach a lot of us would find "not pure" (things might have changed since then). So what? I can see a lot of normal-sized web sites benefiting a lot from it. Even my beloved NHibernate works with RIA Services. The way I see it, they gave us, as an option, a bunch of prebuilt code we would need to build anyhow in almost every web site/application we do. For a lot of folks, trading total control over your code base for a productivity boost is a perfectly valid option. At the end of the day, if you don't like it – don't use it, do it yourself, or buy some other framework like IdeaBlade etc :)

What could be wrong with a free lunch?

The glass is half full…

"Microsoft listens to us"

In case you haven't seen it already, go check out http://silverlight.uservoice.com/pages/4325-feature-suggestions where users voted for the features they wanted to be included in Silverlight 4. MOST OF THEM ARE INCLUDED. We asked, Microsoft responded. Thank you Microsoft!

“Silverlight won’t die.”

Most of the people I follow or whose blogs I read kind of gravitate toward the feeling that Silverlight is destined to fail, that it doesn't have a chance against the Google-backed HTML 5, that JScript is more than sufficient, that compared to Flash SL is just a toy…

I admit that recently I started having doubts about my decision to bet my career on Silverlight/WPF based presentation layers and even started questioning that decision (I bought a couple of jQuery books :))

After PDC 2009 I don't have any doubts, and that is totally unrelated to the fabulous presentation Scott Gu did – we all expected that anyhow, so it doesn't count. For me, of much more importance is the fact that almost every slide in Ray Ozzie's day 1 keynote had something related to Silverlight. As soon as I heard him talk about the 3 screens vision, I was sure that Microsoft is betting its future on Silverlight and that they won't step back in the future and ditch it. And that makes all the sense in the world and makes me smile.

Another important thing coming from Ray's vision of 3 screens driven by Silverlight is that I won't be spending time on MonoTouch and/or other iPhone-related technologies. What I am going to do instead is just wait for Windows Mobile 7, which I'm sure (based on the recent great design coming from the MS kitchen – Zune HD being the latest example) will be cool looking and HW rocking (based on Win 7), so I expect to see it well and alive beside the Android and iPhone platforms. If that happens, I'll just reuse my WPF/Silverlight skills and start developing for WiMo 7. Another thing I hope to finally be able to do on the WiMo 7 ship date is to replace my iPhone with a WiMo phone and say goodbye to the Apple proprietary things.

(I'll still read the jQuery books, but I am not going to follow/read jQuery-related RSS feeds)

“WPF is not dead.”

A lot of folks (me included) questioned the commitment Microsoft has to WPF after seeing the state of SL 4.

I don't think WPF is going anywhere, for the next couple of reasons:

  • Much better development experience in WPF than in Silverlight (debugging, tools etc)
  • Visual Studio 2010 is a WPF application, which shows the strong commitment Microsoft has toward further enhancing WPF.
  • This will sound weird: WPF has a bigger user base/adoption than Silverlight.
    Silverlight is at 45%, WPF is at 90% (every Windows XP SP2, Vista, Win 7 and Server 2008 machine has WPF).
    In case you care about the Windows platform only (like me), WPF is a perfect platform to build upon.
  • WPF integrates with OS features (jump lists, taskbar progress bars etc – see the sketch after this list), can use the Sync Framework, has direct DB access (for corporate intranets) etc.
  • The size of the .NET Framework 4.0 Client Profile is just 30 MB (for the 32-bit version), which is really not a big deal to download in 2009. In other words, on a PC without .NET at all, a 30 MB download makes the computer fully capable of executing your app.
    More details here: http://blogs.msdn.com/jgoldb/archive/2009/10/19/what-s-new-in-net-framework-4-client-profile-beta-2.aspx
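
To illustrate the OS integration point, here is a minimal WPF sketch (just an illustration, not taken from any real project) that reports progress on the application's Windows 7 taskbar button:

using System.Windows;
using System.Windows.Shell;

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();

        // Show a progress bar overlaid on this window's Windows 7 taskbar button.
        TaskbarItemInfo = new TaskbarItemInfo
        {
            ProgressState = TaskbarItemProgressState.Normal,
            ProgressValue = 0.4 // 40% done
        };
    }
}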

"Entity Framework is a very viable option"

We all remember the EF vote of no confidence, which pinpointed the reasons why Entity Framework 1 was not a good option for developers: lack of POCO, DB-oriented modeling only, etc, etc… Just a year later we have EF 4, which (judging by the PDC 09 Entity Framework session) looks like it fixed them all. They are even working on the Code Only ("Fluent NHibernate for EF") feature.

I'm pretty sure that gurus like Ayende would find, in less than 10 seconds, 27 important NHibernate features missing in Entity Framework, but for me, picking a technology to invest my time in is (among other things) based on the highest perceived ROI on my income. I had a similar dilemma in 1995 when I had to pick between Microsoft Visual Basic 3.0 and Borland Delphi 1.0, and I picked Microsoft VB 3.0. Why? Not because I thought it was better, but because of three things:

  • I estimated that demand for VB in the upcoming period would be much higher than for Delphi
  • I estimated that Borland was more likely to lose that fight with Microsoft (who has endless cash reserves)
  • I estimated that the amount of knowledge (books, articles, dev community etc) would be much bigger and easier to get in the VB case

I can ask almost the same questions today: NHibernate vs Microsoft EF4, and with a lot of sadness (I really love NHibernate) I have to conclude that EF is the way to go for me, because:

  • In version 4, EF is at least a "good enough" alternative: POCO and model-first are possible.
  • It looks like it is Microsoft's long-term data strategy, which means its knowledge will be very valuable in MS-oriented development shops.
  • The tooling is OK (I don't buy the argument that the designer crawls when you have hundreds of entities, because I never have them all in one model thanks to bounded contexts, and it is nice to do all of your development without leaving VS 2010).
  • There's a lot of technology building on top of EF (ADO.NET Data Services, RIA Services, Azure).
  • There are already some really good books/blogs/video materials for EF, while for NHibernate there are not so many sources of knowledge. Microsoft will for sure outperform NHibernate in this area many times over and directly impact the level of adoption.
  • The most important: it will get better and better with every release. (If they got this far in just a year, where will they be next year and the year after?)

I've been working for the last couple of months on my own personal pet project based on a Fluent NHibernate <–> NHibernate combo, but I am probably going to migrate it to EF4 now. If nothing else, I will at least get a better understanding of how it really stands today against NHibernate-based development.

Conclusion

I am really happy these days to be a developer in the Microsoft world, seeing so many initiatives, so much innovation and energy on Microsoft's side. I have a feeling that Microsoft was somehow sleeping until about a year ago, when it woke up and started doing great things again (Bing, Natal, Courier, VS 2010, Windows 7, Silverlight 4, Zune HD etc). No more easy points for the Apple ads, and no more easy points for the haters on things Microsoft was honestly doing badly, mainly due to ignoring community feedback.

Good work Microsoft!

4 Nov 2009

Fluent NHibernate samples – Auto mapping (Part 1/2)

In my previous blog post, I announced a sample solution with which I'll try to provide code samples for the very comprehensive documentation which can be found at http://fluentnhibernate.org/.

The project is hosted on CodePlex (http://fnhsamples.codeplex.com/). Right now it contains just the small sample I will use in this blog post, but in the future I intend to grow it until it covers most of the interesting features FNH offers.

The purpose of the project is just to demo how Fluent NHibernate mappings work so you can quickly "get them", and NOT:

  • to teach on how NHibernate works (buy this book for that)
  • to teach you in detail about fluent nhibernate (go to http://www.fluentnhibernate.org for that)
  • to teach about best practices in abstracting NHibernate dependencies (you can find that here)
  • to teach about best practices in domain modeling etc

I am also by no means an expert in either NHibernate or Fluent NHibernate. In fact, I am just a grunt like many of you reading this blog post, who was searching for a similar blog post while banging his head against the wall trying to learn how to use it (1+ year ago there were very few docs), so take EVERYTHING I say with a grain of salt.

I am also aware of the state of my English, but I am really doing my best, and if anyone wants to help edit the blog post I will put that version on CodePlex :)

So, let’s start…

An example of Fluent NHibernate auto mapping in action

So, here's the domain model I'll be using in this blog post to show a couple of FNH auto mapping aspects:

[Domain model diagram]

And here's the database model which gets created when the sample is executed, based on the Fluent NHibernate auto mapping conventions:

[Database model diagram]

(To get more details about the domain design check the use case description from previous blog post)

Convention over configuration

FNH auto mapping works by applying convention over configuration, which I usually try to explain like this…

"There is a set of rules which will be applied to your domain while it is mapped to the database model.

Here are a couple of examples of such rules/conventions:

  • A database table is named using the plural form of the entity name.
  • The primary key of every table is named following the rule "entity name" + "ID".

Fluent NHibernate comes with a default set of rules which you can customize to your own preferences in a very cool and easy way.

Sounds simple? It is THAT simple."
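
For context, here is a minimal sketch of how such conventions typically get wired into an auto mapping configuration. The SQLite database choice and the lack of any type filtering are assumptions made just for the sketch; the actual sample project may configure things differently (more on the configuration switches in part 2):

using FluentNHibernate.Automapping;
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;
using Vuscode.FNHSamples.Domain;
using Vuscode.Framework.NHibernate.Conventions;

public static class SessionFactoryBuilder
{
    public static ISessionFactory Build()
    {
        return Fluently.Configure()
            // The database choice here is an assumption made for the sketch.
            .Database(SQLiteConfiguration.Standard.UsingFile("fnhsamples.db"))
            .Mappings(m => m.AutoMappings.Add(
                // Auto map the entities found in the domain assembly...
                AutoMap.AssemblyOf<Blog>()
                    // ...and apply every convention found in the conventions assembly.
                    .Conventions.AddFromAssemblyOf<ClassConvention>()))
            .BuildSessionFactory();
    }
}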

Let's see quickly how the Fluent NHibernate magic happens.

How does the Fluent NHibernate auto mapping magic work (explanation for the rest of us)?

I won't go deep into the details (for that go to http://fluentnhibernate.org or "How FNH works?"), but in general it is based on two design patterns: Proxy and Visitor.

If you are not familiar with those patterns, here's a simple way for you to get them in the context of the Fluent NHibernate magic.

The role of the Proxy design pattern in Fluent NHibernate

If you check out some of the entities in the Vuscode.FNHSamples.Domain project, you will see that all of them are non-sealed classes with all of their members virtual… So, the code looks like this

namespace Vuscode.FNHSamples.Domain
{
    public class Address
    {
        public virtual string Email { get; set; }

        public virtual string City { get; set; }
        
        public virtual string Country { get; set; }
    }
}

The reason why we HAVE TO respect these rules when designing our domain classes is that FNH uses Castle DynamicProxy functionality to (in an overly simplified version) create an "in memory" proxy child class, by inheriting the real one and overriding the properties in the proxy child class, in order to enable intercepting their calls.

In this example, therefore, inside of the FNH engine a "ProxyAddress" class is created which inherits from the Address class but implements additional interfaces and overrides members, effectively adding behaviors and shape to the original Address class at run time. That's how POCO can be achieved: we don't need any attributes, no special base class or base interface etc.
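
Conceptually (and very roughly), the generated proxy is equivalent to something you could write by hand like this. The class and interface names below are purely illustrative; they are not what Castle or FNH actually emits:

namespace Vuscode.FNHSamples.Domain
{
    // Hypothetical stand-in for whatever interception hooks the mapping engine needs.
    public interface IInterceptedEntity
    {
        void OnMemberAccessed(string memberName);
    }

    // Hand-written illustration of the kind of subclass a dynamic proxy generates:
    // it inherits the real entity and overrides its virtual members to intercept calls.
    public class AddressProxy : Address, IInterceptedEntity
    {
        public override string City
        {
            get
            {
                OnMemberAccessed("City"); // the interception happens here
                return base.City;
            }

            set
            {
                OnMemberAccessed("City");
                base.City = value;
            }
        }

        public void OnMemberAccessed(string memberName)
        {
            // The real engine would record/inspect the access; here we do nothing.
        }
    }
}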

Now, in order to understand why FNH needs this, we need to take a quick peek at how the FNH "mapping rules" (properly called conventions) are implemented.

How are the Fluent NHibernate conventions implemented?

Always in the same way (thanks to the brilliant work of the FNH authors): inherit and implement a special interface which has a method accepting a parameter of a certain interface type.

    public class ClassConvention : IClassConvention 
    {
        public void Apply(IClassInstance instance)
        {
            // do something with instance
        }
    }

The reason why our convention has to implement a certain interface is that during run time Fluent NHibernate iterates over all of the types in the given assembly and collects "all of the types implementing IClassConvention", thus "collecting the rules".

The reason why an IClassInstance parameter is passed in is that the Apply method gets SOMETHING implementing that interface, without knowing/caring what that something really is.

This design approach, where you enable your class to work with various entities without coupling to their concrete implementations, can roughly be called the Visitor pattern.

If I use again the same example of the Address and "AddressProxy" classes, imagine that the AddressProxy class created on the fly implements the IClassInstance interface. Wouldn't that enable us to pass an instance of the AddressProxy class to the ClassConvention Apply method so it could perform its functionality on it? :)

The end result of those two patterns is that an instance of the Address class ends up inside of the convention's Apply method, without adding anything to the domain other than virtual keywords on properties.

Iterating, iterating, iterating…

During the runtime mapping process, FNH (overly simplified) iterates over all of the types and creates their proxies.

Every proxy gets certain interfaces added to it which define methods allowing the "outer world" to alter the state of the proxy.
Fluent NHibernate (or the user) then creates a collection of conventions defining the rules for how certain aspects of the proxies should be altered.

Now the code iterates over every proxy, and for every proxy it iterates over all of the conventions and applies the ones matching that proxy.

As the end result of that iteration, we get all of the proxies with their state set up properly.

Then FNH iterates over all of the proxies again and translates the state of each one of them into the XML representation NHibernate requires.

Brilliant, isn’t it?

Back to reality

Now that I have (hopefully) explained in layman's terms how the magic works, we can just go over the conventions I have in my sample and explain the relationship between them and certain pieces of the sample.

Class Convention

This is the convention which tells Fluent NHibernate how to map entities to database tables.

I have only one rule related to that: "A table name should be the plural form of the entity name", so the code doing that is pretty simple:

namespace Vuscode.Framework.NHibernate.Conventions
{
    using FluentNHibernate.Conventions;
    using FluentNHibernate.Conventions.Instances;

    public class ClassConvention : IClassConvention 
    {
        public void Apply(IClassInstance instance)
        {
            instance.Table(Inflector.Pluralize(instance.EntityType.Name));
        }
    }
}

IClassInstance has two key members (key in the sense of this sample):

  • EntityType – provides access to the type being mapped to a database table
  • Table(name) – a method which sets the table name

(To explore capabilities beyond my sample, use IntelliSense, which in the case of fluent interfaces is your best friend.)

Inflector is just a helper class I took from the Castle.ActiveRecord project (a long time ago) whose purpose is to produce the plural form of a given string. I put it in the sample project too, so you can check it out if you wish.
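
If you just want the gist without opening the sample, a naive version of what Pluralize does looks roughly like this (the real Castle implementation uses a table of regex rules and handles many irregular forms):

public static class NaiveInflector
{
    // Grossly simplified pluralization - enough to show the idea, not production ready.
    public static string Pluralize(string word)
    {
        if (word.EndsWith("y"))
        {
            return word.Substring(0, word.Length - 1) + "ies"; // Category -> Categories
        }

        if (word.EndsWith("s") || word.EndsWith("x") || word.EndsWith("ch"))
        {
            return word + "es"; // Address -> Addresses
        }

        return word + "s"; // Blog -> Blogs
    }
}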

So, now that we know all of the pieces, we can read the implementation inside of the Apply method like this:

"Make sure that whatever proxy is sent in gets mapped to a database table whose name is the plural form of the original class the proxy was created from."

(Because every convention is implemented in the same manner – implement an interface and get something injected – I'll skip repeating that for the other conventions.)


DefaultStringPropertyConvention

This is the convention which tells how string properties of a class should be mapped to database columns.

My rules are simple: a default length of 100, and every string can be null.

namespace Vuscode.Framework.NHibernate.Conventions
{
    using FluentNHibernate.Conventions;
    using FluentNHibernate.Conventions.Instances;

    public class DefaultStringPropertyConvention : IPropertyConvention
    {
        public void Apply(IPropertyInstance instance)
        {
            instance.Length(100);
            instance.Nullable();
        }
    }
}

Foreign Key Convention

This is the convention which defines how Fluent NHibernate should behave while mapping association properties to foreign keys.

My rule is simple: "The name of the foreign key is the name of the table being referenced + an ID suffix."

Here's the code from the sample:

using System;

using FluentNHibernate.Conventions;
using System.Reflection;

namespace Vuscode.Framework.NHibernate.Conventions
{
    public class CustomForeignKeyConvention : ForeignKeyConvention
    {
        protected override string GetKeyName(PropertyInfo property, Type type)
        {
            return property == null 
                    ? type.Name + "ID" 
                    : property.Name + "ID";
        }
    }
}

As you can see from the code, it is not an exact translation of my rule, so it needs some additional clarification…

The GetKeyName method accepts two parameters:

  • property (holding a pointer to the property in the entity which references the "parent" entity)
  • type (holding a pointer to the "parent" type)

If we check out the resulting DB diagram

[Database diagram: Blogs table]

and how the domain implementation of Blog looks

using System.Collections.Generic;

namespace Vuscode.FNHSamples.Domain
{
    public class Blog : Entity
    {
        public virtual Author Author { get; set;} // component

        public virtual BlogRoll Roll { get; set; } // references
        
        public virtual IList<Post> Posts { get; set; } // has many

        public virtual string BlogTitle { get; set; }

    }
}

We can clearly see that, because Blog has a Roll property pointing to the parent BlogRoll, FNH took that property name + "ID" and created the RollID foreign key in the Blogs data table. So that explains how the FK convention works when the property being passed in is non-null, and it leaves us with the question "how come that value can be null?"

To answer that, let's check out the other part of the resulting diagram, representing the DB interpretation of the Author class inheritance.

[Database diagram: Author inheritance tables]

As you can see, GuestAuthor and RegularAuthor have their foreign keys named "AuthorID", even though neither of those two has an explicit "parent" property like in the previous case.

In other words, there are no GuestAuthor.Author or RegularAuthor.Author properties, but we still have the need to define a FK in their tables.

In these types of cases, the ForeignKeyConvention gets a null PropertyInfo value (because there is no property) and a pointer to the entity type to be used for defining the database FK (in this example Author). That's how in my sample the foreign keys of those tables become AuthorID.

Many to Many convention

This is the convention which defines how Fluent NHibernate should behave while mapping N – N relationships, where class A has a collection property of class B type and class B has a collection property of class A type.

Here's the code sample from the Vuscode.FNHSamples.Domain Post and Category classes:

namespace Vuscode.FNHSamples.Domain
{
    using System.Collections.Generic;

    public class Post : Entity
    {
        public virtual IList<Category> Categories { get; set; }  // has many to many

        public virtual string Title { get; set; }

        public virtual PostStatus Status { get; set; }

    }
}

A Post can have many Categories

namespace Vuscode.FNHSamples.Domain
{
    using System.Collections.Generic;

    public class Category : Entity
    {
        public virtual IList<Post> Posts { get; set; } // has many to many

        public virtual string Name { get; set; }
    }
}

One Category can be used in many Posts.

I prefer mapping many-to-many relationships using the Association Table Mapping pattern, where an additional table is created with two foreign key columns matching the primary keys of the associated tables.

Here's how it looks in the resulting database model, with the PostsToCategories table being the association table.

[Database diagram: PostsToCategories association table]

I guess after seeing the diagram it is quite obvious what my many-to-many convention is:

The name of the association table should be "TableNameA" + "To" + "TableNameB".

How NOT to implement many to many relationship

As you can read in great detail here, many-to-many is in general a combination of two 1 – N relationships, which, if it were implemented like this

namespace Vuscode.Framework.NHibernate.Conventions
{
    using FluentNHibernate.Conventions;
    using FluentNHibernate.Conventions.Instances;

    public class ManyToManyConvention : IHasManyToManyConvention
    {
        public void Apply(IManyToManyCollectionInstance instance)
        {
            instance.Table(
            	string.Format("{0}To{1}",
                        Inflector.Pluralize(instance.EntityType.Name),
                        Inflector.Pluralize(instance.ChildType.Name))
		);
        }
    }
}

would result in the following database diagram state:

[Database diagram: two association tables]

So, we get 2 association tables, where each one of them covers its own path.

Clearly this is overkill from the DB perspective, because the same table can be used for both paths.

In NHibernate parlance, that is achieved using the inverse relation setting (again, you can read about it in great detail here), which in layman's terms can be explained as: "when you need to map the Post –> Categories relation, please use the already defined Categories –> Posts relation and just invert it".

And that's exactly what the actual implementation in my code sample does. Let's see how this was implemented in the sample code:

namespace Vuscode.Framework.NHibernate.Conventions
{
    using FluentNHibernate.Conventions;
    using FluentNHibernate.Conventions.Instances;

    public class ManyToManyConvention : IHasManyToManyConvention
    {
        public void Apply(IManyToManyCollectionInstance instance)
        {
            if (instance.OtherSide == null)
            {
                instance.Table(
                    string.Format(
                        "{0}To{1}",
                        Inflector.Pluralize(instance.EntityType.Name),
                        Inflector.Pluralize(instance.ChildType.Name)));
            }
            else
            {
                instance.Inverse();
            }
        }
    }
}

The instance.OtherSide is null when there is no already-defined relationship between EntityType and ChildType. In that case the code uses the Table method to set the name of the association table, respecting my naming convention: TableA name + "To" + TableB name. That first branch results in the PostsToCategories association table being created.

Then FNH iterates further (as described above) and the Post –> Category relation gets processed. When this happens, instance.OtherSide is NOT null, so instead of creating a new association table, FNH maps that relation as the inverse of the original one.

Conclusion

This blog post has become really long, so I have decided to split it into two posts. In the next post I'll show a few more conventions and the process of fluent configuration with the switches I've been using in my sample.

Stay tuned :)

3 Nov 2009

Fluent NHibernate Samples on CodePlex

I've been using Fluent NHibernate for more than a year now and I am a big fan of it.

There were only two things bothering me in FNH for all of that 1+ year:

  1. frequent API changes (which pretty quickly made my fluent mapping and auto mapping blog posts completely obsolete – not to mention my code :)), but when I saw how polished FNH got in the 1.0 version I stopped minding – simply beautiful code.
  2. lack of documentation. Don't get me wrong – http://groups.google.com/group/fluent-nhibernate was REALLY useful most of the time, but I was always dreaming about something like the current Fluent NHibernate wiki, which is simply an awesome concentrated amount of useful data.

So, for me there are no more problems left with FNH (besides a few minor bugs no one cares to answer/fix), but in the last couple of days I accidentally found out (here on the FNH mailing list and here on Stack Overflow) that a lot of folks would like to see sample code working with Fluent NHibernate 1.0 (besides the wiki etc).

That's why I decided to put some of my time in today and create a sample project illustrating both fluent and auto mapping on the same example, and that's how I came up with the

CodePlex project – Fluent NHibernate samples

The intention of the project is to initially focus on a single sample illustrating all of the major use cases in a 'real world' manner, and then to slowly grow so that the sample starts covering more and more corner cases. Hopefully, in the long term it will become a C# solution illustrating in one place all of the major FNH usage aspects.

The project can be found at http://fnhsamples.codeplex.com/ together with the source code, downloadable as a zip or via SVN checkout.

Starting solution

So, in order to start the project, and based on the questions I usually hear regarding auto mapping, I've come up with this simple domain model:

[Domain model diagram]

The domain covers an imaginary blog engine where:

  • a blog roll is a group of blogs (e.g. codebetter.com),
  • a blog contains one or many blog posts, where every blog post can have one or many categories
  • a blog is owned by a single author, who can be a regular author (that blog is his primary blog) or a guest (who cross-posts to this blog)

As you can see, in this very simple domain I have cases of:

  • References (N – 1 relation), where many Blogs belong to one BlogRoll (in this sample Blog mimics an aggregate root)
  • HasOne (1 – 1 relation), where an author can have only one blog in a blog roll and all blog posts of a blog are written by a single author only
  • Component, where an author has a complex Address value property of the Address type which (because it is not an entity) we would like to map to the same DB table as Author
  • Subclass (inheritance), where GuestAuthor and RegularAuthor are children of the abstract Author, with each one of them having its own custom properties
  • HasMany (1 – N relation), where a blog has one or many blog posts
  • ManyToMany (N – N relation), where a post can be tagged with many categories and every category can be used in many posts
  • Enumeration, where a post has an enumerated status value column
  • All entities share the same Entity base class which (contrary to the Author entity) is not supposed to be mapped to a separate table (a minimal sketch of what I mean follows after this list)
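
For reference, such a shared base class usually boils down to little more than an identifier. A minimal sketch (the Id type is an assumption made for the sketch; the actual sample may use something different):

namespace Vuscode.FNHSamples.Domain
{
    // Shared base for all entities; its members get mapped into each entity's own table
    // rather than into a table of its own.
    public abstract class Entity
    {
        // Using int here is an assumption; the sample may use Guid or another key type.
        public virtual int Id { get; set; }
    }
}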

Note: I know that this domain model is far from perfect in a modeling sense, but that is not the point here. The point is having a meaningful sample which can be used to showcase all of the Fluent NHibernate magic.

End result

Here's how the DB model looks when created using both fluent and auto mappings:

[Database model diagram]

As you can see, the DB diagram matches the domain model pretty closely, and all of that (in the case of auto mapping) just by utilizing conventions, without any manually defined mappings.

What next?

For the folks curious to check it out now, go to http://fnhsamples.codeplex.com/ where you can download the source, currently containing only the auto mapping based solution. I guess it is so simple that most of us can get it just by looking at the small code base there.

For the rest of you, my next blog post will present the fluent mapping based solution for getting from the model to the DB diagram, and after that the auto mapping based solution. In those two blog posts I will comment "line by line", so even people new to Fluent NHibernate will (I hope) get it.

After those two blog posts, I will have to switch gears and (finally) spit out a couple of things I came up with while doing Prism development.

I hope that by that moment there will be some suggestions on what needs to be added to the sample, so it can start to grow from its trivial starting size.

2 Nov 2009

Me, myself and design patterns

It looks like the Silverlight related posts I've announced will have to wait one more blog post, due to an event which happened to me today and got me thinking about a few architecture related things, which in the end resulted in a few (at least for me) interesting thoughts I felt like sharing with the community.

The event

So, a fellow blogger @RadenkoZec wrote a nice blog post about the Facade design pattern, in whose comments we had a discussion about whether the example is appropriate or not, and whether the implementation is the Facade design pattern or some other pattern. I won't repeat here in detail what we discussed there (go to the blog post and read the comments), but there were a couple of things in that discussion I kept thinking about during the evening…

Patterns should be explored in 3D

It is very interesting that the sites on the net (like dotfactory.com) seem to be primarily focused on GoF patterns, while I was not able to find sites covering PoEAA patterns and/or DDD patterns at the same time.

That fact might look irrelevant, but if you check out the content of today's discussion you will see that the same example code used in the blog post

looked to Radenko like a Facade (GoF)

[Facade design pattern UML diagram]

and to me more like a Gateway (PoEAA)

[Gateway pattern sketch]

If you just take a look at the above pictures, you will surely find at least some similarity, and that's why one should look at all of the patterns as a whole and not be exclusive in picking "The book". Another example of this interpretation conflict can be found between PoEAA and DDD, where patterns such as Repository, Factory etc sometimes have different usages, if not different meanings.

In other words, IMHO every developer should build pattern knowledge in three dimensions:

  • X – DDD, PoEAA, GoF
  • Y – specific patterns and their implementations and implications; anti-patterns too
  • Z – time

For those of you asking what time has to do with this subject, here's the answer: the software landscape has changed significantly in the last 20 years, so some of the "scriptures" should be taken with a grain of salt.

Here are a couple of examples illustrating my point:

  • the Observer pattern doesn't make a lot of sense to implement by hand when .NET gives you events out of the box,
  • Singletons (btw, I will have a dedicated post showing how evil they are) are kind of obsolete with the usage of IoC containers, etc.

Another aspect of the need to consider the time dimension in design pattern exploration is the fact that in the last few years we have all been witnessing both the rise of dynamic languages and the enhancement of static languages, which are getting features that did not exist at the time the "pattern Bibles" were written (lambdas, generics, C# 4.0 dynamic etc). Every one of the patterns should therefore be heretically challenged in light of the current state of technology.

Context is the king

By merely looking at the above diagrams (which for my point can both be treated as pure UML diagrams) it would be really hard to tell what the difference is.

In both cases we have multiple classes encapsulated by one class which orchestrates their calls and exposes a simple API to be consumed.

But if you take a look at the context, you can see, based on the provided code sample, that the packages in the case of the facade are interconnected: each one of the classes depends in its own way on the other classes, and none of them can exist without the others. I could imagine a refactoring merging those 3 classes into one. In other words, even though physically we have multiple classes carrying their own implementations, they are so connected that on the architectural level there is just one entity hiding behind the facade.

That's why for me Facade feels like a "1 – 1" relationship type of pattern, where a facade hides the complexity of a class (similar in a way to Adapter, but let's not digress with that here).

The second case has a clearly different context, where all of the packages behind the "facade" (Gateway) are quite independent. They do not depend on each other at all; they cooperate as partners.

The purpose of the PricingGateway is also to expose a simple API for the pricing package, but this time, when the API is used, the Gateway performs the role of a conductor, orchestrating the calls to the "hidden" elements.

That's why for me Gateway feels more like a "1 – n" relationship type of design pattern.

To summarize: in order to distinguish patterns, one has to understand the problem/implementation context before picking the right flavor – the appropriate pattern.
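
Here is a tiny sketch of how that difference reads in code for me. PricingGateway is the name from the discussion above, while everything behind it (and the whole facade half) is made up purely for illustration:

// Facade flavor: one tightly coupled subsystem; the facade just hides its internals.
public class ReportParser
{
    public string[] Parse(string raw)
    {
        return raw.Split(',');
    }
}

public class ReportRenderer
{
    public string Render(string[] parts)
    {
        return string.Join(" | ", parts);
    }
}

public class ReportingFacade
{
    private readonly ReportParser parser = new ReportParser();
    private readonly ReportRenderer renderer = new ReportRenderer();

    public string BuildReport(string rawData)
    {
        // Parser and renderer only make sense together - conceptually one entity behind the facade.
        return renderer.Render(parser.Parse(rawData));
    }
}

// Gateway flavor: independent collaborators; the gateway orchestrates them like a conductor.
public interface ITaxService
{
    decimal CalculateTax(decimal amount);
}

public interface IDiscountService
{
    decimal ApplyDiscount(decimal amount);
}

public class PricingGateway
{
    private readonly ITaxService taxService;
    private readonly IDiscountService discountService;

    public PricingGateway(ITaxService taxService, IDiscountService discountService)
    {
        this.taxService = taxService;
        this.discountService = discountService;
    }

    public decimal GetFinalPrice(decimal basePrice)
    {
        // Each service is useful on its own; the gateway just coordinates the calls.
        decimal discounted = discountService.ApplyDiscount(basePrice);
        return discounted + taxService.CalculateTax(discounted);
    }
}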

Patterns as a matter of feelings

As you have probably noticed, in the text above I used "feels like to me", which I'm pretty sure looks quite unrelated to an explicit and scientific thing such as design patterns, so let me clarify that for you :)

Like many others, I've stepped through a couple of phases in my pattern life:

  1. Discovery of patterns (we hear about them from some of the cool kids, and all we do at this stage is try to pretend as best as we can that we know what they are talking about)
  2. Trying to get it (AKA "I found/read the GoF book")
  3. Becoming a believer (including making presentations to people in phase #1)
  4. Getting it for real (reading every patterns book, blog post etc you can find and memorizing the UML diagrams, use cases etc)
  5. Seeing patterns everywhere and applying them as much as we can
  6. Paying the price of a highly complex code base
  7. Starting to understand the price of pattern application, and using patterns only when a real design pain is identified which cannot be solved with a simpler (but still clean) solution
  8. Instead of pattern catalogs, starting to focus on basic principles

So, as far as I can tell, I am in phase #8, where all of the patterns have somehow melted into each other and all of them "look similar". I have forgotten half of their UML diagrams and reference implementations. The only thing left in my head is the name of the pattern, a notion of the use case where it is useful, and (very blurry for most of them) how it works.

Reading my last statement, someone could conclude that I just got more stupid (a possible option, I admit :)) and too lazy to remember things, but I would disagree, because every one of the patterns I have faced is based on the same set of plain logic principles which, if you know and feel them, leave you good to go with building "your own" solution which will somehow end up matching some of the patterns.

Here are some examples of the design principles I have in mind: SOLID, open-closed, DRY, KISS, orthogonal code, OOP principles (encapsulation, abstraction), TDD, dependency injection, and the list could go on for pages.

To summarize: I strongly believe that if a developer feels natural about the design principles (spits out code following those principles without even thinking), the knowledge of design patterns can stay at a much more abstract level (in a way, like knowing the page index of a book) without memorizing every bit of their reference implementations (that's why we have Bing, don't we?), and the created code will still comply with all of those patterns.

Conclusion

I hope all of the ramblings I shared with you, my dear reader, now help me make the point of this blog post, which is to explain how come, when Radenko pulled (completely rightfully) into his blog post a couple of reference examples from sites and books by folks far smarter than I am, proving that his example implementation is the same as the ones they provided, the only thing I could tell him is that his example "doesn't feel like a case for Facade to me".

I was not claiming that the guy who wrote a book got it wrong, nor that the dotfactory.com site got it wrong in their sample.

It was just the gut feeling of a pragmatic "duct tape architect", which I would have about that code if it were mine, and which I shared with him and the community.
