.NET and me Coding dreams since 1998!

21 Sep 2012

Awaitable task progress reporting

In my previous Task.WhileAll post I had a very fishy piece of code in the sample: I put the thread to sleep in order to give the async composition enough time to complete so the test could pass.

Well, every time I use Thread.Sleep anywhere I know it is a bad thing, so I've decided to get rid of it.

IProgressAsync<T>

The problem is that the IProgress<T> interface defines a void Report handler which can't be awaited, and that ruins composition.

That's why I decided to define my own "true async" version of the interface, which has the same report method but returns a Task I can await.

    
namespace WhileAllTests
{
    using System.Threading.Tasks;

    public interface IProgressAsync<in T>
    {
        Task ReportAsync(T value);
    }
}

Having an async version of reporting is VERY useful when the subscriber itself awaits further calls. I could get that with async void, but async void IMHO always turns out to be a bad solution, so I chose the Task-returning signature even though I need a custom-made interface for it.

And here's the implementation

    
    using System;
    using System.Threading.Tasks;

    public class ProgressAsync<T> : IProgressAsync<T>
    {
        private readonly Func<T, Task> handler;

        public ProgressAsync(Func<T, Task> handler)
        {
            this.handler = handler;
        }

        public async Task ReportAsync(T value)
        {
            await this.handler.Invoke(value);
        }
    }

No magic here:

  • Instead of Action<T>, the ctor accepts a Func<T, Task> so I can await it
  • ReportAsync asynchronously awaits the Task returned by the handler, enabling composition
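Put together, the publisher/subscriber flow looks like this. This is a minimal, self-contained sketch of mine (the Demo class and the Task.Delay stand-in for real async work are not from the original code):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Copy of ProgressAsync<T> from above, so this sketch is self-contained.
public class ProgressAsync<T>
{
    private readonly Func<T, Task> handler;
    public ProgressAsync(Func<T, Task> handler) { this.handler = handler; }
    public Task ReportAsync(T value) { return this.handler(value); }
}

public static class Demo
{
    public static async Task<int> RunAsync()
    {
        var log = new List<int>();

        // Hypothetical async subscriber: each report awaits further async work.
        var progress = new ProgressAsync<int>(async value =>
        {
            await Task.Delay(10);  // stand-in for e.g. an async save
            log.Add(value);
        });

        // Because ReportAsync returns a Task, the publisher can await the
        // whole subscriber pipeline before carrying on - no Thread.Sleep needed.
        await progress.ReportAsync(42);

        return log.Count; // 1: the handler has fully completed by now
    }

    public static void Main()
    {
        Console.WriteLine(RunAsync().Result); // prints 1
    }
}
```

With async void instead, the publisher would have no way to know when the handler finished, which is exactly the problem the Task-returning signature avoids.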

Now, having this in place, I can update my task extension method to compose the reporting method invocation:

public static async Task<IList<T>> WhileAll<T>(this IList<Task<T>> tasks, IProgressAsync<T> progress)
{
    var result = new List<T>(tasks.Count);
    var remainingTasks = new List<Task<T>>(tasks);
    while (remainingTasks.Count > 0)
    {
        // await only the still-pending tasks; awaiting the full list
        // would return immediately once any one task has completed
        await Task.WhenAny(remainingTasks);
        var stillRemainingTasks = new List<Task<T>>(remainingTasks.Count - 1);
        for (int i = 0; i < remainingTasks.Count; i++)
        {
            if (remainingTasks[i].IsCompleted)
            {
                result.Add(remainingTasks[i].Result);
                await progress.ReportAsync(remainingTasks[i].Result);
            }
            else
            {
                stillRemainingTasks.Add(remainingTasks[i]);
            }
        }

        remainingTasks = stillRemainingTasks;
    }

    return result;
}

With all this in place I can remove the thread sleep from my unit test and make it more useful:

 

    [TestClass]
    public class UnitTest1 {

        List<int> result = new List<int>();

        [TestMethod]
        public async Task TestMethod1() {

            var task1 = Task.Run(() => 101);
            var task2 = Task.Run(() => 102);
            var tasks = new List<Task<int>>() { task1, task2 };

            var listener = new ProgressAsync<int>(this.OnProgressAsync);
            var actual = await tasks.WhileAll(listener);

            Assert.AreEqual(2, this.result.Count);
            Assert.IsTrue(this.result.Contains(101));
            Assert.IsTrue(this.result.Contains(102));

            Assert.AreEqual(2, actual.Count);
            Assert.IsTrue(actual.Contains(101));
            Assert.IsTrue(actual.Contains(102));
        }

        private Task OnProgressAsync(int arg) { this.result.Add(arg); return Task.FromResult(0); }
    }

There you go – the updated Task.WhileAll code can be found here.

Filed under: Development
20 Sep 2012

Task.WhileAll

When Task.WhenAll met the Task.WhenAny…

There are two methods which can be used for awaiting an array of tasks in a non-blocking manner: Task.WhenAll and Task.WhenAny.

It is quite obvious how they work:

  • WhenAll completes when every task has completed,
  • WhenAny completes when any of the tasks has completed.

Today I needed something which is a mix of those two, and I came up with something which completes when everything has been awaited but also provides a hook for me to respond as individual tasks complete.

I’ve called my little extension: Task.WhileAll.

Extension method

Here's the complete implementation

    
public static class TaskExtensions
{
    public static async Task<IList<T>> WhileAll<T>(this IList<Task<T>> tasks, IProgress<T> progress)
    {
        var result = new List<T>(tasks.Count);
        var done = new List<Task<T>>(tasks);
        while (done.Count > 0)
        {
            // await only the still-pending tasks, otherwise WhenAny
            // returns immediately once any one task has completed
            await Task.WhenAny(done);
            var spinning = new List<Task<T>>(done.Count - 1);
            for (int i = 0; i < done.Count; i++)
            {
                if (done[i].IsCompleted)
                {
                    result.Add(done[i].Result);
                    progress.Report(done[i].Result);
                }
                else
                {
                    spinning.Add(done[i]);
                }
            }

            done = spinning;
        }

        return result;
    }
}

 

The code is quite simple:

  • it is an async extension method extending IList<Task<T>>
  • on completion, the method returns IList<T> (the results of the awaited tasks)
  • the method accepts an IProgress<T> interface, publishing information about the tasks that have just completed to interested subscribers
  • inside the method body there is a loop which stays active as long as there are tasks with IsCompleted == false
  • the loop has a “sleep” line where I use await Task.WhenAny to asynchronously wait for any task to complete

Unit test

Here’s a simple unit test illustrating the usage of the extension method:

  
    using System;
    using System.Collections.Generic;
    using System.Threading;
    using System.Threading.Tasks;

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class UnitTest1 {
        [TestMethod]
        public async Task TestMethod1() {

            var task1 = Task.Run(() => 101);
            var task2 = Task.Run(() => 102);
            var tasks = new List<Task<int>>() { task1, task2 };


            List<int> result = new List<int>();
            var listener = new Progress<int>(
                taskResult => {
                    result.Add(taskResult);
                });

            var actual = await tasks.WhileAll(listener);
            Thread.Sleep(50); // wait a bit for progress reports to complete

            Assert.AreEqual(2, result.Count);
            Assert.IsTrue(result.Contains(101));
            Assert.IsTrue(result.Contains(102));

            Assert.AreEqual(2, actual.Count);
            Assert.IsTrue(actual.Contains(101));
            Assert.IsTrue(actual.Contains(102));
        }
    }

Again, nothing complicated:

  • I create a list of two dumb tasks
  • I define the listener which takes the task result and (for the unit test's needs) adds it to the collection of results
  • I await the extension method on the task list with the provided listener
  • I wait 50 ms so the progress reports have time to finish (the code is async, after all)
  • I check that both tasks were reported to the subscriber
  • I check that both tasks are returned in the result.

Conclusion

That’s it. Dead simple code which I use A LOT to increase the CPU utilization of my web crawlers.

Source code of the extension method and unit test can be found here.

Filed under: Development
14 Jun 2012

Hacking the KendoUI grid

Here’s the thing: I love KendoUI grid!

It allows a server side desktop guy like me to get really nice results. In case you don’t have a clue what I am talking about, go and look at their demo page to see what KendoUI can do.

The only problem I have with KendoUI is that it is a “client side” UI technology, which I am sure is the way to go in 2012 and perfect for many folks, but I personally prefer doing as much as I can in C# and using JavaScript only when I have to.

Source code of today’s blog post can be found here

What is the problem?

Imagine a use case where you have a page for entering a salary document, which has a header with a date and a number, and a list of employees getting paid in that month.

Something like this

    
     public class CartHeaderModel
    {
        public string Number { get; set; }
        public DateTime Date { get; set; }

        public IList<CartItemModel> Items { get; set; }
    }

    public class CartItemModel
    {
        public string FullName { get; set; }

        public decimal NetAmount { get; set; }
        public decimal GrossAmount { get; set; }
    }

Client side story

To enable entering this data using MVC, one might create a page with two text boxes and a grid for showing and entering the employee data. (A similar example is a cart with cart items, and other master-detail LOB scenarios.)

Something like this

    
@model KendoPractice.Models.CartHeaderModel

<body>
    @using (Html.BeginForm((string)ViewBag.FormAction, "Home"))
    {
        <fieldset id="employee-new">
            <legend>Payroll data</legend>

            <div>
                @Html.LabelFor(m => m.Number)
                @Html.TextBoxFor(m => m.Number)
                @Html.ValidationMessageFor(m => m.Number)
            </div>
            <div>
                @Html.LabelFor(m => m.Date)
                @Html.TextBoxFor(m => m.Date)
                @Html.ValidationMessageFor(m => m.Date)
            </div>
            <div id="grid"></div>

            <input type="hidden" id="items" name="items" />
            <input type="submit" value="Save data" />
        </fieldset>
    }
</body>

A few things are going on here:

  • There’s a form with two text boxes for the header information
  • The grid is represented by just an empty div element with id=”grid”
  • There is one hidden input field named “items” which will be used to post the client grid state to the server.

This is not supposed to be a post about Kendo (I am planning to do a few How-Tos later, but not now), so here’s a very short explanation of how the grid gets hooked up – it is quite simple.

            $("#grid").kendoGrid({
                dataSource: salaryDataSource,
                editable: { mode: "incell" },
                columns: [
                    { title: "Employee name", field: "FullName", width: 90, validation: { required: true } },
                    { title: "Net wage", field: "NetAmount", width: 90, validation: { required: true } },
                    { title: "Gross wage", field: "GrossAmount", width: 90, validation: { required: true } },
                ],
            });

As you can see, a jQuery selector gets hold of the div with id grid and then wraps it with .kendoGrid(), with a few properties set up so the column headers etc. are configured. The data comes from an object called salaryDataSource, which looks like this:

	 
	var itemsData = @Html.Raw(ViewData["gridInitContext"]);
	var salaryDataSource = new kendo.data.DataSource({
                data: itemsData,

                change: function (e) {
                    var datas = salaryDataSource.data();
                    var result = "[";
                    var separator = "";
                    for (var i = 0; i < datas.length; i++) {
                        result += separator + JSON.stringify(datas[i]);
                        separator = ",";
                    }
                    result += "]";
                    $("#items").val(result);
                }
            });

 

A few interesting things here:

  • The DataSource is hooked to a variable called itemsData, whose content is injected from the server using ViewData[“gridInitContext”]
  • Every time the data source changes, all of the bound items are concatenated into a JSON array and the value of the hidden field with id items is set to that JSON value.

To summarize the client side story:

  • KendoUI wraps the div and provides rich grid UI functionality without any ajax calls
  • The initial data context of the grid is injected as a JSON collection from the server using ViewData[“gridInitContext”]
  • Every time the bound data of the data source changes, the value of the hidden input field is updated to its JSON representation, and on page post that value gets posted.
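For illustration, with the seed rows used in the Home controller below (John Doe and Jane Doe), the posted items value would look roughly like this (exact number formatting may vary):

```json
[{"FullName":"John Doe","NetAmount":456,"GrossAmount":123},
 {"FullName":"Jane Doe","NetAmount":666,"GrossAmount":555}]
```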

Server side story

With those points in mind, we are now going to quickly check the Home controller:

    
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            //  in real world this comes from a repository etc - blog post only code
            var model = new CartHeaderModel()
            {
                Number = "12345",
                Date = DateTime.Now,
                Items = new List<CartItemModel>()
                {
                    new CartItemModel { FullName = "John Doe", GrossAmount = 123, NetAmount = 456},
                    new CartItemModel { FullName = "Jane Doe", GrossAmount = 555, NetAmount = 666},
                }
            };

            // pass to the view the JSON representation of the grid's init context
            ViewData["gridInitContext"] = JsonConvert.SerializeObject(model.Items);

            return View(model);
        }

        [HttpPost]
        public ActionResult Index(string items, CartHeaderModel cartHeaderModel)
        {
            cartHeaderModel.Items = Newtonsoft.Json.JsonConvert.DeserializeObject<List<CartItemModel>>(items);

            // now with complete model we proceed as there was no kendo at all.
            return View();
        }
    }

A few interesting moments:

  • The Index GET action serializes the items collection to JSON and stores it in view data so KendoUI can initialize the grid from it.
  • The POST action gets two params, where the items one contains the JSON representation of the client state; it gets deserialized and the model is completed.


What is the point of all this?

It is quite simple:

  • Have a server side code as ignorant as possible about the KendoUI.
  • Enable KendoUI to work in its full coolness.

In other words, I wanted KendoUI to be a topping on my MVC cake, and not part of the cake itself.

Samples on the kendoui.com site always show the use case where you define a CRUD controller just to feed the grid, which IMHO breaks MVC because half of your model gets to the server item by item via ajax calls, and the other half arrives after the page post.

I don’t know if this would be useful to anyone, but I wanted to share it with the community just in case there is someone out there.

Filed under: MVC
14 Apr 2012

One BIG problem with Azure Tables

Migrating SQL Azure to Azure Tables - GUID gotcha

Windows Azure rocks! I am so impressed with the power it gives me with the price I can afford that I started porting the codebase I work on at home in my spare time.

So far, I’ve connected my clients to Azure Service Bus Topics (works great), created my own custom app SkyDrive using Azure Blob storage (works great) and today I tried to port my SQL Server databases to Azure Table Storage.

3 main reasons when to pick Azure Tables – fulfilled

The reason why I decided to port my data is that, in a way, I was already using my SQL Server DB as if it were a table store:

  • My primary key of the table is sequential Guid (in order to remove performance issue caused by normal guid PKs)
    RowKey checked
  • In some of the tables I have a column OwnerID which I already use to horizontally partition my data –
    PartitionKey checked
  • I do all of the joins etc. on the client side and perform only two select queries: SelectByPK and SelectNewerPKs. The second select takes an anchor identity value as a client input parameter and returns all of the rows whose RowKey is greater than the given value. I use it to get the change set / data deltas which I need to sync from server to client DB, in order to update the client DB with the new data existing in the cloud (i.e. get me all of the rows whose PK is greater than the given value).
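The actual query is a stored procedure, but its semantics can be sketched in LINQ (my illustration, not code from the app). Note the use of SqlGuid so that "greater than" matches SQL Server's GUID ordering, which is exactly what Azure Tables breaks, as shown further down:

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlTypes;
using System.Linq;

public static class DeltaQuery
{
    // Returns all keys "newer" than the anchor, using SQL Server's
    // SqlGuid ordering (the last six bytes are the most significant).
    public static IList<Guid> SelectNewerPKs(IEnumerable<Guid> allKeys, Guid anchor)
    {
        var sqlAnchor = new SqlGuid(anchor);
        return allKeys
            .Where(k => new SqlGuid(k).CompareTo(sqlAnchor) > 0)
            .OrderBy(k => new SqlGuid(k))
            .ToList();
    }
}
```

With sequential GUID keys this returns rows in insert order, which is what makes the anchor-based delta sync work.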

No need to bother you with more details about my database; I guess it is clear, just based on these 3 things, how good a match my DB structure is for moving to Azure Tables.

Why Azure tables?

Quite simple: price and scalability.

SQL Azure is very affordable (especially after the last price cut): a 5 GB DB costs only 25 USD/month, which is really nothing. Still, if your app architecture doesn’t use SQL Server's relational capabilities and relies primarily on clients and “PK selects” (as mine does), then Azure Tables can be used, and the price for 5 GB of storage is 0.625 USD/month. Let me repeat that one more time: 5 GB of table storage would cost me less than one USD per month. Completely and utterly insanely awesome!

Scalability is the same story as the price. While SQL Azure is making big steps towards enabling sharding scenarios with SQL Azure Federations, tables have had partitioning built in from “day 1” as a fundamental design principle, allowing datasets whose size is measured in TBs. Awesome!

So what is the problem then with Azure Tables?

The problem is that Azure Tables sorts using Comparer<Guid> (the .NET approach) and not Comparer<SqlGuid> (the SQL Server and SQL Compact approach).

In other words

"SQL Server and Azure Tables sort rows differently

when the RowKey is a unique identifier (GUID),

which leads to row shuffling"

 

Let me explain it with one simple example…

Let’s say we have the following dummy table in a SQL Server database:

(screenshot: the GuidTest table, with a uniqueidentifier ID primary key and a Name column)

And let’s say this table has two rows:

  1. ID: 7240963F-D384-4D78-BADF-A03300F678CC, Name:  First
  2. ID: 15DF6719-9671-43D3-BAFA-A03300F678CD, Name:  Second

If I execute the following query on SQL Server and SQL Compact (the databases I support on local client boxes):

SELECT * FROM GuidTest ORDER BY ID

I get (not surprisingly) the following results:

(screenshot: rows returned in order First, then Second)

Now, if I insert the same data into an Azure Table (using Neudesic’s Azure Storage Explorer is one easy way to do that) and run the same query, I get the opposite result: the row that is second in SQL Server comes first in Azure Table storage.

image

Why it happens?

.NET and SQL Server compare GUID values differently – here’s a simple illustration:

using System;
using System.Data.SqlTypes;
namespace ConsoleApplication6
{
    class Program
    {
        static void Main(string[] args)
        {
            // .NET guid
            Guid first = Guid.Parse("7240963F-D384-4D78-BADF-A03300F678CC");
            Guid second = Guid.Parse("15DF6719-9671-43D3-BAFA-A03300F678CD");

            // .SQL server guid
            SqlGuid firstSql = new SqlGuid(first);
            SqlGuid secondSql = new SqlGuid(second);

            Console.WriteLine(".NET compare:" + first.CompareTo(second));
            Console.WriteLine("SQL compare:" + firstSql.CompareTo(secondSql));

            Console.ReadKey();
        }
    }
}

And here’s the result illustrating the difference: .NET says first is greater than second (CompareTo returns 1), while SQL Server says it is smaller (CompareTo returns -1).

What now?

In my case, I am probably going to drop Azure Tables and use SQL Azure, because I have a single piece of code shared between the cloud and the clients which performs the SelectNewer on SQL clients, so the different sort results would break my cloud data sync code.

The other option I am considering is custom code for Azure Tables, maybe utilizing the mandatory timestamp values, but that is probably going to end up too complicated.

A real bummer that the Azure team chose a different path than the SQL Server one – it was too good to be true.

Filed under: Azure
21 Jan 2012

What is new in WCF in .NET 4.5 – Task and async

.NET 4.5 WCF – unit testable out of the box

As I already mentioned in How to get testable WCF code in the simplest way?, the problem with abstracting WCF services arises from the fact that the service contract by definition does not contain async members, and every solution I’ve seen for enabling asynchronous calls to a WCF service adds a certain level of complexity to the code base. That is why I had chosen to use a service-generated proxy enhanced with some T4 magic creating the appropriate interfaces.

I am happy to report that this is not true any more and that

WCF in .NET 4.5 enables VERY easy asynchronous service calls in a testable manner out of the box.

Here’s the source code of the simplest illustration of the point. Usual constraints apply: works-for-me and take-it-as-an-idea-only.

In case all these async / Task<T> C# 5.0 things are new to you, I suggest checking out some of the presentations from my TPL delicious stack (especially the “Future Directions For C#…” one).

Server side code

Let’s stick to the simplest possible sample: a vanilla WCF service with a single operation returning the server time.

using System;
using System.ServiceModel;
using System.Threading.Tasks;

namespace WcfService1
{
    [ServiceContract]
    public interface ITestService
    {
        [OperationContract]
        Task<DateTime> GetServerTimeAsync();
    }
}

As you can notice, there are two interesting moments in the contract definition:

  • The return type is not DateTime – it is Task<DateTime>
  • The name of the operation ends with Async, which is the naming convention for marking the new C# 5.0 async methods

Implementation of the service contract is equally trivial:

using System;
using System.Threading.Tasks;

namespace WcfService1
{
    public class TestService : ITestService
    {
        public async Task<DateTime> GetServerTimeAsync()
        {
            return DateTime.Now;
        }
    }
}

The implementation has three key moments:

  • The method name ends with Async
  • It returns Task<DateTime>
  • The method has the async keyword, which allows me to write a simple method body like I normally would, return a simple date-time, and completely forget about Task<T>

In other words, thanks to C# 5.0, all I have to do is replace DateTime with async Task<DateTime> and everything else stays the same – AWESOME!

Client side code

I am going to add a simple console application to the solution and create a trivial service client file:

using System;
using System.ServiceModel;
using System.Threading.Tasks;
using WcfService1;

namespace ConsoleApplication1
{
    public class TestServiceClient : ClientBase<ITestService>, ITestService
    {
        public Task<DateTime> GetServerTimeAsync()
        {
            return Channel.GetServerTimeAsync();
        }
    }
}

No magic here: using the shared service library I get the service contract on the client and use it in combination with ClientBase<T> to create a simple wrapper class implementing the service contract via delegation.

Now, the class which simulates one performing a WCF service call in its implementation:

using System;
using System.Threading.Tasks;
using WcfService1;

namespace ConsoleApplication1
{
    public class ServerTimeReader
    {
        private readonly ITestService testService;

        public ServerTimeReader(ITestService testService)
        {
            this.testService = testService;
        }

        public async Task<DateTime> GetTimeAsync()
        {
            return await this.testService.GetServerTimeAsync();
        }
    }
}

The ServerTimeReader has an ITestService-based dependency injected through its constructor. It has a method called GetTimeAsync which awaits the async WCF service call. All the mumbo jumbo of APM, events etc. in a single keyword – brilliant.

Now that we have a class invoking a WCF call, let’s ramp up an IoC container and make an async call to the server using the code we wrote so far.

using System;
using Microsoft.Practices.Unity;
using WcfService1;

namespace ConsoleApplication1
{
    class Program
    {
        private static UnityContainer container;

        static void Main(string[] args)
        {
            container = new UnityContainer();
            container.RegisterType<ITestService, TestServiceClient>();

            ReadTime();

            Console.ReadLine();
        }

        private static async void ReadTime()
        {
            var serverTimeReader = container.Resolve<ServerTimeReader>();
            var serverTime = await serverTimeReader.GetTimeAsync();

            Console.WriteLine("Server time:" + serverTime);
        }
    }
}

It's a console app, so the entry point is the static Main method, which creates a new instance of an IoC container (Unity in this sample) and adds to it the mapping of the server side service contract to the service client I wrote.

Then Main calls the ReadTime method, which uses the IoC container to resolve a ServerTimeReader instance, injecting the service client during that resolution. The code then awaits the GetTimeAsync method, which awaits the service client call, which results in an asynchronous call to the server being made and awaited on the client.

Once the server returns the result, the client shows it in the console – that’s it.

(screenshot: console output showing the server time)

Conclusion

The simplicity of the code performing a fully async call to a WCF service is so brilliant that I am not going to write a unit test here for the GetTimeAsync method, because it should be quite obvious how to do that. The code is almost the same as it would be if it were written for sync WCF calls, and to learn how to tackle the TPL/async specifics, check out this Stack Overflow page recommended by my friend Slobodan Pavkov.
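For completeness, here's roughly what such a test could look like, using a hand-rolled fake in place of the WCF client (FakeTestService and the test names are my own, not from the sample):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using WcfService1;

// Hand-rolled fake standing in for the WCF client; no mocking framework needed.
public class FakeTestService : ITestService
{
    public static readonly DateTime KnownTime = new DateTime(2012, 1, 21);

    public Task<DateTime> GetServerTimeAsync()
    {
        // Task.FromResult gives us an already-completed task: no real service call.
        return Task.FromResult(KnownTime);
    }
}

[TestClass]
public class ServerTimeReaderTests
{
    [TestMethod]
    public async Task GetTimeAsync_ReturnsTimeFromService()
    {
        var reader = new ServerTimeReader(new FakeTestService());

        var time = await reader.GetTimeAsync();

        Assert.AreEqual(FakeTestService.KnownTime, time);
    }
}
```

Because the dependency is just ITestService, swapping the real proxy for a fake is a one-liner; that is the whole "testable out of the box" point.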

That’s it folks – hope this will be useful to someone!

10 Jan 2012

Silverlight Data Pager, fake ItemsCount and stored procedures

Silverlight data pager control in real world

What is the better thing one could wish to blog about after a year of blog silence than something as simple as it gets: how to use the Silverlight DataPager control with server side paging?

There’s a swarm of blog posts showing how to do that, but every one I saw does it with WCF data services, using DataService<T> and Entity Framework, which is a really great way to do it except that it violates one of the holiest of all enterprise software laws:

Thou shalt always and only use
stored procedures when accessing the database.

Well, as simple as it looks, I couldn’t google it with Bing, so here’s a blog post showing how to do such a dumb thing. It’s a “works-for-me” solution and it does feel hacky, so take it with a grain of salt and treat it just as an illustration of the idea.

Source code with the sample used in this post can be downloaded from here

What is exactly the problem?

The problem is that a given database can contain a million rows, which I don’t want to download to the client in their entirety, so I’m using the DataPager control to get the result page by page.

If I were allowed to use a WCF DataService (e.g. a RIA service), the solution would be quite trivial: my WCF service would expose IQueryable<User>, the SL client would use a Linq statement which would be transferred over the wire to the server, translated there into an Entity Framework ORM call, and the data would flow back in a second.

Unfortunately, I am not allowed to use ORMs so the dynamic SQL based solutions are out of the question.

The solution

The solution used in this blog post is a vanilla Silverlight project with one web project containing:

  • a WCF service called UsersService, which receives the desired page index and page size from the Silverlight client and calls the stored procedure using the DAL
  • a User DTO class which has only 3 properties: Id, Name and BirthDate
  • a UserDAL class which imitates the DAL code, simulating a DB with 200 rows in a table and a stored procedure which returns the result page for the given parameters

Here's that dummy DAL code:

using System;
using System.Collections.Generic;
using System.Linq;

namespace SilverlightApplication1.Web.Model
{
{
    public class UserDAL
    {
        private static readonly IList<User> _users = new List<User>();

        static UserDAL()
        {
            for (int i = 0; i < 200; i++)
            {
                _users.Add(new User { Id = i, Name = "User " + i, BirthDate = DateTime.Now.AddDays(-i) });
            }
        }

        public static IList<User> InvokeStoreProcedure(int pageIndex, int pageSize, out int totalCount)
        {
            totalCount = _users.Count;
            int startIndex = pageIndex * pageSize;
            return _users.Skip(startIndex).Take(pageSize).ToList();
        }
    }
}
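The post doesn't show the UsersService itself, but based on the client code further down (which reads ea.Result.TotalCount, ea.Result.StartIndex and ea.Result.Items from the proxy's completed event), it could look roughly like this. The class and member names here are inferred, not taken from the sample:

```csharp
using System.Collections.Generic;
using System.ServiceModel;
using SilverlightApplication1.Web.Model;

namespace SilverlightApplication1.Web
{
    // Result shape guessed from the client-side completed handler below.
    public class UsersPageResult
    {
        public int TotalCount { get; set; }
        public int StartIndex { get; set; }
        public IList<User> Items { get; set; }
    }

    [ServiceContract]
    public class UsersService
    {
        [OperationContract]
        public UsersPageResult GetResults(int pageIndex, int pageSize)
        {
            int totalCount;
            // "Stored procedure" call via the DAL shown above.
            var items = UserDAL.InvokeStoreProcedure(pageIndex, pageSize, out totalCount);
            return new UsersPageResult
            {
                TotalCount = totalCount,
                StartIndex = pageIndex * pageSize, // absolute position of the first row
                Items = items
            };
        }
    }
}
```

Returning StartIndex alongside the page lets the client cache rows by absolute position, which is exactly what the dictionary in UserPagedCollectionView does.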

On the Silverlight client side we have:

  • a UsersService WCF service proxy
  • MainPage.xaml, containing the data grid and data pager markup
  • a MainPageViewModel class, which is the view model of the main page

Here’s the view xaml, which data-binds the DataGrid and DataPager to a Users collection property:

<UserControl x:Class="SilverlightApplication1.MainPage"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:sdk="http://schemas.microsoft.com/winfx/2006/xaml/presentation/sdk"
             Height="600"
             Width="800">
    <Grid x:Name="LayoutRoot">
        <Grid.RowDefinitions>
            <RowDefinition Height="*" />
            <RowDefinition Height="Auto" />
        </Grid.RowDefinitions>
        <sdk:DataGrid ItemsSource="{Binding Path=Users, Mode=TwoWay}"
                      AutoGenerateColumns="True" />
        <sdk:DataPager Grid.Row="1"
                       Source="{Binding Path=Users}" />

    </Grid>
</UserControl>

 

The code behind that xaml only wires up the view model and view:

namespace SilverlightApplication1
{
    public partial class MainPage
    {
        public MainPage()
        {
            InitializeComponent();

            Loaded += (sender, args) =>
                          {
                              ViewModel = new MainPageViewModel();
                          };
        }

        public MainPageViewModel ViewModel
        {
            get { return (MainPageViewModel) DataContext; }
            set { DataContext = value; }
        }
    }
}

And here's the view model class:

using System.ComponentModel;

namespace SilverlightApplication1
{
    public class MainPageViewModel : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        public MainPageViewModel()
        {
            Users = new UserPagedCollectionView() { PageIndex = 0, PageSize = 25 };
            Users.Init();
        }

        private UserPagedCollectionView users;
        public UserPagedCollectionView Users
        {
            get { return this.users; }
            set { this.users = value; OnPropertyChanged("Users"); }
        }

        public void OnPropertyChanged(string propertyName)
        {
            PropertyChangedEventHandler handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

No magic happening in the constructor of the view model:

  • it sets the view model’s Users property to an instance of UserPagedCollectionView
  • it invokes Users.Init()

Secret sauce

Obviously, the only thing left unshown is the UserPagedCollectionView class, where the magic happens. The class itself is a bit too long to paste in full, so here is just the juicy stuff:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Collections.Specialized;
using System.ComponentModel;
using SilverlightApplication1.UsersServiceProxy;

namespace SilverlightApplication1
{
    /// <summary>
    /// Endless paged collection view for testing purposes.
    /// </summary>
    public class UserPagedCollectionView : IEnumerable, IPagedCollectionView,
                                           INotifyPropertyChanged, INotifyCollectionChanged
    {
        private bool isPageChanging;

        private int pageSize;

        private readonly UsersServiceClient proxy;
        readonly Dictionary<int, User> users = new Dictionary<int, User>();

        private int GetPageCount()
        {
            var result = ItemCount / pageSize;
            if (result * pageSize < ItemCount)
            {
                result++;
            }

            return result;
        }

        public UserPagedCollectionView()
        {
            proxy = new UsersServiceClient();
            proxy.GetResultsCompleted += (sender, ea) =>
                                             {
                                                 ItemCount = ea.Result.TotalCount;
                                                 int position = ea.Result.StartIndex;
                                                 foreach (var user in ea.Result.Items)
                                                 {
                                                    if (!users.ContainsKey(position))
                                                    {
                                                        users.Add(position, user);
                                                    }
                                                    position++;
                                                 }
                                                 OnCollectionChanged();
                                             };

            PageChanged += (sender, ea) => Init();
        }

        public void Init()
        {
           proxy.GetResultsAsync(PageIndex, pageSize);
        }

        public IEnumerator GetEnumerator()
        {
            var startIndex = PageIndex * pageSize + 1;
            var endIndex = startIndex + pageSize;

            var result = new List<User>();

            for (int i = startIndex; i < endIndex; i++)
            {
                if (users.ContainsKey(i))
                {
                    result.Add(users[i]);
                }
            }

            return result.GetEnumerator();
        }
    }
}

Major points:

  • The class implements the IPagedCollectionView interface.
  • It has an Init() method which simply calls the service on the server and gets the page of data for the page index and page size values coming from the DataPager.
  • It has a Dictionary<int, User> field named users which caches the user data retrieved from the server, keyed by item position.
  • It has a GetPageCount() method which returns the total number of pages (regardless of the number of items the data pager is bound to).
  • In its constructor it creates an instance of the WCF service proxy and subscribes to the GetResultsCompleted event, where it takes the service results (total number of items, current page index and the page's users), stores them as the data pager context, and raises a CollectionChanged event informing the UI that it needs to refresh.
  • Every time the PageChanged event fires (clicking one of the arrows, entering a page number, etc.) it invokes the Init method, which again calls the server to get the data for the new page.
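To make the page-count arithmetic concrete, here is a minimal, self-contained sketch of the same logic. The PagingMath class and its sample numbers are mine, added for illustration; only the division-and-round-up logic mirrors the GetPageCount() method above.

```csharp
using System.Diagnostics;

// 200 items in pages of 25 => 8 pages – the numbers from this post.
Debug.Assert(PagingMath.GetPageCount(200, 25) == 8);
// A partially filled last page still counts as a page: 201 items => 9 pages.
Debug.Assert(PagingMath.GetPageCount(201, 25) == 9);

static class PagingMath
{
    // Mirrors GetPageCount(): integer division, rounded up when a remainder exists.
    public static int GetPageCount(int itemCount, int pageSize)
    {
        var result = itemCount / pageSize;
        if (result * pageSize < itemCount)
        {
            result++;
        }

        return result;
    }
}
```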

Result

That’s it – on initial page load only 25 rows are retrieved, but the page count is set to 8 (as 8 x 25 = 200) – exactly what I needed. Clicking the next page again retrieves only the next 25 rows from the server, and so on. There are no requirements regarding using an ORM, and the page data is retrieved using a stored procedure (mocked in this post with a simple in-memory method).

Can it be better? Sure: bring sorting into the game, preemptively load future pages, don’t hit the server if you already have the data, etc. As I said, this is not production code, just an illustration of how I solved a problem I couldn’t find a solution for on the net – so I hope it saves someone some time in the future. That’s all.


Filed under: Silverlight 1 Comment
25Nov/10

Naked MVVM – simplest way to do WCF code

How to get testable WCF code in simplest way?

What is the problem?

We all know that creating an instance of a service proxy inside a view model makes writing tests for that view model very hard, because during a unit test run we usually don’t have the web service on the other side – and even if we do, it slows the tests down.

You know how they say

“A unit test is a test which runs without any problem with the network cable unplugged.”

Like the MVVM problem from the previous post, this one has been covered in so many blog posts that even I am personally aware of a couple of cool and ‘frameworkish’ ways to solve it: use WCF behaviors, create your own ChannelFactory<T> with either a sync call on a separate thread or an IAsyncResult-based approach, and (my personal favorite) hack the Visual Studio proxy generator. I’m sure there are at least 24 more ways to do this :)

Still, all the approaches I saw suffer from one of the next two problems:

  1. They deal purely with async-based scenarios.
    If I have a service with a method GetForecast(DateTime date), I don’t want to maintain another interface just to get a way to make an async call.
  2. They are rocket-science type solutions.
    We are all geeks and like nice and shiny toys, but what about regular folks like me and a lot of the readers? Is there a really simple way to do this for “us others”?

Luckily, I think I found one. It is definitely not the coolest one, and it can 100% be enhanced, but it is the one which proved, in my day-to-day WPF/SL coding, to be the easiest one “to grok and use”.

Conceptual solution

The solution follows these design goals:

  • doesn’t require any manual typing
  • it uses the Visual Studio proxy generated with the “Add Service Reference…” menu action
  • it uses the well-documented MethodAsync() invoker / MethodCompleted event subscriber pattern
  • it uses T4 to auto-generate code which enhances the VS-generated service proxy
  • every service proxy file follows the naming convention of ending with the word “Proxy”

A year ago, I blogged in great detail about the unfortunate fact that the ServiceClient generated in a service proxy does not implement an IServiceClient interface. In case you want to understand what my solution does under the hood, go read that blog post now and then continue with this one. In case “you don’t care how it works as long as it is working”, here’s a very short summary for you:

The ServiceClient generated by the proxy generator is marked as a partial class. That allows me to create another partial class with the same name and namespace outside of the proxy, whose only purpose is to hook up the IServiceClient interface I generated manually based on the ServiceClient itself.

In the original blog post I did it manually, which turned out to be a PITA because with every service contract change the interface had to be updated by hand. After noticing I was wasting a lot of time on that, I spent 20 minutes and created a simple T4 template which does it automatically for me.

You can download the source code of end solution here.

Before

Project structure is very trivial. It is vanilla Silverlight project which has a TimerService WCF service doing just this

using System;
using System.ServiceModel;
using System.ServiceModel.Activation;

namespace NakedMVVM.Web
{
    [ServiceContract]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class TimerService
    {
        [OperationContract]
        public string GetTime()
        {
            return "Yes it works on " + DateTime.Now;
        }
    }
}

Once we add a service proxy to the NMVVM_WCF project (NOTE that the proxy name ends with Proxy)


We can now happily write our demoware code…

namespace NMVVM_WCF
{
    using System.ComponentModel;
    using System.Runtime.Serialization;

    using NMVVM_WCF.TimerServiceProxy;

    public class MainPageViewModel : INotifyPropertyChanged
    {
        public MainPageViewModel()
        {

            TimerServiceClient client = new TimerServiceClient();
            client.GetTimeCompleted += OnGetTimeCompleted;
            client.GetTimeAsync();
        }

        private void OnGetTimeCompleted(object sender, GetTimeCompletedEventArgs e)
        {
            Message = e.Result;
        }

        [DataMember]
        private string message;

        public string Message
        {
            get
            {
                return this.message;
            }
            set
            {
                if (this.message == value)
                {
                    return;
                }
                this.message = value;
                this.OnPropertyChanged("Message");
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        public void OnPropertyChanged(string propertyName)
        {
            PropertyChangedEventHandler handler = this.PropertyChanged;
            if (handler != null)
            {
                handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }
    }
}

Nothing wrong with this code per se, it just makes unit testing the view model a much harder task than it should be…

After

To fix this problem, let’s do the next two steps:

  • download the T4 template file (no need to look in what it contains at all) from here .
  • add the file to the root folder of NMVVM_WCF project using VS IDE “Add existing item”

As a result of these activities, the T4 template was executed and a ClientEnhancer file was auto-generated with the next content

	namespace NMVVM_WCF.TimerServiceProxy
	{
		public partial interface ITimerServiceClient
		{
			#region Events
			event System.EventHandler<GetTimeCompletedEventArgs> GetTimeCompleted;
			event System.EventHandler<System.ComponentModel.AsyncCompletedEventArgs> OpenCompleted;
			event System.EventHandler<System.ComponentModel.AsyncCompletedEventArgs> CloseCompleted;
			#endregion Events

			#region Methods
			 void GetTimeAsync();
			 void GetTimeAsync(object userState);
			 void OpenAsync();
			 void OpenAsync(object userState);
			 void CloseAsync();
			 void CloseAsync(object userState);
			#endregion Methods
		}

		public partial class TimerServiceClient  : ITimerServiceClient
		{
		}
	}

As you can guess, that’s the complete code I used to write by hand and keep manually in sync with service contract changes. Having this in place, it is quite easy to change the ViewModel to accept the ITimerServiceClient as a constructor parameter

        public MainPageViewModel(ITimerServiceClient client)
        {
            client.GetTimeCompleted += OnGetTimeCompleted;
            client.GetTimeAsync();
        }

        private void OnGetTimeCompleted(object sender, GetTimeCompletedEventArgs e)
        {
            Message = e.Result;
        }
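With the constructor parameter in place, the “network cable unplugged” test becomes trivial. The sketch below is self-contained: the interface and event args are minimal stand-ins for the proxy-generated types, and FakeTimerServiceClient and TestableMainPageViewModel are hypothetical names I made up for illustration – they are not part of the T4 output.

```csharp
using System;
using System.Diagnostics;

// Exercise the view model with a fake client – no WCF, no network.
var viewModel = new TestableMainPageViewModel(new FakeTimerServiceClient());
Debug.Assert(viewModel.Message == "canned time");

// Minimal stand-in for the proxy-generated event args.
public class GetTimeCompletedEventArgs : EventArgs
{
    public string Result { get; set; }
}

// Minimal stand-in for the T4-generated ITimerServiceClient.
public interface ITimerServiceClient
{
    event EventHandler<GetTimeCompletedEventArgs> GetTimeCompleted;
    void GetTimeAsync();
}

// Stub that "completes" synchronously with a canned result.
public class FakeTimerServiceClient : ITimerServiceClient
{
    public event EventHandler<GetTimeCompletedEventArgs> GetTimeCompleted;

    public void GetTimeAsync()
    {
        var handler = GetTimeCompleted;
        if (handler != null)
        {
            handler(this, new GetTimeCompletedEventArgs { Result = "canned time" });
        }
    }
}

// Same shape as the post's view model, trimmed to the relevant parts.
public class TestableMainPageViewModel
{
    public string Message { get; private set; }

    public TestableMainPageViewModel(ITimerServiceClient client)
    {
        client.GetTimeCompleted += (sender, e) => Message = e.Result;
        client.GetTimeAsync();
    }
}
```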

The only thing left is to update the MainPage.xaml.cs file

namespace NMVVM_WCF
{
    using NMVVM_WCF.TimerServiceProxy;

    public partial class MainPage
    {
        public MainPage()
        {
            InitializeComponent();
            DataContext = new MainPageViewModel(new TimerServiceClient());
        }
    }
}

And that’s it – the application works like it did before, and we have a highly testable view model that depends only on the service client interface, which is easy to stub/mock.

Having a hard time figuring out the path from “before” to “after”? Here’s a short video showing, step by step, the things just described.

You can download the source code of end solution here.

Aftermath

My own version of this T4 template, beside the T4 code used in this blog post, also auto-fills the IoC container with mappings for all of the service clients and the interfaces generated by the template. That auto-generation, combined with the auto MVVM wire-up I described in the first post, allows me to have this “TDD enabling of WCF service proxies” fully automated.

I decided not to include that additional template code so it won’t bloat the post with IoC containers etc., but it is VERY easy to modify and customize the T4 template – even if you have never done it, spend 20 minutes looking at the .tt file I shared for this post and I guarantee you’ll get it.

The only downside of this approach is that you have to manually drop the T4 template file into every project with service proxies, which in my case is not a problem at all – I add it once and after that it keeps things in sync on its own.

I am really not sure why Microsoft is not doing this in the default proxy generation process – it doesn’t break anything or damage backward compatibility, and it enables easy testing. I experimented with modifying the Visual Studio proxy generator myself, but decided to abandon it (even though it was working in the end) due to the required registry modifications etc. In my opinion, dropping one file into a project, with no other requirements, to make it testable is more transparent than the other approaches, and everyone can do it.

What do you think about it? Is it simple enough?

Filed under: Uncategorized 5 Comments
7Nov/10

Naked MVVM–simplest possible MVVM approach

How to do MVVM in simplest possible way?

Yes, I am aware that there are at least 50 “How to do MVVM” blog posts and well-known frameworks: Prism, MVVM Light, Caliburn etc. Still, my friend Slobodan Pavkov convinced me to write a post explaining the approach I personally use in my code because (as he believes) it is so simple that it can be useful to someone – so here I am, writing it down.
The idea is so simple that I guess it is very possible someone has already blogged about it; if so, please let me know so I can link that blog post here. I presume you already know what MVVM is – if not, go read some of the hundreds of blog posts about it and come back once you get it. My sample is done in WPF (as that is my LOB platform of choice) but it works without any changes in Silverlight too.
As with every MVVM framework, I had to pick a name reflecting the spirit of my “framework”, and I ended up with “Naked MVVM” because it reflects the design principles I respect:
  1. No base classes of any kind required for framework
  2. No interfaces of any kind required for framework
  3. No attributes of any kind required for framework
  4. View first – Blend friendly & simple composition
  5. IoC enabled
  6. Works out of box as much as possible

Basic ideas behind “Naked MVVM”

Scenario

A simple MVVM framework calls for the simplest possible problem: show, the MVVM way, a text block with the current date. That’s it – let’s roll.

You can download the source code of end solution here.

No base classes, interfaces and attribute

The usual implementations of MVVM I’ve seen have a ViewModel<TView> base class and/or some form of IView view abstraction etc.
Here’s how the View looks in my approach:
namespace NakedMVVM
{
    public partial class MainWindowView
    {
        public MainWindowView()
        {
            InitializeComponent();
        }
    }
}
And here is the view model, meeting all of the requirements of my Naked MVVM framework:
namespace NakedMVVM
{
    public class MainWindowViewModel
    {
    }
}
As you can tell from the code above, there are ZERO requirements placed on the view and view model.

Wiring up the view and the view model

As you already know, the whole MVVM pattern is based on the idea that the view data-binds to a view model, which then talks to a model.
A lot of samples I’ve seen define some form of view abstraction which should enable the view model to communicate with the view at the framework level.
All of those samples ignore one simple but VERY IMPORTANT fact – there is such an abstraction already baked into .NET: FrameworkElement. Every view (user control, window etc.) inherits from FrameworkElement and can be cast to it. The reason I picked it is that the framework element has a DataContext member (too bad it is not defined in some interface so I could replace FrameworkElement with it). Setting a user control’s data context to some value results in all of the controls in that window/user control being bound to the same value.
To codify that thought...
namespace NakedMVVM
{
    using System.Windows;

    public class MainWindowViewModel
    {
        public MainWindowViewModel(FrameworkElement frameworkElement)
        {
            frameworkElement.DataContext = this;
        }
    }
}
The problem here is: how can an IoC container (one of the requirements above is to use IoC) resolve this generic FrameworkElement constructor parameter? That question is exactly the reason why we have all of the IView and IView<T> interfaces in the MVVM blog posts. To me, that well-documented approach is overkill, because we create entities just to hold our infrastructure. A much better approach is to resolve a framework element from the IoC container using a well-known key. There are many ways to do that, but let me illustrate it here in the simplest form to digest, using the ServiceLocator.
namespace NakedMVVM
{
    using System.Windows;

    using Framework;

    public class MainWindowViewModel
    {
        public MainWindowViewModel()
        {
            var frameworkElement = ServiceLocator.IoC.Resolve("MainView");
            frameworkElement.DataContext = this;
        }
    }
}
As you can see here, the view model becomes the data context of the view without any artificial code artifacts created to enable that. If I didn’t have to respect my design principle #1, “No base classes of any kind required”, I could extract this into a base view model class and have it apply to all view models.
namespace NakedMVVM
{
    using System.Windows;
    using Framework;

    public class MainWindowViewModel : ViewModel
    {
    }

    public abstract class ViewModel
    {
        public ViewModel()
        {
            var frameworkElement = ServiceLocator.IoC.Resolve(this.GetType().Name.Replace("Model",""));
            frameworkElement.DataContext = this;
        }
    }
}
Too bad I am not allowed to do that, so I am again deleting all of the changes in this ViewModel and restoring it back to an empty class with no base class and no wire-up code in it. To see what I do in my code you will have to be patient for a little bit more, because first I need to explain my way of…

Filling the IoC container

In most MVVM samples there is a bootstrapper class where the developer lists all of the IoC mappings. In this example it could be something like this:
using System.Windows;

namespace NakedMVVM
{
    using Framework;

    public partial class App : Application
    {

        public App()
        {
            ServiceLocator.IoC.RegisterType<FrameworkElement,MainWindowView>("MainWindowView");
        }
    }
}
Just by looking at this single line of code, it becomes obvious that:
  • I have to do the same thing for every user control/window I have
  • I always map a FrameworkElement to a user control/window
  • The key I use to store it in the IoC is the same as the name of the user control/window
Every WPF/SL developer I know (including me) follows the next naming convention when doing MVVM:
  • every user control is suffixed with “View” and
  • every view model of a control is suffixed with “ViewModel”
In the concrete case of the sample used in this blog post, the user control is named MainWindowView and its view model class MainWindowViewModel.
If we combine the three obvious facts given above with the naming convention, we can easily come to the same idea as I did:
“Iterate all of the types in the current assembly. Map each one whose name ends with “View” as a FrameworkElement, using the full type name as a key. Map each one whose name ends with “ViewModel” as an object, with the full type name as a key.”
Translating that thought into C#, the IoCBuilder class in this sample was created containing this:
namespace Framework
{
    using System;
    using System.Reflection;
    using System.Windows;
    using System.Windows.Controls;

    public static class IoCBuilder
    {
        public static void CollectViewAndViewModelMappings()
        {
            foreach (var type in Assembly.GetCallingAssembly().GetTypes())
            {
                var typeIsUserControl = type.BaseType == typeof(UserControl);
                if (typeIsUserControl)
                {
                    var typeIsView = type.Name.EndsWith("View", StringComparison.InvariantCultureIgnoreCase);
                    if (typeIsView)
                    {
                        ServiceLocator.IoC.RegisterType(typeof(FrameworkElement), type, type.FullName);
                    }
                }
                else
                {
                    var typeIsViewModel = type.Name.EndsWith("ViewModel", StringComparison.InvariantCultureIgnoreCase);
                    if (typeIsViewModel)
                    {
                        ServiceLocator.IoC.RegisterType(typeof(object), type, type.FullName);
                    }
                }
            }
        }
    }
}
Now that we have this code in place, we can replace the explicit mappings in our bootstrapper class with a framework call:
namespace YAMVVM
{
    using System.Windows;
    using Framework;

    public partial class App : Application
    {
        public App()
        {
            IoCBuilder.CollectViewAndViewModelMappings();
        }
    }
}
The major upside of this approach (at least for me) is that it respects design principle #6 and allows me to just add a view and a view model without thinking about IoC mappings etc.

My way of wiring up view and view model

Having in mind the content of the IoC container and design principle #4 (a Blendable framework), after a lot of experimenting I realized that a behavior is the most suitable way of doing the wire-up.
The behavior itself is quite trivial and reflects the same approach shown above in the explicit wire-up sample code.
namespace Framework.Behaviors
{
    using System.Windows;
    using System.Windows.Interactivity;
    using Framework;

    public class AutoWireUpViewModelBehavior : Behavior<UIElement>
    {
        protected override void OnAttached()
        {
            base.OnAttached();
            var view = (FrameworkElement)this.AssociatedObject;
            var viewModelName = string.Format("{0}Model", view.GetType().FullName);
            var viewModel = ServiceLocator.IoC.Resolve<object>(viewModelName);
            view.DataContext = viewModel;
        }
    }
}
The code is quite simple: a pointer to a user control is passed to the behavior. Following the naming convention explained above, I append “Model” to the name of the view to get the specific view model key, which I use to resolve the view model from the container. Once the view model is resolved, I set it as the DataContext of the user control passed to the behavior. It is the same approach as the explicit sample, just encapsulated in a behavior.
Now that there is a framework-level behavior, the only thing a designer has to do to wire up a view and view model (I emphasize again that both of them follow principles #1–#3 and have no base class, interface etc.) is to fire up Blend and just drag & drop the AutoWireUp behavior onto a control.
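The key derivation the behavior relies on is just string concatenation driven by the naming convention. Here is a tiny self-contained sketch of it; the ViewModelKeyFor helper name is mine, invented for illustration:

```csharp
using System.Diagnostics;

// View full name + "Model" yields the view model key looked up in the container:
// NakedMVVM.MainWindowView => NakedMVVM.MainWindowViewModel
Debug.Assert(ViewModelKeyFor("NakedMVVM.MainWindowView") == "NakedMVVM.MainWindowViewModel");

static string ViewModelKeyFor(string viewFullName)
{
    // Same expression the behavior uses: string.Format("{0}Model", view.GetType().FullName)
    return string.Format("{0}Model", viewFullName);
}
```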
Of course, for us developers, executing the next R# snippet and resolving the namespaces is maybe a more suitable solution:
    <i:Interaction.Behaviors>
        <Framework:AutoWireUpViewModelBehavior />
    </i:Interaction.Behaviors>
Regardless of which way you prefer, the end goal is achieved: the view and view model are wired up in an unobtrusive way, removing the infrastructure bloat usually needed to enable that.

Putting it to work

If we run our sample, everything will work just fine (even with the view and view model being completely empty), except we won’t see the current date on the screen, so we can’t know for sure whether it works or not, can we?
Let’s modify the view to its final state
<Window x:Class="NakedMVVM.MainWindowView"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:Framework="clr-namespace:Framework.Behaviors;assembly=Framework"
        xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity">
    <i:Interaction.Behaviors>
        <Framework:AutoWireUpViewModelBehavior />
    </i:Interaction.Behaviors>
    <Grid>
        <TextBlock Text="{Binding HeadingCaption}" />
    </Grid>
</Window>
and the view model
namespace NakedMVVM
{
    using System;
    using System.ComponentModel;

    public class MainWindowViewModel : INotifyPropertyChanged
    {
        public MainWindowViewModel()
        {
            HeadingCaption = "Yes it works on " + DateTime.UtcNow;
        }

        private string headingCaption;

        public string HeadingCaption
        {
            get { return this.headingCaption; }
            set
            {
                this.headingCaption = value;
                this.OnPropertyChanged("HeadingCaption");
            }
        }

        #region The usual INPC implementation
        public event PropertyChangedEventHandler PropertyChanged;

        public void OnPropertyChanged(string propertyName)
        {
            PropertyChangedEventHandler handler = this.PropertyChanged;
            if (handler != null)
            {
                handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        #endregion
    }
}
Run the app…
See, it works :)
You can download the source code of end solution here.
What do you think? Have you seen this approach, as a whole, somewhere? Does it make sense to you, or does it look like just another fluffy pattern thing?
Looking forward to hearing your thoughts on my approach!
Filed under: Uncategorized 23 Comments
26Jun/10

Windows LiveID – Microsoft red headed stepchild?

I personally believe Microsoft is missing (if not already missed) the opportunity to monetize serious potential of Windows LiveID.

For years already there have been more than half a billion user accounts (which surpasses the current number of Facebook accounts) which Microsoft could have used to create serious advertisement revenue the same way Facebook is doing now. The value proposition for users is that they don’t have to remember “yet another user name and password”, which is a definite win – at least for me. For the identity holder (Microsoft/Facebook), getting information on the activities a user performs across the web is clearly a win from a marketing and advertisement perspective. It is such an obvious win-win scenario that I don’t want to spend any more words selling it to you, dear reader.

Microsoft (as with many other cool things – Ajax etc.) pioneered the single sign-on concept more than a decade ago, but didn’t do much with it, allowing OpenID, OAuth, Facebook Connect etc. to emerge as industry standards.

The reason behind LiveID’s failure to reach world domination in the identity space is the poor developer story, which prevented wider adoption of LiveID as “the one online identity”. Having in mind that we are speaking about Microsoft, a developer-oriented company, I find that quite hilarious on one hand and, on the other, proof of Microsoft’s lack of strategic vision in this area. In other words, I believe no one in Microsoft cared (or cares) about getting the benefits from Windows LiveID as much as some startup might (who said Facebook Connect?).

Why Microsoft failed to dominate with Windows LiveID

Here are a couple of reasons why I think things are the way they are right now…

Year 2003: Microsoft attempts rolling out LiveID (at that time called Passport) to a couple of big companies (including eBay and Monster), which dies out in 2004. Whatever the reasons were, losing two such big adopters in 2004 was (IMHO) VERY stupid, because had it not happened, I bet we would all be using LiveID across the web right now, “with not much of an alternative”.

Year 2006: MS had an STS running at http://sts.labs.live.com. I cannot imagine a reason why something like that would die, but there’s no such thing today.
Having an STS (a web service auth token issuer) is all I would personally need in order to adopt LiveID in my apps.

At MIX 2008 they announced the Windows LiveID SDK CTP, which was up until yesterday the only way of integrating Windows LiveID with client apps and sites. At the same MIX they also announced that LiveID would become an OpenID provider, but that didn’t last long either.

Besides these efforts, Microsoft was also trying to pitch LiveID in parallel using its own strongest weapon: Windows.

Microsoft’s Windows XP has an option to link a Windows user account with a Windows Live ID (appearing under its former names), logging users into Windows Live ID whenever they log into Windows. To me that sounds very nice (I’ve already authenticated myself by logging into my PC and established a trust relationship, which should be sufficient for most of the web sites out there). The only problem is that it is an almost unknown feature. I did a smoke test, asking 10 people I know who use Windows Live Messenger on a day-to-day basis whether they use it – none of them even knew about it. I don’t even have a clear understanding of how this thing gets installed, other than guessing it gets bundled into the Live Essentials installer.

Then there is CardSpace, which is the industry-correct and secure way of handling our online identity information. All great, except for the fact that it is quite a mystery “how to use it” :). In 2007 there was a beta of CardSpace LiveID integration, but after that nothing happened. CardSpace being a part of most Windows installations is such an untapped potential that it is hard for me to believe no one is trying to utilize it more seriously.

Couple of things I hope Microsoft will do with LiveID in the future

Microsoft still has a chance to emerge as one of the leaders in the identity space, but to do that they should consider doing some of the following things:

  • promote “LiveID” to become a 1st grade citizen in Windows.

    Ask for a LiveID to be entered during the Windows installation process (most of us have one anyhow). The Windows Live service would then (during the install process itself) issue a CardSpace managed card for the user. Right from the moment the system is installed, that card could be used on every LiveID site.
    Even better, put that card on my Live SkyDrive so it can roam with me while I work on different computers. If it can’t be built into Windows (either Win7 SP1 or Win8), can we at least consider building it into Internet Explorer 9?

    I know this could probably break some monopoly law, but let’s face it – MS didn’t become what it is by playing fair but by playing bold.
    Apple integrating MobileMe, and Chrome integrating Google Bookmarks and Flash, are doing exactly that.

  • Use LiveID on ALL of the Microsoft sites. No excuses. Period.
    Just check out the latest post on windowsteamblog.com which (ironic, isn’t it?) introduces the newest LiveID “Messenger Connect” API.
    To post a comment you need to Sign In –
    but that sign-in is not using LiveID :)

    If you don’t trust in it, why should we?

  • Spread it across the web, similar to the Facebook ‘Like’ button (‘Post with Messenger’)
    Here’s a sample of how Bing could collect ‘Likes’ by adding a ‘Post with Messenger’ button which would send the item to people on the Messenger contact list and/or Facebook, MySpace etc. Even if this ends with forwarding it to Facebook as a ‘Like’, there is still value in collecting that data associated with a LiveID…

Do the same with as many social networking sites as possible.
Aggregate the data and share it with us developers so we can better personalize our content (not only ads).

  • Stay away from hassling users as much as you can
    Force me to log in only once in 24 hours or more. I’m sure this goes against security best practices etc., but for most web sites it really doesn’t matter. Otherwise you shift from a “login screen” to a “nag screen”.
  • Support other browsers the same way you do IE.
    What’s the story with Firefox and Windows Live? One of my friends decided to use DropBox instead of LiveMesh (even though it offered 250% more storage space) just because he hates LiveID. Reason: he uses Firefox, and it looks like the “Remember me” check box on Firefox has slight dementia, so the login screen is quite annoying :)
  • Support other security protocols
    LiveID as an OpenID provider, OAuth etc… The more adapters the merrier.

Last but not least – respect us developers

I got personally interested in this topic because I chose WPF for a LOB application I am playing with lately, and I decided to use LiveID for authentication (everyone I know has one, regardless of how they use it).

My preferred approach to building this app is S+S (which I am not sure is still the official MS way to go), where a desktop client application gets powered by services from the web/cloud, getting the best of both worlds: the best user experience on Windows machines plus data in the cloud.

I was so naïve when I decided to try out LiveID that I expected to find some LiveID STS web service to which I would pass a user name + password + ApplicationID and get back a membership token in response. Naïve because I didn’t even consider the possibility that such a thing doesn’t exist, which turned out to be the case. I really don’t understand why there is no such offering from Microsoft as we speak.
 
My second-favorite solution is the “acceptable” experience Microsoft has in its own Live Essentials suite of tools. Here’s a sample of how to do it.

[image]

As you can see, it is a simple client app window. I am not sure what it does on “Sign in”, but whatever it does (a POST or a web service call) I am OK with using it too.

What I don’t want:

  • the login control being a web browser control showing the special Windows Live login HTML page
  • a generic control where I cannot change the text (my app targets folks who don’t speak English)

The reasons why I don’t accept those two things are, I guess, exactly the same as the ones Microsoft came up with when deciding not to use them in their own products.

If Microsoft expects me to use LiveID in my WPF apps, they have to provide me a way to get the same user experience they have when dealing with the same problem.

Yesterday Microsoft released the new Messenger Connect SDK, which contains a sample WPF application and a WPF template, which is encouraging. In order to run the sample, one needs to register the application with Windows Live, which right now is done through Connect, where you fill out a request form and someone, sometime, will consider it.

Based on a quick glance over the sample, there are no obvious “skinning” capabilities: no control, just a direct call to some function. The only thing I could do while waiting (hopefully) to get a LiveID application key was to run the sample as it is out of the box, and this is the result I got, “HTML in a box” - not very encouraging.

[image]

I’ll wait for the application key before I make a final call, but so far it doesn’t look like Microsoft cares about the WPF + LiveID integration experience.

Keeping fingers crossed but not holding my breath …

Filed under: Uncategorized
15Jun/10

What is wrong with Cosmopolitan theme

I am a HUGE fan of the Metro design paradigm, so I was more than excited to check out the Silverlight business application theme pack containing the Metro theme template (“Cosmopolitan”), which was officially released a couple of days ago.

I am not a designer, but I still wanted to share my initial impression with the community, and that is: WTF.

Here is a picture illustrating why…

[image: UIWaste]

Considering the fact that we are talking about a web site here, having ~300 pixels of wasted vertical space (~220 in out-of-the-box scenarios) is insane. Think about how usable this site would be on a typical netbook/laptop/slate (any wide screen with a smaller height).

What they should do is simply copy-paste the Zune minimalist approach, which avoids the UI waste and maximizes the central part of the screen showing content.

I am aware that this is a template which can be customized etc., but we all know that in a lot of cases it won’t be customized at all, and we might end up with a bunch of web sites using the “Metro theme” (especially once WP7 is released), which would contribute to Silverlight’s reputation in a bad way.

In other words, while I really appreciate the templates provided to us, I think Microsoft’s creative ninjas (or someone from the community) should do a couple more iterations on the Cosmopolitan template and make it more usable by default, and then we would customize it with a 2nd-level menu etc.

Filed under: Uncategorized