
Integrating Javascript Unit Tests with Visual Studio – Intro

I’ve been working on a project over the last few months that quickly evolved from a Silverlight project into an ASP.NET MVC3 application. If you’re shifting from Silverlight to MVC and want to maintain that rich client interaction, it means you’re probably going to be writing a lot of javascript code.

In fact, we quickly realized that we were really writing a javascript client application with a .NET backend. This can be a scary proposition when it’s your first real foray into heavy javascript development. It seems like the first thing to go is the logical structuring and thought that you’d put into your code if this were a strongly typed language. Something about that <script/> tag just makes you want to use it and throw thousands of lines of javascript at it.

As a result, next to go are your unit tests if you had any to begin with.  I will confess, I’ve never been a huge unit tester.  But once you start working with dynamic languages and your warm fuzzy “Build Succeeded” blanket is taken away, I’ll reach for whatever comfort I can get.

The goal of this series of posts is to walk through what I did to unit test my javascript and get those test results into Visual Studio’s test output.  Although I may touch briefly on how to use the frameworks I’m talking about, the focus will be on gluing them together.

That said, here’s what we’ll be working with:

  • MVC3 w/ Razor
  • QUnit
  • KnockoutJS
  • XML/XSLT
  • WatiN
  • C# Data-driven unit tests

Some of these ideas were cobbled together, stolen, and enhanced from other places, so if you can’t wait, here is where I started off:

Simplify Unit Testing with IDisposable and the Using Statement

I’m sure I’m not the first to do this or something similar, but I’ll pat myself on the back all the same. Recently, I’ve been doing a lot of IoC and unit testing on some WCF services we’re working on. The services take implementations of our IRepository for data access, which I’ve mocked out for testing purposes.

Most people are familiar with the using statement and its purpose: ensuring that objects which implement IDisposable are properly cleaned up. A lot of people are also aware that you can use this interface and statement to do automatic scoping of some operations in a nice, consistent manner. The first time I ever saw this type of pattern was as a shortcut to set the Cursor in a WinForms app to the WaitCursor and then back to the default once some long-running operation was complete.

For Example:

using (CursorScope.SetCursor(this, Cursors.WaitCursor))
{
    // long running operation
}

Where the CursorScope class looks like this:

class CursorScope : IDisposable
{
    private Cursor _originalCursor;
    private Control _control;

    private CursorScope(Control control, Cursor newCursor)
    {
        _control = control;
        _originalCursor = _control.Cursor;
        _control.Cursor = newCursor;
    }

    public static CursorScope SetCursor(Control control, Cursor newCursor)
    {
        return new CursorScope(control, newCursor);
    }

    public void Dispose()
    {
        _control.Cursor = _originalCursor;   
    }
}

So as we enter the using statement, the cursor is changed for the control, and as we leave the using statement and our object is “disposed” it is set back to the original value, regardless of any exceptions or errors that may occur in between. This syntax and behavior is useful in a number of situations. The most recent place I’ve been using it, as the title mentions, is in some of my unit tests. To test out my services, I’ve built some mock repositories for the services to call. There are two scenarios where this has come in handy.

The first is when I’m trying to test the error and exception handling in my service. To get complete code coverage you need to make sure your tests also throw exceptions where you’re expecting them to be handled; otherwise your catch blocks will never be tested. To support this, I added a boolean flag on my mock repository called AllMethodsThrowExceptions; when it is set to true, any operation you try on the repository will throw an exception. I could use this in one of two ways. I could try to remember to always set the flag to true and then back to false when I’m done testing it, but depending on how your unit tests are run and set up, forgetting to set the flag back to false could cause the mock repository to throw exceptions in other unit tests. To help avoid this situation, I made the flag private and added a ThrowExceptions() method to the mock repository that returns an IDisposable.

private bool AllMethodsThrowExceptions { get; set; }

public IDisposable ThrowExceptions()
{
    AllMethodsThrowExceptions = true;
    return new ExceptionDisposable(this);
}

// Nested inside MockRepository so Dispose can reset the private flag.
class ExceptionDisposable : IDisposable
{
    public MockRepository Repository { get; set; }
    public ExceptionDisposable(MockRepository repo)
    {
        Repository = repo;
    }

    public void Dispose()
    {
        Repository.AllMethodsThrowExceptions = false;    
    }
}

And then in our unit test we can test the exception path like so:

[TestMethod]
public void GetUserExceptionTest()
{
    using (_mockUserRepo.ThrowExceptions())
    {
        _log.ClearLog();
        var response = _service.GetUser(new GetUserRequest());
        Assert.IsNull(response.User);
        Assert.AreEqual(1, response.ErrorMessages.Count);
    }
}

This causes our exceptions to be thrown so we can make sure our code handles them properly. Then, when we’re done with that part of our unit testing, the flag is reset so that any tests which run subsequently will work correctly.

The other area where this comes in useful is with the data itself. Our mock repository uses in-memory data, and manipulating it in one test could throw off other tests as well, so we implement the same pattern; this time, instead of setting a flag, we copy our repository’s data and reset it once the test is complete.

public IDisposable ChangingData()
{
    return new ChangingDataDisposable(this);
}

// Nested inside MockRepository<TEntity> so it can read and restore the repository's data.
public class ChangingDataDisposable : IDisposable
{
    public MockRepository<TEntity> Repository { get; set; }
    private List<TEntity> OriginalData;
    public ChangingDataDisposable(MockRepository<TEntity> repo)
    {
        Repository = repo;
        OriginalData = CloneData(Repository.Data);
    }

    // Deep-clone the data via serialization so changes to the copy never touch the original.
    private T CloneData<T>(T data)
    {
        if (!typeof(T).IsSerializable)
            throw new ArgumentException("Must be serializable.", "data");

        if (Object.ReferenceEquals(data, null))
            return default(T);

        var s = new DataContractSerializer(typeof(T));
        using (var stream = new MemoryStream())
        {
            s.WriteObject(stream, data);
            stream.Seek(0, SeekOrigin.Begin);
            return (T)s.ReadObject(stream);
        }
    }

    public void Dispose()
    {
        Repository.Data = OriginalData;
    }
}

Here, when you call ChangingData(), I clone the data using serialization, and again, when the object is disposed, it resets the data.

[TestMethod]
public void UpdateUserTest()
{
    using (_mockUserRepo.ChangingData()) 
    {
        var user = _mockUserRepo.Data.FirstOrDefault(i => i.Id == 1);
        Assert.AreEqual("Nick", user.FirstName);

        user.FirstName = "Brian";

        var request = new UpdateUserRequest();
        request.User = user;

        var response = _service.UpdateUser(request);
        Assert.AreEqual("Brian", user.FirstName);
        Assert.AreEqual(1, response.EntityId);
    }
}

And all the data is reset back to the original values and ready for the next test to work off it.

Custom Markup Extension To Replace IValueConverter

Something that’s been around in WPF for a long time but is just seeing the light of day in the upcoming Silverlight 5 release is the concept of a custom MarkupExtension. For those unfamiliar with the concept, a MarkupExtension is anything within the curly braces “{}” of your xaml.

Some examples include Binding, StaticResource, DynamicResource, x:Type, x:Static, etc. You can look at the code below and see some examples:

<Grid x:Name="LayoutRoot">
    <ContentControl Style="{StaticResource BgBlueTop}"/>
    <Border>
        <ItemsControl ItemsSource="{Binding Messages}"> ... </ItemsControl>
    </Border>
</Grid>

Basically, the xaml parser can either interpret an attribute as a string literal value or convert it to an object through some other means. Markup extensions let you decide how the value you’re setting should be interpreted by the xaml parser. For a much more in-depth look at how this all works, a decent place to start is this link on MSDN:

http://msdn.microsoft.com/en-us/library/ms747254.aspx

One interesting use of MarkupExtensions I’ve been playing with is a reimplementation of the Binding markup extension to provide an alternative to the IValueConverter interface. Sometimes you really only need to do a conversion once, or a couple of times on a specific screen. When you implement a value converter it feels like you are taking a tiny bit of view or business logic and stuffing it into an unrelated portion of your application. That being said, a standardized library of value converters, like boolean-to-visibility and so on, can make developing on a project much easier. But for the one-off scenarios, it’d be nice to keep all that logic contained in your ViewModel and not have to worry about spinning up a new class for such a simple thing.

I used the post here as a starting point for my custom binding, because working with Binding or BindingBase directly was not going so well. From there I created my own Binding class and added a ConverterDelegate property. This property looks for a method on the DataContext of the binding and uses a generic IValueConverter behind the scenes to call that method on the DataContext. This helps get rid of all those little one-off classes that you have to create for specific conversion scenarios.

The code is far from production ready, but the gist of it is that a custom MarkupExtension overrides the ProvideValue method of the abstract MarkupExtension class. In the background I look up the method on my DataContext and use an IValueConverter that calls that method to do the conversion. So to wire up my converter I can simply do this:

<Window x:Class="CustomMarkup.Window1" 
	xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
	xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" 
	xmlns:local="clr-namespace:CustomMarkup" Title="Window1" Height="300" Width="300"> 
	<Grid> 
           <Button Content="{local:ExBinding Path=ButtonText, ConverterDelegate=ToUpper}" Height="25" Width="100"/> 
	</Grid> 
</Window>

You can see I have a converter delegate called ToUpper, and in my code-behind or ViewModel I can simply write whatever methods I need to do conversions, like this:

public partial class Window1 : Window
{
	public Window1()
	{
		DataContext = this;
		InitializeComponent();
	}

	// our converter method
	public object ToUpper(object value)
	{
		return (object)value.ToString().ToUpper();
	}

	public string ButtonText { get; set;}
}

I can post the sample project if I get interest, but the real point is to give a small showing of what’s possible with custom MarkupExtensions in WPF and SL5.
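
In the meantime, here’s a rough sketch of the general shape of the extension. This is not my exact code; the lookup, error handling, and binding options are simplified, and the nested converter class name is just for illustration.

using System;
using System.Globalization;
using System.Windows;
using System.Windows.Data;
using System.Windows.Markup;

public class ExBinding : MarkupExtension
{
    public string Path { get; set; }
    public string ConverterDelegate { get; set; }

    public override object ProvideValue(IServiceProvider serviceProvider)
    {
        var target = (IProvideValueTarget)serviceProvider.GetService(typeof(IProvideValueTarget));
        var element = target.TargetObject as FrameworkElement;
        if (element == null)
            return this; // e.g. inside a template; defer until we have a real target

        // Hand the real work off to a normal Binding with a reflection-based converter.
        var binding = new Binding(Path);
        binding.Converter = new DelegateConverter(element, ConverterDelegate);
        return binding.ProvideValue(serviceProvider);
    }

    // Generic converter that invokes the named method on the element's DataContext.
    private class DelegateConverter : IValueConverter
    {
        private readonly FrameworkElement _element;
        private readonly string _methodName;

        public DelegateConverter(FrameworkElement element, string methodName)
        {
            _element = element;
            _methodName = methodName;
        }

        public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
        {
            // Look up the conversion method on the DataContext at conversion time.
            var context = _element.DataContext;
            if (context == null || string.IsNullOrEmpty(_methodName))
                return value;

            var method = context.GetType().GetMethod(_methodName);
            return method == null ? value : method.Invoke(context, new object[] { value });
        }

        public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
        {
            return value; // one-way only in this sketch
        }
    }
}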

Uses for Silverlight reflection, pt. II

So, as promised, here is the follow-up on what I was doing with Silverlight reflection that made me need access to internal members of a class.

Localization in Silverlight is still an interesting story, and everybody seems to have their own way of doing it. The client I’m on uses Excel spreadsheets that load into a database. The localization data is then pulled when a UserControl loads, and the controls’ data is changed based on the locale and what was in the database.

This was very tedious for developers to set up. You would create your UI and then have to go back, pull the default text for your controls, and add it all to the spreadsheet.

Using reflection, I can create an instance of every UserControl in my assembly and display them to the user in a listbox. From there, when they select a UserControl, I can reflect on that type and display all the controls defined in it. When you define a control in xaml and give it a name, it becomes an internal member of a partial class generated by Visual Studio.

So, when you are all done with your UserControl, you simply run my tool on the assembly; it reflects through it and shows all the UserControls. When you select one, it uses reflection to create an instance with the default constructor, which initializes all your controls’ properties.

From there I grab all the controls through reflection and use some logic to get which properties of each I want to localize. For TextBlocks I grab the Text property, for ContentControls it’s the Content property, and so on.

Using that, I generate a simple tab-delimited string with the default localization text for all the controls. The user can copy/paste it right into Excel, move on to their next task, and let someone else do the translating/localizing for them.
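
The extraction step itself is only a couple dozen lines. Here is a simplified sketch (the class and method names are made up for illustration, and it assumes the internal-member reflection access covered in part I):

using System;
using System.Reflection;
using System.Text;
using System.Windows.Controls;

public static class LocalizationExtractor
{
    // Builds a tab-delimited block of default text for every UserControl in the assembly.
    public static string BuildDefaultText(Assembly assembly)
    {
        var sb = new StringBuilder();
        foreach (var type in assembly.GetTypes())
        {
            if (!typeof(UserControl).IsAssignableFrom(type) || type.IsAbstract)
                continue;

            // The default constructor runs InitializeComponent, which populates
            // the internal fields that x:Name generates on the partial class.
            var instance = (UserControl)Activator.CreateInstance(type);

            foreach (var field in type.GetFields(BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Public))
            {
                var control = field.GetValue(instance);
                string text = null;

                if (control is TextBlock)
                    text = ((TextBlock)control).Text;
                else if (control is ContentControl)
                    text = ((ContentControl)control).Content as string;

                if (!string.IsNullOrEmpty(text))
                    sb.AppendLine(type.Name + "\t" + field.Name + "\t" + text);
            }
        }
        return sb.ToString();
    }
}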

Transactions inside stored procedures

More of a short rant today than anything else.  If you are writing a DAL, most of it can be generated and be completely boilerplate.  Everything looks the same, acts the same, is called the same way.  Very beautiful.

There are always exceptions, however.  Sometimes you just NEED to do something more complicated: inserting into multiple tables at once, possibly across different databases.

The project I’m working on has a very consistent way of putting database operations in transactions.  I’ve been troubleshooting a number of problems where the app will crash after trying to save some records.  The proc seemed to be working; it was returning a new id and everything was happy.  So why couldn’t the app find these records?

Because the stored procedures fall under the exceptions to the rule above, and the developer thought it would be best to do the transaction handling inside the stored procedure.

Two problems: first, the transactions across the linked server didn’t work because of configuration issues.  Second, the stored proc did a TRY…CATCH with a ROLLBACK without ever calling RAISERROR.

I certainly think that transactions inside the stored procedure can clean up some of the client code, and they help in situations where the stored procedure is called from many different places and always needs to be transactional.  Just keep in mind that unless we let the calling client know something went wrong, it’s just as bad as swallowing exceptions in C# or VB.  Fail fast and hard, as always.

Frame-based animation in WPF

I was recently working on the ubiquitous photo/slideshow app in WPF.  This is something I’ve been tinkering with off and on for the last 6 months.  The original intention was to create a photo slideshow application for my upcoming wedding.  Being the nerd that I am, a static video slideshow just wasn’t going to cut it.

Along the way I learned quite a bit about keeping performance up and memory usage low while working with tons of images.  It’s finally in a position where it’s almost done and I wanted to add a few tweaks.  The photos zoom in and randomly arrange like they were dropped on a table.  Once there, they show one by one. 

I wanted to add a little random “drift” while the image was showing to make it more interesting to the eye.  I started originally by creating random storyboards and listening to the Storyboard.Completed event.  When the event fired I created a new storyboard to animate my photo’s Canvas.Left and Top properties.  This worked, but there was an annoying lag between the stop and start of the animations. 

I wanted to move to frame-based animation rather than WPF’s built-in time-based animation style.  I could have used a timer to update my properties, but I wanted to work more within the constraints of WPF.  I found articles for Silverlight indicating that an empty storyboard with no duration will fire its Completed event on the next frame; there you can update your properties and restart the storyboard, effectively running your own logic every frame.  Although this may work in Silverlight, I could not get it to work in WPF.  Though as I look back, I didn’t try setting the Duration to "0:0:0".  I wonder if that would work?

Regardless, the technique I ended up using was listening to the CompositionTarget.Rendering event in code-behind.  The event fires before your UI renders each frame, allowing you to hook in and do frame-based animation.
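
As a minimal sketch of the idea (the element name and drift amounts below are made up, and it assumes the photo sits on a Canvas with Canvas.Left/Top already set):

using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

public partial class SlideshowWindow : Window
{
    private readonly Random _random = new Random();

    public SlideshowWindow()
    {
        InitializeComponent();
        // Fires once per frame, just before WPF renders the UI.
        CompositionTarget.Rendering += OnRendering;
    }

    private void OnRendering(object sender, EventArgs e)
    {
        // 'photo' is assumed to be an element declared with x:Name="photo" in the window's xaml.
        // Nudge it a fraction of a pixel each frame to create a slow drift.
        Canvas.SetLeft(photo, Canvas.GetLeft(photo) + (_random.NextDouble() - 0.5) * 0.3);
        Canvas.SetTop(photo, Canvas.GetTop(photo) + (_random.NextDouble() - 0.5) * 0.3);
    }
}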

For more information, you can go here:

http://msdn.microsoft.com/en-us/library/ms748838.aspx

Why does Control.Parent exist?

Encapsulation is one of the major tenets of object-oriented programming. The ability to not worry about how things work behind the scenes and just use a component is lovely.

Recently, I’ve had to be on a project where a lot of UserControls have been created, and tons of them reference their Parent property internally to get some bit of data they need for this or that.

First of all, not only have you completely broken encapsulation by doing that, but now you can’t use that UserControl on any other form.

My favorite part of encapsulation is easy refactoring. We all make mistakes and wish we would have written something differently, and if we have the time to correct those mistakes we should. Recently, a developer wanted to give the UI a bit more user-friendliness so he threw a bunch of splitter panels on a form with some grids and UserControls to let the user customize the form a bit.

The problem is that one of the UserControls referenced its Parent property. So now, instead of ((parentForm)this.Parent).SomeProp, I had to go back and fix the bug he introduced by writing ((parentForm)this.Parent.Parent.Parent.Parent.Parent).SomeProp just to get it to run. I realize shortcuts can save a ton of time in some cases, but why is Control.Parent allowed to exist when it makes it so easy to break encapsulation and leads to big problems with refactoring down the road?
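
For what it’s worth, the fix I’d rather see is the control asking for what it needs instead of reaching up the tree. A trivial, made-up sketch of that property-injection style:

using System.Windows.Forms;

// Hypothetical control: the host hands it the data it needs, so the control
// never has to know or care what its Parent is.
public class UserInfoControl : UserControl
{
    public string SomeProp { get; set; }
}

// Hypothetical host form wiring the control up.
public class HostForm : Form
{
    private readonly UserInfoControl _userInfo = new UserInfoControl();

    public HostForm()
    {
        Controls.Add(_userInfo);
        // Whatever the control used to dig out of Parent, hand it over directly.
        _userInfo.SomeProp = "some bit of data";
    }
}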