
Integrating Javascript Unit Tests with Visual Studio – Wrapping Up

Welcome to the grand finale of this four part series. So far you’ve got your javascript unit tests, and you’ve got them all nicely organized and indexed using the info from the previous post; now how do you get to see those results inside Visual Studio? For that last cherry on top we’re going to use a browser automation tool called WatiN and a data-driven unit test in C#. This was borrowed from another blog and tweaked to use the XML indexing trick.

http://blogs.imeta.co.uk/MGodfrey/archive/2010/07/15/874.aspx

http://watin.org/

First things first, setting up your data driven unit test. If you are not familiar, this is a unit test in MSTest that you can pass a dataset into and have rerun against every value in that dataset. Here, our “dataset” is going to be the results of our javascript unit tests. In the class initialize of our unit test we go out and download the xml data that holds the location of all our test pages. From there, we use WatiN to visit each page and parse out the results of each test. Because our unit tests run when a browser hits the test page, this is all we need to do to run our unit tests. One thing you may need to tweak: the WatiN dll needs a reference to Interop.SHDocVw.dll, so if you get errors loading that DLL, check its properties in your project references and make sure that Embed Interop Types is false and Copy Local is true.


Then we’re going to add a single test file called JavaScriptTests.cs to our unit test project. In the class initialize we’re going to set up the code that downloads our xml test index and parses out each test page’s results:

    private static string _baseUrl = "http://localhost:8000/Test/";
    private static IE _ie;

    [ClassInitialize]
    public static void ClassInit(TestContext context)
    {
        var xdoc = XDocument.Load(_baseUrl + "?notransform");
       
        _ie = new IE();
        _ie.ShowWindow(NativeMethods.WindowShowStyle.Hide);

        var resultsDoc = new XDocument(new XElement("Rows"));
        
        foreach (var testPage in xdoc.Descendants("Row"))
        {
            _ie.GoTo(_baseUrl + testPage.Descendants("TestPageUrl").First().Value);
            _ie.WaitForComplete(5);
            var results = _ie.ElementsWithTag("li").Filter(x => x.Parent.Id == "qunit-tests");
            var xResults = from r in results
                           select new XElement("Row",
                               new XElement("name", GetTestName(r)),
                               new XElement("result", r.ClassName),
                               new XElement("summary", r.OuterText));
            
            resultsDoc.Root.Add(xResults);
        }
        resultsDoc.Save("JavascriptTests.xml");
    }
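
One thing the snippet above doesn’t show is cleanup: the hidden IE instance opened in ClassInit is never closed. A small ClassCleanup along these lines (my addition, not from the original post, assuming the same _ie field) would take care of that:

    [ClassCleanup]
    public static void ClassCleanup()
    {
        // Close and dispose the hidden WatiN browser instance opened in ClassInit
        if (_ie != null)
        {
            _ie.Close();
            _ie.Dispose();
        }
    }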

Quick summary: we load the xml index into an XDocument, parse each url, use WatiN to load the page, and scrape out the result for each test on the page. This is all added to a separate XDocument and saved in the local test run folder as JavascriptTests.xml. This will be the dataset that gets passed into our data driven unit test, and it ends up looking something like this:

<?xml version="1.0" encoding="utf-8"?>
<Rows>
  <Row>
    <name>My First Tests_FullNameTest</name>
    <result>pass</result>
    <summary>My First Tests: FullNameTest (0, 1, 1)Rerun
full name built properlyExpected: "Nick Olson"</summary>
  </Row>
  <Row>
    <name>My First Tests_capitalizeTest</name>
    <result>pass</result>
    <summary>My First Tests: capitalizeTest (0, 1, 1)Rerun
capitalize worksExpected: "OLSON"</summary>
  </Row>
  <Row>
    <name>My Second Tests_FullNameTest</name>
    <result>pass</result>
    <summary>My Second Tests: FullNameTest (0, 1, 1)Rerun
full name built properlyExpected: "Brian Olson"</summary>
  </Row>
  <Row>
    <name>My Second Tests_FullNameFailTest</name>
    <result>fail</result>
    <summary>My Second Tests: FullNameFailTest (1, 0, 1)Rerun
full name built properlyExpected: "Brian Olson"
Result: "Nick Olson"
Diff: "Brian "Nick  Olson" </summary>
  </Row>
</Rows>

These are the aggregated test results for both of our test pages. The last little bit is a simple data driven unit test that takes this xml file and basically just asserts on the result element of each Row in the xml document:

    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.XML", "|DataDirectory|\\JavascriptTests.xml", "Row", DataAccessMethod.Sequential), TestMethod]
    public void JavascriptTestRunner()
    {
        var testName = TestContext.DataRow["name"].ToString();
        var testResult = TestContext.DataRow["result"].ToString();
        var summary = TestContext.DataRow["summary"].ToString();

        TestContext.WriteLine("Testing {0} - {1}", testName,testResult);
        if (testResult != "pass")
            Assert.Fail("{0} failed: {1}", testName,summary);
    }
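
One detail that is easy to miss: for TestContext.DataRow to work, the test class also needs a public TestContext property that MSTest populates for you, something like this:

    public TestContext TestContext { get; set; }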

That’s it! You’re all done. Now when you run your tests in Visual Studio, it will run all the C# unit tests, including your new data driven javascript test, and aggregate all of these into your normal Test Results window. If you looked at my sample code you may have noticed that I have a failing test in javascript. If I run my tests in Visual Studio you can see it sitting, failed, next to some C# unit tests, but it only tells me my data driven test failed, no other info.

But if we click on that test it will open up a detail window and give us the results for every record in the data set for that data driven test, which is a lot nicer.

Double clicking on that will give us even more detail yet.

So there you go: test in the browser, test in Visual Studio, it’s up to you. Generally I like to work against the html test page while I’m writing my tests, and let the MSTest integration work for me on a build server or wherever else the tests get run.

Here is the complete little sample app, JsTestComplete.zip

Integrating Javascript Unit Tests with Visual Studio – Organizing Your Tests

So now that your javascript is tested, we need to work on getting your test results into Visual Studio. The first step is to organize your tests. You probably don’t have one giant C# file for all your unit tests, and you are probably going to want separate files for your javascript tests too. I took an approach discussed by Phil Haack here and extended it a little bit.

http://haacked.com/archive/2011/12/10/using-qunit-with-razor-layouts.aspx

The basic idea is to use controller-less views in Razor to roll up all your test files into one nice little index so you can access them all at once.  In my MVC project I create a new Test folder and add a subfolder called TestPages; in the root of the Test folder I have three files I will walk through next.

The Index.cshtml is a standard view, with a twist.  Instead of a standard HTML layout of head, body, etc. tags, I actually set it up to be a straight XML file.  This is my big addition to the whole take on this type of integration.  It uses some simple file access to get the list of pages in the TestPages folder.  These are the pages that actually contain our Qunit tests.  The Index creates a simple XML document that looks like this:

<?xml version="1.0" encoding="utf-8" ?>
<?xml-stylesheet type="text/xsl" href="http://localhost:51844/Test/transform"?>
<Rows>
        <Row>
            <TestPage>TestOne</TestPage>
            <TestPageUrl>TestPages/TestOne</TestPageUrl>
        </Row>
        <Row>
            <TestPage>TestTwo</TestPage>
            <TestPageUrl>TestPages/TestTwo</TestPageUrl>
        </Row>
</Rows>
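
For reference, the file-access logic inside Index.cshtml that builds this document could look roughly like the following C# (my sketch, not the exact code from the sample project; the folder path and the .cshtml filter are assumptions):

    // Needs System.IO, System.Linq, System.Web and System.Xml.Linq.
    // Enumerate the test pages on disk and turn each one into a Row element.
    var testPagesDir = HttpContext.Current.Server.MapPath("~/Views/Test/TestPages");
    var rows = new XElement("Rows",
        from file in Directory.GetFiles(testPagesDir, "*.cshtml")
        let name = Path.GetFileNameWithoutExtension(file)
        select new XElement("Row",
            new XElement("TestPage", name),
            new XElement("TestPageUrl", "TestPages/" + name)));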

The transform.cshtml is a simple XSLT that works on the XML that Index.cshtml generates so when you view it in a browser it looks like this.

Clicking one of the pages brings you to that test page in the TestPages folder, runs the Qunit unit tests and displays the results.  This is where the real benefit of your Knockout viewmodels comes in. Because Knockout lets you write pure javascript and rely on databinding to manipulate the DOM, you can simply include your viewmodel scripts and start unit testing them in your test pages. If your javascript were littered with class selectors and hardcoded ids you would have a much harder time and would probably have to create mock elements on your test pages for all of those elements.

For now, that should be all you need to get your tests organized and runnable in the browser.  From here the next step will be integrating them into Visual Studio so when you run your unit tests you get not only native C# or VB test results, but your javascript results as well.

To download a simple test solution, check it out here
JsTest.zip

Integrating Javascript Unit Tests with Visual Studio – Testing your Javascript

First, let’s pause…the point of this post series is how to integrate your javascript unit tests with Visual Studio, not to teach you how to use the frameworks I’m discussing.  The rest of this post is going to assume you’re familiar with KnockoutJS and Qunit.

Knockout is a javascript framework that lets you create javascript viewmodels for your html page.  When done correctly, your view model simply maintains its own state and is blissfully unaware of the existence of any HTML elements. Coupling to the DOM is one of the death knells of javascript unit tests: when your javascript knows about html structure and elements it forces you to mock your view out when unit testing, otherwise you can’t properly test it.  Knockout lets you write clean, testable javascript and move the glue between your code and the HTML DOM into a declarative data binding syntax in the HTML.  Again, I’m not here to teach you Knockout; there are much better places to learn that, like so:

http://learn.knockoutjs.com/

Likewise, Qunit is the javascript unit testing framework used by jQuery.  It seems like the modern internet wouldn’t exist without jQuery, so we might as well run with that, even though there are a number of other javascript unit test frameworks out there.

http://docs.jquery.com/QUnit

To integrate our tests we’re going to need some code to test and tests to test it first, so let’s start there.  I’ve stolen the following view model from the knockout tutorial site:

function AppViewModel() {
    this.firstName = ko.observable("Nick");
    this.lastName = ko.observable("Olson");

    this.fullName = ko.computed(function() {
        return this.firstName() + " " + this.lastName();
    }, this);

    this.capitalizeLastName = function() {
        var currentVal = this.lastName();        // Read the current value
        this.lastName(currentVal.toUpperCase()); // Write back a modified value
    };
}

What should we unit test? Well, we’ve got the fullName property which has some logic, and the function to capitalize the last name, so let’s start there.

$(function () {
        module("My First Tests");

        test("FullNameTest", function () {
            var model = new AppViewModel();
            equal("Nick Olson",model.fullName(), "full name built properly");
        });

        test("capitalizeTest", function()
        {
            var model = new AppViewModel();
            model.capitalizeLastName();
            equal("OLSON",model.lastName(), "capitalize works");
        });
    });

To see it all in action, check it out here:
http://jsfiddle.net/x6be5/2/

 

 

MVVM and Speech using the Kinect–Pt. I

I have always been fascinated by interacting with a computer in ways beyond the traditional keyboard and mouse.  The problem is, as I learned while getting my degree in computer science, some problems are really hard.

I didn’t want to dedicate years to obtaining a PhD in computer vision or spend my life doing statistics. But, like any other good developer, I’ll just take the toolkit someone else came up with to solve the hard problems.

Enter the Kinect.

In reality, Microsoft has had a Speech API for quite some time and I’ve played with it in the past, but with the Kinect they’ve produced a beautiful piece of hardware that can do 3D depth sensing, speech recognition and directional detection, live skeletal tracking and more, all in a package that doesn’t cost a lot more than a high-end web cam.  This first example doesn’t actually require the Kinect; in fact, the code itself just uses the default audio input.  But it can easily be changed to use the Kinect audio stream.  Later projects I’m working on will use the Kinect cameras to do some hopefully neat things.

Since the Kinect hacking started my brain has been churning with ideas.  The most pragmatic was the thought to tie speech recognition into an MVVM application.  The nice thing about a well implemented screen using MVVM is that you have your UI described separately in XAML on the front end and a class (ViewModel) containing your library of commands that can be executed.  Using a Command object you can tie a specific element, like a button, to a specific command like Save very cleanly and easily.

This clean separation of concerns means you don’t really care how a command is invoked; whether it’s a button press, keyboard shortcut, or voice command, it all works the same.  Your ViewModel executes the command and the UI happily updates through the powerful databinding of XAML.

Aside from the obvious scifi references that this brings to mind, it could also help by making programs more accessible to the vision or mobility impaired.  Also, it could be just plain more efficient in some scenarios.

Most of the work is done for us by the commanding infrastructure in WPF.  So first I’d like to take a look at how this implementation will be used.  Below is a standard button declaration with a command attached.

<Button Content="Save" Command="{Binding Save}" />

The other great thing about XAML is the extensibility, so by the time we’ve implemented this speech API the only thing that will change is this:

<Button Content="Save" Command="{Binding Save}" 
             voice:SpeechCommand.Phrase="save" />

One simple property added and that’s pretty much all the end developer needs to do.  The only other thing we need for using the speech recognition is something I call a SpeechCommand, which is basically just an implementation of the standard DelegateCommand found in MVVM frameworks.  The SpeechCommand acts exactly like the standard commands, but it is also the place for the Phrase AttachedProperty to live and is the glue that bridges the application to my wrapper around the Speech API.
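
To make that a little more concrete, here is a rough sketch (my illustration, not the actual source from this project) of what such a SpeechCommand could look like: a DelegateCommand-style ICommand plus an attached Phrase property that the speech wrapper can read to map a recognized phrase back to a command.

    // Sketch only: a DelegateCommand-like ICommand with an attached Phrase property.
    // Requires System, System.Windows and System.Windows.Input.
    public class SpeechCommand : ICommand
    {
        private readonly Action _execute;
        private readonly Func<bool> _canExecute;

        public SpeechCommand(Action execute, Func<bool> canExecute = null)
        {
            _execute = execute;
            _canExecute = canExecute;
        }

        public event EventHandler CanExecuteChanged;

        public bool CanExecute(object parameter)
        {
            return _canExecute == null || _canExecute();
        }

        public void Execute(object parameter)
        {
            _execute();
        }

        // Backing store for the voice:SpeechCommand.Phrase="save" attached property.
        public static readonly DependencyProperty PhraseProperty =
            DependencyProperty.RegisterAttached("Phrase", typeof(string), typeof(SpeechCommand),
                new PropertyMetadata(null, OnPhraseChanged));

        public static string GetPhrase(DependencyObject obj)
        {
            return (string)obj.GetValue(PhraseProperty);
        }

        public static void SetPhrase(DependencyObject obj, string value)
        {
            obj.SetValue(PhraseProperty, value);
        }

        private static void OnPhraseChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            // This is where the element's bound command and the new phrase would be
            // registered with the wrapper around the Speech API (omitted in this sketch).
        }
    }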

In the next post I’ll walk through how I built the app and post some source code.  Until then, I leave you with a screenshot.  Please note that no mouse or keyboard was harmed (or used) in the taking of this screenshot.


 

Overlaying Controls in WPF with Adorners

One of the common things that comes up on multiple projects using WPF is the ability to overlay the screen or a certain portion of it, either to create a richer modal-type experience than a message box provides or to block access to a certain portion of the screen while an asynchronous or long running operation is happening.

There are a number of ways to do this, but the one I’ve settled on after tackling it on a few projects is an adorner that automatically overlays any control with any content you want.

Other options include using the Popup control, which is problematic because popups are not part of the normal visual layout.  They are always on top of all other content and don’t move when you resize or move the window, at least not automatically.  Another way you can do it is to put everything inside a Grid and add the content you want to overlay with at the end of the Grid’s content, with no Row or Column specification.  You can set the visibility to collapsed and show or hide based on databinding or triggers, etc.  This works better than the popup for resizing, but is not as reusable.  Even though the adorner is a bit more code, I think it’s more reusable and better than the Popup option.

The way I use it is I create a UserControl that will be my overlay, let’s call it ProgressMessage.  I’ve got a Grid I want to overlay called LayoutRoot.  I can then call OverlayAdorner<ProgressMessage>.Overlay(LayoutRoot).  Now my grid will be overlaid with the ProgressMessage user control.  I’ve also provided an overload of the Overlay method so you can actually pass in an instance of the content you want to overlay with.

I use a factory pattern and the way IDisposable/using statements work to automatically create and remove the adorner.  You could also store the IDisposable that’s returned and call Dispose later to remove the adorner from the AdornerLayer.

using (OverlayAdorner<ProgressMessage>.Overlay(LayoutRoot)) 
{ 
   // do some stuff here while overlaid 
}

A couple of quick notes: because of the way WPF layout and hit-testing work, you should not have any height or width set on your overlay content, and the background needs to be non-transparent.  To get a semi-transparent background use the alpha portion of the aRGB color format on your background.  So instead of Black, use #44000000, which gives you a semi-transparent gray background.  Additionally, all these methods block mouse input, but keyboard navigation remains active.  I’ve started playing with lost focus events and other methods to intercept losing focus and retain it.  Otherwise the user can tab through the controls underneath the overlay and activate them using arrow keys, enter and space bar.  You can either solve this yourself, or once I straighten it out I’ll post what I come up with.
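
As a tiny illustration of that background point, setting the #44000000 color from code on a hypothetical overlay control would look like this:

    // overlayContent is a placeholder for your overlay UserControl.
    // Alpha 0x44 over black: the same semi-transparent gray as #44000000 in XAML.
    overlayContent.Background = new SolidColorBrush(Color.FromArgb(0x44, 0x00, 0x00, 0x00));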

 

Here is the rest of the class, OverlayAdorner.cs

    /// <summary> 
    /// Overlays a control with the specified content 
    /// </summary> 
    /// <typeparam name="TOverlay">The type of content to create the overlay from</typeparam> 
    public class OverlayAdorner<TOverlay> : Adorner, IDisposable where TOverlay : UIElement, new()
    {
        private UIElement _adorningElement;
        private AdornerLayer _layer;

        /// <summary>
        /// Overlay the specified element
        /// </summary>
        /// <param name="elementToAdorn">The element to overlay</param>
        /// <returns></returns>
        public static IDisposable Overlay(UIElement elementToAdorn)
        {
            return Overlay(elementToAdorn, new TOverlay());
        }
        /// <summary> 
        /// Overlays the element with the specified instance of TOverlay 
        /// </summary> 
        /// <param name="elementToAdorn">Element to overlay</param> 
        /// <param name="adorningElement">The content of the overlay</param> 
        /// <returns></returns> 
        public static IDisposable Overlay(UIElement elementToAdorn, TOverlay adorningElement)
        {
            var adorner = new OverlayAdorner<TOverlay>(elementToAdorn, adorningElement);
            adorner._layer = AdornerLayer.GetAdornerLayer(elementToAdorn);
            adorner._layer.Add(adorner);
            return adorner as IDisposable;
        }

        private OverlayAdorner(UIElement elementToAdorn, UIElement adorningElement)
            : base(elementToAdorn)
        {
            this._adorningElement = adorningElement;
            if (adorningElement != null)
            {
                AddVisualChild(adorningElement);
            }
            Focusable = true;
        }

        protected override int VisualChildrenCount
        {
            get { return _adorningElement == null ? 0 : 1; }
        }

        protected override Size ArrangeOverride(Size finalSize)
        {
            if (_adorningElement != null)
            {
                Point adorningPoint = new Point(0, 0);
                _adorningElement.Arrange(new Rect(adorningPoint, this.AdornedElement.DesiredSize));
            }
            return finalSize;
        }

        protected override Visual GetVisualChild(int index)
        {
            if (index == 0 && _adorningElement != null)
            {
                return _adorningElement;
            }
            return base.GetVisualChild(index);
        }
        public void Dispose()
        {
            _layer.Remove(this);
        }
    }

Code Generation with T4

I was first exposed to T4 in January/February of 2008 when I was ramping up for a project that used the Guidance Automation packages.  Even though that fell through, it was still worthwhile for getting exposure to a cool little utility like this.

T4 stands for Text Template Transformation Toolkit and is a built-in feature of Visual Studio 2008, available as an add-on for 2005.  It’s an engine that can be used for generating code from any data source you want.

Add a template
In any project in VS, right click and Add -> New Item, select a text file and rename it so it has the extension .tt.  This keys Visual Studio into the fact that it’s dealing with a T4 file.

Now would also be a good time to install the Clarius add-on for T4.  http://www.visualt4.com/

This tool gives you a bit more design time support than you normally get with T4 out of the box.  Note to the faint of heart: unless you’re willing to shell out for the not-free editions you’re about to step back in time to a place before Intellisense, AutoComplete, syntax highlighting and other things we’re all accustomed to.

The best way to think about T4 is to go back to web scripting languages like PHP, ASP, etc.  You have your template, and you have blocks of code for control flow and logic.  Let’s take a look at a simple template:

<#@ template language="C#" debug="true" hostspecific="true" #>
<#@ output extension="txt" #>
<#
for(int i=0; i < 10; i++)
{
#>
I'm a template on line <#= i #>
<#
}
#>

This tells us the control flow is done in C# and the output is a text file.  When you save the file the template engine will run.  You can also run the engine by right clicking the .tt file and choosing “Run Custom Tool”.

You can see our output is now nested under our item in Solution Explorer.


There are two tags to be aware of here, <# #> and <#= #>: the first allows control flow code to be inserted and the second is for outputting values.

That’s enough for a start, I’ll be back in another post about going a little more in depth.

Nice Kindle 2 Case

I don’t plan on doing product endorsements often on here, but thought I’d share this.  Magenic gave all of us Kindle 2s for the holidays this year.  I’ve been hunting for a case since I got mine and wasn’t impressed with a lot of the stuff I was seeing on Amazon.  Either the quality wasn’t there or it was just too expensive.

I was in Staples today and spotted this: http://www.amazon.com/exec/obidos/ASIN/B0007VPG6U/growinglifestyle/ref=nosim  It’s a Swiss zip folio that has a perfect area for a kindle and a couple other pockets.  Only $30 too.

Repeating Tablix Headers In SSRS 2008

So I’ve been trying to figure out how to repeat headers in a Tablix.  I don’t do a lot of SSRS stuff and the report I’ve got is moderately complex: it does a bit of an involved grouping situation with a few sub reports for details and then a Tablix to repeat individuals under each details section.

The Tablix has Repeat Column/Row Headers properties in the property pane but they are useless.  During my searches I saw something about these properties being for when the report is too wide, not too long.

Anyways, at the bottom of your report designer there should be the grouping info pane.  Click on the black arrow in the upper-right corner of the pane to enable “Advanced Mode”.  Doing this shows static group items in your grouping pane for things like header rows.  Find the static item that corresponds to your header row and check the property pane.  There will be a “RepeatOnNewPage” property; set it to true and headers should repeat, at least they did for me.

Hint: If you can’t figure out which static grouping could be your row header, watch the report designer as you click on the different groups, it will highlight the one you just selected in the designer.

Blog Refresh

So this used to be my personal blog.  After a while that moved so my fiancée and I could blog together, and this site has been languishing for quite a while.  I’ve kept meaning to repurpose it as a tech blog, and now I finally got around to it.

I’ve upgraded to WordPress 2.7 and am working on getting this mirrored at Magenic’s blog server.  The first thing I’ll probably be talking about is my trials in writing IM functionality into a kiosk using the UCC SDK.

Writing an IM client for Office Communications Server

So I’ve been working on adding IM functionality to the kiosk software I wrote and maintain at Magenic. At first I planned on using the Office Communicator Automation API. It all worked in my proof of concept, except for one thing.

I couldn’t figure out how to determine when I was receiving messages. I could start conversations, detect when they’d been started with me, but not when I actually got a message.

Not to mention it felt kinda dirty because you’re really just automating the Office Communicator app, so you have windows popping up all over the place. In my case they would have all been behind the kiosk app so I wasn’t too concerned, but it still felt unclean.

My research led me to the Unified Communications Client API, which sounds like it’s what I wanted from the start.

Moral of the story: if you want to do more than simple presence and conversation initiation in your application, use the UCC API.

About the UCC API:
http://msdn.microsoft.com/en-us/library/bb878684.aspx