Integrating Javascript Unit Tests with Visual Studio – Wrapping Up

Welcome to the grand finale of this four-part series. So far you’ve got your javascript unit tests, and you’ve got them all nicely organized and indexed using the info from the previous post. Now, how do you get to see those results inside Visual Studio? For that last cherry on top we’re going to use a browser automation tool called WatiN and a data-driven unit test in C#. This approach was borrowed from another blog and tweaked to use the XML indexing trick.

http://blogs.imeta.co.uk/MGodfrey/archive/2010/07/15/874.aspx

http://watin.org/

First things first: setting up your data-driven unit test. If you’re not familiar, this is a unit test in MSTest that you can pass a dataset and have rerun against every value in that dataset. Here, our “dataset” is going to be the results of our javascript unit tests. In the class initialize of our unit test we go out and download the XML data that holds the location of all our test pages. From there, we use WatiN to visit each page and parse out the results of each test. Because our unit tests run whenever a browser hits the test page, this is all we need to do to run them. One thing you may need to tweak: the WatiN dll needs a reference to Interop.SHDocVw.dll. If you get errors loading that DLL, check its properties in your project references and make sure that Embed Interop Types is false and Copy Local is true.


Then we’re going to add a single test file called JavaScriptTests.cs to our unit test project. In the class initialize we’re going to set up the code that downloads our XML test index and parses out each test page’s results.

    private static string _baseUrl = "http://localhost:8000/Test/";
    private static IE _ie;

    [ClassInitialize]
    public static void ClassInit(TestContext context)
    {
        var xdoc = XDocument.Load(_baseUrl + "?notransform");
       
        _ie = new IE();
        _ie.ShowWindow(NativeMethods.WindowShowStyle.Hide);

        var resultsDoc = new XDocument(new XElement("Rows"));
        
        foreach (var testPage in xdoc.Descendants("Row"))
        {
            _ie.GoTo(_baseUrl + testPage.Descendants("TestPageUrl").First().Value);
            _ie.WaitForComplete(5);
            var results = _ie.ElementsWithTag("li").Filter(x => x.Parent.Id == "qunit-tests");
            // GetTestName (not shown) builds a name like "My First Tests_FullNameTest"
            // from the result element's module and test title.
            var xResults = from r in results
                           select new XElement("Row",
                               new XElement("name", GetTestName(r)),
                               new XElement("result", r.ClassName),
                               new XElement("summary", r.OuterText));
            
            resultsDoc.Root.Add(xResults);
        }
        resultsDoc.Save("JavascriptTests.xml");
    }

Quick summary: we load the XML index into an XDocument, parse out each url, use WatiN to load the page, and scrape out the result for each test on the page. This is all added to a separate XDocument and saved in the local test run folder as JavascriptTests.xml. This is the dataset that gets passed into our data-driven unit test, and it ends up looking something like this:

<?xml version="1.0" encoding="utf-8"?>
<Rows>
  <Row>
    <name>My First Tests_FullNameTest</name>
    <result>pass</result>
    <summary>My First Tests: FullNameTest (0, 1, 1)Rerun
full name built properlyExpected: "Nick Olson"</summary>
  </Row>
  <Row>
    <name>My First Tests_capitalizeTest</name>
    <result>pass</result>
    <summary>My First Tests: capitalizeTest (0, 1, 1)Rerun
capitalize worksExpected: "OLSON"</summary>
  </Row>
  <Row>
    <name>My Second Tests_FullNameTest</name>
    <result>pass</result>
    <summary>My Second Tests: FullNameTest (0, 1, 1)Rerun
full name built properlyExpected: "Brian Olson"</summary>
  </Row>
  <Row>
    <name>My Second Tests_FullNameFailTest</name>
    <result>fail</result>
    <summary>My Second Tests: FullNameFailTest (1, 0, 1)Rerun
full name built properlyExpected: "Brian Olson"
Result: "Nick Olson"
Diff: "Brian "Nick  Olson" </summary>
  </Row>
</Rows>

This is the aggregated test results for both of our test pages. The last little bit is a simple data-driven unit test that takes this XML file and basically just asserts on the result element of each Row in the XML document:

    // MSTest injects this property at run time; the DataSource attribute
    // exposes each Row through TestContext.DataRow.
    public TestContext TestContext { get; set; }

    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.XML", "|DataDirectory|\\JavascriptTests.xml", "Row", DataAccessMethod.Sequential), TestMethod]
    public void JavascriptTestRunner()
    {
        var testName = TestContext.DataRow["name"].ToString();
        var testResult = TestContext.DataRow["result"].ToString();
        var summary = TestContext.DataRow["summary"].ToString();

        TestContext.WriteLine("Testing {0} - {1}", testName, testResult);
        if (testResult != "pass")
            Assert.Fail("{0} failed: {1}", testName, summary);
    }

That’s it! You’re all done. Now when you run your tests in Visual Studio, it will run all the C# unit tests, including your new data-driven javascript test, and aggregate them all into your normal Test Results window. If you looked at my sample code you may have noticed that I have a failing javascript test. If I run my tests in Visual Studio you can see it sitting, failed, next to some C# unit tests, but it only tells me my data-driven test failed, with no other info.

But if we click on that test it will open up a detail window and give us the results for every record in the data set for that data-driven test, which is a lot nicer.

Double clicking on that will give us even more detail yet.

So there you go: test in the browser or test in Visual Studio, it’s up to you. Generally I like to work against the html test page while I’m writing my tests, and let the MSTest integration work for me on a build server or wherever else the tests are run.

Here is the complete little sample app: JsTestComplete.zip

Integrating Javascript Unit Tests with Visual Studio – Organizing Your Tests

So now that your javascript is tested, we need to work on getting your test results into Visual Studio. The first step is to organize your tests. You probably don’t have one giant C# file for all your unit tests, and you’re probably going to want different files for your javascript tests too. I took an approach discussed by Phil Haack here and extended it a little bit.

http://haacked.com/archive/2011/12/10/using-qunit-with-razor-layouts.aspx

The basic idea is to use controller-less views in Razor to roll up all your test files into one nice little index so you can access them all at once.  In my MVC project I create a new Test folder and add a subfolder called TestPages; in the root of the Test folder I have three files I will walk through next.

The Index.cshtml is a standard view, with a twist.  Instead of a standard HTML layout of head, body, etc. tags, I actually set it up to be a straight XML file.  This is my big addition to the whole take on this type of integration.  It uses some simple file access to get the list of pages in the TestPages folder.  These are the pages that actually contain our Qunit tests.  The Index creates a simple XML document that looks like this:

<?xml version="1.0" encoding="utf-8" ?>
<?xml-stylesheet type="text/xsl" href="http://localhost:51844/Test/transform"?>
<Rows>
        <Row>
            <TestPage>TestOne</TestPage>
            <TestPageUrl>TestPages/TestOne</TestPageUrl>
        </Row>
        <Row>
            <TestPage>TestTwo</TestPage>
            <TestPageUrl>TestPages/TestTwo</TestPageUrl>
        </Row>
</Rows>

The transform.cshtml is a simple XSLT that works on the XML that Index.cshtml generates so when you view it in a browser it looks like this.

Clicking one of the pages brings you to that test page in the TestPages folder, runs the Qunit unit tests, and displays the results.  This is where the real benefit of your Knockout viewmodels comes in. Because Knockout lets you write pure javascript and rely on databinding to manipulate the DOM, you can simply include your viewmodel scripts and start unit testing them in your test pages. If your javascript were littered with class selectors and hardcoded ids you would have a much harder time, and would probably have to create mock elements on your test pages for all of those elements.
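To make that contrast concrete, here’s a minimal sketch in plain javascript (no Knockout; the function and element names are hypothetical, not from the sample project) of the two styles:

```javascript
// Hard to test: the logic is tangled up with hardcoded ids, so a unit
// test has to create mock elements before it can even call this.
function updateFullNameInDom() {
    var first = document.getElementById("firstName").value;
    var last = document.getElementById("lastName").value;
    document.getElementById("fullName").innerText = first + " " + last;
}

// Easy to test: pure logic, no DOM. Databinding handles the elements.
function buildFullName(first, last) {
    return first + " " + last;
}
```

The pure version can be exercised from a Qunit test with no markup on the page at all; the DOM-coupled version can’t be called until you’ve mocked out all three elements.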

For now, that should be all you need to get your tests organized and runnable in the browser.  From here the next step will be integrating them into Visual Studio so that when you run your unit tests you get not only native C# or VB test results, but your javascript results as well.

To download a simple test solution, check it out here
JsTest.zip

Integrating Javascript Unit Tests with Visual Studio – Testing your Javascript

First, let’s pause… the point of this post series is to show how to integrate your javascript unit tests with Visual Studio, not to teach you how to use the frameworks I’m discussing.  The rest of this post is going to assume you’re familiar with KnockoutJS and Qunit.

Knockout is a javascript framework that lets you create javascript viewmodels for your html page.  When done correctly, your viewmodel simply maintains its own state and is blissfully unaware of the existence of any HTML elements. That awareness is one of the death knells of javascript unit tests: when your javascript knows about html structure and elements, it forces you to mock your view out when unit testing, otherwise you can’t properly test it.  Knockout lets you write clean, testable javascript and moves the glue between your code and the HTML DOM into a declarative data binding syntax in the HTML.  Again, I’m not here to teach you Knockout; there are much better places to learn that, like so:

http://learn.knockoutjs.com/

Likewise, Qunit is the javascript unit testing framework used by jQuery.  It seems like the modern internet wouldn’t exist without jQuery, so we might as well run with that, even though there are a number of javascript unit test frameworks out there.

http://docs.jquery.com/QUnit

To integrate our tests we’re going to need some code to test, and tests to test it with, so let’s start there.  I’ve stolen the following view model from the knockout tutorial site:

function AppViewModel() {
    this.firstName = ko.observable("Nick");
    this.lastName = ko.observable("Olson");

    this.fullName = ko.computed(function() {
        return this.firstName() + " " + this.lastName();
    }, this);

    this.capitalizeLastName = function() {
        var currentVal = this.lastName();        // Read the current value
        this.lastName(currentVal.toUpperCase()); // Write back a modified value
    };
}

What should we unit test? Well, we’ve got the fullName property which has some logic, and the function to capitalize the last name, so let’s start there.

$(function () {
    module("My First Tests");

    test("FullNameTest", function () {
        var model = new AppViewModel();
        // Qunit's equal takes (actual, expected, message)
        equal(model.fullName(), "Nick Olson", "full name built properly");
    });

    test("capitalizeTest", function () {
        var model = new AppViewModel();
        model.capitalizeLastName();
        equal(model.lastName(), "OLSON", "capitalize works");
    });
});

To see it all in action, check it out here:
http://jsfiddle.net/x6be5/2/


Integrating Javascript Unit Tests with Visual Studio – Intro

I’ve been working on a project over the last few months that quickly evolved from a Silverlight project to an ASP.NET MVC3 one. If you’re shifting from Silverlight to MVC and want to maintain that rich client interaction, it means you’re probably going to be writing a lot of javascript code.

In fact, we quickly realized that we were really writing a javascript client application with a .NET backend. This can be a scary proposition when it’s your first real foray into heavy javascript development. It seems like the first thing to go is the logical structuring and thought that you’d put into your code if this were a strongly typed language. Something about that <script/> tag just makes you want to use it and throw thousands of lines of javascript at it.

As a result, the next to go are your unit tests, if you had any to begin with.  I will confess, I’ve never been a huge unit tester.  But once you start working with dynamic languages and your warm fuzzy “Build Succeeded” blanket is taken away, I’ll reach for whatever comfort I can get.

The goal of this series of posts is to walk through what I did to unit test my javascript and get those test results into Visual Studio’s test output.  Although I may touch briefly on how to use the frameworks I’m talking about, the focus will be on gluing them together.

That said, here’s what we’ll be working with:

  • MVC3 w/ Razor
  • Qunit
  • KnockoutJS
  • XML/XSLT
  • Watin
  • C# Data-driven unit tests
Some of these ideas were cobbled together, stolen, and enhanced from other places, so if you can’t wait, here is where I started off:

Simplify Unit Testing with IDisposable and the Using Statement

I’m sure I’m not the first to do this or something similar, but I’ll pat myself on the back all the same. Recently, I’ve been doing a lot of IoC and unit testing on some WCF services we’re working on. The services take implementations of our IRepository for data access which I’ve mocked out for testing purposes.

Most people are familiar with the using statement and its purpose: to ensure objects that implement IDisposable are properly cleaned up. A lot of people are also aware that you can use this interface and statement to do automatic scoping of some operations in a nice, consistent manner. The first time I ever saw this pattern was as a shortcut to set the Cursor in a WinForms app to the WaitCursor and then back to the default once some long-running operation was complete.

For Example:

using (CursorScope.SetCursor(this, Cursors.WaitCursor))
{
    // long running operation
}

Where the CursorScope class looks like this:

class CursorScope : IDisposable
{
    private Cursor _originalCursor;
    private Control _control;

    private CursorScope(Control control, Cursor newCursor)
    {
        _control = control;
        _originalCursor = _control.Cursor;
        _control.Cursor = newCursor;
    }

    public static CursorScope SetCursor(Control control, Cursor newCursor)
    {
        return new CursorScope(control, newCursor);
    }

    public void Dispose()
    {
        _control.Cursor = _originalCursor;   
    }
}

So as we enter the using statement the cursor is changed for the control, and as we leave the using statement and our object is “disposed” it is set back to the original value, regardless of any exceptions or errors that may occur in between. This syntax and behavior is useful in a number of situations. The most recent place I’ve been using it, as the title mentions, is in some of my unit tests. To test out my services, I’ve built some mock repositories for the services to call. There are two scenarios where this has come in handy.

The first is when I’m trying to test the error and exception handling in my service. To get complete code coverage you need to make sure your tests also throw exceptions where you’re expecting them to be handled, otherwise your catch block will never be tested. To support this, I added a boolean flag on my mock repository, AllMethodsThrowExceptions; when this is set to true, any operation you try on the repository will throw an exception. I could use this in one of two ways. I could try to remember to always set the flag to true and then back to false when I’m done testing it. The problem there is that, depending on how your unit tests are run and set up, if you forget to set the flag back to false it may cause your mock repository to throw exceptions in other unit tests. To help avoid this situation I made the flag private and added a ThrowExceptions() method to the mock repository that returns an IDisposable.

private bool AllMethodsThrowExceptions { get; set; }

public IDisposable ThrowExceptions()
{
    AllMethodsThrowExceptions = true;
    return new ExceptionDisposable(this);
}

// nested inside MockRepository so it can reset the private flag
class ExceptionDisposable : IDisposable
{
    public MockRepository Repository { get; set; }
    public ExceptionDisposable(MockRepository repo)
    {
        Repository = repo;
    }

    public void Dispose()
    {
        Repository.AllMethodsThrowExceptions = false;    
    }
}

And then in our unit test we can test the exception path like so:

[TestMethod]
public void GetUserExceptionTest()
{
    using (_mockUserRepo.ThrowExceptions())
    {
        _log.ClearLog();
        var response = _service.GetUser(new GetUserRequest());
        Assert.IsNull(response.User);
        Assert.AreEqual(1, response.ErrorMessages.Count);
    }
}

This causes our exceptions to be thrown so we can make sure our code handles them properly. Then, when we’re done with that part of our unit testing, the flag is reset so that any tests that run subsequently work correctly.

The other area where this comes in useful is data: our mock repository uses in-memory data, and manipulating it could throw off other tests as well. So we implement the same pattern, but this time, instead of setting a flag, we copy the repository’s data and reset it once the test is complete.

public IDisposable ChangingData()
{
    return new ChangingDataDisposable(this);
}

public class ChangingDataDisposable : IDisposable
{
    public MockRepository<TEntity> Repository { get; set; }
    private List<TEntity> OriginalData;
    public ChangingDataDisposable(MockRepository<TEntity> repo)
    {
        Repository = repo;
        OriginalData = CloneData(Repository.Data);
    }

    private T CloneData<T>(T data)
    {
        if (!typeof(T).IsSerializable)
            throw new ArgumentException("Must be serializable.", "data");

        if (Object.ReferenceEquals(data, null))
            return default(T);

        var s = new DataContractSerializer(typeof(T));
        using (var stream = new MemoryStream())
        {
            s.WriteObject(stream, data);
            stream.Seek(0, SeekOrigin.Begin);
            return (T)s.ReadObject(stream);
        }
    }

    public void Dispose()
    {
        Repository.Data = OriginalData;
    }
}

Here, when you call ChangingData() I clone the data using serialization, and again, when the object is disposed the data is reset.

[TestMethod]
public void UpdateUserTest()
{
    using (_mockUserRepo.ChangingData()) 
    {
        var user = _mockUserRepo.Data.FirstOrDefault(i => i.Id == 1);
        Assert.AreEqual("Nick", user.FirstName);

        user.FirstName = "Brian";

        var request = new UpdateUserRequest();
        request.User = user;

        var response = _service.UpdateUser(request);
        Assert.AreEqual("Brian", user.FirstName);
        Assert.AreEqual(1, response.EntityId);
    }
}

And all the data is reset back to the original values and ready for the next test to work off it.

MVVM and Speech using the Kinect-Pt. II

In the last post I talked about what I’d recently done with speech recognition and tying it in with MVVM’s concept of commands.

In this post, I want to walk through, step by step, how I set things up.  To get everything installed I just followed the directions for setting up the Kinect SDK, which also include directions for setting up the Speech API.  Google that and you’ll be well on your way.

After getting it set up, I recommend you give the Kinect SDK samples a try to make sure everything installed correctly.  From there I took a look at what the Kinect speech sample was doing and modified it to work with the default audio source instead of the Kinect.  Mostly because my Kinect needs to pull double duty between my hacking and actually letting me play on the Xbox.  Not sure how I can convince the wife we need a second one just yet.

Note that some of the code examples use some extension methods from a little library of mine, so you might not be able to directly copy/paste. Hit the continue reading link for the rest…

Continue reading

MVVM and Speech using the Kinect–Pt. I

I have always been fascinated by interacting with a computer in ways beyond the traditional keyboard and mouse.  The problem is, as I learned while getting my degree in computer science, some problems are really hard.

I didn’t want to dedicate years to obtaining a PhD in computer vision or spend my life doing statistics. But, like any other good developer, I’ll just take the toolkit someone else came up with to solve the hard problems.

Enter the Kinect.

In reality, Microsoft has had a Speech API for quite some time and I’ve played with it in the past. But with the Kinect they’ve produced a beautiful piece of hardware that can do 3D depth sensing, speech recognition and directional detection, live skeletal tracking, and more, all in a package that doesn’t cost much more than a high-end web cam.  This first example doesn’t actually require the Kinect; in fact, the code itself just uses the default audio input, but it can easily be changed to use the Kinect audio stream.  Later projects I’m working on will use the Kinect cameras to do some hopefully neat things.

Since the Kinect hacking started, my brain has been churning with ideas.  The most pragmatic was the thought of tying speech recognition into an MVVM application.  The nice thing about a well-implemented screen using MVVM is that you have your UI described separately in XAML on the front end and a class (the ViewModel) containing your library of commands that can be executed.  Using a Command object you can tie a specific element, like a button, to a specific command like Save very cleanly and easily.

This clean separation of concerns means you don’t really care how a command is invoked; whether it’s a button press, a keyboard shortcut, or a voice command, it all works the same.  Your ViewModel executes the command and the UI happily updates through the powerful databinding of XAML.

Aside from the obvious scifi references that this brings to mind, it could also help by making programs more accessible to the vision or mobility impaired.  Also, it could be just plain more efficient in some scenarios.

Most of the work is done for us by the commanding infrastructure in WPF.  So first I’d like to take a look at how this implementation will be used.  Below is a standard button declaration with a command attached.

<Button Content="Save" Command="{Binding Save}" />

The other great thing about XAML is the extensibility, so by the time we’ve implemented this speech API the only thing that will change is this:

<Button Content="Save" Command="{Binding Save}"
        voice:SpeechCommand.Phrase="save" />

One simple property added, and that’s pretty much all the end developer needs to do.  The only other thing we need for the speech recognition is something I call a SpeechCommand, which is basically an implementation of the standard DelegateCommand found in MVVM frameworks.  The SpeechCommand acts exactly like the standard commands, but it is also the place for the Phrase attached property to live, and it is the glue that bridges the application to my wrapper around the Speech API.

In the next post I’ll walk through how I built the app and post some source code.  Until then, I leave you with a screenshot.  Please note that no mice or keyboards were harmed (or used) in the taking of this screenshot.


Custom Markup Extension To Replace IValueConverter

Something that’s been around in WPF for a long time but is just seeing the light of day in the upcoming Silverlight 5 release is the concept of a custom MarkupExtension.  For those unfamiliar with the concept, a MarkupExtension is anything within the curly braces “{}” of your xaml.

Some examples include Binding, Static- and DynamicResource, Type, Static, etc.  You can look at the code below to see some examples:

<Grid x:Name="LayoutRoot">
    <ContentControl Style="{StaticResource BgBlueTop}"/>
    <Border>
        <ItemsControl ItemsSource="{Binding Messages}"> ... </ItemsControl>
    </Border>
</Grid>

Basically, the xaml parser can interpret an attribute as a string literal value or convert it to an object through some means.  Markup extensions let you decide how the value you’re setting should be interpreted by the xaml parser.  For a much more in-depth look at how this all works, a decent place to start is this link on MSDN:

http://msdn.microsoft.com/en-us/library/ms747254.aspx

One interesting use of MarkupExtensions I’ve been playing with is a reimplementation of the Binding markup extension to provide an alternative to the IValueConverter interface. Sometimes you really only need to do a conversion once, or a couple of times on a specific screen.  When you implement a value converter it feels like you’re taking a tiny bit of view or business logic and stuffing it into an unrelated portion of your application.  That said, a standardized library of value converters, like boolean-to-visibility and so on, can make developing on a project much easier.  But for the one-off scenarios, it’d be nice to keep all that logic contained in your ViewModel and not have to spin up a new class for such a simple thing.

I used the post here as a starting point for my custom binding, because working with Binding or BindingBase was not going so well.  From there I created my own Binding class and added a ConverterDelegate property.  This property looks for a method on the DataContext of the binding and uses a generic IValueConverter behind the scenes to call that method on the DataContext.  This gets rid of all those little one-off classes you have to create for specific conversion scenarios.

The code is far from production ready, but the gist of it is that a custom MarkupExtension overrides the ProvideValue method of the abstract MarkupExtension class. In the background I look up the method on my DataContext and use an IValueConverter that calls that method to do the conversion. So to wire up my converter I can simply do this:

<Window x:Class="CustomMarkup.Window1" 
	xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
	xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" 
	xmlns:local="clr-namespace:CustomMarkup" Title="Window1" Height="300" Width="300"> 
	<Grid> 
           <Button Content="{local:ExBinding Path=ButtonText, ConverterDelegate=ToUpper}" Height="25" Width="100"/> 
	</Grid> 
</Window>

You can see I have a converter delegate called ToUpper, and in my code-behind or ViewModel I can simply write whatever methods I need to do conversions, like this:

public partial class Window1 : Window
{
	public Window1()
	{
		DataContext = this;
		InitializeComponent();
	}

	// our converter method
	public object ToUpper(object value)
	{
		return value.ToString().ToUpper();
	}

	public string ButtonText { get; set;}
}

I can post the sample project if there’s interest, but the real point is to give a small showing of what’s possible with custom MarkupExtensions in WPF and SL5.

Switching Windows Versions – “Downgrading” to Enterprise

So I recently bought a new laptop and asked the IT people at RBA about which version of Windows to install on it. They said it was all on our network so I went out and saw we had both Ultimate and Enterprise versions. Being a sucker for marketing gimmicks I grabbed the Ultimate version because it sounded cooler.

Unfortunately, I chose wrong. I should have used the Enterprise version because that’s what we are licensed for. In the end, they are the same version, but Enterprise is sold only to businesses and Ultimate is sold to home users.

So you would think it would be an easy task to switch from one to the other, but the only officially supported way is to completely reinstall and start over. I had just spent the entire night setting up all my software, so I didn’t really feel like doing it twice, two nights in a row.

Luckily, there is a way to work around the limitation, which I’ve pulled and simplified from here: http://www.mydigitallife.info/2009/11/03/hack-to-in-place-downgrade-from-windows-7-ultimate-or-professional-to-less-premium-editions/

The condensed version is to:

  1. Run RegEdit on your computer and navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion
  2. In the right-hand pane, change the word “Ultimate” to “Enterprise” in both the EditionID and ProductName values
  3. From there I inserted the Windows 7 DVD, ran setup, and chose Upgrade, and everything worked flawlessly

Note: I reran the setup from inside of Windows. I did not boot from the DVD to do the upgrade so I don’t know if it works that way or not.

Wii Remote Force Calculator

Recently, I’ve been doing a fun side project.  It’s always fun when you can come up with an excuse to combine your hobbies or interests.   So I came up with the idea of using a Wii remote to calculate the force of a strike on a heavy bag.

Screenshot

I’ve had to dredge up old memories from my college physics classes, and I’m not entirely sure what I’ve got is a hundred percent accurate, but I feel that with proper setup it’s pretty good.  The calculation is easy enough: to get the force you just multiply the mass of the object by its acceleration.  The accelerometers in the Wii remote are accurate to about +/- 3 g, with about a ten percent margin of error.  Not too terrible for a fifty dollar Bluetooth-enabled accelerometer you can pick up at any store in town.  The problem is, if you hold it in your hand you can generate acceleration well above 3 g.  So instead, I tied the remote onto a heavy bag.  Much more mass means you get the same amount of force with acceleration levels well within the remote’s acceptable range.

Force = mass * acceleration

Because I’m not punching any harder, the force stays the same.  The heavy bag has a lot more mass, so the acceleration has to be a lot less to produce the same force.
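As a rough sketch of that relationship (the function names and the example numbers here are mine, not from the project):

```javascript
// F = m * a. The remote reports acceleration in g's, so convert to
// m/s^2 first (1 g is about 9.81 m/s^2).
var G = 9.81;

function strikeForce(massKg, accelInGs) {
    return massKg * accelInGs * G; // newtons
}

// For a fixed force, acceleration scales inversely with mass.
function accelInGsForForce(forceNewtons, massKg) {
    return forceNewtons / (massKg * G);
}
```

So for the same punch, the heavy bag’s much larger mass drops the measured acceleration back inside the remote’s +/- 3 g window.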

The thing I’m not sure of is whether I have to take into account the fact that the bag is on a chain, so it actually swings up in an arc.  I don’t know how much that affects the calculations, but I don’t think it would be much over the short time span we’re talking about.

My next steps are to try and rig up some testing system to try to determine the accuracy of the measurements.  I also want to get the opinion of some physics people to get my calculations even more accurate.  It’s been a fun project so far and I’d like to get it to the point where I could bring it into the gym and have my students play around with it a little bit.

The application itself is written in C#, using WPF, MVVM, and WiimoteLib.

I’ll try to post any updates here as they develop.