Category Archives: Technology

Displaying XPS Documents in Silverlight

I’ve recently been involved in a project that has a requirement to create and view XPS documents in Silverlight.  The application needs to display the XPS file in a full screen window together with zoom and navigation features.

After a little searching on how to do this, I was able to get a head start with this great post from David Anson, which includes sample code for viewing XPS files in Silverlight 2 beta 2.  As his updated post attests, however, there were a few issues with Silverlight 2 due to a breaking change in the way that font resources can be referenced from within a Silverlight assembly.  Fortunately, this post from Li Chen pointed me in the right direction, separating out the ODTTF fonts into separate XAP files which can be referenced at runtime. 

The sample code provided by Li Chen works well with the sample XPS files provided, but I couldn’t get it working with an XPS file generated using Microsoft Word.  After a little debugging this weekend, I found a few subtleties in how the code dealt with XPS files generated from Microsoft Word:

Firstly, the root document of the XPS file is called FixedDoc.fdoc instead of FixedDocument.fdoc (which is the name when using the XPS Printer Driver).  This was fairly easy to correct using a simple check:

// Added to support "Save as XPS" from Microsoft Word
if (resourceInfo == null)
    resourceInfo = Application.GetResourceStream(_streamResourceInfo, ConvertPartName("/Documents/1/FixedDoc.fdoc"));

Secondly, the .fdoc file refers to pages using relative links.  Instead of an absolute link (e.g. /Documents/1/Pages/), Word uses a relative link (e.g. Pages/), which was causing the sample to be unable to find the pages.  A small piece of code to prepend the full path quickly fixes this also.

// Update the page names for "Save as XPS" from Microsoft Word
List<string> newPageNames = new List<string>();
foreach (string pageName in _pageNames)
    newPageNames.Add(pageName.StartsWith("Page") ? "/Documents/1/" + pageName : pageName);
_pageNames = newPageNames;

Finally, it looks like Word includes an attribute on the Glyphs elements called BidiLevel, which isn’t recognized by the Canvas element.  Adding an additional exclusion line to the sample code quickly fixed it.

_elementAttributesToRemove.Add("Glyphs", new List<string> { "BidiLevel" });

The result seems to work quite well, with an XPS file saved from Microsoft Word viewable within Silverlight.  


If you are interested in trying this yourself, I’ve posted a version of the sample code here which contains the above modifications.  To use the modified sample with your own XPS file, do the following: 

1.  Go into Microsoft Word and “Save as / XPS” to create a new XPS file.

2.  Download and compile the sample (note: only works in Visual Studio 2008). 

3.  Copy the XPS file into the SimpleSilverlightXpsViewer_Web project in your Visual Studio solution. 

4.  In Windows Explorer, rename the original XPS file to a ZIP file.  Ignore the warning about changing the file extension.

5.  Open the zip file and go into the /Resources directory.  Look for font files ending in ODTTF – these are the embedded fonts for the XPS file and must be referenced separately in the Silverlight project.  Copy all of these ODTTF files into the SampleWordGenerated project in the solution.  Note that the ODTTF files are named dynamically when the XPS file is generated.  This means that even if you have already imported the correct ODTTF files for a previous XPS file that uses the same fonts, you’ll still need to re-import them for each new XPS file. 

6.  Edit default.html in SimpleSilverlightXpsViewer_Web and on line 70 change xpsDocument=SampleWordGenerated.xps to the correct name of your XPS file. 

7.  Press F5 to run and you should see your XPS document displayed within a Silverlight control!
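For readers who want to script steps 4 and 5, it helps to remember that an XPS file is just a ZIP package, so the embedded ODTTF font parts can be listed without renaming anything.  Here is a minimal sketch (in Python, as a language-neutral illustration; the helper name is my own):

```python
import zipfile

def list_embedded_fonts(xps_path):
    """Return the obfuscated (.odttf) font parts inside an XPS package.

    An .xps file is a ZIP archive, so we can enumerate its parts directly
    instead of renaming the file to .zip by hand.
    """
    with zipfile.ZipFile(xps_path) as xps:
        return [name for name in xps.namelist() if name.lower().endswith(".odttf")]
```

Running this against a Word-generated XPS file shows exactly which font parts need to be copied into the Silverlight project.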

There is still a bit of work to do with the sample code, which I think would be worth taking into a CodePlex project.  For example, the code should initially read the XPS root file (FixedDocSeq.fdseq) instead of looking for FixedDoc.fdoc or FixedDocument.fdoc directly.  It would also be great to figure out a better way of extracting the fonts more dynamically at runtime.  Other than that, though, I found this to be a good solution for displaying XPS files in Silverlight applications – especially useful as Silverlight doesn’t support the FlowDocument element (which is commonly used in WPF applications for creating documents and report generation).
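As a sketch of what reading the root file first might look like, here is a minimal, language-neutral illustration in Python (the function name is mine): it opens the package, finds the .fdseq part, and pulls the DocumentReference Source out of it, rather than guessing between FixedDoc.fdoc and FixedDocument.fdoc.

```python
import re
import zipfile

def find_fixed_doc(xps_path):
    """Locate the FixedDocument part by reading the .fdseq root part.

    The .fdseq part contains a FixedDocumentSequence element whose
    DocumentReference Source attribute points at the .fdoc part, so
    there is no need to hard-code either Word's or the XPS Printer
    Driver's naming convention.
    """
    with zipfile.ZipFile(xps_path) as xps:
        fdseq_name = next(n for n in xps.namelist() if n.endswith(".fdseq"))
        fdseq = xps.read(fdseq_name).decode("utf-8")
        match = re.search(r'Source="([^"]+\.fdoc)"', fdseq)
        return match.group(1) if match else None
```

The same two-step lookup (root part, then the referenced document part) would translate directly into the C# of the sample viewer.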

Cloud Computing Talks at TechEd 2009

I wanted to share the details of two cloud computing sessions that I will be presenting at TechEd 2009 in Los Angeles next month: 

ISB204:  The Impact of the Cloud and Software as a Service – 5/11/2009 4:30PM-5:45PM

ARC308:  Patterns for Moving to the Cloud – 5/12/2009 8:30AM-9:45AM

The first session (ISB204) is part of the IT Manager track.  The goal of this 200 level session is to explore the significance of cloud computing for IT decision makers.  This will include looking at different types of applications and understanding whether they make sense for the cloud, and why – and then investigating some of the tradeoffs and risks for moving certain applications to the cloud. 

The second session (ARC308) is the next level deeper:  an architect/developer focused session that will cover a collection of design and implementation patterns for cloud based applications.  For example, how the cloud can enable parallelization across multiple machines, blob storage in the cloud, and some of the finer points of identity management within applications.

We are still a few weeks away from the event, so if there are questions that you would like to see answered, or other items that you think would make sense in either presentation, do get in touch and let me know. 

And to those that will be attending TechEd 2009, I look forward to seeing you there!

Twitter for Office Communicator

One of my observations from MIX09 last week was the sheer number of attendees using Twitter at the event.  In addition to the “flotzam” wall (a realtime display of tweets) during the keynote session, I couldn’t turn a corner without running into someone tweeting about something. 

Although I must admit that “I still don’t get Twitter”, on the plane ride back to Seattle I thought it would be interesting to create a Twitter client that works in conjunction with Office Communicator.

As you may know, Office Communicator provides the ability to set presence information as part of the UI.  Presence information is useful for out of the office details such as “At Chicago office today”.


After having used this in the past, I wondered how easy it would be to take this presence information and publish it to Twitter – in other words, when I update the presence text in Office Communicator, it would also update my Twitter account with the same information.  That way I can provide short presence information in one place and have it published to both internal users of Office Communicator and my (small number of) followers on Twitter. 

The result is Twitter4OC, a small client app that runs in the background and listens for presence updates from Office Communicator, creating new tweets as appropriate.  If you are interested in seeing how it works, you can find the binaries here and the sample code here. 

Twitter4OC uses the OCSDKWrapper libraries from this excellent project on codeplex to listen to status changes in Office Communicator. 

MOCAutomation _moc = null;

if (_moc == null)
{
    _moc = MOCAutomation.Instance;
    _moc.MyStatusChange += new EventHandler<OCSDKWrapper.MOCEventArgs.MyStatusChangeEventArgs>(_moc_MyStatusChange);
}

If it detects that the presence information has been updated, it first calls the TinyURL API to convert any URLs into a smaller format, and then makes a call to the Twitter API to tweet the updated status. 

private string Post(string url, string username, string password, string data)
{
    // Prevents HTTP-417 error code from API
    ServicePointManager.Expect100Continue = false;

    // Construct the WebRequest
    WebRequest request = WebRequest.Create(url);
    request.Credentials = new NetworkCredential(username, password);
    request.ContentType = "application/x-www-form-urlencoded";
    request.Method = "POST";

    byte[] bytes = Encoding.UTF8.GetBytes(data);
    request.ContentLength = bytes.Length;
    using (Stream requestStream = request.GetRequestStream())
    {
        requestStream.Write(bytes, 0, bytes.Length);
    }

    using (WebResponse response = request.GetResponse())
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}

public string Update(string username, string password, string presence)
{
    // First tinyfy the URLs
    presence = TinyURL.TinyfyPresence(presence);

    // Check the length of the status < 140 chars
    if (presence.Length > 140) throw new StatusTooLongException();

    try
    {
        string url = TWITTER_UPDATE_URL;
        string data = string.Format("status={0}", HttpUtility.UrlEncode(presence));
        return Post(url, username, password, data);
    }
    catch (Exception e)
    {
        throw new UpdateFailedException(e.ToString());
    }
}
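For illustration, the TinyfyPresence step boils down to finding URLs in the presence text and substituting shortened versions.  Here is a rough Python sketch (the function name and the injected shortener are hypothetical, used so the logic can be shown without calling the real TinyURL API):

```python
import re

# Matches anything that looks like an http/https URL up to whitespace.
URL_RE = re.compile(r"https?://\S+")

def tinyfy_presence(presence, shorten):
    """Replace each URL in a presence string with its shortened form.

    `shorten` is a callable (in the real app, a wrapper around the
    TinyURL API); it is injected here so the substitution logic can be
    exercised without network access.
    """
    return URL_RE.sub(lambda m: shorten(m.group(0)), presence)
```

Injecting the shortener also keeps the 140-character length check honest, since the check should run on the shortened text.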

The application runs as a small exe that can be launched at startup and takes parameters via a simple credential form or via settings in the App.Config file.

Using Windows 7 to host PHP applications in 5 easy steps!

A few people have asked me recently whether it’s possible to setup Windows 7 as a PHP server (for development purposes).  The answer is absolutely yes, and it’s a breeze to setup.  Follow these five simple steps to get PHP up and running in minutes:

1.  In the Programs and Features control panel, click on the Turn Windows features on or off link:


2.  In the list of Windows Features, expand Internet Information Services, World Wide Web Services, and Application Development Features.  If it’s not already selected, select the CGI checkbox and click OK.  (The most reliable way of hosting PHP applications on Windows 7 is to use the built-in FastCGI interface for IIS – checking this box installs it together with any prerequisites.)


3.  Download the non-thread-safe (NTS) version of PHP from the PHP web site.  The current version (as of the time of writing) is 5.2.9.  (The thread-safe (TS) version will also work, but generally NTS is faster, and thread safety is not an issue under FastCGI.)  Expand the zip to an installation directory of your choice – e.g. c:\dev\php

4.  Copy the php.ini-recommended file to php.ini in the PHP directory.  Edit the php.ini file and correctly configure extension_dir, pointing it to the PHP extensions directory (normally the \ext folder of the PHP installation – e.g. c:\dev\php\ext).  You can also configure other php.ini options and modules here if required.

5.  Run Internet Information Services Manager by typing inetmgr in the Start menu.  You can either set the global settings of the server, or (recommended) add a new web site to run the PHP applications.  Once you’ve done this, double click on the Handler Mappings for the site and add a new module mapping with the following settings:


Request path should be set to *.php.  Module should be FastCgiModule.  Executable should be {php_install_dir}\php-cgi.exe.  Name can be anything – I use “PHP via FastCGI”.

That’s it! Simply start/restart IIS and you are ready to go.  The easiest way to test that everything is working is to create a simple info.php file with a single line:

<?php phpinfo(); ?>

When you access this page from a browser (e.g. http://localhost:8081/info.php), you should see the PHP info screen:


Validate that the server API is using CGI/FastCGI and that the loaded configuration file is the one in your installation directory.

“Micro Architectures”

Jim Wilt and I had an interesting discussion today, around the role of software architecture in the current economy.  I shared some thoughts around something I’ve been thinking that I call “micro architectures” (for lack of a better name).

Let me start with a personal dilemma:  I’m debating moving my blog (currently running an old version of Community Server) to something different (either a different provider or upgrading to the latest version of Community Server).  Although it’s been very reliable, the thing that concerns me about my blog is that I don’t intimately know how it works.  I’ve looked through a lot of the forums, and even other open source blog providers, but the architecture for everything that I’ve seen so far seems just too unwieldy for what I’m trying to accomplish.

While searching, I began asking myself the question – “Instead of the most architecturally correct design, what would be the smallest design that supports my need?  And more importantly, how would these two be different?”  Small in this instance refers to the number of modules, configuration files, lines of code, and other parts of the design.  I think as architects and developers we have a habit of building configuration files, extensibility, and dependency injection into our designs from day one – even though the core use cases of the design don’t immediately demand it.  We design too much for the future or for edge cases, which ends up as “I’ve abstracted this setting into this_obscure_setting_config.xml just in case we need to switch the setting in the future”.  Nice extensibility – but will anyone ever actually switch that setting?  Really?  And if someone did, does a recompile of the code really add that much headache over the additional abstraction and testing required for the extensibility?  Jeffrey Palermo covers an element of this in his recent post about hard coding.

Coming back to my blog example, what would a “micro architecture” for my blog look like?  I would assert that I could do the following:

Eliminate Elaborate Database Access Code.  Do I really need it?  Do I really need a database abstraction layer (myDal) that inherits from an interface (IDal), uses a configuration file (database_config.xml) and some dependency injection under the covers so that I can switch out the driver at some point in the future?  Probably not. 

Question the Need for a Database.  Talking of which, do I actually need the database itself?  Access to the database (or lack thereof) seems to be the root cause of issues that I have when my blog goes down.  Two primary considerations for using a database are performance and indexing.  Performance?  I would like to think that millions of people visit my blog every day, but the reality is somewhat different.  Even with 50 comments attached to a blog post, a file system solution would probably perform well enough for anyone reading the blog.  Indexing?  Sure, I would like search enabled on my blog, but why not just redirect to (or embed) an existing Google search, parameterized to my domain? 

Create a Minimal User Interface.  I got thinking about what HTML controls I would need to supply to enable updates and edits to posts – the question is, do I really need a fully functioning Admin UI to update the blog?  Would it not be simpler to only expose a MetaWeblog or ATOM publishing API instead and use something like Windows Live Writer to create and edit my posts? 

No Admin UI for Creating “About” and Other Pages.  Again, do I really need the administration overhead for handling this?  Can I not just create a new .ASPX or PHP page and attach it to the site?  Seriously.

Remove Skins and Styles from the Code.  No brainer.  Reference a CSS and be done with it.  The blog’s responsibility should be to only output well formatted HTML that can be styled with CSS.

I’m sure there’s more that I’m missing, but hopefully you get the idea.  To sum this up and conclude, I would argue that a “micro architecture” could have the following principles:

It’s OK to ignore edge cases.  The architecture is designed only against core use cases, and nothing else.  With the exception of input validation, edge cases are not considered.

It’s OK to write code – as long as that functionality doesn’t exist in another solution that can be reused.  Subsystems are written only when there is not a valid external solution that can be used.

It’s OK to hardcode configuration values.  Hardcoding is OK for core use cases (providing that it doesn’t invalidate security – for example, you don’t want to be hardcoding usernames and passwords, of course)

It’s OK to recompile.  Recompiling is really OK if edge cases are introduced at a later point in the future.  I actually think this is healthy because it encourages developers to open up the solution (and possibly improve the solution as a result of what they’ve learned since they last wrote the code). 

It’s OK to unit test.  Because a greater focus is given to the code of an application (as opposed to 50 million different configuration files), unit tests and test driven development become even more important. 

Maybe I’ll actually try this out and see what happens?

PDC2008 Symposium Announced!


Coming to PDC?  Interested in what it means to move to the cloud?  If so, you should definitely check out the PDC symposium this year. 

The symposium consists of three sessions that wrap up the last day of the PDC.  The goal is to "connect the dots" between everything that you’ll have heard at the event, and look at some of the next steps and challenges for making it real.  The sessions cover expanding applications to the cloud, making enterprise-grade cloud applications, and emerging patterns that take into consideration some of the physical aspects of moving applications to the cloud.

Not coming to PDC? There’s still time to register and attend!

Architecture Journal Issue 16 Released, and Issue 18 Call For Papers

I wanted to share a couple of updates about the Microsoft Architecture Journal:

1.  Issue 16 has just been released to the web!  You can find it here, and can also download a PDF version of the magazine here.  The theme of issue 16 is Identity and Access, and Diego explains more about the articles in his blog post.

2.  The call for papers for Issue 18 has just been released.  Again, Diego uncovers the details here; the theme of this upcoming issue is Green Computing.  I’m really looking forward to this issue, especially given all the work that is happening in this space, including some of the advances in infrastructure architecture.   

Why Architects should care about Robots…

Today, Marc Mercuri, an architect in my team, launched one of the most exciting things I’ve seen for some time:  RoboChamps


RoboChamps is an online, virtual robotics competition that makes use of Microsoft Robotics Developer Studio 2008 and Microsoft Visual Studio 2008.  The concept of the competition is to program a series of robots to complete a number of challenges, which include a maze, the surface of Mars, and an urban environment – all of which are set in a virtual environment, so that you don’t need specialized robotics hardware.  When you’ve completed a challenge, you can upload your solution to the RoboChamps site – and the winners of the leagues will face a showdown at PDC in October.

So, I can hear the question now – "Sounds good, but why should architects care about robots?"   On the surface, it’s difficult to make the connection, but I would encourage you to dig deeper into some of the components included with MRDS 2008, namely the Concurrency and Coordination Runtime (CCR) and the Decentralized Software Services (DSS).  CCR provides multicore, concurrent programming support for the .NET Framework, and DSS enables you to take these concurrent components and offer them as lightweight services in a decentralized environment.  If you have a chance to download the MRDS examples, you’ll see how they make use of both technologies.

The answer to the question, therefore, lies in how we can make use of these technologies and new concepts in software architecture – how can we use CCR to distribute workload across multiple cores, and what is possible with DSS across services on multiple machines?  I believe that these new tools will ultimately lead to several new patterns, especially in front end web farm scenarios.  In the meantime however, I look forward to seeing if you can get your robot out of that maze :)
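As a rough, framework-agnostic analogy for the CCR’s model of dispatching work items across workers, here is a minimal Python sketch (the names are mine, and the CCR itself is a far richer .NET runtime than this):

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(task, items, workers=4):
    """Fan a workload out across a pool of workers and gather the results.

    A deliberately tiny analogue of posting work items to a dispatcher's
    task queues: each item is processed concurrently, and results come
    back in the original order.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(task, items))
```

The interesting architectural questions start once the workers live on different machines rather than different cores, which is where DSS-style decentralized services come in.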

Do Not Disturb!

Like many people, I have VOIP at home – my current provider is Vonage.  One of the features of Vonage is an international virtual number.  This gives me a local number in the UK that my family can dial, which automatically redirects to my home number here in the US.  It’s relatively cheap ($4.99 per month) and makes it really easy for friends and family to get in contact.

One of the disadvantages of having an international virtual number, however, is that people can accidentally dial the number – and when they do, because of the time difference (8 hours between PST and GMT), it can be somewhat inconvenient.  Let’s just say that we’ve had about 3 or 4 wrong numbers this week, all of which have been at about 2am.  Not so good if you like sleep. 

Vonage has a feature called “Do Not Disturb”, which redirects all calls to voicemail – and which can be activated via their web page.  As simple as this sounds, I can be forgetful at times, so enabling this before going to bed (and disabling it when I get up) can be an issue.  To overcome this I decided to put on my “web application test” hat and see if I could use Visual Studio 2008 to write an automated web test that logs into the Vonage site and activates / deactivates this feature for me.  First, I created a new test project in VS2008 and added a new web test (from the Test menu).  I recorded the action to enable the Do Not Disturb feature:


I was able to strip down the required pages to just three requests (one to log in, one to query for the phone number, and one to set the feature) and made sure that it worked within the IDE.  I then duplicated this test to create a second one that disables the feature.  (I’m sure there is a nice way of passing parameters to webtests, but I just haven’t worked it out yet.)  The test works as follows:

The first request is a HTTP POST that submits my Vonage username and password to the site.

<Request Method="POST" Version="1.1" Url="" ThinkTime="6" Timeout="300" ParseDependentRequests="True" FollowRedirects="True" RecordResult="True" Cache="False" ResponseTimeGoal="0" Encoding="Windows-1252" ExpectedHttpStatusCode="0" ExpectedResponseUrl="">
    <FormPostParameter Name="countryURLs" Value="" RecordedValue="" CorrelationBinding="" UrlEncode="True" />
    <FormPostParameter Name="username" Value="USERNAME" RecordedValue="USERNAME" CorrelationBinding="" UrlEncode="True" />
    <FormPostParameter Name="password" Value="PASSWORD" RecordedValue="PASSWORD" CorrelationBinding="" UrlEncode="True" />
    <FormPostParameter Name="submit.x" Value="30" RecordedValue="30" CorrelationBinding="" UrlEncode="True" />
    <FormPostParameter Name="submit.y" Value="13" RecordedValue="13" CorrelationBinding="" UrlEncode="True" />
</Request>

The second request (this time a HTTP GET) brings up the do not disturb features page.  This is required to obtain the phone number as a FORM parameter that we use in the next request:

<Request Method="GET" Version="1.1" Url="" ThinkTime="4" Timeout="300" ParseDependentRequests="True" FollowRedirects="True" RecordResult="True" Cache="False" ResponseTimeGoal="0" Encoding="iso-8859-1" ExpectedHttpStatusCode="0" ExpectedResponseUrl="">
    <ExtractionRule Classname="Microsoft.VisualStudio.TestTools.WebTesting.Rules.ExtractHiddenFields, Microsoft.VisualStudio.QualityTools.WebTestFramework, Version=, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" VariableName="1" DisplayName="Extract Hidden Fields" Description="Extract all hidden fields from the response and place them into the test context.">
        <RuleParameter Name="Required" Value="True" />
        <RuleParameter Name="HtmlDecode" Value="True" />
    </ExtractionRule>
    <ExtractionRule Classname="Microsoft.VisualStudio.TestTools.WebTesting.Rules.ExtractFormField, Microsoft.VisualStudio.QualityTools.WebTestFramework, Version=, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" VariableName="FormPostParam1.on" DisplayName="" Description="">
        <RuleParameter Name="Name" Value="on" />
        <RuleParameter Name="HtmlDecode" Value="True" />
        <RuleParameter Name="Required" Value="False" />
    </ExtractionRule>
    <QueryStringParameter Name="did" Value="PHONENUM" RecordedValue="PHONENUM" CorrelationBinding="" UrlEncode="False" UseToGroupResults="False" />
    <QueryStringParameter Name="dndButton" Value="Configure" RecordedValue="Configure" CorrelationBinding="" UrlEncode="False" UseToGroupResults="False" />
</Request>

The third request is a HTTP POST that sets the Do Not Disturb feature on or off depending on the value of the “on” parameter:

<Request Method="POST" Version="1.1" Url="" ThinkTime="0" Timeout="300" ParseDependentRequests="True" FollowRedirects="True" RecordResult="True" Cache="False" ResponseTimeGoal="0" Encoding="iso-8859-1" ExpectedHttpStatusCode="0" ExpectedResponseUrl=";success=true&amp;settings=com.vonage.service.feature.DndSettingsDTO+%7BphoneNumber%3DPHONENUM%2C+isOn%3Dtrue%7D">
    <FormPostParameter Name="on" Value="True" RecordedValue="true" CorrelationBinding="{{FormPostParam1.on}}" UrlEncode="True" />
    <FormPostParameter Name="phoneNumber" Value="{{$HIDDEN1.phoneNumber}}" RecordedValue="PHONENUM" CorrelationBinding="" UrlEncode="True" />
</Request>
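Incidentally, the “Extract Hidden Fields” rule amounts to scraping hidden &lt;input&gt; values out of the response so they can be re-posted in the next request.  A minimal Python approximation (my own helper, shown only to illustrate what the rule does; it assumes the type/name/value attribute order):

```python
import re

# Hidden <input> elements, assuming attributes appear as type, name, value.
HIDDEN_RE = re.compile(
    r'<input[^>]*type="hidden"[^>]*name="([^"]+)"[^>]*value="([^"]*)"',
    re.IGNORECASE)

def extract_hidden_fields(html):
    """Pull hidden form fields out of an HTML response into a dict.

    Roughly what the ExtractHiddenFields rule does: capture the values
    the server rendered (e.g. phoneNumber) so they can be echoed back
    in the subsequent POST.
    """
    return dict(HIDDEN_RE.findall(html))
```

A production extractor would use a real HTML parser, but the dict-of-hidden-fields shape is the essential idea.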

After the tests were complete, I put together two batch files (DNDOn.bat and DNDOff.bat) that call the webtests from the command line.  Using MSTest.exe, it’s possible to run webtests from outside the IDE:

set VSINSTALLDIR=C:\Program Files\Microsoft Visual Studio 9.0
"%VSINSTALLDIR%\Common7\IDE\MSTest.exe" /noresults /testcontainer:bin\debug\DNDOn.webtest

Once the batch files were working, I simply added two new scheduled tasks (in Vista, type “Task Scheduler” in the Start menu to reach this) to run the DNDOn.bat file at 10pm and DNDOff.bat at 7am. 


(By the way, this is the first time I’ve really used the new Task Scheduler in Vista.  Two things surprised me:  1.  How easy it was to create a new task that ran the first time (I have too many nightmares from creating backup batch files on NT4!).   2.  How many tasks are enabled in Vista – I need to spend some time figuring out what these do!)

…and that was it!  A quick solution that I can deploy to my new home server that will avoid more unwanted wrong number phone calls in the middle of the night.

If you want to check it out, you can find my test file here.  You’ll of course need to replace USERNAME, PASSWORD, and PHONENUM (10 digits, remember to include the 1) with your specific values for your account. 

Hierarchical View Carousel Source Code Released

Good news from Karsten’s blog.  The source code for the hierarchical view carousel has been released.  I played around with this WPF code a couple of years ago and found it very useful for displaying navigable, hierarchical sets of data.


As Karsten mentions, the code is based on an older drop of the framework, and is pretty crusty (his quote, not mine!) but might be worth checking out regardless.