Channel: Rick Strahl's Web Log

Request Limit Length Limits for IIS’s requestFiltering Module


Today I updated my CodePaste.net site to MVC 3 and pushed the update to the live site. The MVC update went pretty smoothly, as did most of the deployment to the live site. Short of missing a web.config change in the /views folder that caused blank pages on the server, the process was relatively painless.

However, one issue that kicked my ass for about an hour – and not for the first time – was a problem with my OpenId authentication using DotNetOpenAuth. I tested the site operation fairly extensively locally and everything worked without a problem, but on the server the OpenId returns resulted in a 404 response from IIS for a nice friendly OpenId return URL like this:

http://codepaste.net/Account/OpenIdLogon?dnoa.userSuppliedIdentifier=http%3A%2F%2Frstrahl.myopenid.com%2F&dnoa.return_to_sig_handle=%7B634239223364590000%7D%7BjbHzkg%3D%3D%7D&dnoa.return_to_sig=7%2BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%2F%2FbF%2FhhYscgWzjg%2BB%2Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%3D%3D&openid.assoc_handle=%7BHMAC-SHA256%7D%7B4cca49b2%7D%7BMVGByQ%3D%3D%7D&openid.claimed_id=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.identity=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.mode=id_res&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.ns.sreg=http%3A%2F%2Fopenid.net%2Fextensions%2Fsreg%2F1.1&openid.op_endpoint=http%3A%2F%2Fwww.myopenid.com%2Fserver&openid.response_nonce=2010-10-29T04%3A12%3A53Zn5F4r5&openid.return_to=http%3A%2F%2Fcodepaste.net%2FAccount%2FOpenIdLogon%3Fdnoa.userSuppliedIdentifier%3Dhttp%253A%252F%252Frstrahl.myopenid.com%252F%26dnoa.return_to_sig_handle%3D%257B634239223364590000%257D%257BjbHzkg%253D%253D%257D%26dnoa.return_to_sig%3D7%252BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%252F%252FbF%252FhhYscgWzjg%252BB%252Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%253D%253D&openid.sig=h1GCSBTDAn1on98sLA6cti%2Bj1M6RffNerdVEI80mnYE%3D&openid.signed=assoc_handle%2Cclaimed_id%2Cidentity%2Cmode%2Cns%2Cns.sreg%2Cop_endpoint%2Cresponse_nonce%2Creturn_to%2Csigned%2Csreg.email%2Csreg.fullname&openid.sreg.email=rstrahl%40host.com&openid.sreg.fullname=Rick+Strahl

A 404 of course isn’t terribly helpful – normally a 404 means a resource was not found, but the resource is definitely there. So how the heck do you figure out what’s wrong?

If you’re just interested in the solution, here’s the short version: IIS by default allows only a 1024 byte query string, which is obviously exceeded by the above. The setting is controlled by the Request Filtering module in IIS 7 and later, which can be configured in ApplicationHost.config (in %windir%\System32\inetsrv\config). To set the value, configure the requestLimits key like so:

<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <requestLimits maxQueryString="2048" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
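If you can’t touch ApplicationHost.config – say on a shared host – the requestFiltering section is delegated to site-level configuration by default, so the same setting can usually go in the site’s own web.config (this sketch assumes delegation hasn’t been locked down on your server):

```xml
<!-- Hypothetical per-site web.config equivalent; only works if the
     requestFiltering section has not been locked down at the server level -->
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <requestLimits maxQueryString="2048" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
```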

This fixed me right up and made the requests work.

How do you find out about problems like this? Ah yes, the joys of administration. Read on and I’ll take you through a quick review of how I tracked this down.

Finding the Problem

The trouble with this error is that IIS returns a generic 404 Resource Not Found error and doesn’t provide much information about the cause.

If you’re lucky enough to be able to run your site on localhost, IIS is actually very helpful and gives you the right information immediately in a nicely detailed error page:
[Screenshot: the detailed IIS error page shown on localhost]

The bottom of the page actually describes exactly what needs to be fixed. One problem with this easy way to find an error: you HAVE TO run on localhost. On my server, which has about 10 domains running, localhost doesn’t point at the particular site I had problems with, so I didn’t get the luxury of this nice error page.

Using Failed Request Tracing to retrieve Error Info

The first place I go with IIS errors is to turn on Failed Request Tracing in IIS to get more error information. If you have access to the server to make a configuration change you can enable Failed Request Tracing like this:

Find the Failed Request Tracing Rules in the IIS Service Manager.

[Screenshot: Failed Request Tracing Rules in the IIS Manager]

Select the option and then Edit Site Tracing to enable tracing. Then add a rule for * (all content) and specify status codes from 100-999 to capture all errors. If you know exactly what error you’re looking for, it might help to specify it exactly to keep the number of captured errors down.

Then run your request and let it fail. IIS writes error log files into a folder like C:\inetpub\logs\FailedReqLogFiles\W3SVC5, where the trailing 5 is the instance ID of the site.

[Screenshot: Failed Request Tracing log files on disk]

These files are XML but they include an XSL stylesheet that provides some decent formatting. In this case it pointed me straight at the offending module:

[Screenshot: the Failed Request Tracing report pointing at the RequestFilteringModule]

Ok, it’s the RequestFilteringModule. Request Filtering is built into IIS 7 and later and configured in ApplicationHost.config. This module defines a few basic rules about which paths and extensions are allowed in requests and, among other things, how long a query string is allowed to be. Most of these settings are pretty sensible, but the query string limit can easily become a problem, especially if you’re dealing with OpenId, since those return URLs are quite extensive.

Debugging failed requests is never fun, but IIS 7 and later at least provides the tools that can point us in the right direction. The error message in the Failed Request Tracing report isn’t as nice as the IIS error page, but it at least names the offending module, which gave me the clue I needed to look at request restrictions in ApplicationHost.config. This would still be a stretch if you’re not intimately familiar with IIS configuration, but I think with some Google searches it would be easy to track this down with a few tries…

Hope this was useful to some of you. It’s useful to me to put this out as a reminder – I’ve run into this issue before myself and totally forgot. Next time I’ve got it, right?

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  Security  

Microsoft and jQuery


The jQuery JavaScript library has been steadily getting more popular and, with recent developments from Microsoft, jQuery is also getting ever more exposure on the ASP.NET platform – including now directly from Microsoft. jQuery is a lightweight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information about jQuery at www.jquery.com.

For me jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading JavaScript development to actually looking forward to implementing client side JavaScript functionality. It has also had a profound impact on my JavaScript skill level through seeing how the library accomplishes things (and often reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on. Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins and pure usefulness have made it easily the most popular JavaScript library available today.

As a long time jQuery user, I’ve been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET’s core features rather than relying on the ASP.NET AJAX library.

Microsoft and jQuery – making Friends

jQuery is an open source project, but in the last couple of years Microsoft has really thrown its weight behind this open source library as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of its Product Support Services (PSS), as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in Beta) also uses jQuery for a variety of client side support features including client side validation, and we can look forward to more integration of client side functionality via jQuery in both MVC and WebForms in the future. In other words, jQuery is becoming an optional but included component of the ASP.NET platform.

PSS support means that support staff will answer jQuery related support questions as part of any support incidents related to ASP.NET, which provides some peace of mind to corporate development shops that require end-to-end support from Microsoft.

In addition to including jQuery and supporting it, Microsoft has also been getting involved in providing development resources for extending jQuery’s functionality via plug-ins. Microsoft’s last version of the Microsoft Ajax Library – the successor to the native ASP.NET AJAX Library – included some really cool functionality for client templates, databinding and localization. As it turns out, Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted to and approved for the official jQuery plug-in repository and have been taken over by the jQuery team for further improvement and maintenance.

Even more surprising: the jQuery Templates component has actually been approved for inclusion in the next major update of the jQuery core, jQuery 1.5, which means it will become a native feature that doesn’t require additional script files to be loaded. Imagine that – an open source contribution from Microsoft accepted into a major open source project as a core feature improvement.

Microsoft has come a long way indeed!

What the Microsoft Involvement with jQuery means to you

For Microsoft, jQuery support is a strategic decision that affects their direction in client side development, but nothing stopped you from using jQuery in your applications prior to Microsoft’s official backing, and in fact a large chunk of developers did so readily before Microsoft’s announcement. Official support from Microsoft does bring a few benefits to developers, however: jQuery support in Visual Studio 2010 means built-in jQuery IntelliSense, automatically added jQuery scripts in many project types, and a common base for client side functionality that actually uses what most developers are already using. If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library – worry no more. With official support and the change in direction towards jQuery, Microsoft is now following what most of the ASP.NET community had already been doing, which is likely the reason for the shift in direction in the first place.

ASP.NET AJAX and the Microsoft AJAX Library weren’t bad technology – there was tons of useful functionality buried in these libraries. However, they never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality these libraries provided to control developers, they lacked useful and easily accessible features for day to day client side application development. The result was that even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), other than the internal use in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit, as well as some third party vendors, the Microsoft client libraries were largely ignored by the developer community, opening the door for other client side solutions. Microsoft seems to be acknowledging developer choice in this case: many more developers were going down the jQuery path than using the Microsoft-built libraries, and there’s little sense in continuing development of a technology that largely goes unused by the majority of developers. Kudos to Microsoft for recognizing this and gracefully changing directions.

Note that even though there will be no further development of the Microsoft client libraries, they will continue to be supported, so if you’re using them in your applications there’s no reason to run for the exit in a panic and start rewriting everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side, so you can leave existing solutions untouched even as you enhance them with jQuery.

The Microsoft jQuery Plug-ins – Solid Core Features

One of the most interesting developments in Microsoft’s embrace of jQuery is that Microsoft has started contributing to jQuery via the standard mechanism available to all jQuery developers: by submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library, rewrote these components for jQuery and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team, and that’s exactly what happened with the three plug-ins submitted by Microsoft, with the templating plug-in even being slated for publication as part of the jQuery core in the next major release (1.5).

The following plug-ins are provided by Microsoft:

  • jQuery Templates – a client side template rendering engine
  • jQuery Data Link – a client side databinder that can synchronize changes without code
  • jQuery Globalization – provides formatting and conversion features for dates and numbers

The first two are ports of functionality that was slated for the Microsoft Ajax Library, while the globalization library provides functionality that was already found in the original ASP.NET AJAX library. To me all three plug-ins address a pressing need in client side applications and provide functionality I’ve previously used in other incarnations, but with more complete implementations. Let’s take a closer look at these plug-ins.

jQuery Templates

http://api.jquery.com/category/plugins/templates/

Client side templating is a key component for building rich JavaScript applications in the browser. Templating on the client lets you avoid manually creating markup by building DOM nodes and injecting them individually into the document via code. Rather, you can create markup templates – similar to the way you create classic ASP server markup – and merge data into these templates to render HTML, which you can then inject into the document or use to replace existing content. Output from templates is rendered as a jQuery matched set and can then be easily inserted into the document as needed.

Templating is key to minimizing client side code and reducing repeated rendering logic. Instead, a single template can be used in many places for updating and adding content to existing pages. Further, if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to use a single markup template to handle all rendering of each specific HTML section/element.

I’ve used a number of different client rendering template engines with jQuery in the past including jTemplates (a PHP style templating engine) and a modified version of John Resig’s MicroTemplating engine which I built into my own set of libraries because it’s such a commonly used feature in my client side applications.

jQuery Templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig’s original Micro Template engine, the core of the templating engine generates JavaScript code, which means that templates can include JavaScript expressions.
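To illustrate the compile-to-code idea – and this is just a toy sketch, not the actual jQuery Templates implementation – a ${} template can be turned into a JavaScript function that concatenates literal text with evaluated data properties:

```javascript
// Toy version of the compile-to-code approach used by template engines:
// "${prop}" placeholders become property lookups inside a generated
// function. (Illustrative sketch only.)
function compileTemplate(tmpl) {
    // Turn 'Company: ${Company}' into: return "Company: " + (data.Company) + "";
    var body = "return \"" +
        tmpl.replace(/"/g, "\\\"")
            .replace(/\$\{(.+?)\}/g, "\" + (data.$1) + \"") +
        "\";";
    return new Function("data", body);
}

var render = compileTemplate("Company: ${Company} (${Symbol})");
var html = render({ Company: "West Wind", Symbol: "WWND" });
console.log(html); // Company: West Wind (WWND)
```

Because the template compiles down to real JavaScript, any expression that is valid against the data object can appear inside ${} – which is exactly why template engines built this way can support expressions and not just property names.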

To give you a basic idea of how templates work, imagine I have an application that downloads a set of stock quotes based on a symbol list and then displays them in the document. To do this you can create an ‘item’ template that describes how each of the quotes is rendered, placed as a template inside of the document:

<script id="stockTemplate" type="text/x-jquery-tmpl">    
    <div id="divStockQuote" class="errordisplay" style="width: 500px;">
        <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
        <div class="label">Last Price:</div><div>${LastPrice}</div>
        <div class="label">Net Change:</div><div>
            {{if NetChange > 0}}
            <b style="color:green" >${NetChange}</b>
            {{else}}
            <b style="color:red" >${NetChange}</b>
        {{/if}}
        </div>
        <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div>
    </div>
</script>   

The ‘template’ is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions which reference data from the quote objects returned from an AJAX call on the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide for rudimentary logic inside of your templates as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find more about the full set of markup expressions available in the documentation.

To load up this data you can use code like the following:

<script type="text/javascript">
    //var Proxy = new ServiceProxy("../PageMethods/PageMethodsService.asmx/");
    $(document).ready(function () {
        $("#btnGetQuotes").click(GetQuotes);
    });

    function GetQuotes() {
        var symbols = $("#txtSymbols").val().split(",");
        $.ajax({
            url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes",
            data: JSON.stringify({ symbols: symbols }), // parameter map
            type: "POST", // data has to be POSTed                
            contentType: "application/json",
            timeout: 10000,
            dataType: "json",
            success: function (result) {
                var quotes = result.d;
                var jEl = $("#stockTemplate").tmpl(quotes);
                $("#quoteDisplay").empty().append(jEl);
            },
            error: function (xhr, status) {
                alert(status + "\r\n" + xhr.responseText);
            }
        });
    };
</script>

In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects. The result is returned as an object with the .d property (in Microsoft service style) that returns the actual array of quotes. The template is applied with:

var jEl = $("#stockTemplate").tmpl(quotes);

which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element in the page.
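As an aside, the .d unwrapping is easy to miss. Stripped of the AJAX plumbing, the Microsoft-style service envelope looks like this (the JSON below is a made-up sample response):

```javascript
// ASMX and WCF JSON services wrap the actual return value in a "d"
// property as an anti-JSON-hijacking measure; the client has to unwrap it.
var response = '{"d":[{"Company":"West Wind","Symbol":"WWND","LastPrice":29.52}]}';
var quotes = JSON.parse(response).d;  // unwrap the Microsoft envelope

console.log(quotes.length);     // 1
console.log(quotes[0].Symbol);  // WWND
```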

The template is merged against an array in this example. When the result is an array, the template is automatically applied to each array item. If you pass a single data item – like say a stock quote – the template works exactly the same way but is applied only once. Templates also have access to a $data item which provides the current data item and information about the template that is currently executing. This makes it possible to keep state within the context of the template itself and also to pass context from a parent template to a child template, which is very powerful.

Templates can be evaluated by using the template selector and calling the .tmpl() function on the jQuery matched set as shown above, or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or – more likely – to load templates from the server via AJAX calls. In short, there are options.

The above shows off some of the basics, but there’s much more functionality available in the template engine. Check the documentation link for more information and links to additional examples. The plug-in download also comes with a number of examples that demonstrate the functionality.

jQuery templates will become a native component in jQuery Core 1.5, so it’s definitely worthwhile checking out the engine today and getting familiar with this interface. As much as I’m stoked about templating becoming part of the jQuery core – because it’s such an integral part of many applications – there are also a couple of shortcomings in the current incarnation:

  • Lack of Error Handling
    Currently if you embed an expression that is invalid it’s simply not rendered. No error is rendered into the template, nor do the various template functions throw errors, which leaves finding bugs as a runtime exercise. I would like some mechanism – optional if possible – to get error info about what is failing in a template when it’s rendered.
  • No String Output
    Templates are always rendered into a jQuery matched set and there’s no way that I can see to render directly to a string. String output can be useful for debugging as well as for opening up templating to non-HTML string output.
  • Limited JavaScript Access
    Unlike John Resig’s original MicroTemplating engine, which was entirely based on JavaScript code generation, these templates are limited to a few structured commands that can ‘execute’. There’s no arbitrary code execution inside of template script, which means you’re limited to calling expressions available on global objects or on the data item passed in. This may or may not be a big deal depending on the complexity of your template logic.

Error handling has been discussed quite a bit, and it’s likely there will be some solution to that particular issue by the time jQuery templates ship. The others are relatively minor issues, but something to think about anyway.

jQuery Data Link

http://api.jquery.com/category/plugins/data-link/

jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object’s properties. The typical scenario is linking a textbox to a property of an object: the object is updated when the text in the textbox changes, and the textbox is updated when the value in the object (or the entire object) changes. The plug-in also supports converter functions that provide the conversion logic from string to some other value – typically necessary for mapping textbox string input to, say, a number property – and can apply additional formatting and calculations.

In theory this sounds great, however in reality this plug-in has some serious usability issues.

Using the plug-in you can do things like the following to bind data:

person = { firstName: "rick", lastName: "strahl"};
$(document).ready( function() {
    // provide for two-way linking of inputs
    $("form").link(person);

    // bind to non-input elements explicitly
    $("#objFirst").link(person, {
        firstName: {
            name: "objFirst",
            convertBack: function (value, source, target) {
                $(target).text(value);
            }
        }
    });
    $("#objLast").link(person, {
        lastName: {
            name: "objLast",
            convertBack: function (value, source, target) {
                $(target).text(value);
            }
        }
    });
});

This code hooks up two-way linking between a couple of textboxes on the page and the person object. The first line in the .ready() handler maps form fields to object properties with the same names. Note that .link() does NOT push values into the textboxes when you call it – values are mapped only when they change and you move out of the field. Strike one.

The two following calls allow manual binding of values to specific DOM elements, which is effectively a one-way bind. You specify the object and then an explicit mapping where name is an ID in the document. The converter is required to explicitly assign the value to the element. Strike two.

You can also detect changes to the underlying object and cause updates to the bound input elements. Unfortunately the syntax to do this is not very natural, as you have to rely on the jQuery data object. Updating an object’s property with change notification looks like this:

function updateFirstName() {
    $(person).data("firstName", person.firstName + " (code updated)");
}

This works fine in causing any linked fields to be updated – in the bindings above, both the firstName input field and the objFirst DOM element get updated. But the syntax requires a jQuery .data() call for each property change to ensure that the changes are tracked properly. Really? Sure, you’re binding through multiple layers of abstraction now, but how is that better than just manually assigning values? The code savings (if any) are going to be minimal.
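For comparison, the kind of property-change notification the plug-in routes through .data() can be sketched with plain ES5 getters and setters – a conceptual sketch only, not how Data Link is actually implemented:

```javascript
// Wrap an object's properties in getter/setter pairs so that plain
// assignments (person.firstName = "Rick") fire a change callback.
function makeObservable(obj, onChange) {
    Object.keys(obj).forEach(function (key) {
        var value = obj[key];
        Object.defineProperty(obj, key, {
            get: function () { return value; },
            set: function (newValue) {
                value = newValue;
                onChange(key, newValue); // notify any bound elements here
            }
        });
    });
    return obj;
}

var changes = [];
var person = makeObservable({ firstName: "rick", lastName: "strahl" },
    function (key, value) { changes.push(key + "=" + value); });

person.firstName = "Rick";  // plain assignment, no .data() call needed
console.log(changes[0]);    // firstName=Rick
```

Part of the reason a plug-in in 2010 couldn’t simply do this is that ES5 accessor support was still spotty in the browsers of the day (IE8 only supported Object.defineProperty on DOM objects), which makes the explicit .data() call a more portable, if clunkier, choice.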

As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn’t help much towards that goal in its current incarnation. While you can bind values, the ‘binder’ is too limited to be really useful. If initial values can’t be assigned from the mappings you’re going to end up duplicating work loading the data using some other mechanism. There’s no easy way to re-bind data with a different object altogether since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that’s fairly verbose and frankly may be more voluminous than what you might write by hand for manual binding and unbinding.

Two way binding can be very useful but it has to be easy and most importantly natural. If it’s more work to hook up a binding than writing a couple of lines to do binding/unbinding this sort of thing helps very little in most scenarios. In talking to some of the developers the feature set for Data Link is not complete and they are still soliciting input for features and functionality. If you have ideas on how you want this feature to be more useful get involved and post your recommendations.

As it stands, it looks to me like this component needs a lot of love to become useful. For this component to really provide value, bindings need to be able to be refreshed easily and work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk rather than a tool where each link has to be mapped first. I also find the choice of creating a jQuery plug-in questionable – it seems a standalone object – albeit one that relies on the jQuery library – would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft created components this is by far the least useful and least polished implementation at this point.

jQuery Globalization

http://github.com/jquery/jquery-global

Globalization in JavaScript applications often gets short shrift and part of the reason for this is that natively in JavaScript there’s little support for formatting and parsing of numbers and dates. There are a number of JavaScript libraries out there that provide some support for globalization, but most are limited to a particular portion of globalization. As .NET developers we’re fairly spoiled by the richness of APIs provided in the framework and when dealing with client development one really notices the lack of these features.

While you may not necessarily need to localize your application, the globalization plug-in also helps with some basic tasks for non-localized applications: dealing with formatting and parsing of dates and time values. Dates in particular are problematic in JavaScript, as there are no formatters whatsoever except the .toString() method, which outputs a verbose and next to useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server.

You can write code like the following for example to format numbers and dates:

var date = new Date();
var output =
    $.format(date, "MMM. dd, yy") + "\r\n" +
    $.format(date, "d") + "\r\n" +  // 10/25/2010
    $.format(1222.32213, "N2") + "\r\n" + 
    $.format(1222.33, "c") + "\r\n";

alert(output);
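Under the hood, number formatting boils down to applying culture-specific separators. Here’s a stripped-down sketch of what an "N2"-style format does (formatNumber and the culture objects are hypothetical helpers for illustration – the real plug-in reads its separators from culture definition files):

```javascript
// Format a number with a fixed decimal count plus culture-specific
// grouping and decimal separators (sketch of the "N2"-style format).
function formatNumber(value, decimals, culture) {
    var parts = value.toFixed(decimals).split(".");
    // insert the grouping separator every three digits
    var intPart = parts[0].replace(/\B(?=(\d{3})+(?!\d))/g, culture.group);
    return intPart + culture.decimal + parts[1];
}

var enUS = { group: ",", decimal: "." };
var deDE = { group: ".", decimal: "," };

console.log(formatNumber(1222.32213, 2, enUS)); // 1,222.32
console.log(formatNumber(1222.32213, 2, deDE)); // 1.222,32
```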

This becomes even more useful if you combine it with templates which can also include any JavaScript expressions. Assuming the globalization plug-in is loaded you can create template expressions that use the $.format function. Here’s the template I used earlier for the stock quote again with a couple of formats applied:

<script id="stockTemplate" type="text/x-jquery-tmpl">    
    <div id="divStockQuote" class="errordisplay" style="width: 500px;">
        <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
        <div class="label">Last Price:</div>
<div>${$.format(LastPrice,"N2")}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div>
<div>${$.format(LastQuoteTime,"MMM dd, yyyy")}</div> </div> </script>

There are also parsing methods that can easily parse dates and numbers from strings:

alert($.parseDate("25.10.2010"));
alert($.parseInt("12.222")); // de-DE uses . for thousands separators

As you can see culture specific options are taken into account when parsing.
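Parsing is essentially the mirror image: strip the culture’s grouping separator, then hand the rest to the native parser. A minimal sketch (parseCultureInt is a hypothetical helper, not the plug-in’s implementation):

```javascript
// Culture-aware integer parsing: "12.222" under de-DE (where "." groups
// thousands) should come back as 12222, not 12.
function parseCultureInt(text, culture) {
    var normalized = text.split(culture.group).join("");
    return parseInt(normalized, 10);
}

var deDE = { group: "." };
console.log(parseCultureInt("12.222", deDE));    // 12222
console.log(parseCultureInt("1.234.567", deDE)); // 1234567
```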

The globalization plugin provides rich support for a variety of locales:

  • Get a list of all available cultures
  • Query cultures for culture items (like currency symbol, separators etc.)
  • Localized string names for all calendar related items (days of week, months)
  • Generated off of .NET’s supported locales

In short you get much of the same functionality that you already might be using in .NET on the server side.

The plugin includes a huge number of locales and a Globalization.all.min.js file that contains the text defaults for each of these locales, as well as small locale specific script files that define each locale’s settings. It’s highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references only for those languages you explicitly care about.

Overall this plug-in is a welcome helper. Even if you use it with a single locale (like en-US) and do no other localization, you’ll gain solid support for number and date formatting which is a vital feature of many applications.

Changes for Microsoft

It’s good to see Microsoft coming out of its shell and away from the ‘not-invented-here’ mentality that has been so pervasive in the past. It’s especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft’s own internal efforts in terms of design, usage model and… popularity. It’s great to see that Microsoft is paying attention to what customers prefer to use and supporting the customer sentiment – even if it meant drastically changing course and moving into a more open and sharing environment in the process. The additional jQuery support that has been introduced in the last two years certainly has made lives easier for many developers on the ASP.NET platform. It’s also nice to see Microsoft submitting proposals through the standard jQuery plug-in process and getting accepted for various very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many, especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos!

Posted in jQuery  ASP.NET  

Adding proper THEAD sections to a GridView


I’m working on some legacy code for a customer today, dealing with a page that has my favorite ‘friend’ on it: a GridView control. The ASP.NET GridView control (and also the older DataGrid control) creates some pretty messed up HTML. One of the more annoying things it does is generate all rows – including the header – into the <tbody> section of the document rather than into a properly separated <thead> section.

Here’s typical GridView-generated HTML output:

<table class="tablesorter blackborder" cellspacing="0" rules="all" border="1"
       id="Table1" style="border-collapse:collapse;">
    <tr>
        <th scope="col">Name</th>
        <th scope="col">Company</th>
        <th scope="col">Entered</th>
        <th scope="col">Balance</th>
    </tr>
    <tr>
        <td>Frank Hobson</td>
        <td>Hobson Inc.</td>
        <td>10/20/2010 12:00:00 AM</td>
        <td>240.00</td>
    </tr>
    ...
</table>

Notice that all content – both the headers and the body of the table – is generated directly under the <table> tag, and there’s no explicit use of <tbody> or <thead> (or <tfoot> for that matter).

When the browser renders this document, some default settings kick in and the DOM tree turns into something like this:

<table>
   <tbody>
      <tr>  <-- header row
      <tr>  <-- detail row
      <tr>  <-- detail row
   </tbody>
</table>

Now if you’re just rendering the grid server side and you’re applying all your styles through CssClass assignments this isn’t much of a problem. However, if you want to style your grid more generically using hierarchical CSS selectors, it gets a lot trickier to format tables that don’t properly delineate header and body content. Also, many plug-ins and other JavaScript utilities that operate on tables require a properly formed table layout, and many of them simply won’t work out of the box with a GridView. For example, one of the things I wanted to do for this app is use the jQuery TableSorter plug-in, which – not surprisingly – requires table headers in the DOM to work. Out of the box, the TableSorter plug-in doesn’t work with GridView controls because of the missing <thead> section.

Luckily, with a little help from some jQuery scripting there’s a really easy fix for this problem. Basically, if we know the GridView-generated table has a header in it, code like the following will move the headers from <tbody> to <thead>:

<script type="text/javascript">
    $(document).ready(function () {
        // Fix up GridView to support THEAD tags            
        $("#gvCustomers tbody").before("<thead><tr></tr></thead>");            
        $("#gvCustomers thead tr").append($("#gvCustomers th"));
        $("#gvCustomers tbody tr:first").remove();

        $("#gvCustomers").tablesorter({ sortList: [[1, 0]] });
    });
</script>

And voila you have a table that now works with the TableSorter plug-in.

If you use GridViews a lot you might want something a little more generic, so the following does the same thing but should work on any GridView/DataGrid missing its <thead> tag:

function fixGridView(tableEl) {       
    var jTbl = $(tableEl);
   
    if(jTbl.find("tbody>tr>th").length > 0) {
        jTbl.find("tbody").before("<thead><tr></tr></thead>");
        jTbl.find("thead tr").append(jTbl.find("th"));
        jTbl.find("tbody tr:first").remove();
    }
}

which you can call like this:

   $(document).ready(function () {
        fixGridView( $("#gvCustomers") );
        $("#gvCustomers").tablesorter({ sortList: [[1, 0]] });
    });

Server Side THEAD Rendering
[updated from comments 11/21/2010]

Several commenters pointed out that you can also do this on the server side by using the GridView.HeaderRow.TableSection property to force rendering with a proper table header. I was actually unaware of this option – not exactly an easy one to discover. One catch is that this needs to happen during the data binding process, so you need to use an event handler:

this.gvCustomers.DataBound += (object o, EventArgs ev) =>                        
{                
    gvCustomers.HeaderRow.TableSection = TableRowSection.TableHeader;
};

this.gvCustomers.DataSource = custList;
this.gvCustomers.DataBind();

You can apply the same logic for the FooterRow. It’s beyond me why this rendering mode isn’t the default for a GridView – why would you ever want a table that doesn’t use a THEAD section??? But I digress :-)

I don’t use GridViews much anymore, opting for more flexible approaches using ListViews or even plain code-based views or other custom displays that allow more control over layout. But I still see a lot of old code that uses these old clunkers, including my own :) (gulp), and this does make life a little easier, especially if you’re working with any of the jQuery table-related plug-ins that expect a proper table structure.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  jQuery  

A Closable jQuery Plug-in


In my client side development I deal a lot with content that pops over the main page. Be it data entry ‘windows’ or dialogs or simple pop up notes. In most cases this behavior goes with draggable windows, but sometimes it’s also useful to have closable behavior on static page content that the user can choose to hide or otherwise make invisible or fade out.

Here’s a small jQuery plug-in that provides .closable() behavior for most elements, using either an image that you provide or – more appropriately – a CSS class to define the close box layout.

/*
* 
* Closable
*
* Makes selected DOM elements closable by making them 
* invisible when close icon is clicked
*
* Version 1.01
* @requires jQuery v1.3 or later
* 
* Copyright (c) 2007-2010 Rick Strahl 
* http://www.west-wind.com/
*
* Licensed under the MIT license:
* http://www.opensource.org/licenses/mit-license.php

Support CSS:

.closebox
{
    position: absolute;        
    right: 4px;
    top: 4px;
    background-image: url(images/close.gif);
    background-repeat: no-repeat;
    width: 14px;
    height: 14px;
    cursor: pointer;        
    opacity: 0.60;
    filter: alpha(opacity=60);
} 
.closebox:hover 
{
    opacity: 0.95;
    filter: alpha(opacity=95);
}

Options:

* handle
Element to place closebox into (like say a header). Use if main element 
and closebox container are two different elements.

* closeHandler
Function called when the close box is clicked. Return true to close the box
return false to keep it visible.

* cssClass
The CSS class to apply to the close box DIV or IMG tag.

* imageUrl
Allows you to specify an explicit IMG url that displays the close icon. If used bypasses CSS image styling.

* fadeOut
Optional provide fadeOut speed. Default no fade out occurs
*/
(function ($) {

    $.fn.closable = function (options) {
        var opt = { handle: null,
            closeHandler: null,
            cssClass: "closebox",
            imageUrl: null,
            fadeOut: null
        };
        $.extend(opt, options);

        return this.each(function (i) {
            var el = $(this);
            var pos = el.css("position");
            if (!pos || pos == "static")
                el.css("position", "relative");
            var h = opt.handle ? $(opt.handle).css({ position: "relative" }) : el;

            var div = opt.imageUrl ?
                    $("<img>").attr("src", opt.imageUrl).css("cursor", "pointer") :
                    $("<div>");
            div.addClass(opt.cssClass)
                    .click(function (e) {
                        if (opt.closeHandler)
                            if (!opt.closeHandler.call(this, e))
                                return;
                        if (opt.fadeOut)
                            $(el).fadeOut(opt.fadeOut);
                        else $(el).hide();
                    });
            if (opt.imageUrl) div.css("background-image", "none");
            h.append(div);
        });
    }

})(jQuery);

The plug-in can be applied to any selector that is a container (typically a div tag). The close image or close box is typically provided by way of a CSS class (.closebox by default) which supplies the image as part of the CSS styling. The default styling for the box looks something like this:

.closebox
{
    position: absolute;        
    right: 4px;
    top: 4px;
    background-image: url(images/close.gif);
    background-repeat: no-repeat;
    width: 14px;
    height: 14px;
    cursor: pointer;        
    opacity: 0.60;
    filter: alpha(opacity=60);
} 
.closebox:hover 
{
    opacity: 0.95;
    filter: alpha(opacity=95);
}

Alternatively you can supply an image URL, which overrides the background image in the style sheet. I use this plug-in mostly on pop-up windows that can be closed, but it’s also quite handy for remove/delete behavior in list displays like this:

Closable

You can find this sample here if you want to play along: http://www.west-wind.com/WestwindWebToolkit/Samples/Ajax/AmazonBooks/BooksAdmin.aspx

For closable windows it’s nice to have something reusable because in my client framework there are lots of different kinds of windows that can be created: Draggables, Modal Dialogs, HoverPanels etc. and they all use the client .closable plug-in to provide the closable operation in the same way with a few options. Plug-ins are great for this sort of thing because they can also be aggregated and so different components can pick and choose the behavior they want. The window here is a draggable, that’s closable and has shadow behavior and the server control can simply generate the appropriate plug-ins to apply to the main <div> tag:

$().ready(function() {
    $('#ctl00_MainContent_panEditBook')
        .closable({ handle: $('#divEditBook_Header') })
        .draggable({ dragDelay: 100, handle: '#divEditBook_Header' })
        .shadow({ opacity: 0.25, offset: 6 });
})

The window is using the default .closebox style and has its handle set to the header bar (Book Information). The window simply closes and goes away, so no event handler is applied. Actually I cheated – the actual page’s .closable is a bit uglier in the sample because it uses an image from a resources file:

.closable({ imageUrl: '/WestWindWebToolkit/Samples/WebResource.axd?d=TooLongAndNastyToPrint',
handle: $('#divEditBook_Header')})

so you can see how to apply a custom image, which in this case is generated by the server control wrapping the client DragPanel.

More interesting maybe is applying the .closable behavior to list scenarios. For example, each of the individual items in the list display is also .closable using this plug-in. Rather than having to define each item with HTML for an image, event handler and link, the closable behavior is attached to the list when the client template is rendered. Here I’m using client templating, and the code looks like this:

function loadBooks() {

    showProgress();
    
    // Clear the content
    $("#divBookListWrapper").empty();    
    
    var filter = $("#" + scriptVars.lstFiltersId).val();
    
    Proxy.GetBooks(filter, function(books) {
        $(books).each(function(i) {
            updateBook(this); 
            showProgress(true); 
        });
    }, onPageError);    
}
function updateBook(book,highlight)
{    
    // try to retrieve the single item in the list by tag attribute id
    var item = $(".bookitem[tag=" +book.Pk +"]");

    // grab and evaluate the template
    
    var html = parseTemplate(template, book);

    var newItem = $(html)
                    .attr("tag", book.Pk.toString())
                    .click(function() {
                        var pk = $(this).attr("tag");
                        editBook(this, parseInt(pk));
                    })
                    .closable({ closeHandler: function(e) {
                            removeBook(this, e);
                        },
                        imageUrl: "../../images/remove.gif"
                    });
                    

    if (item.length > 0) 
        item.after(newItem).remove();        
    else 
        newItem.appendTo($("#divBookListWrapper"));
    
    if (highlight) {
        newItem
            .addClass("pulse")
            .effect("bounce", { distance: 15, times: 3 }, 400);
        setTimeout(function() { newItem.removeClass("pulse"); }, 1200);            
    }    
}

Here the closable behavior is applied to each of the items along with an event handler, which is nice and easy compared to having to embed the right HTML and click handling into each item in the list individually via markup. Ideally though (and these posts make me realize this often a little late) I probably should set up a custom cssClass to handle the rendering – maybe a CSS class called .removebox that only changes the image from the default box image.

This example also hooks up an event handler that is fired in response to the close. In the list I need to know when the remove button is clicked so I can fire off a service call to the server to actually remove the item from the database. The handler can also optionally return false to indicate that the window should not be closed; returning true closes the window.
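The close decision boils down to logic like this (a hypothetical stand-alone sketch for illustration, not the plug-in’s actual code):

```javascript
// Sketch: close when no handler is supplied, or when the
// supplied handler returns a truthy value.
function shouldClose(closeHandler, e) {
    if (typeof closeHandler === "function")
        return !!closeHandler(e);
    return true;
}
```

In the list scenario above the handler would, for example, wrap a confirm() prompt and return its result.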

You can find more information about the .closable class behavior and options here:
.closable Documentation

Plug-ins make Server Control JavaScript much easier

I find this plug-in immensely useful, especially as part of server control code, because it tremendously simplifies the code that has to be generated server side. This is true of plug-ins in general, which make it much easier to create simple server code that only generates plug-in options rather than full blocks of JavaScript code. For example, here’s the relevant code from the DragPanel server control which generates the .closable() behavior:

if (this.Closable && !string.IsNullOrEmpty(DragHandleID) )
{
    string imageUrl = this.CloseBoxImage;
    if (imageUrl == "WebResource" )
        imageUrl = ScriptProxy.GetWebResourceUrl(this, this.GetType(), ControlResources.CLOSE_ICON_RESOURCE);
    
    StringBuilder closableOptions = new StringBuilder("imageUrl: '" + imageUrl + "'");

    if (!string.IsNullOrEmpty(this.DragHandleID))
        closableOptions.Append(",handle: $('#" + this.DragHandleID + "')");

    if (!string.IsNullOrEmpty(this.ClientDialogHandler))
        closableOptions.Append(",closeHandler: " + this.ClientDialogHandler);
       
    if (this.FadeOnClose)
        closableOptions.Append(",fadeOut: 'slow'");
    
    startupScript.Append(@"   .closable({ " + closableOptions + "})");
}

The same sort of block is then used for .draggable and .shadow, which simply set options. Compared to the code I used to have in pre-jQuery versions of my JavaScript toolkit, this is a walk in the park. In those days there was a bunch of JS generation which was ugly to say the least.

I know a lot of folks frown on using server controls, especially when the UI is client centric as in this example. However, I do feel that server controls can greatly simplify the process of getting the right behavior attached more easily and with the help of IntelliSense. Often this is easier than hand-writing script, especially if you are dealing with complex, multiple plug-in associations that express more easily as property values on a control.

Regardless of whether server controls are your thing or not this plug-in can be useful in many scenarios. Even in simple client-only scenarios using a plug-in with a few simple parameters is nicer and more consistent than creating the HTML markup over and over again. I hope some of you find this even a small bit as useful as I have.

Related Links

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in jQuery   ASP.NET  JavaScript  

Finding a Relative Path in .NET


Here’s a nice and simple path utility that I’ve needed in a number of applications: I need to find a relative path based on a base path. So if I’m working in a folder called c:\temp\templates\ and I want to find a relative path for c:\temp\templates\subdir\test.txt I want to receive back subdir\test.txt. Or if I pass c:\ I want to get back ..\..\ – in other words always return a non-hardcoded path based on some other known directory.

I’ve had a routine in my library that does this via some lengthy string parsing routines, but ran into some Uri processing today that made me realize that this code could be greatly simplified by using the System.Uri class instead. Here’s the simple static method:

/// <summary>
/// Returns a relative path string from a full path based on a base path
/// provided.
/// </summary>
/// <param name="fullPath">The path to convert. Can be either a file or a directory</param>
/// <param name="basePath">The base path on which relative processing is based. Should be a directory.</param>
/// <returns>
/// String of the relative path.
/// 
/// Examples of returned values:
///  test.txt, ..\test.txt, ..\..\..\test.txt, ., .., subdir\test.txt
/// </returns>
public static string GetRelativePath(string fullPath, string basePath ) 
{
    // Require trailing backslash for path
    if (!basePath.EndsWith("\\"))
        basePath += "\\";

    Uri baseUri = new Uri(basePath);
    Uri fullUri = new Uri(fullPath);

    Uri relativeUri = baseUri.MakeRelativeUri(fullUri);

    // Uri's use forward slashes so convert back to backward slashes
    return relativeUri.ToString().Replace("/", "\\");

}

You can then call it like this:

string relPath = FileUtils.GetRelativePath(@"c:\temp\templates\subdir\test.txt", @"c:\temp\templates");

It’s not exactly rocket science but it’s useful in many scenarios where you’re working with files based on an application base directory. I often forget that the Uri class can work off file paths as well to slice and dice the path portions, which is a good thing to remember at times. The only thing to remember if you do use it is that it turns the file system’s backslashes into forward slashes, so that has to be reversed out.

I needed this because right now I’m working on a templating solution (using the Razor Engine) where templates live in a base directory and are supplied as paths relative to that base directory. Resolving these relative paths both ways is important in order to properly check for the existence of files and their change status in this case.

Not the kind of thing you use every day, but useful to remember.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in .NET  CSharp  

Hosting the Razor Engine for Templating in Non-Web Applications


Microsoft’s new Razor HTML rendering engine that is currently shipping with the ASP.NET MVC previews can be used outside of ASP.NET. Razor is an alternative view engine that can be used instead of the ASP.NET Page engine that currently works with ASP.NET WebForms and MVC. It provides a simpler and more readable markup syntax and is much more lightweight in terms of functionality than the full-blown WebForms Page engine, focusing only on features along the lines of a pure view engine (or classic ASP!), with a focus on expression and code rendering rather than a complex control/object model. Like the Page engine though, the parser understands .NET code syntax, which can be embedded into templates, and behind the scenes the engine compiles markup and script code into an executing piece of .NET code in an assembly.

Although it ships as part of ASP.NET MVC and WebMatrix, the Razor engine itself is not directly dependent on ASP.NET, IIS or HTTP in any way. And although some markup and rendering features are optimized for HTML-based output generation, Razor is essentially a free-standing template engine. What’s really nice is that unlike the ASP.NET runtime, Razor is fairly easy to host inside of your own non-Web applications to provide templating functionality.

Templating in non-Web Applications? Yes please!

So why might you host a template engine in your non-Web application? Template rendering is useful in many places and I have a number of applications that make heavy use of it. One of my applications – West Wind Html Help Builder – exclusively uses template-based rendering to merge user-supplied help text content into customizable and executable HTML markup templates that provide HTML output for CHM style HTML Help. This is an older product that isn’t actually using .NET at the moment – and that’s one reason I’m looking at Razor for script hosting right now.

For a few .NET applications though I’ve actually used ASP.NET runtime hosting to provide templating and mail-merge style functionality, and while that works reasonably well it’s a very heavy-handed approach. It’s resource intensive and has potential versioning issues across different versions of .NET. The generic implementation I created in the article above requires a lot of fix-up to mimic an HTTP request in a non-HTTP environment, and a lot of little things have to happen to ensure that the ASP.NET runtime works properly – most of it having nothing to do with the templating aspect, but just with satisfying ASP.NET’s requirements.

The Razor engine on the other hand is fairly lightweight and completely decoupled from the ASP.NET runtime and HTTP processing. Rather, it’s a pure template engine whose sole purpose is to render text templates. Hosting this engine in your own applications can be accomplished with a reasonable amount of code (actually just a few lines with the tools I’m about to describe) and without having to fake HTTP requests. It’s also much lighter on resource usage, and you can easily attach custom properties to your base template implementation to pass context from the parent application into templates – all of which was rather complicated with ASP.NET runtime hosting.

Installing the Razor Template Engine

You can get Razor as part of the MVC 3 (RC and later) or Web Matrix. Both are available as downloadable components from the Web Platform Installer Version 3.0 (!important – V2 doesn’t show these components). If you already have that version of the WPI installed just fire it up. You can get the latest version of the Web Platform Installer from here:

http://www.microsoft.com/web/gallery/install.aspx

Once the Platform Installer 3.0 is installed, install either MVC 3 or ASP.NET Web Pages. Once installed, you’ll find the System.Web.Razor assembly in C:\Program Files\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies\System.Web.Razor.dll, which you can add as a reference to your project.

Creating a Wrapper

The basic Razor hosting API is pretty simple and you can host Razor with a (large-ish) handful of lines of code; I’ll show the basics later in this article. However, if you want to customize the rendering, handle assembly and namespace includes for the markup, deal with text and file inputs, force Razor to run in a separate AppDomain so you can unload the code-generated assemblies, and cache assemblies for re-used templates, a little more work is required to create something that is more easily reusable.

For this reason I created a Razor hosting wrapper project that combines a bunch of this functionality into an easy-to-use hosting class, a hosting factory that can load the engine in a separate AppDomain, and a couple of hosting containers that provide folder-based and string-based caching of templates – an easily embeddable and reusable engine with easy-to-use syntax.

If you just want the code and want to play with the samples and source, grab the latest code from the Subversion repository at:

http://www.west-wind.com:8080/svn/articles/trunk/RazorHosting/

or a snapshot from:

http://www.west-wind.com/files/tools/RazorHosting.zip

Getting Started

Before I get into how hosting with Razor works, let’s take a look at how you can get up and running quickly with the wrapper classes provided. It only takes a few lines of code. Let’s start with a very simple template that displays some simple expressions and code blocks, and demonstrates rendering data from contextual data that you pass to the template in the form of a ‘context’.

Here’s a simple Razor template:

@inherits RazorHosting.RazorTemplateBase
@using System.Reflection
Hello @Context.FirstName! Your entry was entered on: @Context.Entered

@{
    // Code block: Update the host Windows Form passed in through the context
    Context.WinForm.Text = "Hello World from Razor at " + DateTime.Now.ToString();
}

AppDomain Id:
    @AppDomain.CurrentDomain.FriendlyName
            
Assembly:
    @Assembly.GetExecutingAssembly().FullName

Code based output: 
@{
    // Write output with Response object from code
    string output = string.Empty;
    for (int i = 0; i < 10; i++)
    {
        output += i.ToString() + " ";
    }
    Response.Write(output);
}

Note: the @inherits above provides a way to override the default template type used by Razor and this wrapper. It’s optional, but it also provides Intellisense for your custom template in Visual Studio so it’s a good idea to provide the type in the template itself.

Pretty easy to see what’s going on here. The only unusual thing in this code is the Context object which is an arbitrary object I’m passing from the host to the template by way of the template base class. I’m also displaying the current AppDomain and the executing Assembly name so you can see how compiling and running a template actually loads up new assemblies. Also note that as part of my context I’m passing a reference to the current Windows Form down to the template and changing the title from within the script. It’s a silly example, but it demonstrates two-way communication between host and template and back which can be very powerful.

The easiest way to quickly render this template is to use the RazorEngine<TTemplateBase> class. The generic parameter specifies the template base class type that Razor uses internally for the class it generates from a template. The default implementation provided in my RazorHosting wrapper is RazorTemplateBase.

Here’s a simple example that renders from a string and outputs a string:

var engine = new RazorEngine<RazorTemplateBase>();

// we can pass any object as context - here create a custom context
var context = new CustomContext()
{
    WinForm = this,
    FirstName = "Rick",
    Entered = DateTime.Now.AddDays(-10)
};

string output = engine.RenderTemplate(this.txtSource.Text,
                        new string[] { "System.Windows.Forms.dll" },
                        context);

if (output == null)
    this.txtResult.Text = "*** ERROR:\r\n" + engine.ErrorMessage;
else
    this.txtResult.Text = output;

Simple enough. This code renders a template from a string input and returns a result back as a string. It creates a custom context and passes that to the template, which can then access the context’s properties. Note that anything passed as ‘context’ must be serializable (or MarshalByRefObject) – otherwise you get an exception when passing the reference over AppDomain boundaries (discussed later). Passing a context is optional, but it is a key feature in being able to share data between the host application and the template. Note that we use the Context object to access FirstName, Entered and even the host Windows Form object, which is used in the template to change the window caption from within the script!

In the code above all the work happens in the RenderTemplate method which provide a variety of overloads to read and write to and from strings, files and TextReaders/Writers. Here’s another example that renders from a file input using a TextReader:

using (var reader = new StreamReader("templates\\simple.csHtml", true))
{
    result = host.RenderTemplate(reader,
                 new string[] { "System.Windows.Forms.dll" },
                 this.CustomContext);
}

RenderTemplate() is fairly high level and handles loading of the runtime, compiling into an assembly and rendering of the template. If you want more control you can use the lower-level methods to control each step of the way, which is important for the HostContainers I’ll discuss later. Basically, for those scenarios you want to separate out loading the engine, compiling into an assembly and then rendering the template from the assembly. Why? So we can keep assemblies cached. In the code above a new assembly is created for each template rendered, which is inefficient and uses up resources. Depending on the size of your templates and how often you fire them you can chew through memory very quickly.

This slightly lower-level approach only takes a couple of extra steps:

// we can pass any object as context - here create a custom context
var context = new CustomContext()
{
    WinForm = this,
    FirstName = "Rick",
    Entered = DateTime.Now.AddDays(-10)
};

var engine = new RazorEngine<RazorTemplateBase>();

string assId = null;
using (StringReader reader = new StringReader(this.txtSource.Text))
{
    assId = engine.ParseAndCompileTemplate(
                new string[] { "System.Windows.Forms.dll" }, reader);
}

string output = engine.RenderTemplateFromAssembly(assId, context);

if (output == null)
    this.txtResult.Text = "*** ERROR:\r\n" + engine.ErrorMessage;
else
    this.txtResult.Text = output;

The difference here is that you can capture the assembly – or rather an Id to it – and potentially hold on to it to render again later, assuming the template hasn’t changed. The HostContainers take advantage of this feature to cache assemblies based on certain criteria, like a filename and file time stamp or a string hash, that if unchanged indicate that an assembly can be reused.

Note that ParseAndCompileTemplate returns an assembly Id rather than the assembly itself. This is done so that the assembly always stays in the host’s AppDomain and is not passed across AppDomain boundaries, which would cause load failures. We’ll talk more about this in a minute, but for now just realize that assembly references are stored in a list and are accessible by this Id to allow locating and re-executing the assembly based on that Id. Reuse of the assembly avoids recompilation overhead and the creation of yet another assembly loading into the current AppDomain.

You can play around with several different versions of the above code in the main sample form:

RazorSampleForm 

Using Hosting Containers for more Control and Caching

The above examples simply render templates into assemblies each and every time they are executed. While this works and is even reasonably fast, it’s not terribly efficient. If you render templates more than once it would be nice to cache the generated assemblies, for example, to avoid re-compiling and creating a new assembly each time. Additionally, it would be nice to optionally load template assemblies into a separate AppDomain to be able to unload assemblies and also to protect your host application from scripting attacks by malicious template code. Hosting containers also provide a wrapper around the RazorEngine<T> instance, a factory (which allows creation in separate AppDomains) and an easy way to start and stop the container ‘runtime’.

The Razor hosting samples provide two hosting containers: RazorFolderHostContainer and StringHostContainer. The folder host provides a simple runtime environment for a folder structure, similar to the way the ASP.NET runtime treats a virtual directory as its ‘application’ root. Templates are loaded from disk via relative paths and the resulting assemblies are cached unless the template on disk changes. The string host also caches templates, based on string hashes – if the same string is passed a second time a cached version of the assembly is used.

Here’s how HostContainers work. I’ll use the FolderHostContainer because it’s likely the most common way you’d use templates – from disk based templates that can be easily edited and maintained on disk. The first step is to create an instance of it and keep it around somewhere (in the example it’s attached as a property to the Form):

RazorFolderHostContainer Host = new RazorFolderHostContainer();
public RazorFolderHostForm()
{
    InitializeComponent();
    
    // The base path for templates - templates are rendered with relative paths
    // based on this path.
    Host.TemplatePath = Path.Combine(Environment.CurrentDirectory, TemplateBaseFolder); 

    // Add any assemblies you want reference in your templates
    Host.ReferencedAssemblies.Add("System.Windows.Forms.dll");
    
    // Start up the host container
    Host.Start();
}

Next anytime you want to render a template you can use simple code like this:

private void RenderTemplate(string fileName)
{
    // Pass the template path via the Context            
    var relativePath = Utilities.GetRelativePath(fileName, Host.TemplatePath);

    if (!Host.RenderTemplate(relativePath, this.Context, Host.RenderingOutputFile))
    {
        MessageBox.Show("Error: " + Host.ErrorMessage);
        return;
    }

    this.webBrowser1.Navigate("file://" + Host.RenderingOutputFile);
}

You can also render the output to a string instead of to a file:

string result = Host.RenderTemplateToString(relativePath,context);

Finally if you want to release the engine and shut down the hosting AppDomain you can simply do:

Host.Stop();

Stopping the AppDomain and restarting it (i.e. calling Stop() followed by Start()) is also a nice way to release all resources in the AppDomain.

The folder-based host also supports partial rendering using paths relative to the root folder, with the same caching characteristics as the main templates. From within a template you can call out to a partial like this:

@RenderPartial(@"partials\PartialRendering.cshtml", Context)

where partials\PartialRendering.cshtml is relative to the template root folder. The folder host example lets you load up templates from disk and display the result in a Web Browser control, which demonstrates using Razor HTML output from templates that contain HTML syntax – which happens to be my target scenario for Html Help Builder.
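`Utilities.GetRelativePath()` used in the earlier snippet comes from the sample project. A minimal equivalent – an assumption on my part, not the sample’s actual implementation – can be built on `Uri.MakeRelativeUri`:

```csharp
using System;

public static class PathUtils
{
    // Sketch of a relative-path helper: turns an absolute template filename
    // into a path relative to the template base folder, which is what the
    // folder host expects (e.g. "partials/PartialRendering.cshtml").
    public static string GetRelativePath(string fullPath, string basePath)
    {
        // A trailing separator makes Uri treat basePath as a directory
        if (!basePath.EndsWith("/") && !basePath.EndsWith("\\"))
            basePath += "/";

        Uri baseUri = new Uri(basePath);
        Uri fullUri = new Uri(fullPath);

        // The result uses forward slashes; convert to backslashes if your
        // host expects Windows-style relative paths
        return Uri.UnescapeDataString(baseUri.MakeRelativeUri(fullUri).ToString());
    }
}
```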


The Razor Engine Wrapper Project

The project I created to wrap Razor hosting has a fair bit of code and a number of classes associated with it. Most of the components are internally used and as you can see using the final RazorEngine<T> and HostContainer classes is pretty easy. The classes are extensible and I suspect developers will want to build more customized host containers for their applications. Host containers are the key to wrapping up all functionality – Engine, BaseTemplate, AppDomain Hosting, Caching etc in a logical piece that is ready to be plugged into an application.

When looking at the code there are a couple of core features provided:

  • Core Razor Engine Hosting
    This is the core Razor hosting which provides the basics of loading a template, compiling it into an assembly and executing it. This is fairly straightforward, but without a host container that can cache assemblies based on some criteria templates are recompiled and re-created each time which is inefficient (although pretty fast). The base engine wrapper implementation also supports hosting the Razor runtime in a separate AppDomain for security and the ability to unload it on demand.
  • Host Containers
    The engine hosting itself doesn’t provide any sort of ‘runtime’ service like picking up files from disk, caching assemblies and so forth. So my implementation provides two HostContainers: RazorFolderHostContainer and RazorStringHostContainer. The FolderHost works off a base directory and loads templates based on relative paths (sort of like the ASP.NET runtime does off a virtual directory). The HostContainers also deal with caching of template assemblies – for the folder host the file date is tracked and checked for updates, and unless the template has changed a cached assembly is reused. The StringHostContainer similarly checks string hashes to figure out whether a particular string template was previously compiled and executed. The HostContainers also act as a simple startup environment and a single reference that’s easy to store and reuse in an application.
  • TemplateBase Classes
    The template base classes are the classes from which the Razor engine generates .NET code. A template is parsed into a class with an Execute() method, and that class derives from a template type you can specify. RazorEngine&lt;TBaseTemplate&gt; can receive this type, and the HostContainers default to specific templates in their base implementations. Template classes are customizable to allow you to create templates that provide application-specific features and interaction from the template to your host application.

    How does the RazorEngine wrapper work?

    You can browse the source code in the links above or in the repository or download the source, but I’ll highlight some key features here. Here’s part of the RazorEngine implementation that can be used to host the runtime and that demonstrates the key code required to host the Razor runtime.

    The RazorEngine class is implemented as a generic class to reflect the Template base class type:

       public class RazorEngine<TBaseTemplateType> : MarshalByRefObject
            where TBaseTemplateType : RazorTemplateBase
    

    The generic type is used to internally provide easier access to the template type and assignments on it as part of the template processing. As mentioned earlier the actual type of the page may be overridden by the template via the @inherits keyword – the generic type provides the default type used if it’s not provided in the template itself.

    The class also inherits MarshalByRefObject to allow execution over AppDomain boundaries – something that all the classes discussed here need to do since there is much interaction between the host and the template.

    The first two key methods deal with creating a template assembly:

    /// <summary>
    /// Creates an instance of the RazorHost with various options applied.
    /// Applies basic namespace imports and the name of the class to generate
    /// </summary>
    /// <param name="generatedNamespace"></param>
    /// <param name="generatedClass"></param>
    /// <returns></returns>
    protected RazorTemplateEngine CreateHost(string generatedNamespace, string generatedClass)
    {     
        Type baseClassType = typeof(TBaseTemplateType);
    
        RazorEngineHost host = new RazorEngineHost(new CSharpRazorCodeLanguage());
        host.DefaultBaseClass = baseClassType.FullName;
        host.DefaultClassName = generatedClass;
        host.DefaultNamespace = generatedNamespace;
        host.NamespaceImports.Add("System");
        host.NamespaceImports.Add("System.Text");
        host.NamespaceImports.Add("System.Collections.Generic");
        host.NamespaceImports.Add("System.Linq");
        host.NamespaceImports.Add("System.IO");
        
        return new RazorTemplateEngine(host);            
    }
    
    
    /// <summary>
    /// Parses and compiles a markup template into an assembly and returns
    /// an assembly name. The name is an ID that can be passed to 
    /// ExecuteTemplateByAssembly which picks up a cached instance of the
    /// loaded assembly.
    /// 
    /// </summary>
    /// <param name="namespaceOfGeneratedClass">The namespace of the class to generate from the template</param>
    /// <param name="generatedClassName">The name of the class to generate from the template</param>
    /// <param name="ReferencedAssemblies">Any referenced assemblies by dll name only. Assemblies must be in execution path of host or in GAC.</param>
    /// <param name="templateSourceReader">Textreader that loads the template</param>
    /// <remarks>
    /// The actual assembly isn't returned here to allow for cross-AppDomain
    /// operation. If the assembly was returned it would fail for cross-AppDomain
    /// calls.
    /// </remarks>
    /// <returns>An assembly Id. The Assembly is cached in memory and can be used with RenderFromAssembly.</returns>
    public string ParseAndCompileTemplate(
                string namespaceOfGeneratedClass,
                string generatedClassName,
                string[] ReferencedAssemblies,
                TextReader templateSourceReader)
    {
        RazorTemplateEngine engine = CreateHost(namespaceOfGeneratedClass, generatedClassName);
    
        // Generate the template class as CodeDom  
        GeneratorResults razorResults = engine.GenerateCode(templateSourceReader);
                    
        // Create code from the codeDom and compile
        CSharpCodeProvider codeProvider = new CSharpCodeProvider();
        CodeGeneratorOptions options = new CodeGeneratorOptions();
    
        // Capture Code Generated as a string for error info
        // and debugging
        LastGeneratedCode = null;
        using (StringWriter writer = new StringWriter())
        {
            codeProvider.GenerateCodeFromCompileUnit(razorResults.GeneratedCode, writer, options);
            LastGeneratedCode = writer.ToString();                
        }
        
        CompilerParameters compilerParameters = new CompilerParameters(ReferencedAssemblies);
        
        // Standard Assembly References
        compilerParameters.ReferencedAssemblies.Add("System.dll");
        compilerParameters.ReferencedAssemblies.Add("System.Core.dll");
        compilerParameters.ReferencedAssemblies.Add("Microsoft.CSharp.dll");   // dynamic support!                     
    
        // Also add the current assembly so RazorTemplateBase is available
        compilerParameters.ReferencedAssemblies.Add(Assembly.GetExecutingAssembly().CodeBase.Substring(8));
    
        compilerParameters.GenerateInMemory = Configuration.CompileToMemory;
        if (!Configuration.CompileToMemory)
            compilerParameters.OutputAssembly = Path.Combine(Configuration.TempAssemblyPath,
                                    "_" + Guid.NewGuid().ToString("n") + ".dll");

        CompilerResults compilerResults = codeProvider.CompileAssemblyFromDom(
                                    compilerParameters, razorResults.GeneratedCode);

        if (compilerResults.Errors.Count > 0)
        {
            var compileErrors = new StringBuilder();
            foreach (System.CodeDom.Compiler.CompilerError compileError in compilerResults.Errors)
                compileErrors.Append(String.Format(Resources.LineX0TColX1TErrorX2RN,
                                    compileError.Line, compileError.Column, compileError.ErrorText));

            this.SetError(compileErrors.ToString() + "\r\n" + LastGeneratedCode);
            return null;
        }

        AssemblyCache.Add(compilerResults.CompiledAssembly.FullName, compilerResults.CompiledAssembly);

        return compilerResults.CompiledAssembly.FullName;
    }

    Think of the internal CreateHost() method as setting up the assembly generated from each template. Each template compiles into a separate assembly. It sets up namespaces, assembly references, the base class used, and the name and namespace for the generated class. ParseAndCompileTemplate() then calls CreateHost() to receive the template engine generator, which effectively generates a CodeDom from the template – the template is turned into .NET code. The code generated from our earlier example looks something like this:
    //------------------------------------------------------------------------------
    // <auto-generated>
    //     This code was generated by a tool.
    //     Runtime Version:4.0.30319.1
    //
    //     Changes to this file may cause incorrect behavior and will be lost if
    //     the code is regenerated.
    // </auto-generated>
    //------------------------------------------------------------------------------
    namespace RazorTest
    {
        using System;
        using System.Text;
        using System.Collections.Generic;
        using System.Linq;
        using System.IO;
        using System.Reflection;
    
        public class RazorTemplate : RazorHosting.RazorTemplateBase
        {
    
    #line hidden
    
            public RazorTemplate()
            {
            }
    
            public override void Execute()
            {
    
                WriteLiteral("Hello ");
    
    
                Write(Context.FirstName);
    
                WriteLiteral("! Your entry was entered on: ");
    
    
                Write(Context.Entered);
    
                WriteLiteral("\r\n\r\n");
    
    
    
                // Code block: Update the host Windows Form passed in through the context
                Context.WinForm.Text = "Hello World from Razor at " + DateTime.Now.ToString();
    
    
                WriteLiteral("\r\nAppDomain Id:\r\n    ");
    
    
                Write(AppDomain.CurrentDomain.FriendlyName);
    
                WriteLiteral("\r\n            \r\nAssembly:\r\n    ");
    
    
                Write(Assembly.GetExecutingAssembly().FullName);
    
                WriteLiteral("\r\n\r\nCode based output: \r\n");
    
    
    
                // Write output with Response object from code
                string output = string.Empty;
                for (int i = 0; i < 10; i++)
                {
                    output += i.ToString() + " ";
                }
            }
        }
    }

    Basically the template’s body is turned into code in an Execute method that is called. Internally the template’s Write method is fired to actually generate the output. Note that the class inherits from RazorTemplateBase which is the generic parameter I used to specify the base class when creating an instance in my RazorEngine host:

    var engine = new RazorEngine<RazorTemplateBase>();

    This template class must be provided and it must implement an Execute() and a Write() method. Beyond that you can create any class you choose and attach your own properties. My RazorTemplateBase class implementation is very simple:

    public class RazorTemplateBase : MarshalByRefObject, IDisposable
    {
        
        /// <summary>
        /// You can pass in a generic context object
        /// to use in your template code
        /// </summary>
        public dynamic Context { get; set; }
    
    
        /// <summary>
        /// Class that generates output. Currently ultra simple
        /// with only Response.Write() implementation.
        /// </summary>
        public RazorResponse Response { get; set; }
    
        public object HostContainer {get; set; }
        public object Engine { get; set; }
    
                    
        public RazorTemplateBase()
        {
            Response = new RazorResponse();
        }
    
        public virtual void Write(object value)
        {
            Response.Write(value);
        }
    
        public virtual void WriteLiteral(object value)
        {
            Response.Write(value);
        }
    
         /// <summary>
        /// Razor Parser implements this method
        /// </summary>
        public virtual void Execute() {}
    
    
        public virtual void Dispose()
        {        
            if (Response != null)
            {
                Response.Dispose();
                Response = null;
            }            
        }
    }

    Razor fills in the Execute() method when it generates its subclass and uses the Write() method to output content. As you can see I use a RazorResponse class here to generate output. This isn’t strictly necessary, as you could use a StringBuilder or StringWriter directly, but I prefer using a Response object so I can extend the response behavior as needed.

    The RazorResponse class is also very simple and merely acts as a wrapper around a TextWriter:

    public class RazorResponse : IDisposable        
    {
    
        /// <summary>
        /// Internal text writer - default to StringWriter()
        /// </summary>
        public TextWriter Writer = new StringWriter();
    
    
        public virtual void Write(object value)
        {
            Writer.Write(value);
        }
    
        public virtual void WriteLine(object value)
        {
            Write(value);
            Write("\r\n");
        }
    
        public virtual void WriteFormat(string format, params object[] args)
        {
            Write(string.Format(format, args));        
        }
    
        public override string ToString()
        {
            return Writer.ToString();
        }
    
        public virtual void Dispose()
        {
            Writer.Close();
        }
    
        public virtual void SetTextWriter(TextWriter writer)
        {
            // Close original writer
            if (Writer != null)
                Writer.Close();
                        
            Writer = writer;
        }
    }
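    Because the Response merely delegates to a pluggable TextWriter, redirecting template output is just a matter of swapping the writer. Here’s a usage sketch (the demo class is mine, not from the article; the RazorResponse members are trimmed to what the snippet needs so it’s self-contained):

    ```csharp
    using System;
    using System.IO;

    // Trimmed copy of the RazorResponse shown above, so this snippet compiles
    // on its own
    public class RazorResponse : IDisposable
    {
        public TextWriter Writer = new StringWriter();
        public virtual void Write(object value) { Writer.Write(value); }
        public override string ToString() { return Writer.ToString(); }
        public virtual void Dispose() { Writer.Close(); }

        public virtual void SetTextWriter(TextWriter writer)
        {
            // Close the original writer before swapping in the new one
            if (Writer != null)
                Writer.Close();
            Writer = writer;
        }
    }

    public static class ResponseDemo
    {
        public static string Run()
        {
            var response = new RazorResponse();

            // Redirect output into a fresh StringWriter; a StreamWriter here
            // would send template output straight to a file instead
            var sw = new StringWriter();
            response.SetTextWriter(sw);

            response.Write("Hello ");
            response.Write("World");

            return response.ToString();  // "Hello World"
        }
    }
    ```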

    The Rendering Methods of RazorEngine

    At this point I’ve talked about the assembly generation logic and the template implementation itself. What’s left, once you’ve generated the assembly, is to execute it. The code to do this is handled in the various RenderXXX methods of the RazorEngine class. Let’s look at the lowest level of these, RenderTemplateFromAssembly(), and a couple of internal support methods that handle instantiating and invoking the generated template:

    public string RenderTemplateFromAssembly(
        string assemblyId,
        string generatedNamespace,
        string generatedClass,
        object context,
        TextWriter outputWriter)
    {
        this.SetError();
    
        Assembly generatedAssembly = AssemblyCache[assemblyId];
        if (generatedAssembly == null)
        {
            this.SetError(Resources.PreviouslyCompiledAssemblyNotFound);
            return null;
        }
    
        string className = generatedNamespace + "." + generatedClass;
    
        Type type;
        try
        {
            type = generatedAssembly.GetType(className);
        }   
        catch (Exception ex)
        {
            this.SetError(Resources.UnableToCreateType + className + ": " + ex.Message);
            return null;
        }
    
        // Start with empty non-error response (if we use a writer)
        string result = string.Empty;
    
        using(TBaseTemplateType instance = InstantiateTemplateClass(type))
        {
            if (instance == null)
                return null;
       
            if (outputWriter != null)
                instance.Response.SetTextWriter(outputWriter);
    
            if (!InvokeTemplateInstance(instance, context))
                return null;
    
            // Capture string output if implemented and return
            // otherwise null is returned
            if (outputWriter == null)
                result = instance.Response.ToString();
        }
        
    
        return result;
    }
    protected virtual TBaseTemplateType InstantiateTemplateClass(Type type)
    {
        TBaseTemplateType instance = Activator.CreateInstance(type) as TBaseTemplateType;
    
        if (instance == null)
        {
            SetError(Resources.CouldnTActivateTypeInstance + type.FullName);
            return null;
        }
    
        instance.Engine = this;
    
        // If a HostContainer was set pass that to the template too
        instance.HostContainer = this.HostContainer;
        
        return instance;
    }
    
    /// <summary>
    /// Internally executes an instance of the template,
    /// captures errors on execution and returns true or false
    /// </summary>
    /// <param name="instance">An instance of the generated template</param>
    /// <returns>true or false - check ErrorMessage for errors</returns>
    protected virtual bool InvokeTemplateInstance(TBaseTemplateType instance, object context)
    {
        try
        {
            instance.Context = context;
            instance.Execute();
        }
        catch (Exception ex)
        {
            this.SetError(Resources.TemplateExecutionError + ex.Message);
            return false;
        }
        finally
        {
            // Must make sure Response is closed
            instance.Response.Dispose();
        }
        return true;
    }

    The RenderTemplateFromAssembly method basically requires the namespace and class to instantiate and creates an instance of the class using InstantiateTemplateClass(). It then invokes the method with InvokeTemplateInstance(). These two methods are broken out because they are re-used by various other rendering methods and also to allow subclassing and providing additional configuration tasks to set properties and pass values to templates at execution time. In the default mode instantiation sets the Engine and HostContainer (discussed later) so the template can call back into the template engine, and the context is set when the template method is invoked.

    The various RenderXXX methods use similar code although they create the assemblies first. If you’re after caching assemblies, RenderTemplateFromAssembly() is the method to call with a previously compiled assembly id – and that’s exactly what the two HostContainer classes do. More on that in a minute, but before we get into HostContainers let’s talk about AppDomain hosting and the like.

    Running Templates in their own AppDomain

    With the RazorEngine class above, when a template is parsed into an assembly and executed, the assembly is created (in memory or on disk – you can configure that) and cached in the current AppDomain. In .NET, once an assembly has been loaded it can never be unloaded, so if you’re loading lots of templates and at some point want to release them there’s no way to do so. If however you load the assemblies in a separate AppDomain, that AppDomain can be unloaded along with the assemblies loaded into it.

    In order to host the templates in a separate AppDomain the easiest thing to do is to run the entire RazorEngine in a separate AppDomain. Then all interaction occurs in the other AppDomain and no further changes have to be made. To facilitate this there is a RazorEngineFactory which has methods that can instantiate the RazorHost in a separate AppDomain as well as in the local AppDomain. The host creates the remote instance and then hangs on to it to keep it alive as well as providing methods to shut down the AppDomain and reload the engine.

    Sounds complicated but cross-AppDomain invocation is actually fairly easy to implement. Here’s some of the relevant code from the RazorEngineFactory class. Like the RazorEngine this class is generic and requires a template base type in the generic class name:

        public class RazorEngineFactory<TBaseTemplateType>
            where TBaseTemplateType : RazorTemplateBase
    


    Here are the key methods of interest:

    /// <summary>
    /// Creates an instance of the RazorHost in a new AppDomain. This 
    /// version creates a static singleton that that is cached and you
    /// can call UnloadRazorHostInAppDomain to unload it.
    /// </summary>
    /// <returns></returns>
    public static RazorEngine<TBaseTemplateType> CreateRazorHostInAppDomain()
    {
        if (Current == null)            
            Current = new RazorEngineFactory<TBaseTemplateType>();
    
        return Current.GetRazorHostInAppDomain();
    }
    
    public static void UnloadRazorHostInAppDomain()
    {
        if (Current != null)
            Current.UnloadHost();
        Current = null;
    }
    
    /// <summary>
    /// Instance method that creates a RazorHost in a new AppDomain.
    /// This method requires that you keep the Factory around in
    /// order to keep the AppDomain alive and be able to unload it.
    /// </summary>
    /// <returns></returns>
    public RazorEngine<TBaseTemplateType> GetRazorHostInAppDomain()
    {                        
        LocalAppDomain = CreateAppDomain(null);
        if (LocalAppDomain  == null)
            return null;
    
        // Create the instance inside of the new AppDomain
        // Note: remote domain uses local EXE's AppBasePath!!!
        RazorEngine<TBaseTemplateType> host  = null;
    
        try
        {
            Assembly ass = Assembly.GetExecutingAssembly();
    
            string AssemblyPath = ass.Location;
    
            host = (RazorEngine<TBaseTemplateType>) LocalAppDomain.CreateInstanceFrom(AssemblyPath,
                                                   typeof(RazorEngine<TBaseTemplateType>).FullName).Unwrap();
        }
        catch (Exception ex)
        {
            ErrorMessage = ex.Message;
            return null;
        }
    
        return host;
    }
    
    /// <summary>
    /// Internally creates a new AppDomain in which Razor templates can
    /// be run.
    /// </summary>
    /// <param name="appDomainName"></param>
    /// <returns></returns>
    private AppDomain CreateAppDomain(string appDomainName)
    {
        if (appDomainName == null)
            appDomainName = "RazorHost_" + Guid.NewGuid().ToString("n");
    
        AppDomainSetup setup = new AppDomainSetup();
    
        // *** Point at current directory
        setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory;
    
        AppDomain localDomain = AppDomain.CreateDomain(appDomainName, null, setup);
    
        return localDomain;
    }
    
    /// <summary>
    /// Allow unloading of the created AppDomain to release resources
    /// All internal resources in the AppDomain are released including
    /// in memory compiled Razor assemblies.
    /// </summary>
    public void UnloadHost()
    {
        if (this.LocalAppDomain != null)
        {
            AppDomain.Unload(this.LocalAppDomain);
            this.LocalAppDomain = null;
        }
    }

    The static CreateRazorHostInAppDomain() is the key method that startup code usually calls. It uses the static Current singleton to hold an instance of itself, keeping the factory – and with it the remote AppDomain – alive. GetRazorHostInAppDomain() actually creates the cross-AppDomain instance: it first creates a new AppDomain and then loads the RazorEngine into it. The remote proxy instance is returned from the method and can be used the same as a local instance.

    The code to run with a remote AppDomain is simple:

    private RazorEngine<RazorTemplateBase> CreateHost()
         {
             if (this.Host != null)
                 return this.Host;
    
             // Use Static Methods - no error message if host doesn't load
             this.Host = RazorEngineFactory<RazorTemplateBase>.CreateRazorHostInAppDomain();
             if (this.Host == null)
             {
                 MessageBox.Show("Unable to load Razor Template Host",
                                 "Razor Hosting", MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
             }    
    
             return this.Host;
         }

    This code relies on a local reference of the Host which is kept around for the duration of the app (in this case a form reference). To use this you’d simply do:

    this.Host = CreateHost();
    if (this.Host == null)
        return;
    
    string result = this.Host.RenderTemplate(
                    this.txtSource.Text,
                    new string[] { "System.Windows.Forms.dll", "Westwind.Utilities.dll" },
                    this.CustomContext);
    
    if (result == null)
    {
       MessageBox.Show(this.Host.ErrorMessage, "Template Execution Error", MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
       return;
    }
    
    this.txtResult.Text = result;

    Now all templates run in a remote AppDomain and can be unloaded with simple code like this:

    RazorEngineFactory<RazorTemplateBase>.UnloadRazorHostInAppDomain();
    this.Host = null;

    One Step further – Providing a caching ‘Runtime’

    Once we can load templates in a remote AppDomain we can add some additional functionality like assembly caching based on application-specific features. One of my typical scenarios is to render templates out of a scripts folder: all templates live in that folder and they change infrequently. A folder-based host that compiles these templates once and only recompiles them if something changes would be ideal.
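    The file-date check this implies can be sketched like this. The class and member names are illustrative assumptions, not the container’s actual code, and the `compile` delegate stands in for the real parse-and-compile step:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.IO;

    // Sketch of timestamp-based invalidation: a template's compiled assembly
    // is reused until the file's last-write time changes, at which point it
    // is recompiled and re-cached.
    public class FolderTemplateCache
    {
        private class CacheItem
        {
            public string AssemblyId;
            public DateTime CompiledAt;   // file write time when compiled
        }

        private readonly Dictionary<string, CacheItem> _cache =
            new Dictionary<string, CacheItem>(StringComparer.OrdinalIgnoreCase);

        public string GetAssembly(string templateFile, Func<string, string> compile)
        {
            DateTime lastWrite = File.GetLastWriteTimeUtc(templateFile);

            CacheItem item;
            if (_cache.TryGetValue(templateFile, out item) &&
                item.CompiledAt == lastWrite)
                return item.AssemblyId;   // unchanged on disk: reuse

            // New or changed template: compile and re-cache
            string id = compile(templateFile);
            _cache[templateFile] = new CacheItem { AssemblyId = id, CompiledAt = lastWrite };
            return id;
        }
    }
    ```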

    Enter host containers, which are basically wrappers around RazorEngine&lt;T&gt; and RazorEngineFactory&lt;T&gt;. They provide additional logic for things like file caching based on changes on disk, or string hashes for string-based template inputs. The folder host also provides partial rendering logic through a custom template base implementation.

    There’s a base implementation in RazorBaseHostContainer, which provides the basics for hosting a RazorEngine, which includes the ability to start and stop the engine, cache assemblies and add references:

    public abstract class RazorBaseHostContainer<TBaseTemplateType> : MarshalByRefObject
            where TBaseTemplateType : RazorTemplateBase, new()
    {
            
        public RazorBaseHostContainer()
        {
            UseAppDomain = true;
            GeneratedNamespace = "__RazorHost";
        }
    
        /// <summary>
        /// Determines whether the Container hosts Razor
        /// in a separate AppDomain. Separate AppDomain 
        /// hosting allows unloading and releasing of 
        /// resources.
        /// </summary>
        public bool UseAppDomain { get; set; }
    
        /// <summary>
        /// Base folder location where the AppDomain 
        /// is hosted. By default uses the same folder
        /// as the host application.
        /// 
        /// Determines where binary dependencies are
        /// found for assembly references.
        /// </summary>
        public string BaseBinaryFolder { get; set; }
    
        /// <summary>
        /// List of referenced assemblies as string values.
        /// Must be in GAC or in the current folder of the host app/
        /// base BinaryFolder        
        /// </summary>
        public List<string> ReferencedAssemblies = new List<string>();
    
    
        /// <summary>
        /// Name of the generated namespace for template classes
        /// </summary>
        public string GeneratedNamespace {get; set; }
            
        /// <summary>
        /// Any error messages
        /// </summary>
        public string ErrorMessage { get; set; }
    
        /// <summary>
        /// Cached instance of the Host. Required to keep the
        /// reference to the host alive for multiple uses.
        /// </summary>
        public RazorEngine<TBaseTemplateType> Engine;
    
        /// <summary>
        /// Cached instance of the Host Factory - so we can unload
        /// the host and its associated AppDomain.
        /// </summary>
        protected RazorEngineFactory<TBaseTemplateType> EngineFactory;
    
        /// <summary>
        /// Keep track of each compiled assembly
        /// and when it was compiled.
        /// 
        /// Use a hash of the string to identify string
        /// changes.
        /// </summary>
        protected Dictionary<int, CompiledAssemblyItem> LoadedAssemblies =
                            new Dictionary<int, CompiledAssemblyItem>();

        /// <summary>
        /// Call to start the Host running. Follow with calls to RenderTemplate to
        /// render individual templates. Call Stop when done.
        /// </summary>
        /// <returns>true or false - check ErrorMessage on false</returns>
        public virtual bool Start()
        {
            if (Engine == null)
            {
                if (UseAppDomain)
                    Engine = RazorEngineFactory<TBaseTemplateType>.CreateRazorHostInAppDomain();
                else
                    Engine = RazorEngineFactory<TBaseTemplateType>.CreateRazorHost();

                // Check for failure before configuring the engine instance
                if (Engine == null)
                {
                    this.ErrorMessage = EngineFactory.ErrorMessage;
                    return false;
                }

                Engine.Configuration.CompileToMemory = true;
                Engine.HostContainer = this;
            }

            return true;
        }

        /// <summary>
        /// Stops the Host and releases the host AppDomain and cached
        /// assemblies.
        /// </summary>
        /// <returns>true or false</returns>
        public bool Stop()
        {
            this.LoadedAssemblies.Clear();
            RazorEngineFactory<RazorTemplateBase>.UnloadRazorHostInAppDomain();
            this.Engine = null;
            return true;
        }
    …
    }

    This base class provides most of the mechanics to host the runtime, but no application specific implementation for rendering. There are rendering functions but they just call the engine directly and provide no caching – there’s no context to decide how to cache and reuse templates. The key methods are Start and Stop and their main purpose is to start a new AppDomain (optionally) and shut it down when requested.

    The RazorFolderHostContainer – Folder Based Runtime Hosting

    Let’s look at the more application specific RazorFolderHostContainer implementation which is defined like this:

    public class RazorFolderHostContainer : RazorBaseHostContainer<RazorTemplateFolderHost>

    Note that a customized RazorTemplateFolderHost class is used for this implementation; it supports partial rendering in the form of a RenderPartial() method that’s available to templates.

    The folder host’s features are:

    • Render templates based on a Template Base Path (a ‘virtual root’, if you will)
    • Cache compiled assemblies based on the relative path and file time stamp
    • File changes on templates cause templates to be recompiled into new assemblies
    • Support for partial rendering using base folder relative pathing

    As shown in the startup examples earlier, host containers require some startup code with a HostContainer tied to a persistent property (like a Form property):

    // The base path for templates - templates are rendered with relative paths
    // based on this path.
    HostContainer.TemplatePath = Path.Combine(Environment.CurrentDirectory, TemplateBaseFolder); 
    
    // Default output rendering disk location
    HostContainer.RenderingOutputFile = Path.Combine(HostContainer.TemplatePath, "__Preview.htm");
    
    // Add any assemblies you want reference in your templates
    HostContainer.ReferencedAssemblies.Add("System.Windows.Forms.dll");
    
    // Start up the host container
    HostContainer.Start();

    Once that’s done, you can render templates with the host container:

    // Pass the template path for the full filename selected with the OpenFile dialog.
    // relativePath is: subdir\file.cshtml or file.cshtml or ..\file.cshtml
    var relativePath = Utilities.GetRelativePath(fileName, HostContainer.TemplatePath);

    if (!HostContainer.RenderTemplate(relativePath, Context, HostContainer.RenderingOutputFile))
    {
        MessageBox.Show("Error: " + HostContainer.ErrorMessage);
        return;
    }

    webBrowser1.Navigate("file://" + HostContainer.RenderingOutputFile);

    The most critical task of the RazorFolderHostContainer implementation is to retrieve a template from disk, compile and cache it, and then decide whether subsequent requests need to re-compile the template or can simply use a cached version. Internally, GetAssemblyFromFileAndCache() handles this task:

    /// <summary>
     /// Internally checks if a cached assembly exists and if it does uses it
     /// else creates and compiles one. Returns an assembly Id to be 
     /// used with the LoadedAssembly list.
     /// </summary>
     /// <param name="relativePath"></param>
     /// <returns></returns>
     protected virtual CompiledAssemblyItem GetAssemblyFromFileAndCache(string relativePath)
     {
         string fileName = Path.Combine(TemplatePath, relativePath).ToLower();
         int fileNameHash = fileName.GetHashCode();
         if (!File.Exists(fileName))
         {
             this.SetError(Resources.TemplateFileDoesnTExist + fileName);
             return null;
         }
    
         CompiledAssemblyItem item = null;
         this.LoadedAssemblies.TryGetValue(fileNameHash, out item);
    
         string assemblyId = null;
    
         // Check for cached instance
         if (item != null)
         {
             var fileTime = File.GetLastWriteTimeUtc(fileName);
             if (fileTime <= item.CompileTimeUtc)
                 assemblyId = item.AssemblyId;
         }
         else
             item = new CompiledAssemblyItem();
    
         // No cached instance - create assembly and cache
         if (assemblyId == null)
         {
             string safeClassName = GetSafeClassName(fileName);
             StreamReader reader = null;
             try
             {
                 reader = new StreamReader(fileName, true);
             }
             catch (Exception)
             {
                 this.SetError(Resources.ErrorReadingTemplateFile + fileName);
                 return null;
             }
    
             try
             {
                 assemblyId = Engine.ParseAndCompileTemplate(this.ReferencedAssemblies.ToArray(), reader);
             }
             finally
             {
                 // ensure the reader is closed even if compilation throws
                 reader.Close();
             }
    
             if (assemblyId == null)
             {
                 this.SetError(Engine.ErrorMessage);
                 return null;
             }
    
             item.AssemblyId = assemblyId;
             item.CompileTimeUtc = DateTime.UtcNow;
             item.FileName = fileName;
             item.SafeClassName = safeClassName;
    
             this.LoadedAssemblies[fileNameHash] = item;
         }
    
         return item;
     }
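    The CompiledAssemblyItem entries this method caches are simple data holders. A sketch of the shape implied by the code above (the shipping class may carry additional members, such as a reference to the loaded assembly itself):

```csharp
// Sketch of the cache entry type implied by GetAssemblyFromFileAndCache()
public class CompiledAssemblyItem
{
    public string AssemblyId { get; set; }        // id returned by ParseAndCompileTemplate()
    public DateTime CompileTimeUtc { get; set; }  // when the template was compiled
    public string FileName { get; set; }          // full path to the template file
    public string SafeClassName { get; set; }     // generated template class name
}
```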


    This code uses the LoadedAssemblies dictionary (defined on the base class shown earlier), which is essentially a cache of compiled assemblies. Each entry holds a reference to a compiled assembly, the full filename, the file time stamp and an assembly id, and entries are identified by a hash id – for file-based templates, the GetHashCode() of the full template filename. The template is first looked up in the cache; if an entry exists, its file time stamp is compared against the cached compilation date. If the file is newer, the template is recompiled; otherwise the cached version is used. All the core work defers to a RazorEngine<T> instance and its ParseAndCompileTemplate() method.

    The three rendering specific methods then are rather simple implementations with just a few lines of code dealing with parameter and return value parsing:

    /// <summary>
    /// Renders a template to a TextWriter. Useful to write output into a stream or
    /// the Response object. Used for partial rendering.
    /// </summary>
    /// <param name="relativePath">Relative path to the file in the folder structure</param>
    /// <param name="context">Optional context object or null</param>
    /// <param name="writer">The textwriter to write output into</param>
    /// <returns></returns>
    public bool RenderTemplate(string relativePath, object context, TextWriter writer)
    {
    
        // Set configuration data that is to be passed to the template (any object) 
        Engine.TemplatePerRequestConfigurationData = new RazorFolderHostTemplateConfiguration()
        {
            TemplatePath = Path.Combine(this.TemplatePath, relativePath),
            TemplateRelativePath = relativePath,
        };
    
        CompiledAssemblyItem item = GetAssemblyFromFileAndCache(relativePath);
    
        if (item == null)
        {
            writer.Close();
            return false;
        }
    
        try
        {
            // String result will be empty as output will be rendered into the
            // Response object's stream output. However a null result denotes
            // an error 
            string result = Engine.RenderTemplateFromAssembly(item.AssemblyId, context, writer);
    
            if (result == null)
            {
                this.SetError(Engine.ErrorMessage);
                return false;
            }
        }
        catch (Exception ex)
        {
            this.SetError(ex.Message);
            return false;
        }
        finally
        {
            writer.Close();
        }
    
        return true;
    }
    
    /// <summary>
    /// Render a template from a source file on disk to a specified outputfile.
    /// </summary>
    /// <param name="relativePath">Relative path off the template root folder. Format: path/filename.cshtml</param>
    /// <param name="context">Any object that will be available in the template as a dynamic of this.Context</param>
    /// <param name="outputFile">Optional - output file where output is written to. If not specified the 
    /// RenderingOutputFile property is used instead
    /// </param>       
    /// <returns>true if rendering succeeds, false on failure - check ErrorMessage</returns>
    public bool RenderTemplate(string relativePath, object context, string outputFile)
    {
        if (outputFile == null)
            outputFile = RenderingOutputFile;
    
        try
        {
            using (StreamWriter writer = new StreamWriter(outputFile, false,
                                                            Engine.Configuration.OutputEncoding,
                                                            Engine.Configuration.StreamBufferSize))
            {
                return RenderTemplate(relativePath, context, writer);
            }
        }
        catch (Exception ex)
        {
            this.SetError(ex.Message);
            return false;
        }
    
    }
    
    
    /// <summary>
    /// Renders a template to string. Useful for RenderTemplate 
    /// </summary>
    /// <param name="relativePath"></param>
    /// <param name="context"></param>
    /// <returns></returns>
    public string RenderTemplateToString(string relativePath, object context)
    {
        string result = string.Empty;
    
        try
        {
            using (StringWriter writer = new StringWriter())
            {
                // Output is captured by the StringWriter -
                // a false return value denotes an error
                if (!RenderTemplate(relativePath, context, writer))
                {
                    this.SetError(Engine.ErrorMessage);
                    return null;
                }
    
                result = writer.ToString();
            }
        }
        catch (Exception ex)
        {
            this.SetError(ex.Message);
            return null;
        }
    
        return result;
    }

    The idea is that you can create custom host container implementations that do exactly what you want fairly easily. Take a look at both the RazorFolderHostContainer and RazorStringHostContainer classes for the basic concepts you can use to create custom implementations.

    Notice also that you can set the engine’s TemplatePerRequestConfigurationData from the host container:

    // Set configuration data that is to be passed to the template (any object) 
    Engine.TemplatePerRequestConfigurationData = new RazorFolderHostTemplateConfiguration()
    {
        TemplatePath = Path.Combine(this.TemplatePath, relativePath),
        TemplateRelativePath = relativePath,
    };

    which when set to a non-null value is passed to the Template’s InitializeTemplate() method. This method receives an object parameter which you can cast as needed:

    public override void InitializeTemplate(object configurationData)
    {
        // Pick up configuration data and stuff into Request object
        RazorFolderHostTemplateConfiguration config = configurationData as RazorFolderHostTemplateConfiguration;
    
        this.Request.TemplatePath = config.TemplatePath;
        this.Request.TemplateRelativePath = config.TemplateRelativePath;
    }

    With this data you can then configure any custom properties or objects on your main template class. It’s an easy way to pass data from the HostContainer all the way down into the template. The parameter is of type object, so you have to cast it yourself, and the object must be serializable since it will likely cross into a separate AppDomain.

    This might seem like an ugly way to pass data around – normally I’d use an event delegate to call back from the engine to the host, but since this is running over AppDomain boundaries events get really tricky and passing a template instance back up into the host over AppDomain boundaries doesn’t work due to serialization issues. So it’s easier to pass the data from the host down into the template using this rather clumsy approach of set and forward. It’s ugly, but it’s something that can be hidden in the host container implementation as I’ve done here. It’s also not something you have to do in every implementation so this is kind of an edge case, but I know I’ll need to pass a bunch of data in some of my applications and this will be the easiest way to do so.
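    The main requirement for such a configuration type is that it’s serializable so it can cross the AppDomain boundary. A minimal sketch (the real RazorFolderHostTemplateConfiguration may include additional members):

```csharp
// Minimal sketch of a per-request configuration type.
// [Serializable] is required because the instance is marshaled
// into the template's (potentially separate) AppDomain.
[Serializable]
public class RazorFolderHostTemplateConfiguration
{
    // Full path to the template on disk
    public string TemplatePath { get; set; }

    // Path relative to the template base folder
    public string TemplateRelativePath { get; set; }

    // Any additional application data to forward to the template
    public object ConfigData { get; set; }
}
```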

    Summing Up

    Hosting the Razor runtime is something I got jazzed up about quite a bit because I have an immediate need for this type of templating/merging/scripting capability in an application I’m working on. I’ve also been using templating in many apps and it’s always been a pain to deal with. The Razor engine makes this whole experience a lot cleaner and more lightweight, and with these wrappers I can now plug .NET based templating into my code with literally a few lines of code. That’s something to cheer about… I hope some of you will find this useful as well…

    Resources

    The examples and code require that you download the Razor runtimes. Projects are for Visual Studio 2010 running on .NET 4.0

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in Razor  ASP.NET  .NET  

    WCF REST Service Activation Errors when AspNetCompatibility is enabled



    I’ve been struggling with an interesting WCF REST problem since last night and I haven’t been able to track it down. I have a WCF REST Service set up, and when accessing the .svc file it crashes with a version mismatch for System.ServiceModel:

    Server Error in '/AspNetClient' Application.

    Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.

    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.TypeLoadException: Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
    Source Error:

    An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:

    [TypeLoadException: Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.]
       System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMarkHandle stackMark, Boolean loadTypeFromPartialName, ObjectHandleOnStack type) +0
       System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName) +95
       System.RuntimeType.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark) +54
       System.Type.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +65
       System.Web.Compilation.BuildManager.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +69
       System.Web.Configuration.HandlerFactoryCache.GetTypeWithAssert(String type) +38
       System.Web.Configuration.HandlerFactoryCache.GetHandlerType(String type) +13
       System.Web.Configuration.HandlerFactoryCache..ctor(String type) +19
       System.Web.HttpApplication.GetFactory(String type) +81
       System.Web.MaterializeHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +223
       System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +184
    


    Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.1

    What’s really odd about this is that it crashes only if it runs inside of IIS (it works fine in Cassini) and only if ASP.NET Compatibility is enabled in web.config:

    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
    

    Arrrgh!!!!!

    After some experimenting and some help from Glenn Block and his team mates I was able to track down the problem in ApplicationHost.config. Specifically the problem was that there were multiple *.svc mappings in the ApplicationHost.Config file and the older 2.0 runtime specific versions weren’t marked for the proper runtime. Because these handlers show up at the top of the list they execute first resulting in assembly load errors for the wrong version assembly.

    To fix this problem I ended up making a couple changes in applicationhost.config. On the machine level root’s Handler mappings I had an entry that looked like this:

    <add name="svc-Integrated" path="*.svc" verb="*" 
    type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
    preCondition="integratedMode" />

    and it needs to be changed to this:

    <add name="svc-Integrated" path="*.svc" verb="*" 
    type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
    preCondition="integratedMode,runtimeVersionv2.0" />

    Notice the explicit runtime version assignment in the preCondition attribute, which is key to keeping ASP.NET 4.0 from executing that handler. The runtime version needs to be set explicitly; otherwise the various *.svc handlers fire purely in the order defined, which in the case of a .NET 4.0 app with the original setting results in an incompatible version of System.ServiceModel being loaded.

    What made this really hard to track down is that even when watching the AppDomain assembly loads in the debugger while launching the Web app, System.ServiceModel V4.0 showed up loading just fine. Apparently the ASP.NET runtime load occurs at a different point, and that’s when things break.

    So how did this break? According to the Microsoft folks, the likely culprit is some older tooling that got installed and changed the default service handler mappings.

    There’s a blog entry that points at this problem with more detail:

    http://blogs.iis.net/webtopics/archive/2010/04/28/system-typeloadexception-for-system-servicemodel-activation-httpmodule-in-asp-net-4.aspx

    Note that I tried running aspnet_regiis and that did not fix the problem for me. I had to manually change the entries in applicationhost.config.
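    Rather than hand-editing applicationhost.config, you may also be able to apply the same change with appcmd. This is an untested sketch run from an elevated prompt; the handler name must match the entry on your machine:

```shell
cd %windir%\system32\inetsrv

:: Inspect the current *.svc handler mappings
appcmd list config /section:system.webServer/handlers | findstr /i "svc"

:: Pin the 2.0 integrated handler to the 2.0 runtime
appcmd set config /section:system.webServer/handlers ^
    /[name='svc-Integrated'].preCondition:"integratedMode,runtimeVersionv2.0"
```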

       

      © Rick Strahl, West Wind Technologies, 2005-2011
      Posted in AJAX   ASP.NET  WCF  

      Allowing Access to HttpContext in WCF REST Services


      If you’re building WCF REST Services you may find that WCF’s OperationContext, which provides some amount of access to Http headers on inbound and outbound messages, is pretty limited in that it doesn’t provide access to everything and sometimes in a not so convenient manner. For example accessing query string parameters explicitly is pretty painful:

      [OperationContract]
      [WebGet]
      public string HelloWorld()
      {
          var properties = OperationContext.Current.IncomingMessageProperties;
          var property = properties[HttpRequestMessageProperty.Name] as HttpRequestMessageProperty;
          string queryString = property.QueryString;
          var name = StringUtils.GetUrlEncodedKey(queryString,"Name");
      
          return "Hello World " + name; 
      }

      And that doesn’t account for the logic in GetUrlEncodedKey to retrieve the querystring value.
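      If you have a System.Web reference anyway, HttpUtility can stand in for that helper. A sketch, using the queryString variable from the listing above:

```csharp
// Parse the raw query string into a name/value collection
// and pull out the value we're after
var query = System.Web.HttpUtility.ParseQueryString(queryString);
string name = query["Name"] ?? string.Empty;
```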

      It’s a heck of a lot easier to just do this:

      [OperationContract]
      [WebGet]
      public string HelloWorld()
      {
          var name = HttpContext.Current.Request.QueryString["Name"] ?? string.Empty;
          return "Hello World " + name;       
      }

      Ok, so if you follow the REST guidelines for WCF REST you shouldn’t have to rely on reading query string parameters manually but instead rely on routing logic, but you know what: WCF REST is a PITA anyway and anything to make things a little easier is welcome.

      To enable the second scenario there are a couple of steps that you have to take on your service implementation and the configuration file.

      Add aspNetCompatibilityEnabled in web.config

      First you need to configure the hosting environment to support ASP.NET when running WCF Service requests. This ensures that the ASP.NET pipeline is fired up and configured for every incoming request.

      <system.serviceModel>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true"
                                     multipleSiteBindingsEnabled="true" />
      </system.serviceModel>

      Note that this is a global setting and it will affect all of your WCF REST applications, which can be problematic if some applications require ASP.NET compatibility and others explicitly disallow it.

      Mark up your Service Implementation with the AspNetCompatibilityRequirements Attribute

      Next you have to mark up the Service Implementation – not the contract if you’re using a separate interface!!! – with the AspNetCompatibilityRequirements attribute:

          [ServiceContract(Namespace = "RateTestService")]
          [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
          public class RestRateTestProxyService
      

      Typically you’ll want to use Allowed as the preferred option. The other options are NotAllowed and Required. Allowed will let the service run whether or not the web.config attribute is set; Required fails unless it is set. All of these settings determine whether an ASP.NET host AppDomain is used for requests.

      Be very careful with the NotAllowed and Required settings, as both of them will explicitly fail if the web.config setting doesn’t match. I recommend you always use Allowed, since it works whether the flag is missing or set and so doesn’t depend on the web.config setting. This can be a big concern if you have multiple WCF REST apps in the same IIS application.

      Once Allowed or Required has been set on the implemented class you can make use of the ASP.NET HttpContext object.

      When I allow for ASP.NET compatibility in my WCF services I typically add a property that exposes the Context and Request objects a little more conveniently:

      public HttpContext Context
      {
          get
          {
              return HttpContext.Current;
          }
      }
      public HttpRequest Request
      {
          get
          {
              return HttpContext.Current.Request;
          }
      }

      While you can also access the Response object to write raw data and manipulate headers, THAT is probably not such a good idea, as both your code and WCF would end up writing into the output stream. It might be useful in situations where you need to take over output generation completely and return something entirely custom. Remember, though, that WCF REST actually supports that as well with Stream responses, which essentially let you return any kind of data to the client, so using Response should rarely, if ever, be necessary.

      Should you or shouldn’t you?

      WCF purists will tell you never to muck with platform specific features or the underlying protocol, and if you can avoid it you definitely should. Querystring management in particular can be handled largely with Url Routing, but there are exceptions of course. Try to use what WCF natively provides if possible, as it makes the code more portable. For example, if you do enable ASP.NET Compatibility you won’t be able to self-host a WCF REST service.
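      For completeness, the routing-based alternative looks something like this (the method name and UriTemplate here are made up for illustration):

```csharp
[OperationContract]
[WebGet(UriTemplate = "Hello/{name}")]
public string HelloWorldRouted(string name)
{
    // name is bound from the URL path segment -
    // no query string parsing or HttpContext access required
    return "Hello World " + name;
}
```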

      At the same time, realize that especially in WCF REST there are a number of big holes, and access to some features is a royal pain, so it’s not unreasonable to access the HttpContext directly, especially for read-only access. Since everything in REST works off URLs and the HTTP protocol, more control and easier access to HTTP features is a key requirement for building flexible services.

      It looks like vNext of the WCF REST stuff will feature many improvements along these lines with much deeper native HTTP support that is often so useful in REST applications along with much more extensibility that allows for customization of the inputs and outputs as data goes through the request pipeline. I’m looking forward to this stuff as WCF REST as it exists today still is a royal pain (in fact I’m struggling with a mysterious version conflict/crashing error on my machine that I have not been able to resolve – grrrr…).

      © Rick Strahl, West Wind Technologies, 2005-2011
      Posted in ASP.NET  AJAX  WCF  

      IntelliSense for Razor Hosting in non-Web Applications


      When I posted my Razor Hosting article a couple of weeks ago I got a number of questions on how to get IntelliSense to work inside of Visual Studio while editing your templates. The answer to this question is mainly dependent on how Visual Studio recognizes assemblies, so a little background is required.

      If you open a template just on its own as a standalone file, say by clicking on it in Explorer, Visual Studio will open the template in the editor, but by default you won’t get any IntelliSense on your related assemblies. It’ll give IntelliSense on the base System namespaces, but not on your imported assembly types. This makes sense: Visual Studio has no idea what the assembly associations for the single file are.

      There are two options available to you to make IntelliSense work for templates:

      • Add the templates as included files to your non-Web project
      • Add a BIN folder to your template’s folder and add all assemblies required there

      Including Templates in your Host Project

      By including templates into your Razor hosting project, Visual Studio will pick up the project’s assembly references and make IntelliSense available for any of the custom types in your project and on your templates. To see this work I moved the \Templates folder from the samples from the Debug\Bin folder into the project root and included the templates in the WinForm sample project. Here’s what this looks like in Visual Studio after the templates have been included:

       [Screenshot: Visual Studio IntelliSense in an included Razor template]

      Notice that I take my original example and type cast the Context object to the specific type that it actually represents – namely CustomContext – by using a simple code block:

      @{
          CustomContext Model = Context as CustomContext;       
      }
      

      After that assignment my Model local variable is in scope and IntelliSense works as expected. Note that you also will need to add any namespaces with the using command in this case:

      @using RazorHostingWinForm

      which has to be defined at the very top of a Razor document.

      BTW, while you can only pass in a single Context ‘parameter’ to the template with the default template I’ve provided, realize that you can also assign a complex object to Context. For example, you could have a container object that references a variety of other objects which you can then cast to the appropriate types as needed:

      @{
          ContextContainer container = Context as ContextContainer;
          CustomContext Model = container.Model;
          CustomDAO DAO = container.DAO;    
      }

      and so forth.

      IntelliSense for your Custom Template

      Notice also that you can get IntelliSense for the top level template by specifying an inherits tag at the top of the document:

      @inherits RazorHosting.RazorTemplateFolderHost
      

      By specifying the above you can then get IntelliSense on your base template’s properties. For example, in my base template there are Request and Response objects. This is very useful especially if you end up creating custom templates that include your custom business objects, as you can effectively see full IntelliSense from the ‘page’ level down. For Html Help Builder, for example, I’d have a Help object on the page, and assuming I have the references available I can see all the way into that Help object without even having to do anything fancy.
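      Putting the directives together, a complete template header might look like this (CustomContext comes from the earlier example; the HTML body is just filler):

```html
@inherits RazorHosting.RazorTemplateFolderHost
@using RazorHostingWinForm
@{
    CustomContext Model = Context as CustomContext;
}
<html>
<body>
    <!-- Model is strongly typed here, so any CustomContext member gets IntelliSense -->
    Rendered at @DateTime.Now
</body>
</html>
```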

      Note that the @inherits keyword is a GREAT and easy way to override the base template you normally specify as the default template. It allows you to create a custom template, and as long as it inherits from the base template it’ll work properly. Since the last post I’ve also made some changes to the base template that allow hooking up some simple initialization logic, so it gets much easier to create custom templates and hook up custom objects with an InitializeTemplate() hook function that gets called with the Context and a Configuration object. These objects are objects you can pass in at runtime from your host application and then assign to custom properties on your template.

      For example the default implementation for RazorTemplateFolderHost does this:

      public override void InitializeTemplate(object context, object configurationData)
      {
          // Pick up configuration data and stuff into Request object
          RazorFolderHostTemplateConfiguration config = configurationData as RazorFolderHostTemplateConfiguration;
      
          this.Request.TemplatePath = config.TemplatePath;
          this.Request.TemplateRelativePath = config.TemplateRelativePath;
      
          // Just use the entire ConfigData as the model, but in theory 
          // configData could contain many objects or values to set on
          // template properties
          this.Model = config.ConfigData as TModel;
      }

      to set up a strongly typed Model and the Request object. You can do much more complex hookups here of course and create complex base template pages that contain all the objects that you need in your code with strong typing.

      Adding a Bin folder to your Template’s Root Path

      Including templates in your host project works if you own the project and you’re the only one modifying the templates. However, if you are distributing the Razor engine as a templating/scripting solution as part of your application or development tool the original project is likely not available and so that approach is not practical.

      Another option you have is to add a Bin folder and add all the related assemblies into it. You can also add a Web.Config file with assembly references for any GAC’d assembly references that need to be associated with the templates. Between the web.config and bin folder Visual Studio can figure out how to provide IntelliSense.

      The Bin folder should contain:

      • The RazorHosting.dll
      • Your host project’s EXE or DLL – renamed to .dll if it’s an .exe
      • Any external (bin folder) dependent assemblies

      Note that you most likely also want a reference to the host project if it contains references that are going to be used in templates. Visual Studio doesn’t recognize an EXE reference, so you have to rename the EXE to DLL to make it work. Apparently the binary signature of EXE and DLL files is identical and it just works – learn something new every day…

      For GAC assembly references you can add a web.config file to your template root. The Web.config file then should contain any full assembly references to GAC components:

      <configuration>
        <system.web>
          <compilation debug="true">
            <assemblies>
              <add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
              <add assembly="System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
              <add assembly="System.Web.Extensions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
            </assemblies>
          </compilation>
        </system.web>
      </configuration>
      

      And with that you should get full IntelliSense.

      Note that if you add a BIN folder and you also have the templates in your Visual Studio project Visual Studio will complain about reference conflicts as it’s effectively seeing both the project references and the ones in the bin folder. So it’s probably a good idea to use one or the other but not both at the same time :-)

Seeing IntelliSense in your Razor templates is a big help for users of your templates. If you’re shipping an application-level scripting solution, it’s especially useful for your template consumers to get some quick help on creating customized templates – after all, easy customization is what templates are all about. Making sure that everything is referenced in your Bin folder and web.config is a good idea, and it’s great to see that Visual Studio (and presumably WebMatrix/Visual Web Developer as well) can pick up your custom IntelliSense in Razor templates.

      © Rick Strahl, West Wind Technologies, 2005-2011
      Posted in Razor  

CLR Version issues with CorBindToRuntimeEx

I’m working on an older FoxPro application that’s using .NET Interop and this app loads its own copy of the .NET runtime through some of our own tools (wwDotNetBridge). This all works fine and it’s fairly straightforward to load and host the runtime and then make calls against it. I’m writing this up mostly for myself, because I’ve been bitten by these issues repeatedly and spend 15 minutes relearning them each time they come up.

      However, things get tricky when calling specific versions of the .NET runtime since .NET 4.0 has shipped. Basically we need to be able to support both .NET 2.0 and 4.0 and we’re currently doing it with the same assembly – a .NET 2.0 assembly that is the AppDomain entry point. This works as .NET 4.0 can easily host .NET 2.0 assemblies and the functionality in the 2.0 assembly provides all the features we need to call .NET 4.0 assemblies via Reflection.

      In wwDotnetBridge we provide a load flag that allows specification of the runtime version to use. Something like this:

      do wwDotNetBridge
      LOCAL loBridge as wwDotNetBridge
      loBridge = CreateObject("wwDotNetBridge","v4.0.30319")

and this works just fine in most cases. If I specify V4, internally that gets fixed up to a whole version number like “v4.0.30319”, which is then actually used to host the .NET runtime. Specifically, the ClrVersion setting is handled in this Win32 DLL code that loads the runtime for me:

      /// Starts up the CLR and creates a Default AppDomain
      DWORD WINAPI ClrLoad(char *ErrorMessage, DWORD *dwErrorSize)
      {
          if (spDefAppDomain)
              return 1;
      
          
          //Retrieve a pointer to the ICorRuntimeHost interface
          HRESULT hr = CorBindToRuntimeEx(
                          ClrVersion,    //Retrieve latest version by default
                          L"wks",    //Request a WorkStation build of the CLR
                          STARTUP_LOADER_OPTIMIZATION_MULTI_DOMAIN | STARTUP_CONCURRENT_GC, 
                          CLSID_CorRuntimeHost,
                          IID_ICorRuntimeHost,
                          (void**)&spRuntimeHost
                          );
      
          if (FAILED(hr)) 
          {
              *dwErrorSize = SetError(hr,ErrorMessage);    
              return hr;
          }
      
          //Start the CLR
          hr = spRuntimeHost->Start();
      
          if (FAILED(hr))
              return hr;
      
          CComPtr<IUnknown> pUnk;
      
          WCHAR domainId[50];
          swprintf(domainId,L"%s_%i",L"wwDotNetBridge",GetTickCount());
          hr = spRuntimeHost->CreateDomain(domainId,NULL,&pUnk);
      
          hr = pUnk->QueryInterface(&spDefAppDomain.p);
          if (FAILED(hr)) 
              return hr;
          
          return 1;
      }

      CorBindToRuntimeEx allows for a specific .NET version string to be supplied which is what I’m doing via an API call from the FoxPro code.
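The shorthand-to-full-version fix-up mentioned above is simple string mapping. As a rough sketch in JavaScript – the helper name and the mapping table are my own illustration, not wwDotnetBridge’s actual code:

```javascript
// Map shorthand runtime monikers like "V4" to the full version
// strings that CorBindToRuntimeEx expects. Illustrative only -
// the real wwDotnetBridge implementation may differ.
function normalizeClrVersion(version) {
    var map = {
        "V2": "v2.0.50727",
        "V4": "v4.0.30319"
    };
    var key = version.toUpperCase();
    return map[key] || version; // full version strings pass through unchanged
}
```

A caller can then hand over either form ("V4" or "v4.0.30319") and always end up with a full version string.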

The behavior of CorBindToRuntimeEx is a bit finicky, however. The documentation states that NULL should load the latest version of the .NET runtime available on the machine – but it actually doesn’t. As far as I can see – regardless of runtime overrides, even in the .config file – NULL will always load .NET 2.0, even if 4.0 is installed.

      <supportedRuntime> .config File Settings

      Things get even more unpredictable once you start adding runtime overrides into the application’s .config file. In my scenario working inside of Visual FoxPro this would be VFP9.exe.config in the FoxPro installation folder (not the current folder). If I have a specific runtime override in the .config file like this:

      <?xml version="1.0"?>
      <configuration>
        <startup>
          <supportedRuntime version="v2.0.50727" />
        </startup>
      </configuration>

Not surprisingly, with this I can load the .NET 2.0 runtime, but I will not be able to load version 4.0 of the .NET runtime even if I explicitly specify it in my call to ClrLoad. Worse, I don’t get an error – it just goes ahead and hands me a V2 version of the runtime and assumes that’s what I wanted. Yuck!

      However, if I set the supported runtime to V4 in the .config file:
      <?xml version="1.0"?>
      <configuration>
        <startup>
          <supportedRuntime version="v4.0.30319" />
        </startup>
      </configuration>
      

Then I can load both V4 and V2 of the runtime. Specifying NULL, however, will STILL only give me V2 of the runtime. Again, this seems pretty inconsistent.
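Pulling these observations together, the version that actually loads can be summarized as a lookup. This is a sketch of the behavior observed on my machine only, not a documented contract, and the function and parameter names are mine:

```javascript
// Observed CLR version actually loaded for a given <supportedRuntime>
// config setting and a requested version.
//   configVersion:    "v2", "v4", or null (no .config override)
//   requestedVersion: "v2", "v4", or null (models passing NULL to CorBindToRuntimeEx)
function loadedClrVersion(configVersion, requestedVersion) {
    if (requestedVersion === null)
        return "v2";             // NULL always yields .NET 2.0, regardless of config
    if (configVersion === "v2")
        return "v2";             // a v2 override silently downgrades even a v4 request
    return requestedVersion;     // no override, or a v4 override: the request is honored
}
```

The surprising cells in this matrix are the silent downgrades – there is no error when the request can’t be satisfied.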

If you’re hosting runtimes, make sure you check which version of the runtime actually loaded, to ensure you get the one you’re looking for. If the wrong version loads – say 2.0 when you want 4.0 – and you then proceed to load 4.0 assemblies, they will all fail to load due to version mismatches. That’s how all of this started: I had a bunch of assemblies that weren’t loading, and it took a while to figure out that the host was running the wrong version of the CLR, which caused the assembly loads to fail. Arrggh!

      <supportedRuntime> and Debugger Version

      <supportedRuntime> also affects the use of the .NET debugger when attached to the target application. Whichever runtime is specified in the key is the version of the debugger that fires up. This can have some interesting side effects. If you load a .NET 2.0 assembly but <supportedRuntime> points at V4.0 (or vice versa) the debugger will never fire because it can only debug in the appropriate runtime version. This has bitten me on several occasions where code runs just fine but the debugger will just breeze by breakpoints without notice.

      The default version for the debugger is the latest version installed on the system if <supportedRuntime> is not set.

      Summary

Despite all the hassles, I’m thankful I can build a .NET 2.0 assembly and have it host .NET 4.0 and call .NET 4.0 code. This way we’re able to ship a single assembly that supports both .NET 2 and 4, without separate DLLs for each, which would be a deployment and update nightmare.

The MSDN documentation does point at newer hosting APIs specifically for .NET 4.0, but they are considerably more complex and even less documented, and they don’t help here because the host needs to support both .NET 4.0 and 2.0. They’re also not available on machines with only older versions of the runtime installed, which makes them useless in this scenario where I have to support .NET 2.0 hosting (to provide greater ‘built-in’ platform support).

      Once you know the behavior above, it’s manageable. However, it’s quite easy to get tripped up here because there are multiple combinations that can really screw up behaviors.

      © Rick Strahl, West Wind Technologies, 2005-2011
      Posted in .NET  FoxPro  

      Retrieve the full ASP.NET Form Buffer as a String

Did it again today: for logging purposes I needed to capture the full Request.Form data as a string, and while it’s pretty easy to retrieve the buffer, it always takes me a few minutes to remember how to do it. So I finally wrote a small helper function, since this comes up rather frequently in debugging scenarios or in the Immediate window.

      Here’s the quick function to get the form buffer as string:

      /// <summary>
      /// Returns the content of the POST buffer as string
      /// </summary>
      /// <returns></returns>
      public static string FormBufferToString()
      {
          HttpRequest Request = HttpContext.Current.Request;            
      
    if (Request.TotalBytes > 0)
        // Note: Encoding.Default is the system ANSI code page -
        // use Request.ContentEncoding instead if clients post UTF-8
        return Encoding.Default.GetString(Request.BinaryRead(Request.TotalBytes));
      
          return string.Empty;
      }

Clearly a simple task, but handy to have in your library for reuse. You probably don’t want to call this if you have a massive inbound form buffer, or if the data you’re retrieving is binary. It’s a good idea to check the inbound content type before calling this function, with something like this:

var formBuffer = string.Empty;

if (Request.ContentType.StartsWith("text/") || 
    Request.ContentType == "application/x-www-form-urlencoded")
{
    formBuffer = FormBufferToString();
}

      to ensure you’re working only on content types you can actually view as text.
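The same content-type gate is easy to express in any language. As a standalone JavaScript sketch – the function name is mine:

```javascript
// Returns true for content types that are safe to treat as text,
// mirroring the check above: any text/* type plus URL-encoded forms.
function isTextContentType(contentType) {
    if (!contentType) return false;
    var ct = contentType.toLowerCase();
    return ct.indexOf("text/") === 0 ||
           ct === "application/x-www-form-urlencoded";
}
```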

      Now if I can only remember the name of this function in my library – it’s part of the static WebUtils class in the West Wind Web Toolkit if you want to check out a number of other useful Web helper functions.

      © Rick Strahl, West Wind Technologies, 2005-2011
      Posted in ASP.NET  

      Changing an HTML Form's Target with jQuery

      This is a question that comes up quite frequently: I have a form with several submit or link buttons and one or more of the buttons needs to open a new Window. How do I get several buttons to all post to the right window?

      If you're building ASP.NET forms you probably know that by default the Web Forms engine sends button clicks back to the server as a POST operation. A server form has a <form> tag which expands to this:

      <form method="post" action="default.aspx" id="form1">

Now you CAN change the target of the form and point it to a different window or frame, but the problem is that the change affects ALL submissions of the current form. If you have multiple buttons/links that need to go to different target windows/frames, you can’t do it easily through the <form runat="server"> tag.

Although this discussion uses ASP.NET WebForms as an example, this is really a general HTML problem, though likely more common in WebForms due to the single-form metaphor it uses. In ASP.NET MVC, for example, you’d have more options: you could break each button out into a separate form with its own distinct target attribute. However, even then it’s not always possible to break up forms – for example, when multiple targets are required but all targets need the same form data to be posted.

A common scenario here is that you might have a button (or link) that you click where you still want some server code to fire, but at the end of the request you actually want to display the content in a new window. Report generation is a common example: you click a button, the server generates a report – say in PDF format – and you then want to display the PDF result in a new window without replacing the content in the current window. Assuming you have other buttons on the same page that need to post to the base window, how do you get the button click to go to a new window?

Can't you just use a LinkButton or other Link Control?

At first glance you might think an easy way to do this is to use an ASP.NET LinkButton – after all, a LinkButton creates a hyperlink that CAN accept a target and it also posts back to the server, right? There’s no Target property, but you can set the target HTML attribute easily enough.

      Code like this looks reasonable:

           <asp:LinkButton runat="server" ID="btnNewTarget" Text="New Target" 
                           target="_blank"  OnClick="bnNewTarget_Click" />
      

      But if you try this you'll find that it doesn't work. Why? Because ASP.NET creates postbacks with JavaScript code that operates on the current window/frame:

      <a id="btnNewTarget" target="_blank" 
      href="javascript:__doPostBack(&#39;btnNewTarget&#39;,&#39;&#39;)">New Target</a>

      What happens with a target tag is that before the JavaScript actually executes a new window is opened and the focus shifts to the new window. The new window of course is empty and has no __doPostBack() function nor access to the old document. So when you click the link a new window opens but the window remains blank without content - no server postback actually occurs.

Scratch that idea.

      Setting the Form Target for a Button Control or LinkButton

So, in order to send postback link controls and buttons to another window/frame, both require that the target of the form be changed dynamically when the button or link is clicked. Luckily this is rather easy to do with a little bit of script code and jQuery.

      Imagine you have two buttons like this that should go to another window:

              <asp:LinkButton runat="server" ID="btnNewTarget" Text="New Target" 
                              OnClick="ClickHandler" />
      
              <asp:Button runat="server" ID="btnButtonNewTarget" Text="New Target Button" 
                          OnClick="ClickHandler" />
      

ClickHandler in this case is any routine that generates the output you want to display in the new window. Generally this output will not come from the current page markup but is generated externally – like a PDF report or some report generated by another application component or tool. The output will typically either be generated by hand or written to disk and then served with Response.Redirect() or Response.TransmitFile() etc.

      Here's the dummy handler that just generates some HTML by hand and displays it:

      protected void ClickHandler(object sender, EventArgs e)
      {
          // Perform some operation that generates HTML or Redirects somewhere else
          Response.Write("Some custom output would be generated here (PDF, non-Page HTML etc.)");
          
          // Make sure this response doesn't display the page content
          // Call Response.End() or Response.Redirect()
          Response.End();
      }

      To route this oh so sophisticated output to an alternate window for both the LinkButton and Button Controls, you can use the following simple script code:

          <script type="text/javascript">
              $("#btnButtonNewTarget,#btnNewTarget").click(function () {
                  $("form").attr("target", "_blank");
              });
          </script>
      

So why does this work where the target attribute did not? The difference is that the script fires BEFORE the target is changed to the new window. When you put a target attribute on a link or form, the target is changed as the very first thing, before the link actually executes. IOW, the link literally executes in the new window when it’s done that way.

By attaching a click handler, though, we’re not navigating yet, so all the operations the script code performs (i.e. __doPostBack()) and the collection of form variables to post to the server all occur in the current page. By changing the target from within script code, the target change fires as part of the form submission process, which means it runs in the correct context of the current page. IOW – the input for the POST comes from the current page, but the output is routed to a new window/frame. Just what we want in this scenario.

      Voila you can dynamically route output to the appropriate window.

      © Rick Strahl, West Wind Technologies, 2005-2011
      Posted in ASP.NET  HTML  jQuery  

      HttpWebRequest and Ignoring SSL Certificate Errors

Man, I can't believe this. I'm still mucking around with OFX servers and it drives me absolutely crazy how some of these servers are just so unbelievably misconfigured. I've recently hit three different major brokerages which fail HTTPS certificate validation with bad or corrupt certificates, at least according to the .NET WebRequest class. What's somewhat odd here is that WinInet seems to find no issue with these servers – it's only .NET's HTTP client that's ultra finicky.

      So the question then becomes how do you tell HttpWebRequest to ignore certificate errors? In WinInet there used to be a host of flags to do this, but it's not quite so easy with WebRequest.

Basically you need to hook up a custom certificate validation callback on the ServicePointManager that approves the certificate. Not exactly obvious. Here's the code to hook it up:

public bool CreateWebRequestObject(string Url) 
{
    try 
    {
        this.WebRequest = (HttpWebRequest) System.Net.WebRequest.Create(Url);

        // accept all certificates - note: this setting is application global!
        if (this.IgnoreCertificateErrors)
            ServicePointManager.ServerCertificateValidationCallback =
                delegate { return true; };

        return true;
    }
    catch 
    {
        return false;
    }
}

One thing to watch out for is that this is an application-global setting. There's one global ServicePointManager, and once you set this value any subsequent requests will inherit the policy as well, which may or may not be what you want. So it's probably a good idea to set the policy when the app starts and leave it alone – otherwise you may run into odd behavior in some situations, especially multi-threaded ones.

Another way to deal with this is in your application's .config file:

<configuration>
  <system.net>
    <settings>
      <servicePointManager
          checkCertificateName="false"
          checkCertificateRevocationList="false"         
      />
    </settings>
  </system.net>
</configuration>


This seems to work most of the time, although I've seen some situations where it doesn't but the code-based approach does, which is frustrating. The .config settings also aren't as inclusive as the programmatic code, which can ignore any and all cert errors – shrug.

Anyway, the code approach got me past the stopper issue. It still amazes me that these OFX servers even require this. After all, this is financial data we're talking about here. The last thing I want to do is disable extra checks on the certificates. Well, I guess I shouldn't be surprised – these are the same companies that apparently don't believe in XML enough to generate valid XML (or even valid SGML for that matter)...

      © Rick Strahl, West Wind Technologies, 2005-2011
      Posted in .NET  CSharp  HTTP  

      jQuery CSS Property Monitoring Plug-in updated

A few weeks back I talked about the need to watch properties of an object and be able to take action when certain values change. The need for this arose out of wanting to build generic components that can 'attach' themselves to other objects. One example is a drop shadow: if I add a shadow behavior to an object, I want the shadow pinned to that object, so when the object moves the shadow moves with it, and when the panel is hidden the shadow hides with it – automatically, without having to explicitly hook up monitoring code to the panel.

      For example, in my shadow plug-in I can now do something like this (where el is the element that has the shadow attached and sh is the shadow):

      if (!exists) // if shadow was created 
          el.watch("left,top,width,height,display",
                   function() {                         
                       if (el.is(":visible"))
                           $(this).shadow(opt);   // redraw
                       else
                           sh.hide();
                   },
                   100, "_shadowMove");

      The code now monitors several properties and if any of them change the provided function is called. So when the target object is moved or hidden or resized the watcher function is called and the shadow can be redrawn or hidden in the case of visibility going away. So if you run any of the following code:

      $("#box")
          .shadow()
          .draggable({ handle: ".blockheader" });
          
          
      // drag around the box - shadow should follow
      
      // hide the box - shadow should disappear with box
      setTimeout(function() { $("#box").hide(); }, 4000);
      
      // show the box - shadow should come back too
      setTimeout(function() { $("#box").show(); }, 8000);

This can be very handy functionality when you're dealing with objects or operations that you need to track generically and there are no native events for them. For example, with a generic shadow object that attaches itself to any other element, there's no way I know of to track whether the object has been moved or hidden, whether via some UI operation (like dragging) or via code. While some UI operations like jQuery.ui.draggable allow events to fire when the mouse is moved, nothing of the sort exists if you modify locations in code. And even tracking the object in drag mode is hardly generic behavior – a generic shadow implementation can't know when dragging is hooked up.

      So the watcher provides an alternative that basically gives an Observer like pattern that notifies you when something you're interested in changes.
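Stripped of the DOM and jQuery specifics, the pattern boils down to caching the watched values and comparing them on every poll tick. A minimal framework-free sketch of that idea (the names are mine, not the plug-in's; the real implementation below integrates with jQuery and DOM change events):

```javascript
// Minimal observer: caches property values and reports which one
// changed when poll() is called (e.g. from a setInterval timer).
function createWatcher(target, props, onChange) {
    var cached = props.map(function (p) { return target[p]; });
    return {
        poll: function () {
            for (var i = 0; i < props.length; i++) {
                if (target[props[i]] !== cached[i]) {
                    cached[i] = target[props[i]];
                    onChange(props[i], cached[i]);
                    return true;   // report only the first change per tick
                }
            }
            return false;          // nothing changed
        }
    };
}
```

Hooking poll() to setInterval() gives the fallback behavior; the DOM change events simply replace the timer as the trigger.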

In the watcher hookup code (in the shadow() plugin) above, a check is made whether the object is visible; if it is, the shadow is redrawn, otherwise the shadow is hidden. The first parameter is the list of CSS properties to be monitored, followed by the function that is called when any of them changes. That function receives this as the changed element and two parameters: the array of watched properties with their current values, plus the index of the property that caused the change function to fire.

      How does it work

When I wrote about this last time I started out with a simple timer that would poll for changes at a fixed interval with setInterval(). A few folks commented that there are DOM APIs – DOMAttrModified in Mozilla and propertychange in IE – that fire whenever a property changes, which is much more efficient and smooth than the setInterval approach I used previously. On browsers that support these events (FireFox and IE, basically – WebKit has the DOMAttrModified event but it doesn't appear to work) the shadow effect is instant – no 'drag behind' of the shadow. A browser that doesn't support them still uses setInterval(), and the shadow movement is slightly delayed, which looks sloppy.

There are a few additional changes to this code – it now supports monitoring multiple CSS properties, so a single watcher can track a host of CSS properties rather than requiring one watcher per property, which is easier to work with. For display purposes, position, bounds and visibility will be the most commonly watched properties.

      Here's what the new version looks like:

      $.fn.watch = function (props, func, interval, id) {
              /// <summary>
              /// Allows you to monitor changes in a specific
              /// CSS property of an element by polling the value.
              /// when the value changes a function is called.
              /// The function called is called in the context
              /// of the selected element (ie. this)
              /// </summary>    
              /// <param name="prop" type="String">CSS Properties to watch sep. by commas</param>    
              /// <param name="func" type="Function">
              /// Function called when the value has changed.
              /// </param>    
              /// <param name="interval" type="Number">
              /// Optional interval for browsers that don't support DOMAttrModified or propertychange events.
              /// Determines the interval used for setInterval calls.
              /// </param>
              /// <param name="id" type="String">A unique ID that identifies this watch instance on this element</param>  
              /// <returns type="jQuery" /> 
              if (!interval)
                  interval = 200;
              if (!id)
                  id = "_watcher";
      
              return this.each(function () {
                  var _t = this;
                  var el$ = $(this);
                  var fnc = function () { __watcher.call(_t, id) };
                  var itId = null;
      
                  var data = { id: id,
                      props: props.split(","),
                      func: func,
                      vals: [props.split(",").length],
                      fnc: fnc,
                      origProps: props,
                      interval: interval
                  };
                  $.each(data.props, function (i) { data.vals[i] = el$.css(data.props[i]); });
                  el$.data(id, data);
      
                  hookChange(el$, id, data.fnc);
      
              });
      
              function hookChange(el$, id, fnc) {
                  el$.each(function () {
                      var el = $(this);
                      if (typeof (el.get(0).onpropertychange) == "object")
                          el.bind("propertychange." + id, fnc);
                      else if ($.browser.mozilla)
                          el.bind("DOMAttrModified." + id, fnc);
                else
                    // store the timer id on the data object so unwatch can clear it
                    el.data(id).itId = setInterval(fnc, interval);
                  });
              }
              function __watcher(id) {
                  var el$ = $(this);
                  var w = el$.data(id);
                  if (!w) return;
                  var _t = this;
      
                  if (!w.func)
                      return;
      
                  // must unbind or else unwanted recursion may occur
                  el$.unwatch(id);
      
                  var changed = false;
                  var i = 0;
                  for (i; i < w.props.length; i++) {
                      var newVal = el$.css(w.props[i]);
                      if (w.vals[i] != newVal) {
                          w.vals[i] = newVal;
                          changed = true;
                          break;
                      }
                  }
                  if (changed)
                      w.func.call(_t, w, i);
      
                  // rebind event
                  hookChange(el$, id, w.fnc);
              }
          }
          $.fn.unwatch = function (id) {
              this.each(function () {
                  var el = $(this);
                  var fnc = el.data(id).fnc;
                  try {
                      if (typeof (this.onpropertychange) == "object")
                          el.unbind("propertychange." + id, fnc);
                      else if ($.browser.mozilla)
                          el.unbind("DOMAttrModified." + id, fnc);
                else
                    clearInterval(el.data(id).itId);
                  }
                  // ignore if element was already unbound
                  catch (e) { }
              });
              return this;
          }

      There are basically two jQuery functions - watch and unwatch.

      jQuery.fn.watch(props,func,interval,id)

      Starts watching an element for changes in the properties specified.

      props
      The CSS properties that are to be watched for changes. If any of the specified properties changes the function specified in the second parameter is fired.

      func (watchData,index)
The function fired in response to a changed property. Receives this as the changed element plus an object that represents the watched properties and their respective values. The first parameter is passed in this structure:

   { id: id, props: [], func: func, vals: [], fnc: fnc, origProps: props, interval: interval }

The second parameter is the index of the changed property, so data.props[i] and data.vals[i] give the property name and value that changed.

      interval
      The interval for setInterval() for those browsers that don't support property watching in the DOM. In milliseconds.

      id
      An optional id that identifies this watcher. Required only if multiple watchers might be hooked up to the same element. The default is _watcher if not specified.
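To illustrate the callback signature documented above: pulling the changed property out of the structure passed to func might look like this (the data values here are made up for the example):

```javascript
// The watcher calls func(data, i); the changed property name and its
// new value live at index i of the parallel props/vals arrays.
var data = { id: "_watcher", props: ["left", "top"], vals: ["100px", "50px"] };
var i = 1; // index passed as the callback's second parameter
var changed = data.props[i] + " -> " + data.vals[i];
```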

      jQuery.fn.unwatch(id)

      Unhooks watching of the element by disconnecting the event handlers.

      id
      Optional watcher id that was specified in the call to watch. This value can be omitted to use the default value of _watcher.

      You can also grab the latest version of the  code for this plug-in as well as the shadow in the full library at:
      http://www.west-wind.com:8080/svn/jquery/trunk/jQueryControls/Resources/ww.jquery.js

      watcher has no other dependencies although it lives in this larger library. The shadow plug-in depends on watcher.

      © Rick Strahl, West Wind Technologies, 2005-2011

      Monitoring Html Element CSS Changes in JavaScript

      [ updated Feb 15, 2011: Added event unbinding to avoid unintended recursion ]

Here's a scenario I've run into on a few occasions: I need to be able to monitor certain CSS properties on an HTML element and know when they change. For example, I have some HTML element behavior plugins, like a drop shadow, that attach to any HTML element, but I then need to automatically keep the shadow in sync when the element is dragged around the window or moved via code.

Unfortunately there's no move event for HTML elements, so you can't tell when an element's location changes. I've been looking around for some way to track an element and a specific CSS property, but no luck. I suspect there's nothing native to do this, so the only approach I could think of is to use a timer and poll for the property rather frequently.

      I ended up with a generic jQuery plugin that looks like this:

      (function($){
      $.fn.watch = function (props, func, interval, id) {
          /// <summary>
          /// Allows you to monitor changes in a specific
          /// CSS property of an element by polling the value.
          /// when the value changes a function is called.
          /// The function called is called in the context
          /// of the selected element (ie. this)
          /// </summary>    
          /// <param name="prop" type="String">CSS Properties to watch sep. by commas</param>    
          /// <param name="func" type="Function">
          /// Function called when the value has changed.
          /// </param>    
          /// <param name="interval" type="Number">
          /// Optional interval for browsers that don't support DOMAttrModified or propertychange events.
          /// Determines the interval used for setInterval calls.
          /// </param>
          /// <param name="id" type="String">A unique ID that identifies this watch instance on this element</param>  
          /// <returns type="jQuery" /> 
          if (!interval)
              interval = 200;
          if (!id)
              id = "_watcher";
      
          return this.each(function () {
              var _t = this;
              var el$ = $(this);
              var fnc = function () { __watcher.call(_t, id) };
      
              var data = { id: id,
                  props: props.split(","),
                  func: func,
                  vals: [props.split(",").length],
                  fnc: fnc,
                  origProps: props,
                  interval: interval,
                  intervalId: null
              };
              // capture the initial values of the watched properties
              $.each(data.props, function (i) { data.vals[i] = el$.css(data.props[i]); });
              el$.data(id, data);
      
              hookChange(el$, id, data);
      
          });
      
          function hookChange(el$, id, data) {
              el$.each(function () {
                  var el = $(this);
                  if (typeof (el.get(0).onpropertychange) == "object")
                      el.bind("propertychange." + id, data.fnc);
                  else if ($.browser.mozilla)
                      el.bind("DOMAttrModified." + id, data.fnc);
                  else
                      // no DOM change event available - fall back to polling
                      data.intervalId = setInterval(data.fnc, interval);
              });
          }
          function __watcher(id) {
              var el$ = $(this);
              var w = el$.data(id);
              if (!w) return;
              var _t = this;
      
              if (!w.func)
                  return;
      
              // must unbind or else unwanted recursion may occur
              el$.unwatch(id);
      
              var changed = false;
              var i = 0;
              for (i; i < w.props.length; i++) {
                  var newVal = el$.css(w.props[i]);
                  if (w.vals[i] != newVal) {
                      w.vals[i] = newVal;
                      changed = true;
                      break;
                  }
              }
              if (changed)
                  w.func.call(_t, w, i);
      
              // rebind event
              hookChange(el$, id, w);
          }
      }
      $.fn.unwatch = function (id) {
          this.each(function () {
              var el = $(this);
              var data = el.data(id);
              try {
                  if (typeof (this.onpropertychange) == "object")
                      el.unbind("propertychange." + id, data.fnc);
                  else if ($.browser.mozilla)
                      el.unbind("DOMAttrModified." + id, data.fnc);
                  else
                      clearInterval(data.intervalId);
              }
              // ignore if element was already unbound
              catch (e) { }
          });
          return this;
      }
      })(jQuery);

      With this I can now monitor movement by monitoring say the top CSS property of the element. The following code creates a box and uses the draggable (jquery.ui) plugin and a couple of custom plugins that center and create a shadow. Here's how I can set this up with the watcher:

      $("#box")
              .draggable()
              .centerInClient()
              .shadow()
              .watch("top", function() {
                          $(this).shadow();
                     },70,"_shadow");
                      
      ... 
      $("#box")
      .unwatch("_shadow")
      .shadow("remove");

      This code basically sets up the window to be draggable and initially centered, and then a shadow is added. The .watch() call then assigns a CSS property to monitor (top in this case) and a function to call in response. The component sets up a setInterval call and keeps pinging this property; when the top value changes the supplied function is called.

      While this works – I can now drag my window around with the shadow following suit – it's not perfect by a long shot. The shadow move is delayed and so drags behind the window, but polling more frequently isn't appropriate either, as the UI starts getting jumpy if the interval is set too small.

      This sort of monitor can be useful for other things as well, where operations are not quite as time critical as a UI operation taking place.
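      To make the poll-and-compare idea above concrete outside of jQuery, here's a minimal framework-free sketch. The names (createWatcher, the getter functions, the plain style object standing in for an element's CSS values) are illustrative, not part of the plugin; in real code you'd drive poll() from setInterval.

      ```javascript
      // Capture current values, then on each poll compare and invoke a
      // callback with the index of the first changed property.
      function createWatcher(getters, onChange) {
          var vals = getters.map(function (get) { return get(); });
          return function poll() {
              for (var i = 0; i < getters.length; i++) {
                  var newVal = getters[i]();
                  if (vals[i] !== newVal) {
                      vals[i] = newVal;   // store so we only fire once per change
                      onChange(i, newVal);
                      return true;        // something changed
                  }
              }
              return false;
          };
      }

      // Usage: poll any value source - here a plain object standing in
      // for an element's CSS properties.
      var style = { top: "10px", left: "0px" };
      var changes = [];
      var poll = createWatcher(
          [function () { return style.top; }, function () { return style.left; }],
          function (i, val) { changes.push(i + ":" + val); });

      poll();               // nothing changed yet
      style.top = "50px";
      poll();               // detects the change to 'top'
      // changes is now ["0:50px"]
      ```

      The stored-value update before the callback is what keeps a single change from firing repeatedly on every subsequent poll.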

      Can anybody see a better way of capturing movement of an element on the page?

      © Rick Strahl, West Wind Technologies, 2005-2011
      Posted in ASP.NET  JavaScript  jQuery  


      A jQuery Plug-in to monitor Html Element CSS Changes


      Here's a scenario I've run into on a few occasions: I need to be able to monitor certain CSS properties on an HTML element and know when one of those properties changes. The need for this arose out of wanting to build generic components that can 'attach' themselves to other objects and monitor changes on the ‘parent’ object so the dependent object can adjust itself accordingly.

      What I wanted to create is a jQuery plug-in that allows me to specify a list of CSS properties to monitor and have a function fire in response to any change to any of those properties. The result is the pair of .watch() and .unwatch() jQuery plug-ins. Here’s a simple example page that demonstrates tracking changes to an element being moved with draggable and closable behavior:

      http://www.west-wind.com/WestWindWebToolkit/samples/Ajax/jQueryPluginSamples/WatcherPlugin.htm

      Try it with different browsers – IE and FireFox use DOM event handlers, while Chrome, Safari and Opera use setInterval handlers to manage this behavior. It should work in all of them, but the browsers other than IE and FireFox will show a bit of lag between the changes in the main element and the shadow.

      The relevant HTML for this example is this fragment of a main <div> (#notebox) and an element that is to mimic a shadow (#shadow).

      <div class="containercontent">
          <div id="notebox" 
               style="width: 200px; height: 150px; position: absolute; z-index: 20; 
                      padding: 20px; background-color: lightsteelblue;">
              Go ahead drag me around and close me!
          </div>
          <div id="shadow" 
               style="background-color: Gray; z-index: 19; position: absolute; display: none;">
          </div>
      </div>

      The watcher plug-in is then applied to the main <div> to keep the shadow in sync with the main element. The following code demonstrates:

      <script type="text/javascript">
          $(document).ready(function () {
              var counter = 0;
              $("#notebox").watch("top,left,height,width,display,opacity", function (data, i) {
                  var el = $(this);
                  var sh = $("#shadow");
      
                  var propChanged = data.props[i];
                  var valChanged = data.vals[i];
      
                  counter++;
                  showStatus("Prop: " + propChanged + "  value: " + valChanged + "  " + counter);
      
                  var pos = el.position();
                  var w = el.outerWidth();
                  var h = el.outerHeight();
      
                  sh.css({
                      width: w,
                      height: h,
                      left: pos.left + 5,
                      top: pos.top + 5,
                      display: el.css("display"),
                      opacity: el.css("opacity")
                  });
              })
              .draggable()
              .closable()
              .css("left", 10);
          });
      </script>

      When you run this page and drag the #notebox element, the #shadow element stays pinned underneath it, effectively keeping the shadow attached to the main element. Likewise, if you hide or fadeOut() the #notebox element the shadow also goes away – show the #notebox element and the shadow re-appears, because we are assigning the display property from the parent to the shadow.

      Note we’re attaching the .watch() plug-in to the #notebox element and having it fire whenever the top, left, height, width, opacity or display CSS properties change. The passed data element contains props[] and vals[] arrays that hold the monitored properties and their current values. An index passed as the second parameter tells you which property has changed and what its current value is (propChanged/valChanged in the code above). The rest of the watcher handler code then deals with figuring out the main element’s position and recalculating and setting the shadow’s position using the jQuery .css() function.

      Note that this is just an example to demonstrate the watch() behavior here – this is not the best way to create a shadow. If you’re interested in a more efficient and cleaner way to handle shadows with a plug-in check out the .shadow() plug-in in ww.jquery.js (code search for fn.shadow) which uses native CSS features when available but falls back to a tracked shadow element on browsers that don’t support it, which is how this watch() plug-in came about in the first place :-)

      How does it work?

      The plug-in works by letting the user specify a list of properties to monitor as a comma delimited string and a handler function:

      el.watch("top,left,height,width,display,opacity", function (data, i) {}, 100, id)

      You can also specify an interval (used if DOM event monitoring isn’t available in the browser) and an ID that identifies the event handler uniquely.

      The watch plug-in works by hooking up to DOMAttrModified in FireFox, to onPropertyChanged in Internet Explorer, or by using a timer with setInterval to handle the detection of changes for other browsers. Unfortunately WebKit doesn’t support DOMAttrModified consistently at the moment so Safari and Chrome currently have to use the slower setInterval mechanism. In response to a changed property (or a setInterval timer hit) a JavaScript handler is fired which then runs through all the properties monitored and determines if and which one has changed. The DOM events fire on all property/style changes so the intermediate plug-in handler filters only those hits we’re interested in. If one of our monitored properties has changed the specified event handler function is called along with a data object and an index that identifies the property that’s changed in the data.props/data.vals arrays.
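      The strategy selection described above can be reduced to a small standalone function for illustration. This is just a sketch of the decision order, not code from the plug-in: element stands in for a raw DOM node, and isMozilla stands in for the $.browser.mozilla check.

      ```javascript
      function pickStrategy(element, isMozilla) {
          if (typeof element.onpropertychange === "object")
              return "propertychange";    // IE exposes this DOM event
          if (isMozilla)
              return "DOMAttrModified";   // Firefox's mutation event
          return "setInterval";           // WebKit/Opera: fall back to polling
      }

      // IE exposes onpropertychange (typeof null is "object"), others don't:
      pickStrategy({ onpropertychange: null }, false);  // "propertychange"
      pickStrategy({}, true);                           // "DOMAttrModified"
      pickStrategy({}, false);                          // "setInterval"
      ```

      The typeof check works because an unbound IE event property is null, and typeof null is "object", while browsers without the property yield "undefined".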

      The jQuery plugin to implement this functionality looks like this:

      (function($){
      $.fn.watch = function (props, func, interval, id) {
          /// <summary>
          /// Allows you to monitor changes in specific
          /// CSS properties of an element by polling the values.
          /// When a value changes a function is called.
          /// The function is called in the context
          /// of the selected element (ie. this)
          /// </summary>    
          /// <param name="props" type="String">CSS Properties to watch sep. by commas</param>    
          /// <param name="func" type="Function">
          /// Function called when the value has changed.
          /// </param>    
          /// <param name="interval" type="Number">
          /// Optional interval for browsers that don't support DOMAttrModified or propertychange events.
          /// Determines the interval used for setInterval calls.
          /// </param>
          /// <param name="id" type="String">A unique ID that identifies this watch instance on this element</param>  
          /// <returns type="jQuery" /> 
          if (!interval)
              interval = 100;
          if (!id)
              id = "_watcher";
      
          return this.each(function () {
              var _t = this;
              var el$ = $(this);
              var fnc = function () { __watcher.call(_t, id) };
      
              var data = { id: id,
                  props: props.split(","),
                  vals: [props.split(",").length],
                  func: func,
                  fnc: fnc,
                  origProps: props,
                  interval: interval,
                  intervalId: null
              };
              // store initial props and values
              $.each(data.props, function (i) { data.vals[i] = el$.css(data.props[i]); });
      
              el$.data(id, data);
      
              hookChange(el$, id, data);
          });
      
          function hookChange(el$, id, data) {
              el$.each(function () {
                  var el = $(this);
                  if (typeof (el.get(0).onpropertychange) == "object")
                      el.bind("propertychange." + id, data.fnc);
                  else if ($.browser.mozilla)
                      el.bind("DOMAttrModified." + id, data.fnc);
                  else
                      data.intervalId = setInterval(data.fnc, interval);
              });
          }
          function __watcher(id) {
              var el$ = $(this);
              var w = el$.data(id);
              if (!w) return;
              var _t = this;
      
              if (!w.func)
                  return;
      
              // must unbind or else unwanted recursion may occur
              el$.unwatch(id);
      
              var changed = false;
              var i = 0;
              for (i; i < w.props.length; i++) {
                  var newVal = el$.css(w.props[i]);
                  if (w.vals[i] != newVal) {
                      w.vals[i] = newVal;
                      changed = true;
                      break;
                  }
              }
              if (changed)
                  w.func.call(_t, w, i);
      
              // rebind event
              hookChange(el$, id, w);
          }
      }
      $.fn.unwatch = function (id) {
          this.each(function () {
              var el = $(this);
              var data = el.data(id);
              try {
                  if (typeof (this.onpropertychange) == "object")
                      el.unbind("propertychange." + id, data.fnc);
                  else if ($.browser.mozilla)
                      el.unbind("DOMAttrModified." + id, data.fnc);
                  else
                      clearInterval(data.intervalId);
              }
              // ignore if element was already unbound
              catch (e) { }
          });
          return this;
      }
      })(jQuery);

      Note that there’s a corresponding .unwatch() plug-in that can be used to stop monitoring properties. The ID parameter is optional both on watch() and unwatch() – a standard name is used if you don’t specify one, but it’s a good idea to use unique names for each element watched to avoid overlap in event ids especially if you’re monitoring many elements.
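      Why unique ids matter: handlers and intervals are keyed by the id, so two watchers sharing an id clobber each other. A sketch with a plain map standing in for jQuery's event namespacing (the names here are illustrative only):

      ```javascript
      var watchers = {};
      function addWatcher(id, fn) { watchers[id] = fn; }   // same id overwrites
      function removeWatcher(id) { delete watchers[id]; }

      // Two watchers using the default id: the second silently replaces the first.
      addWatcher("_watcher", function () { return "shadow"; });
      addWatcher("_watcher", function () { return "resizer"; });
      // Object.keys(watchers) is ["_watcher"]

      // Unique ids keep both alive and let each be removed independently:
      addWatcher("_shadow", function () { return "shadow"; });
      addWatcher("_resizer", function () { return "resizer"; });
      removeWatcher("_shadow");
      // watchers still holds "_watcher" and "_resizer"
      ```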

      The syntax is:

      $.fn.watch = function(props, func, interval, id)

      props
      A comma delimited list of CSS style properties that are to be watched for changes. If any of the specified properties changes the function specified in the second parameter is fired.

      func
      The function fired in response to a changed style. Receives the changed element as this and an object parameter that represents the watched properties and their respective values. The first parameter is passed in this structure:

      { id: watcherId, props: [], vals: [], func: thisFunc, fnc: internalHandler, origProps: strPropertyListOnWatcher };

      The second parameter is the index of the changed property, so data.props[i] and data.vals[i] give you the property name and its changed value.

      interval
      The interval for setInterval() for those browsers that don't support property watching in the DOM. In milliseconds.

      id
      An optional id that identifies this watcher. Required only if multiple watchers might be hooked up to the same element. The default is _watcher if not specified.

      It’s been a Journey

      I started building this plug-in about two years ago and have had to make many modifications to it in response to changes in jQuery and in browser behaviors. I think the latest round of changes should make this plug-in fairly future-proof going forward (although I hope there will be better cross-browser change event notifications in the future).

      One of the big problems I ran into had to do with recursive change notifications – it looks like starting with jQuery 1.4.4, jQuery internally modifies element properties on some .css() property retrievals and on calls like outerHeight/Width(). In IE this would cause nasty lock-up issues at times. In response to this I changed the code to unbind the events when the handler function is called and rebind when it exits. This also makes user code less prone to stack-overflow recursion, as you can actually change properties on the base element. It also means, though, that if you change one of the monitored properties inside the handler, the watch() handler won’t fire in response – you need to resort to a setTimeout() call to force the code to run outside of the handler:

      $("#notebox").watch("top,left,height,width,display,opacity", function (data, i) {
          var el = $(this);
      
          // this makes el changes work
          setTimeout(function () { el.css("top", 10) }, 10);
      });
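      The unbind-before/rebind-after dance in __watcher amounts to a reentrancy guard: while the handler runs, further change notifications are ignored, so handler-triggered changes can't recurse. A minimal standalone sketch of that idea (the names are illustrative, not from the plug-in):

      ```javascript
      function reentrancyGuard(handler) {
          var running = false;
          return function () {
              if (running) return;          // handler is "unbound" while running
              running = true;
              try {
                  handler.apply(this, arguments);
              } finally {
                  running = false;          // "rebound" on exit
              }
          };
      }

      // A handler that tries to re-trigger itself only runs once per trigger:
      var calls = 0;
      var onChange = reentrancyGuard(function () {
          calls++;
          onChange();                       // ignored - we're already inside
      });
      onChange();
      // calls is now 1
      ```

      The try/finally matters: if the handler throws, the guard still resets, just as the plug-in rebinds its events on the way out.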

      Since I’ve built this component I’ve had a lot of good uses for it. The .shadow() fallback functionality is one of them.

      Resources

      The watch() plug-in is part of ww.jquery.js and the West Wind Web Toolkit. You’re free to use this code here or the code from the toolkit.

        © Rick Strahl, West Wind Technologies, 2005-2011
        Posted in ASP.NET  JavaScript  jQuery  

        IIS not starting: The process cannot access the file because it is being used by another process


        Ok, apparently a few people knew about this issue, but it is new to me and cost me nearly an hour to track down today. What happened is that I’ve been working all day doing some final pre-deployment testing of several tools on my local dev machine. In the process I’ve been starting and stopping several IIS 7 Web sites. At some point I was done and just wanted to start my Default Web Site again and found this little gem of an error message popping up:

        [Screenshot: IIS error dialog]

        The process cannot access the file because it is being used by another process. (Exception from HRESULT: 0x80070020)

        A lot of headless running around ensued after this, trying to figure out why IIS wouldn’t start. Oddly some sites started right up, others didn’t. I killed INetInfo, all worker processes, tried IISReset a million times and even rebooted – all to no avail. What gives?

        Skype, you evil Bastard!

        As it turns out the culprit is – drum roll please - Skype!  What, you may ask, does Skype have to do with IIS and Web Requests? It looks like recent versions of Skype have an option to run over Port 80 and 443 to allow running over corporate firewalls. Which is actually a nice feature that lets Skype work just about anywhere. What’s not so cool is that IIS fails to start up when another application is already using the same port that a Web site is mapped to. In the case of my dev site that’d be port 80 and Skype was hogging it.

        To fix this issue you can stop Skype from using ports 80 and 443, which quickly takes care of the problem. Or stop Skype. Duh! To permanently fix the problem, find the option on Skype’s Options | Connection tab and uncheck the Use port 80/443 option:

        [Screenshot: Skype connection options]

        Oddly I haven’t run into this problem even though my setup hasn’t changed in quite some time. It appears that it’s bad startup timing that causes this problem to occur. Whatever the circumstance was, Skype somehow ended up starting before IIS.  If Skype is started after IIS has started it will automatically opt for other ports and not use port 80 and so there’s no problem.

        It’s easy to demonstrate this behavior if you’re looking for it:

        • Stop IIS
        • Stop Skype
        • Start Skype and make a test call
        • Start IIS

        And voila your error is ready for you!

        This really shouldn’t be a problem, except that it would be really nice if IIS could give a more helpful error message when it can’t fire up a site because a port is blocked. “The process cannot access a file” is really not a very helpful error message in this scenario… I/O port / file – ah what the heck, it’s all the same to Windows. Right!

        I’ve run into this situation quite a bit with other, albeit more obvious applications, like running Apache on the local machine for testing and then trying to run an IIS application. Same situation, although it’s been a while – pre-IIS 7 – and I think previous versions of IIS actually gave more useful error messages for port blockages, which would be helpful here.

        On the way to figuring this out I ran into some pretty humorous forum posts, though, with people ragging on why the hell you would be running IIS. Or Skype. The misinformed paranoia police out in full force, so to speak :-). It’ll be nice to start running IIS Express once Visual Studio 2010 SP1 gets released.

        Anyway, no surprise that Skype didn’t jump out at me as the culprit right away, and I was left fumbling for a while until the Internet came to the rescue. I’m certainly not the first to have found this – I posted a message on Twitter and dozens of people replied they’d run into this before as well. It seems worth mentioning again though, since I’m sure to forget that this happened when I hit that same error a year from now. Maybe I’ll even find this blog post to remind me…

        © Rick Strahl, West Wind Technologies, 2005-2011
        Posted in IIS7  Windows  

        The Red Gate and .NET Reflector Debacle


        About a month ago Red Gate – the company that owns the .NET Reflector tool most .NET devs use at one point or another – decided to change their business model for Reflector and take the product from free to a fully paid license model. As a bit of history: .NET Reflector was originally created by Lutz Roeder as a free community tool to inspect .NET assemblies. Using Reflector you can examine the types in an assembly, drill into type signatures and quickly disassemble code to see how a particular method works. In case you’ve been living under a rock and have never looked at Reflector, here’s what it looks like drilled into an assembly from disk, with some disassembled source code showing:

        [Screenshot: .NET Reflector]

        Note that you get tons of information about each element in the tree, and almost all related types and members are clickable both in the list and source view so it’s extremely easy to navigate and follow the code flow even in this static assembly only view.

        For many years Lutz kept the tool up to date and gradually added more features, improving an already amazing tool. Then about two and a half years ago Red Gate bought the tool from Lutz. A lot of ruckus and noise ensued in the community back then about what would happen with the tool and… for the most part very little did. Other than the incessant update notices with a prominent Red Gate promo on them, life with Reflector went on. The product didn’t die and it didn’t go commercial or to a charge model. When .NET 4.0 came out it still continued to work, mostly because the .NET feature set doesn’t drastically change how types behave. Then a month back Red Gate started making noise about a new Version 7 which would be commercial. No more free version – and a shit storm broke out in the community.

        Now normally I’m not one to be critical of companies trying to make money from a product, much less for a product that’s as incredibly useful as Reflector. There isn’t a day of .NET development that goes by for me where I don’t fire up Reflector. Whether it’s for examining the innards of the .NET Framework, checking out third party code, or verifying some of my own code and resources. Even more so recently, I’ve been doing a lot of Interop work with a non-.NET application that needs to access .NET components, and Reflector has been immensely valuable to me (and my clients) in figuring out the exact type signatures required for calling .NET components in assemblies. In short, Reflector is an invaluable tool to me.

        Ok, so what’s the problem? Why all the fuss? Certainly the $39 Red Gate is trying to charge isn’t going to kill any developer. If there’s any tool in .NET that’s worth $39 it’s Reflector, right?

        Right, but that’s not the problem here. The problem is how Red Gate went about moving the product to commercial, which borders on the downright bizarre. It’s almost as if somebody in management wrote a slogan: “How can we piss off the .NET community in the most painful way we can?” And at that, it seems, Red Gate has utterly succeeded.

        People are rabid, and for once I think that this outrage isn’t exactly misplaced. Take a look at the message thread that Red Gate linked off the download page.

        Not only is Version 7 going to be a paid commercial tool, but older versions of Reflector won’t be available any longer. Worse, older versions that are already in use will continually try to update themselves to the new paid version – which, when installed, will then expire unless registered properly. There have also been reports of Version 6 installs shutting themselves down and failing to work if the update is refused (I haven’t seen that myself, so I'm not sure if that’s true).

        In other words, Red Gate is trying to make damn sure they’re getting your money if you attempt to use Reflector. There’s a lot of temptation there. Think about the millions of .NET developers out there, all of them possibly upgrading – that’s a nice chunk of change Red Gate’s sitting on. Even with all the community backlash, these guys are probably making some bank right now just because people need life to move on.

        Red Gate also put up a Feedback link on the download page – which, not surprisingly, is chock full of hate mail condemning the move. Oddly there’s not a single response to any of those messages by the Red Gate folks, except when it concerns license questions for the full version. It puzzles me what purpose that link serves other than as yet another complete example of failing to understand how to handle customer relations.

        There’s no doubt that all of this has caused some serious outrage in the community. The sad part though is that this could have been handled so much less arrogantly, without pissing off the entire community and causing so much ill will. People are pissed off, and I have no doubt that this negative publicity will show up in the sales numbers for their other products. I certainly hope so. Stupidity ought to be painful!

        Why do Companies do boneheaded stuff like this?

        Red Gate’s original decision to buy Reflector was hotly debated, but at the time most of what would happen was mostly speculation. I thought it was a smart move for any company in need of spreading its marketing message and corporate image as a vendor in the .NET space. Where else do you get to flash your corporate logo to hordes of .NET developers on a regular basis? Combining that marketing with the goodwill of providing a free tool breeds positive feedback that hopefully has a good effect on the company’s visibility and the products it sells.

        Instead Red Gate seems to have taken exactly the opposite tack of corporate bullying to try to make a quick buck – and in the process ruined any community goodwill that might have come from providing a service to the community for free while still getting valuable marketing.

        What’s so puzzling about this boneheaded escapade is that the company doesn’t need to resort to underhanded tactics like those it’s trying with Reflector 7. The tools the company makes are very good. I personally use SQL Compare, SQL Data Compare and ANTS Profiler on a regular basis, and all of these tools are essential in my toolbox. They certainly work much better than the tools that come in the box with Visual Studio. Chances are that if Reflector 7 added useful features I would have been more than happy to shell out my $39 to upgrade when the time was right.

        It’s Expensive to give away stuff for Free

        At the same time, this episode shows some of the big problems that come with ‘free’ tools. A lot of organizations are realizing that giving stuff away for free is actually quite expensive, and the payback is often very intangible, if there is any at all. Those that rely on donations or other voluntary compensation find that the amount contributed is so minuscule as to not matter at all. Yet at the same time I bet most of those clamoring the loudest on that Red Gate feedback page about Reflector no longer being free have probably NEVER made a donation to any open source project or free tool. The expectation of Free these days is just too great – which is a shame, I think. There’s a lot to be said for paid software and having somebody to hold responsible because you gave them some money. There’s an incentive –> payback –> responsibility model that seems to be missing from free software (not all of it, but a lot of it). While there certainly are plenty of bad apples in paid software as well, money tends to be a good motivator for people to continue working on and improving products.

        Reasons for giving away stuff are many, but often it’s a naïve desire to share things while things are simple. At first it might be no problem to volunteer time and effort, but as products mature the fun goes out of it, and as the reality of product maintenance kicks in developers want to get something back for the time and effort they’re putting into non-glamorous work. That’s when products die or languish, and it’s painful for all to watch.

        For Red Gate however, I think there was always a pretty good payback from the Reflector acquisition in terms of marketing: visibility and possible positioning of their products, although they seem to have mostly ignored that option. On the other hand, they started this off pretty badly even two and a half years back when they acquired Reflector from Lutz, with the same arrogant attitude that is evident in the latest episode. You really gotta wonder what folks in management are thinking – the sad part is that, judging from advance emails that were circulating, they were fully aware of the shit storm they were inciting with this, and I suspect they are banking on the sheer numbers of .NET developers to still make them a tidy chunk of change from upgrades…

        Alternatives are coming

        For me personally the single license isn’t a problem, but I actually sell a tool (an Interop Web Service proxy generation tool) to customers, and one of the things I’ve recommended using with it has been Reflector – to view assembly information and to find which Interop classes to instantiate from the non-.NET environment. It’s been nice to use Reflector for this with its small footprint and zero-configuration installation. But now with V7 becoming a paid tool that option is not going to be available anymore.

        Luckily it looks like the .NET community is jumping in and trying to fill the void. Amidst the Red Gate outrage a new tool called ILSpy has sprung up, providing at least some of the core functionality of Reflector as an open source library.

        [Screenshot: ILSpy]

        It looks promising going forward, and I suspect there will be a lot more support and interest in this project now that Reflector has gone over to the ‘dark side’…

        © Rick Strahl, West Wind Technologies, 2005-2011

        Loading Assemblies off Network Drives


        .NET 4.0 introduces some nice changes that make it easier to run .NET EXE applications directly off network drives. Specifically, the default behavior of previous versions – which caused an immense amount of pain by refusing to load assemblies from remote network shares unless CAS policy was adjusted – has been dropped, and .NET EXEs now behave more like standard Win32 applications that can load resources easily from local network drives and more easily from other trusted remote locations.

        Unfortunately, if you’re using COM Interop or you manually load your own instances of the .NET runtime, these new security policy rules don’t apply and you’re still stuck with the old CAS policy that refuses to load assemblies off network shares. Worse, CASPOL configuration doesn’t seem to work the way it used to either, so you end up with a worst-case scenario: assemblies that won’t load off network shares and no easy way to adjust the policy to get them to load.

        However, there’s a new quick workaround, via a simple .config switch:

        <configuration>
         <startup>    
            <supportedRuntime version="v4.0.30319"/>   
            <!-- supportedRuntime version="v2.0.50727"/-->    
          </startup>
          <runtime>
              <loadFromRemoteSources enabled="true"/>
          </runtime>
        </configuration>
        

        This simple .config setting overrides the security sandboxing of assemblies loaded from remote locations. The key is new in .NET 4.0, but oddly I found that it also works with .NET 2.0 when 4.0 is installed (which seems a bit strange). When I forced the runtime to 2.0 (using the supportedRuntime version key, plus explicitly loading .NET 2.0 and actively checking the version of the loaded runtime), I was still able to load an assembly off a network drive. Removing the key or setting it to false made those same loads fail again in both the 2.x and 4.x runtimes.

        So this is a nice addition for those of us stuck having to do Interop scenarios. It’s specifically useful for one of my tools, a Web Service proxy generator for FoxPro that uses auto-generated .NET Web Service proxies to call Web Services easily. In the course of support, I would say a good 50% of support requests had to do with people trying to load assemblies off a network share. The gyrations of going through CASPOL configuration put many of them off altogether and drove them to choose a different solution – this much simpler fix is a big improvement.

        Security policy management is pretty damn confusing and I’m thankful that .NET 4.0 makes this at least a little simpler. If you’re interested in more background and the implications of .NET 4.0 on sandboxing and security policy, this post on the .NET Security Blog goes into much more detail – enough to make my head spin. Some of this seems really confusing, but the bottom line is that the above switch allows your applications to override the settings when appropriate. Options are good… Phew.

        © Rick Strahl, West Wind Technologies, 2005-2011
        Posted in .NET  Security  Windows  

        ASP.NET Routing not working on IIS 7.0


        I ran into a nasty little problem today when deploying an application using ASP.NET 4.0 Routing to my live server. The application and its Routing were working just fine on my dev machine (Windows 7 and IIS 7.5), but when I deployed (Windows 2008 R1 and IIS 7.0) Routing would just not work. Every time I hit a routed url IIS would just throw up a 404 error:

        IISErrorDisplay

        This is an IIS error, not an ASP.NET error, so it doesn’t actually come from ASP.NET’s routing engine but from IIS’s handling of extensionless URLs. Note that the request is clearly falling through all the way to the StaticFile handler, which is the last handler to fire in the typical IIS handler list. In other words, IIS is trying to serve the extensionless URL itself rather than firing it into ASP.NET, and failing.

        As I mentioned, on my local machine this all worked fine, and to make sure the local and live setups matched I re-copied my web.config, double-checked the handler mappings in IIS and re-copied the actual application assemblies to the server. Everything matched exactly. However – no workey on the server with IIS 7.0!

        Finally, totally by chance, I remembered the runAllManagedModulesForAllRequests attribute flag on the modules key in web.config and set it to true:

          <system.webServer>
            <modules runAllManagedModulesForAllRequests="true">
              <add name="ScriptCompressionModule" type="Westwind.Web.ScriptCompressionModule,Westwind.Web" />
            </modules>
          </system.webServer> 

        And lo and behold, Routing started working on the live server and IIS 7.0!

        This seems really obvious now of course, but the really tricky thing about this is that on IIS 7.5 this key is not necessary. So on my Windows 7 machine ASP.NET Routing was working just fine without the key set. However on IIS 7.0 on my live server the same missing setting was not working. On IIS 7.0 this key must be present or Routing will not work.

        Oddly, on IIS 7.5 it appears that you can’t even turn off the behavior – setting runAllManagedModulesForAllRequests="false" had no effect at all, and Routing continued to work just fine even with the flag set to false, which is NOT what I would have expected.
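        As an aside, there is a more targeted alternative to running all managed modules for every request: mapping the extensionless URL handler explicitly. On IIS 7.0 this requires the .NET 4 extensionless URL update (KB980368) to be installed, which IIS 7.5 effectively ships with – so take this as a sketch that assumes the update is present:

        ```xml
        <system.webServer>
          <handlers>
            <!-- Route extensionless URLs into ASP.NET without running all managed modules -->
            <add name="ExtensionlessUrlHandler-Integrated-4.0"
                 path="*." verb="GET,HEAD,POST,DEBUG"
                 type="System.Web.Handlers.TransferRequestHandler"
                 preCondition="integratedMode,runtimeVersionv4.0" />
          </handlers>
        </system.webServer>
        ```

        The runAllManagedModulesForAllRequests flag is the simpler, blunter instrument; the explicit handler mapping avoids the overhead of pushing every static file request through the managed pipeline.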

        Kind of disappointing too that Windows Server 2008 (R1) can’t be upgraded to IIS 7.5. It sure seems like that should have been possible, since the OS server core changes in R2 are pretty minor. For the future I really hope Microsoft will allow updating IIS versions without tying them explicitly to the OS. It looks like with the release of IIS Express, Microsoft has taken some steps to untie some of those tight OS links from IIS. Let’s hope that’s the case for the future – it sure is nice to run the same IIS version on dev and live boxes, but upgrading live servers is too big a deal to do just because an updated OS release came out.

        Moral of the story – never assume that your dev setup will work as-is on the live setup. It took me forever to figure this out because I assumed that since my web.config on the local machine was fine and working, and I had copied all the relevant web.config data to the server, the configuration settings couldn’t be the problem. I was looking everywhere but in the .config file before getting desperate and finally remembering the flag when I accidentally checked the IntelliSense options on the modules key.

        Never assume anything. The other moral is: Try to keep your dev machine and server OS’s in sync whenever possible. Maybe it’s time to upgrade to Windows Server 2008 R2 after all.

        More info on Extensionless URLs in IIS

        Want to find out more about exactly how extensionless URLs work on IIS 7? Then check out How ASP.NET MVC Routing Works and its Impact on the Performance of Static Requests, which goes into great detail on the complexities of the process. Thanks to Jeff Graves for pointing me at this article – a great linked reference for this topic!

        © Rick Strahl, West Wind Technologies, 2005-2011
        Posted in IIS7  Windows  