
Custom ASP.NET Routing to an HttpHandler


As of version 4.0, ASP.NET natively supports routing via the now built-in System.Web.Routing namespace. Routing features are automatically integrated into the HttpRuntime via a few custom interfaces.

New Web Forms Routing Support

In ASP.NET 4.0 there are a host of improvements, including routing support baked into Web Forms via a RouteData property on the Page class and a RouteCollection.MapPageRoute() method that makes it easy to route to Web Forms pages.

Mapping ASP.NET Page routes is as simple as setting them up with MapPageRoute:

protected void Application_Start(object sender, EventArgs e)
{
    RegisterRoutes(RouteTable.Routes);
}

void RegisterRoutes(RouteCollection routes)
{
    routes.MapPageRoute("StockQuote", "StockQuote/{symbol}", "StockQuote.aspx");
    routes.MapPageRoute("StockQuotes", "StockQuotes/{symbolList}", "StockQuotes.aspx");
}

To access the route data in the page, you can then use the new Page class RouteData property to retrieve the dynamic route information:

public partial class StockQuote1 : System.Web.UI.Page
{
    protected StockQuote Quote = null;

    protected void Page_Load(object sender, EventArgs e)
    {
        string symbol = RouteData.Values["symbol"] as string;

        StockServer server = new StockServer();
        Quote = server.GetStockQuote(symbol);
            
        // display stock data in Page View
    }
}

Simple, quick and doesn’t require much explanation. If you’re using WebForms most of your routing needs should be served just fine by this simple mechanism. Kudos to the ASP.NET team for putting this in the box and making it easy!

How Routing Works

Handling routing in ASP.NET involves these steps:

  • Registering Routes
  • Creating a custom RouteHandler to retrieve an HttpHandler
  • Attaching RouteData to your HttpHandler
  • Picking up Route Information in your Request code

Registering routes makes ASP.NET aware of the routes you want to handle via the static RouteTable.Routes collection. You basically add routes to this collection to let ASP.NET know which URL patterns it should watch for. You typically hook up routes in a RegisterRoutes method fired from Application_Start, as I did in the example above, to ensure routes are added only once when the application first starts up. When you create a route, you pass in a RouteHandler instance, which ASP.NET caches and reuses as routes are matched.

Once registered, ASP.NET monitors the routes and, if a match is found just prior to HttpHandler instantiation, uses the RouteHandler registered for the route and calls GetHttpHandler() on it to retrieve an HttpHandler instance. The RouteHandler’s GetHttpHandler() method is responsible for creating an instance of an HttpHandler that is to handle the request and – if necessary – for assigning any additional custom data to the handler.

At minimum you probably want to pass the RouteData to the handler so the handler can identify the request based on the route data available. To do this you typically add a RouteData property to your handler and then assign it from the RouteHandler’s request context. This is essentially how Page.RouteData comes into being, and this approach should work well for any custom handler implementation that requires RouteData.
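To make this concrete, here’s a minimal sketch of a handler with such a property (MyHandler and the StockQuote/{symbol} route are illustrative assumptions, not part of the toolkit):

using System.Web;
using System.Web.Routing;

public class MyHandler : IHttpHandler
{
    // Assigned by the custom RouteHandler in its GetHttpHandler() method
    public RouteData RouteData { get; set; }

    public bool IsReusable
    {
        get { return false; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // With a route like StockQuote/{symbol} the dynamic segment
        // is available off the assigned RouteData
        string symbol = RouteData != null
                            ? RouteData.Values["symbol"] as string
                            : null;

        context.Response.Write("Symbol: " + symbol);
    }
}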

It’s a shame that ASP.NET doesn’t have a top-level intrinsic object accessible off the HttpContext object to provide route data more generically, but since RouteData is directly tied to HttpHandlers, and not all handlers support it, it might cause some confusion about when it’s actually available. The bottom line is that if you want to hold on to RouteData you have to assign it to a custom property of the handler, or else pass it to the handler via the Context.Items collection, where it can be retrieved on an as-needed basis.

It’s important to understand that routing is hooked up via RouteHandlers that are responsible for loading HttpHandler instances. RouteHandlers are invoked for every request that matches a route, and through this RouteHandler instance the handler gains access to the current RouteData. Because of this logic it’s important to understand that routing is really tied to HttpHandlers and not available prior to handler instantiation, which is pretty late in the HttpRuntime’s request pipeline. IOW, routing works with handlers, but not earlier in the pipeline within modules.

Specifically, ASP.NET calls RouteHandler.GetHttpHandler() from the PostResolveRequestCache HttpRuntime pipeline event. Here’s the call stack at the beginning of the GetHttpHandler() call:

[Screenshot: call stack at the start of GetHttpHandler()]

which fires just before handler resolution.

Non-Page Routing – You need to build custom RouteHandlers

If you need to route to a custom HTTP handler or other non-Page (and non-MVC) endpoint in the HttpRuntime, there is no generic mapping support available. You need to create a custom RouteHandler that can manage creating an instance of an HttpHandler that is fired in response to a routed request. Depending on what you are doing this process can be simple or fairly involved, since your code is responsible for deciding, based on the route data provided, which handler to instantiate and, more importantly, how to pass the route data on to the handler.

Luckily creating a RouteHandler is easy: you implement the IRouteHandler interface, which has only a single GetHttpHandler(RequestContext requestContext) method. In this method you can pick up the requestContext.RouteData, instantiate the HttpHandler of your choice, and assign the RouteData to it. Then pass back the handler and you’re done.

Here’s a simple example of a GetHttpHandler() method that dynamically creates a handler based on a passed-in handler type.

/// <summary>
/// Retrieves an Http Handler based on the type specified in the constructor
/// </summary>
/// <param name="requestContext"></param>
/// <returns></returns>
IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
{
    IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

    // If we're dealing with a Callback Handler
    // pass the RouteData for this route to the Handler
    if (handler is CallbackHandler)
        ((CallbackHandler)handler).RouteData = requestContext.RouteData;

    return handler;
}

Note that this code checks for a specific type of handler and if it matches assigns the RouteData to this handler. This is optional but quite a common scenario if you want to work with RouteData.

If the handler you need to instantiate isn’t under your control but you still need to pass RouteData to Handler code, an alternative is to pass the RouteData via the HttpContext.Items collection:

IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
{
    IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;
    requestContext.HttpContext.Items["RouteData"] = requestContext.RouteData;
    return handler;
}

The code in the handler implementation can then pick up the RouteData from the context collection as needed:

RouteData routeData = HttpContext.Current.Items["RouteData"] as RouteData;

This isn’t as clean as having an explicit RouteData property, but it does have the advantage that the route data is visible anywhere in the handler’s code chain. It’s definitely preferable to create a custom property on your handler, but the Context workaround works in a pinch when you don’t own the handler code and have dynamic code executing as part of the handler execution.

An Example of a Custom RouteHandler: Attribute Based Route Implementation

In this post I’m going to discuss a custom routing implementation I built for my CallbackHandler class in the West Wind Web & Ajax Toolkit. CallbackHandler can be very easily used for creating AJAX, REST and POX requests following RPC-style method mapping. You can pass parameters via URL query string, POST data or raw data structures, and you can retrieve results as JSON, XML or raw string/binary data. It’s a quick and easy way to build service interfaces with no fuss.

As a quick review here’s how CallbackHandler works:

  • You create an Http Handler that derives from CallbackHandler
  • You implement methods that have a [CallbackMethod] Attribute

and that’s it. Here’s an example of a CallbackHandler implementation in an ashx.cs based handler:

// RestService.ashx.cs

public class RestService : CallbackHandler
{
    [CallbackMethod]
    public StockQuote GetStockQuote(string symbol)
    {
        StockServer server = new StockServer();
        return server.GetStockQuote(symbol);
    }

    [CallbackMethod]
    public StockQuote[] GetStockQuotes(string symbolList)
    {
        StockServer server = new StockServer();
        string[] symbols = symbolList.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries);
        return server.GetStockQuotes(symbols);
    }
}

CallbackHandler makes it super easy to create a method on the server, pass data to it via POST, query string or raw JSON/XML data, and then retrieve the results easily in various formats. This works wonderfully and I’ve used these tools in many projects for myself and with clients. But one thing that has been missing is the ability to create clean URLs.

Typical URLs looked like this:

http://www.west-wind.com/WestwindWebToolkit/samples/Rest/StockService.ashx?Method=GetStockQuote&symbol=msft
http://www.west-wind.com/WestwindWebToolkit/samples/Rest/StockService.ashx?Method=GetStockQuotes&symbolList=msft,intc,gld,slw,mwe&format=xml

which works and is clear enough, but is also clearly very ugly. It would be much nicer if the URLs could look like this:

http://www.west-wind.com//WestwindWebtoolkit/Samples/StockQuote/msft
http://www.west-wind.com/WestwindWebtoolkit/Samples/StockQuotes/msft,intc,gld,slw?format=xml

(The virtual root in this sample is WestWindWebToolkit/Samples and StockQuote/{symbol} is the route.)
(If you use FireFox, try the JSONView plug-in to make it easier to view JSON content.)

So, taking a clue from the WCF REST tools that use RouteUrls, I set out to create a way to specify RouteUrls for each of the endpoints. The change basically allows rewriting the above as:
[CallbackMethod(RouteUrl="RestService/StockQuote/{symbol}")]
public StockQuote GetStockQuote(string symbol)
{
    StockServer server = new StockServer();
    return server.GetStockQuote(symbol);    
}
[CallbackMethod(RouteUrl = "RestService/StockQuotes/{symbolList}")]
public StockQuote[] GetStockQuotes(string symbolList)
{
    StockServer server = new StockServer();
    string[] symbols = symbolList.Split(new char[2] { ',',';' },StringSplitOptions.RemoveEmptyEntries);
    return server.GetStockQuotes(symbols);
}

where a RouteUrl is specified as part of the CallbackMethod attribute. With the RouteUrl changes in place, I now get URLs like the second set shown earlier.

So how does that work? Let’s find out…

How to Create Custom Routes

As mentioned earlier Routing is made up of several steps:

  • Creating a custom RouteHandler to create HttpHandler instances
  • Mapping the actual Routes to the RouteHandler
  • Retrieving the RouteData and actually doing something useful with it in the HttpHandler

In the CallbackHandler routing example above this works out to something like this:

  • Create a custom RouteHandler that includes a property to track the method to call
  • Set up the routes using Reflection against the class, looking for any RouteUrls in the CallbackMethod attribute
  • Add a RouteData property to the CallbackHandler so we can access the RouteData in the code of the handler

Creating a Custom Route Handler

To make the above work I created a custom RouteHandler class that includes the actual IRouteHandler implementation as well as a generic and static method to automatically register all routes marked with the [CallbackMethod(RouteUrl="…")] attribute.

Here’s the code:

/// <summary>
/// Route handler that can create instances of CallbackHandler derived
/// callback classes. The route handler tracks the method name and
/// creates an instance of the service in a predictable manner
/// </summary>
/// <typeparam name="TCallbackHandler">CallbackHandler type</typeparam>
public class CallbackHandlerRouteHandler : IRouteHandler
{
    /// <summary>
    /// Method name that is to be called on this route.
    /// Set by the automatically generated RegisterRoutes 
    /// invocation.
    /// </summary>
    public string MethodName { get; set; }

    /// <summary>
    /// The type of the handler we're going to instantiate.
    /// Needed so we can semi-generically instantiate the
    /// handler and call the method on it.
    /// </summary>
    public Type CallbackHandlerType { get; set; }


    /// <summary>
    /// Constructor to pass in the two required components we
    /// need to create an instance of our handler. 
    /// </summary>
    /// <param name="methodName"></param>
    /// <param name="callbackHandlerType"></param>
    public CallbackHandlerRouteHandler(string methodName, Type callbackHandlerType)
    {
        MethodName = methodName;
        CallbackHandlerType = callbackHandlerType;
    }

    /// <summary>
    /// Retrieves an Http Handler based on the type specified in the constructor
    /// </summary>
    /// <param name="requestContext"></param>
    /// <returns></returns>
    IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
    {
        IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

        // If we're dealing with a Callback Handler
        // pass the RouteData for this route to the Handler
        if (handler is CallbackHandler)
            ((CallbackHandler)handler).RouteData = requestContext.RouteData;

        return handler;
    }

    /// <summary>
    /// Generic method to register all routes from a CallbackHandler
    /// that have RouteUrls defined on the [CallbackMethod] attribute
    /// </summary>
    /// <typeparam name="TCallbackHandler">CallbackHandler Type</typeparam>
    /// <param name="routes"></param>
    public static void RegisterRoutes<TCallbackHandler>(RouteCollection routes)
    {
        // find all methods
        var methods = typeof(TCallbackHandler).GetMethods(BindingFlags.Instance | BindingFlags.Public);
        foreach (var method in methods)
        {
            var attrs = method.GetCustomAttributes(typeof(CallbackMethodAttribute), false);
            if (attrs.Length < 1)
                continue;

            CallbackMethodAttribute attr = attrs[0] as CallbackMethodAttribute;
            if (string.IsNullOrEmpty(attr.RouteUrl))
                continue;

            // Add the route
            routes.Add(method.Name,
                       new Route(attr.RouteUrl, new CallbackHandlerRouteHandler(method.Name, typeof(TCallbackHandler))));

        }

    }
}

The RouteHandler implements IRouteHandler, and its responsibility, via the GetHttpHandler() method, is to create an HttpHandler based on the route data.

When ASP.NET calls GetHttpHandler() it passes a requestContext parameter, which includes a requestContext.RouteData property that holds the current request’s route data as well as an instance of the current RouteHandler. If you look at GetHttpHandler() you can see that the code creates an instance of the handler we are interested in and then sets the RouteData property on the handler. This is how you pass the current request’s RouteData to the handler.

The RouteData object also has a RouteData.RouteHandler property that is available to the handler later, which is useful for getting additional information about the current route. In our case the RouteHandler includes a MethodName property that identifies the method to execute in the handler, since that value no longer comes from the URL and we need to figure out the method name some other way. The method name is mapped explicitly when the RouteHandler is created: the static method that auto-registers all CallbackMethods with RouteUrls sets the method name as it creates the routes while reflecting over the methods (more on this in a minute). The important point is that you can attach additional properties to the RouteHandler and later access the RouteHandler and its properties in the handler to pick up these custom values. This is a crucial feature in that the RouteHandler serves to pass additional context to the handler so it knows what actions to perform.

The automatic route registration is handled by the static RegisterRoutes<TCallbackHandler> method. This method is generic and totally reusable for any CallbackHandler type handler.

To register a CallbackHandler and any RouteUrls it has defined, you simply use code like this in Application_Start (or other application startup code):

protected void Application_Start(object sender, EventArgs e)
{

    // Register Routes for RestService            
    CallbackHandlerRouteHandler.RegisterRoutes<RestService>(RouteTable.Routes);
}

If you have multiple CallbackHandler-style services you can make multiple calls to RegisterRoutes, one for each service type. RegisterRoutes internally uses Reflection to run through all the methods of the handler, looking for CallbackMethod attributes and checking whether a RouteUrl is specified. If it is, a new instance of a CallbackHandlerRouteHandler is created, and the name of the method and the type are set on it.

routes.Add(method.Name,
          new Route(attr.RouteUrl, new CallbackHandlerRouteHandler(method.Name, typeof(TCallbackHandler) )) );

While routing with CallbackHandlerRouteHandler is set up automatically for all methods that use the RouteUrl attribute, you can also hook up those routes manually in code and skip the attribute. The code for this is straightforward and just requires that you manually map each individual route to each method you want routed:

protected void Application_Start(object sender, EventArgs e)
{
    RegisterRoutes(RouteTable.Routes);
}

void RegisterRoutes(RouteCollection routes)
{
    routes.Add("StockQuote Route",
               new Route("StockQuote/{symbol}",
                         new CallbackHandlerRouteHandler("GetStockQuote", typeof(RestService))));

    routes.Add("StockQuotes Route",
               new Route("StockQuotes/{symbolList}",
                         new CallbackHandlerRouteHandler("GetStockQuotes", typeof(RestService))));
}

I think it’s clearly easier to have CallbackHandlerRouteHandler.RegisterRoutes() do this automatically for you based on RouteUrl attributes, but some people have a real aversion to attaching logic via attributes. Just realize that the option to manually create your routes is available as well.

Using the RouteData in the Handler

A RouteHandler’s responsibility is to create an HttpHandler and, as mentioned earlier, IHttpHandler natively doesn’t have any support for RouteData. In order to utilize RouteData in your handler code you have to pass it to the handler yourself.

In my CallbackHandlerRouteHandler, the GetHttpHandler() method creates the HttpHandler instance and then assigns the custom RouteData property on the handler:

IHttpHandler handler =  Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

if (handler is CallbackHandler)
    ((CallbackHandler)handler).RouteData = requestContext.RouteData;

return handler;

Again this only works if you actually add a RouteData property to your handler explicitly as I did in my CallbackHandler implementation:

/// <summary>
/// Optionally store RouteData on this handler
/// so we can access it internally
/// </summary>
public RouteData RouteData {get; set; }

and the RouteHandler needs to set it when it creates the handler instance.

Once you have the route data in your handler you can access the route keys and values as well as the RouteHandler. Since my RouteHandler has a custom MethodName property, I can do something like this to retrieve the method name (this code is actually not in the handler itself – target is a handler instance passed to the processor):

// check for Route Data method name
if (target is CallbackHandler)
{
    var routeData = ((CallbackHandler)target).RouteData;                
    if (routeData != null)
        methodToCall = ((CallbackHandlerRouteHandler)routeData.RouteHandler).MethodName;
}

When I need to access the dynamic values in the route (symbol in StockQuote/{symbol}), I can retrieve them easily from the Values collection (RouteData.Values["symbol"]). In my CallbackHandler processing logic I’m basically looking for parameter names that match route parameters:

// look for parameters in the route
if(routeData != null)
{
    string parmString = routeData.Values[parameter.Name] as string;
    adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
}

And with that we’ve come full circle. We’ve created a custom RouteHandler() that passes the RouteData to the handler it creates. We’ve registered our routes to use the RouteHandler, and we’ve utilized the route data in our handler.

For completeness’ sake, here’s the routine that executes a method call based on the parameters passed in. One of its options is to retrieve the inbound parameters from RouteData (as well as from POST data or query string parameters):

internal object ExecuteMethod(string method, object target, string[] parameters,
                              CallbackMethodParameterType paramType,
                              ref CallbackMethodAttribute callbackMethodAttribute)
{
    HttpRequest Request = HttpContext.Current.Request;

    object Result = null;

    // Stores parsed parameters (from string JSON or QueryString values)
    object[] adjustedParms = null;

    Type PageType = target.GetType();
    MethodInfo MI = PageType.GetMethod(method, BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
    if (MI == null)
        throw new InvalidOperationException("Invalid Server Method.");

    object[] methods = MI.GetCustomAttributes(typeof(CallbackMethodAttribute), false);
    if (methods.Length < 1)
        throw new InvalidOperationException("Server method is not accessible due to missing CallbackMethod attribute");

    if (callbackMethodAttribute != null)
        callbackMethodAttribute = methods[0] as CallbackMethodAttribute;

    ParameterInfo[] parms = MI.GetParameters();

    JSONSerializer serializer = new JSONSerializer();

    RouteData routeData = null;
    if (target is CallbackHandler)
        routeData = ((CallbackHandler)target).RouteData;

    int parmCounter = 0;
    adjustedParms = new object[parms.Length];
    foreach (ParameterInfo parameter in parms)
    {
        // Retrieve parameters out of QueryString or POST buffer
        if (parameters == null)
        {
            // look for parameters in the route
            if (routeData != null)
            {
                string parmString = routeData.Values[parameter.Name] as string;
                adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
            }
            // GET parameters are parsed as plain string values - no JSON encoding
            else if (HttpContext.Current.Request.HttpMethod == "GET")
            {
                // Look up the parameter by name
                string parmString = Request.QueryString[parameter.Name];
                adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
            }
            // POST parameters are treated as methodParameters that are JSON encoded
            else if (paramType == CallbackMethodParameterType.Json)
                //string newVariable = methodParameters.GetValue(parmCounter) as string;
                adjustedParms[parmCounter] = serializer.Deserialize(Request.Params["parm" + (parmCounter + 1).ToString()],
                                                                    parameter.ParameterType);
            else
                adjustedParms[parmCounter] = SerializationUtils.DeSerializeObject(Request.Params["parm" + (parmCounter + 1).ToString()],
                                                                                  parameter.ParameterType);
        }
        else if (paramType == CallbackMethodParameterType.Json)
            adjustedParms[parmCounter] = serializer.Deserialize(parameters[parmCounter], parameter.ParameterType);
        else
            adjustedParms[parmCounter] = SerializationUtils.DeSerializeObject(parameters[parmCounter], parameter.ParameterType);

        parmCounter++;
    }

    Result = MI.Invoke(target, adjustedParms);

    return Result;
}

The code basically uses Reflection to loop through all the parameters available on the method and tries to assign the parameters from RouteData, QueryString or POST variables. The parameters are converted into their appropriate types and then used to eventually make a Reflection based method call.

What’s sweet is that the RouteData retrieval is just another option for dealing with the inbound data in this scenario and it adds exactly two lines of code plus the code to retrieve the MethodName I showed previously – a seriously low impact addition that adds a lot of extra value to this endpoint callback processing implementation.

Debugging your Routes

If you create a lot of routes it’s easy to run into Route conflicts where multiple routes have the same path and overlap with each other. This can be difficult to debug especially if you are using automatically generated routes like the routes created by CallbackHandlerRouteHandler.RegisterRoutes.

Luckily there’s a tool that can help you out with this nicely. Phil Haack created a route debugging tool you can download and add to your project. The easiest way to add it is via NuGet (Add Library Package Reference from your project’s References node):

[Screenshot: RouteDebugger package in NuGet]

which adds a RouteDebug assembly to your project.

Once installed you can easily debug your routes with a single line of code that needs to be called at application startup:

protected void Application_Start(object sender, EventArgs e)
{
    CallbackHandlerRouteHandler.RegisterRoutes<RestService>(RouteTable.Routes);

    // Debug your routes
    RouteDebug.RouteDebugger.RewriteRoutesForTesting(RouteTable.Routes);
}

Any routed URL then displays something like this:

[Screenshot: RouteDebugger route listing]

The screen shows your current route data and all the routes that are mapped, along with a flag that indicates which route was actually matched. This is useful: if you have any overlapping routes you will be able to see which route is triggered – the first match in the sequence wins.

This tool has saved my ass on a few occasions – and with NuGet now it’s easy to add it to your project in a few seconds and then remove it when you’re done.

Routing Around

Custom routing seems slightly complicated at first blush due to its disconnected components: the RouteHandler, route registration and the mapping of custom handlers. But once you understand the relationship between a RouteHandler, the RouteData and how to pass it to a handler, utilizing routing becomes a lot easier, as you can pass context from the registration to the RouteHandler and through to the HttpHandler. The most important thing to understand when building custom routing solutions is how to map URLs in such a way that the handler can figure out all the pieces it needs to process the request. This can be done via URL routing parameters and, as I did in my example, by passing additional context information as part of the RouteHandler instance that provides the proper execution context. In my case this ‘context’ was the method name, but it could be an actual static value like an enum identifying an operation or category in an application. Basically, user-supplied data comes in through the URL, and static application-internal data can be passed via RouteHandler property values.

Routing can make your application URLs easier to read by non-techie types regardless of whether you’re building service-type or REST applications, or full-on Web interfaces. Routing in ASP.NET 4.0 makes it possible to create just about any extensionless URLs you can dream up, and custom RouteHandlers let you route those URLs to just about any endpoint you like.

References

  • Sample Project
    Includes the sample CallbackHandler service discussed here along with compiled versions
    of the Westwind.Web and Westwind.Utilities assemblies.  (requires .NET 4.0/VS 2010)
  • West Wind Web Toolkit
    includes full implementation of CallbackHandler and the Routing Handler
  • West Wind Web Toolkit Source Code
    Contains the full source code to the Westwind.Web and Westwind.Utilities assemblies used
    in these samples. Includes the source described in the post.
    (Latest build in the Subversion Repository)
  • CallbackHandler Source
    (Relevant code to this article tree in Westwind.Web assembly)
  • JSONView FireFox Plugin
    A simple FireFox Plugin to easily view JSON data natively in FireFox.
    For IE you can use a registry hack to display JSON as raw text.
© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  AJAX  HTTP  


Displaying JSON in your Browser


Do you work with AJAX requests a lot and need to quickly check URLs for JSON results? Then you probably know that it’s a fairly big hassle to examine JSON results directly in the browser. Yes, you can use FireBug or Fiddler, which work pretty well for actual AJAX requests, but if you just fire off a URL for quick testing in FireFox you usually get hit by the Save As dialog and the download manager, followed by having to open the saved document in a text editor.

Enter JSONView which allows you to simply display JSON results directly in the browser. For example, imagine I have a URL like this:

http://localhost/westwindwebtoolkitweb/RestService.ashx?Method=ReturnObject&format=json&Name1=Rick&Name2=John&date=12/30/2010

typed directly into the browser that returns a complex JSON object. With JSONView the result looks like this:

[Screenshot: JSON result displayed in FireFox with JSONView]

No fuss, no muss. It just works. Here the result is an array of Person objects that contain additional address child objects displayed right in the browser.

JSONView basically adds content type checking for application/json results, and when it finds a JSON result it takes over the rendering and formats the display in the browser. Note that it also re-formats the raw JSON for a nicer display view, along with collapsible regions for objects. You can still use View Source to see the raw JSON string returned.

For me this is a huge time-saver, as I work with AJAX result data using GET and REST-style URLs quite a bit. Quickly and easily displaying JSON is a key part of my development day, and JSONView, for all its simplicity, fits that bill. If you’re doing AJAX development and often review URL-based JSON results, do yourself a favor and pick up a copy of JSONView.
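If you need a quick endpoint to test this with, anything that returns the application/json content type will do. Here’s a minimal, hypothetical ASP.NET handler (the class name and output are made up for illustration) that serves a static JSON string:

using System.Web;

// JsonTest.ashx.cs - throwaway handler for testing JSON display.
// JSONView keys off the application/json content type, so setting
// that header is the important part.
public class JsonTestHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "application/json";
        context.Response.Write("{\"name\": \"Rick\", \"company\": \"West Wind\"}");
    }
}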

Other Browsers

JSONView works only with FireFox – what about other browsers?

Chrome
Chrome actually displays raw JSON responses as plain text without any plug-ins. There’s no plug-in or configuration needed, it just works, although you won’t get any fancy formatting.

[updated from comments]
There’s also a port of JSONView available for Chrome from here:

https://chrome.google.com/webstore/detail/chklaanhfefbnpoihckbnefhakgolnmc

It looks like it works just about the same as the JSONView plug-in for FireFox. Thanks to all who pointed this out.

Internet Explorer
Internet Explorer probably has the worst response to JSON encoded content: It displays an error page as it apparently tries to render JSON as XML:

[Screenshot: IE error page trying to render JSON as XML]

Yeah that seems real smart – rendering JSON as an XML document. WTF? To get at the actual JSON output, you can use View Source.

To get IE to display JSON directly as text you can add a Mime type mapping in the registry:

[Screenshot: application/json registry entries]

Create a new application/json key in:

  • HKEY_CLASSES_ROOT\MIME\Database\Content Type\application/json
  • Add a string value of CLSID with a value of {25336920-03F9-11cf-8FD0-00AA00686F13}
  • Add a DWORD value of Encoding with a value of 80000

I can’t take credit for this tip – I found it first on Sky Sanders’ Blog. Note that the CLSID can be used for just about any type of text data you want to display as plain text in IE. It’s the in-place display mechanism and it should work for most text content. For example, it might also be useful for looking at CSS and JS files inside of the browser instead of downloading those documents as well.
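If you need to apply this on more than one machine it’s easy to script. Here’s a small C# sketch that mirrors the registry entries above – run it elevated. Note that the post lists the Encoding value as 80000, which I’m assuming is hex (0x80000); verify that value on your own system before relying on it:

using Microsoft.Win32;

class RegisterJsonMimeType
{
    static void Main()
    {
        // Mirrors the manual registry steps described above (requires admin rights)
        using (RegistryKey key = Registry.ClassesRoot.CreateSubKey(
            @"MIME\Database\Content Type\application/json"))
        {
            key.SetValue("CLSID", "{25336920-03F9-11cf-8FD0-00AA00686F13}");

            // "80000" as listed in the post, assumed to be hex
            key.SetValue("Encoding", 0x80000, RegistryValueKind.DWord);
        }
    }
}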

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  AJAX  

Error on 64 Bit Install of IIS – LoadLibraryEx failed on aspnet_filter.dll


I’ve been having a few problems with my Windows 7 install and trying to get IIS applications to run properly in 64 bit. After installing IIS and creating virtual directories for several of my applications and firing them up I was left with the following error message from IIS:

Calling LoadLibraryEx on ISAPI filter “c:\windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll” failed

[Screenshot: IIS ISAPI filter load error]

This is on Windows 7 64 bit and running on an ASP.NET 4.0 Application configured for running 64 bit (32 bit disabled). It’s also on what is essentially a brand new installation of IIS and Windows 7. So it failed right out of the box.

The problem here is that IIS is trying to load this ISAPI filter from the 32 bit folder – it should be loading it from the Framework64 folder, not the Framework folder. The aspnet_filter.dll component is a small Win32 ISAPI filter used to support cookieless session state for ASP.NET on IIS 7 applications. Given that narrow purpose it’s not terribly important, but it’s loaded by default.

After a lot of fiddling I ended up with two solutions (with the help and support of some Twitter folks):

  • Switch IIS to run in 32 bit mode
  • Fix the filter listing in ApplicationHost.config

Switching IIS to allow 32 Bit Code

This is a quick fix for the problem above: enabling 32 bit code in the Application Pool. IIS is trying to load a 32 bit ISAPI filter, and enabling 32 bit code gets you around that. To configure your Application Pool, open it in IIS Manager, bring up Advanced Settings and set Enable 32-Bit Applications to True:

[Screenshot: Enable 32-Bit Applications in Application Pool Advanced Settings]

And voila the error message above goes away.

Fix Filters

Enabling 32 bit code is a quick fix for this problem, but not an ideal one. If you’re running a pure .NET application that doesn’t need COM or P/Invoke interop with 32 bit components, there’s usually no need for enabling 32 bit code in an Application Pool, as you can run in native 64 bit code. So getting 64 bit working natively is a pretty key feature in my opinion :-)

So what’s the problem – why is IIS trying to load a 32 bit DLL in a 64 bit install, especially if the application pool is configured to not allow 32 bit code at all? The problem lies in the server configuration and the fact that 32 bit and 64 bit configuration settings exist side by side in IIS. If I open my Default Web Site (or any other root Web Site) and go to the ISAPI filter list here’s what I see:

[Screenshot: ISAPI Filters list with three ASP.NET 4.0 entries]

Notice that there are 3 entries for ASP.NET 4.0 in this list, but only two of them are specifically scoped to 32 bit or 64 bit. As you can see, the 64 bit filter correctly points at the Framework64 folder to load the DLL, while both the 32 bit and the ‘generic’ entry point at the plain Framework 32 bit folder.

Aha! Herein lies our problem.

You can edit ApplicationHost.config manually, but I ran into the nasty issue of not being able to easily edit that file with a 32 bit editor (whoever thought that was a good idea???? WTF). You have to open ApplicationHost.config in a native 64 bit text editor – which Visual Studio is not, and neither is my favorite editor, EditPad Pro. Since I don’t have a native 64 bit editor handy, Notepad was my only choice.

As an alternative you can use the IIS 7.5 Configuration Editor, which lets you interactively browse and edit most ApplicationHost settings. You can drill into the configuration hierarchy visually to find your keys and edit attributes and sub-values in a property editor style interface. I had no idea this tool existed prior to today, and it’s pretty cool, as it gives you some visual clues to the options available – especially in the absence of an Intellisense schema like the one you’d get in Visual Studio (which doesn’t work here).

To use the Configuration Editor, go to the Web site root and use the Configuration Editor option in the Management group. Drill into system.webServer/isapiFilters and then click on the Collection’s … button on the right. You should now see a display like this:

[Screenshot: Configuration Editor showing the isapiFilters collection]

which shows all the same attributes you’d see in ApplicationHost.config (cool!). These entries correspond to these raw ApplicationHost.config entries:

<filter name="ASP.Net_4.0" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0" />
<filter name="ASP.Net_4.0_64bit" path="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness64" />
<filter name="ASP.Net_4.0_32bit" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness32" />

The key attribute we’re concerned with here is preCondition and its bitness value. Notice that the ‘generic’ version – which comes first in the filter list – has no bitness assigned to it, so it defaults to 32 bit and the 32 bit DLL path. And this is where our problem comes from.

The simple solution to fix the startup problem is to remove the generic entry – here or in the filters list shown earlier – and leave only the bitness-specific versions active.
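With the generic entry removed, the filter collection holds only the two bitness-specific entries, which in ApplicationHost.config look like this:

<filter name="ASP.Net_4.0_64bit" path="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness64" />
<filter name="ASP.Net_4.0_32bit" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness32" />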

The preCondition attribute acts as a filter, and as you can see here it filters the list by runtime version and bitness value. This is something to keep an eye out for in general: if a bitness value is missing it’s easy to run into conflicts like this with any settings that are global, especially those that load modules, handlers and other executable code. On 64 bit systems it’s a good idea to explicitly set the bitness of all entries, or to remove the non-specific versions and add bit-specific entries.

So how did this get misconfigured?

I installed IIS before everything else was installed on this machine and then went ahead and installed Visual Studio. I suspect the Visual Studio install munged this up as I never saw a similar problem on my live server where everything just worked right out of the box.

In searching on this problem a lot of solutions pointed at running aspnet_regiis -r from the Framework64 directory, but that did not fix this extra entry in the filters list – it adds the required 32 bit and 64 bit entries, but it doesn’t remove the errant entry with no bitness set.

Hopefully this post will help out anybody who runs into a similar situation, without having to troubleshoot all the way down into the configuration settings before noticing the bitness values. It’s a good lesson learned for me – this is my first desktop install of a 64 bit OS, and issues like this are what I was wary of finding. Now that I’ve run into it, I have a good idea what to look for in 32/64 bit misconfigurations in IIS at least.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in IIS7   ASP.NET  

Rounded Corners and Shadows – Dialogs with CSS


Well, it looks like we’ve finally arrived at a place where the latest versions of all the mainstream browsers support rounded corners and box shadows. The two CSS properties that make this possible are box-shadow and border-radius. Both of these CSS properties are now supported in all the major browsers, as shown in this chart from QuirksMode:

[Chart: browser support for box-shadow and border-radius (QuirksMode)]

In its simplest form you can use box-shadow and border-radius like this:

.boxshadow 
{
  -moz-box-shadow: 3px 3px 5px #535353;
  -webkit-box-shadow: 3px 3px 5px #535353;       
  box-shadow: 3px 3px 5px #535353;
}
.roundbox
{  
  -moz-border-radius: 6px 6px 6px 6px;
  -webkit-border-radius: 6px;  
  border-radius: 6px 6px 6px 6px;
}

box-shadow: horizontal-shadow-pixels vertical-shadow-pixels blur-distance shadow-color

box-shadow attributes specify the horizontal and vertical offset of the shadow, the blur distance (to give the shadow a smooth, soft look) and a shadow color. The spec also supports multiple shadows separated by commas using the attributes above, but we’re not using that functionality here.

border-radius: top-left-radius top-right-radius bottom-right-radius bottom-left-radius

border-radius takes a pixel size for the radius of each corner, going clockwise from top left. CSS 3 also specifies individual corner properties such as border-top-left-radius, but support for these is much less prevalent, so I would recommend avoiding them for now until support improves. Instead use the single border-radius to specify all corners.

Browser specific Support in older Browsers

Notice that there are two variations: the actual CSS 3 properties (box-shadow and border-radius) and the browser-specific ones (-moz, -webkit prefixes for FireFox and Chrome/Safari respectively), which work in slightly older versions of modern browsers that added these features before official CSS 3 support. The goal is to spread support as widely as possible, and the prefixed versions extend the range slightly to browsers that provided early support for these features. Notice that box-shadow and border-radius are listed after the browser-specific versions to ensure the standard versions get precedence if the browser supports both (last assignment wins).

Use the .boxshadow and .roundbox Styles in HTML

To use these two styles to create a simple rounded box with a shadow, you can use HTML like this:

<!-- Simple Box with rounded corners and shadow -->
<div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue">              
    <div class="boxcontenttext">
        Simple Rounded Corner Box.
    </div>
</div>

which looks like this in the browser:

[Screenshot: rounded corner box with shadow]

This works across browsers and it’s pretty sweet and simple.

Watch out for nested Elements!

There are a couple of things to be aware of when using rounded corners, however. Specifically, you need to be careful when you nest non-transparent content inside the rounded box. For example, check out what happens when I change the inside <div> to have a colored background:

<!-- Simple Box with rounded corners and shadow -->
<div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue">              
    <div class="boxcontenttext" style="background: khaki;">
        Simple Rounded Corner Box.
    </div>
</div>

which renders like this:

[Screenshot: inner div corners poking out of the rounded box]

If you look closely you’ll find that the inside <div>’s corners are not rounded and so ‘poke out’ slightly over the rounded corners. It looks like the rounded corners are broken up instead of forming a solid rounded line around the corner, which is pretty ugly. The bigger the radius, the more drastic this effect becomes.

To fix this issue the inner <div> also has to have rounded corners, at the same or a slightly smaller radius than the outer <div>. The simple fix is to also apply the roundbox style to the inner <div>, in addition to the boxcontenttext style already applied:

<div class="boxcontenttext roundbox" style="background: khaki;">

The fixed display now looks proper:

[Screenshot: fixed box with rounded inner div]

Separate Top and Bottom Elements

This gets even a little more tricky if you have an element at only the top or bottom of the rounded box. What if you need to add something like a header or footer <div> with a non-transparent background – a pretty common scenario? In those cases you want only the top or bottom corners rounded, not both. To make this work, a couple of additional styles can be created that round only the top or bottom corners:

.roundbox-top
{    
    -moz-border-radius: 4px 4px 0 0;
    -webkit-border-radius: 4px 4px 0 0;    
    border-radius: 4px 4px 0 0;
}
.roundbox-bottom
{    
    -moz-border-radius: 0 0 4px 4px;
    -webkit-border-radius: 0 0 4px 4px;
    border-radius: 0 0 4px 4px;
}

Notice that the radius used for the ‘inside’ rounding is smaller (4px) than the outside radius (6px). This is so the inner radius fills into the outer border – if you use the same size you may see some white space between the inner and outer rounded corners. Experiment with values to see what works – in my experimenting the behavior here is consistent across browsers (thankfully).

These styles can be applied in addition to other styles to make only the top or bottom portions of an element rounded. For example imagine I have styles like this:

.gridheader, .gridheaderbig, .gridheaderleft, .gridheaderright
{    
    padding: 4px 4px 4px 4px;
    background:  #003399 url(images/vertgradient.png) repeat-x;
    text-align: center;
    font-weight: bold;
    text-decoration: none;
    color: khaki;
}
.gridheaderleft
{
    text-align: left;
}
.gridheaderright
{
    text-align: right;
}
.gridheaderbig
{    
    font-size: 135%;
}

If I just apply, say, gridheaderleft by itself in HTML like this:

<div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue">              
    <div class="gridheaderleft">Box with a Header</div>
    <div class="boxcontenttext" style="background: khaki;">
        Simple Rounded Corner Box.
    </div>
</div>

This results in a pretty funky display – again due to the fact that the inner elements render square rather than rounded corners:

[Screenshot: box with square header and footer corners]

If you look closely again you can see that both the header and the main content have square edges, which jumps out at the eye. To fix this you can apply roundbox-top and roundbox-bottom to the header and content respectively:

<div class="roundbox boxshadow" style="width: 550px; border: solid 2px steelblue">              
    <div class="gridheaderleft roundbox-top">Box with a Header</div>
    <div class="boxcontenttext roundbox-bottom" style="background: khaki;">
        Simple Rounded Corner Box.
    </div>
</div>

Which now gives the proper display with rounded corners both on the top and bottom:

[Screenshot: box with properly rounded header and footer]

All of this is sweet to have supported – at least by the newest browsers – without having to resort to images and nasty JavaScript solutions. While this is not yet a mainstream feature for the majority of installed browsers, the majority of browser users are very likely to have this support, as most browsers other than IE actively push users to upgrade to newer versions. Since this is a visual-display-only feature, it degrades reasonably well in non-supporting browsers: you get an uninteresting square and non-shadowed box, but the display is still functional overall.

The main sticking point – as always – is Internet Explorer versions 8.0 and down, as well as older versions of other browsers. With those browsers you get a functional view that is a little less interesting to look at, obviously:

[Screenshot: plain square box in a non-supporting browser]

but at least it’s still functional. Maybe that’s just one more incentive for people using older browsers to upgrade to a  more modern browser :-)

Creating Dialog Related Styles

In a lot of my AJAX based applications I use pop up windows which effectively work like dialogs. Using the simple CSS behaviors above, it’s really easy to create some fairly nice looking overlaid windows with nothing but CSS.

Here’s what a typical ‘dialog’ I use looks like:

[Screenshot: CSS-only dialog box]

The beauty of this is that it’s plain CSS – no plug-ins or images (other than the gradients which are optional) required. Add jQuery-ui draggable (or ww.jquery.js as shown below) and you have a nice simple inline implementation of a dialog represented by a simple <div> tag.

Here’s the HTML for this dialog:

<div id="divDialog" class="dialog boxshadow" style="width: 450px;">
    <div class="dialog-header">
        <div class="closebox"></div>
        User Sign-in
    </div>
            
    <div class="dialog-content">
            
        <label>Username:</label>
        <input type="text" name="txtUsername" value=" " />

        <label>Password</label>
        <input type="text" name="txtPassword" value=" " />
                
        <hr />
                
        <input type="button" id="btnLogin" value="Login" />            
    </div>

    <div class="dialog-statusbar">Ready</div>
</div>

Most of this behavior is driven by the ‘dialog’ styles, which are fairly basic and easy to understand. They do use a few support images for the gradients, which are included in the sample. Here’s what the CSS looks like:

.dialog
{
  background: White;
  overflow: hidden;
  border: solid 1px steelblue;    
  -moz-border-radius: 6px 6px 4px 4px;
  -webkit-border-radius: 6px 6px 4px 4px;
  border-radius: 6px 6px 3px 3px;    
}
.dialog-header
{    
    background-image: url(images/dialogheader.png);    
    background-repeat: repeat-x;
    text-align: left;
    color: cornsilk;
    padding: 5px;    
    padding-left: 10px;
    font-size: 1.02em;
    font-weight: bold;    
    position: relative;
    -moz-border-radius: 4px 4px 0px 0px;   
    -webkit-border-radius: 4px 4px 0px 0px;       
    border-radius: 4px 4px 0px 0px;       
}
.dialog-top
{
    -moz-border-radius: 4px 4px 0px 0px;   
    -webkit-border-radius: 4px 4px 0px 0px;       
    border-radius: 4px 4px 0px 0px;       
}
.dialog-bottom
{
    -moz-border-radius: 0 0 3px 3px;   
    -webkit-border-radius: 0 0 3px 3px;   
    border-radius: 0 0 3px 3px;   
}
.dialog-content
{
    padding: 15px;
}
.dialog-statusbar, .dialog-toolbar
{
    background: #eeeeee;
    background-image: url(images/dialogstrip.png);
    background-repeat: repeat-x;
    padding: 5px;
    padding-left: 10px;
    border-top: solid 1px silver;
    border-bottom: solid 1px silver;
    font-size: 0.8em;
}
.dialog-statusbar
{
    -moz-border-radius: 0 0 3px 3px;   
    -webkit-border-radius: 0 0 3px 3px;   
    border-radius: 0 0 3px 3px;   
    padding-right: 10px;
}
.closebox
{
    position: absolute;        
    right: 2px;
    top: 2px;
    background-image: url(images/close.gif);
    background-repeat: no-repeat;
    width: 14px;
    height: 14px;
    cursor: pointer;        
    opacity: 0.60;
    filter: alpha(opacity=60);
} 
.closebox:hover 
{
    opacity: 1;
    filter: alpha(opacity=100);
}

The main style is the dialog class, which is the outer box. It has the rounded border that serves as the outline. Note that I didn’t add the box-shadow to this style, because in some situations I just want the rounded box in an inline display that doesn’t have a shadow, so the shadow is still applied separately. dialog-header then has the rounded top corners and displays a typical dialog heading format. dialog-bottom and dialog-top provide the same functionality as roundbox-top and roundbox-bottom described earlier, but are included in the stylesheet mainly for consistency, to match the dialog’s rounded edges and to make them easier to remember and find in Intellisense, since they show up in the same dialog- group.

dialog-statusbar and dialog-toolbar are two elements I use a lot for floating windows – the toolbar typically serves for buttons, options and filters, while the status bar provides information specific to the floating window. Since the status bar is always at the bottom of the dialog, it automatically handles the rounding of the bottom corners.

Finally there’s the closebox style, which is meant to be applied to an empty <div> tag, typically in the header. It renders a close image that is low-lighted by default with a low opacity value and highlights when hovered over. All you have to do to handle the close operation is handle the onclick of the <div>. Note that the <div> right-aligns, so you should typically specify it before any other content in the header.

Speaking of closable – some time ago I created a closable jQuery plug-in that basically automates this process and can be applied against ANY element in a page, automatically removing or closing the element with some simple script code. Using this plug-in you can leave out the closebox <div> tag and just do the following:

To make the above dialog closable (and draggable), which makes it effectively an overlay window, you add jQuery.js and ww.jquery.js to the page:

<script type="text/javascript" src="../../scripts/jquery.min.js"></script>
<script type="text/javascript" src="../../scripts/ww.jquery.min.js"></script>      

and then simply call:

<script type="text/javascript">
    $(document).ready(function () {
        $("#divDialog")
            .draggable({ handle: ".dialog-header" })
            .closable({ handle: ".dialog-header",
                closeHandler: function () {
                    alert("Window about to be closed.");
                    return true;  // true closes - false leaves open
                }
            });
    });
</script>        

* ww.jquery.js emulates the base features of jQuery-ui’s draggable. If jQuery-ui is loaded, its draggable version will be used instead.

and voila – you now have a draggable and closable window, shown here in mid-drag:

[Screenshot: dialog mid-drag]

The dragging and closing behaviors are of course optional, but they’re the final touch that provides dialog-like window behavior.

Relief for older Internet Explorer Versions with CSS Pie

If you want these features to work with older versions of Internet Explorer, all the way back to version 6, you can check out CSS Pie. CSS Pie provides an Internet Explorer behavior file that attaches to specific CSS rules and simulates these behaviors using script code in IE (mostly by applying filters). You simply add the behavior to each CSS style that uses box-shadow and border-radius, like this:

.boxshadow
{
    -moz-box-shadow: 3px 3px 5px #535353;
    -webkit-box-shadow: 3px 3px 5px #535353;
    box-shadow: 3px 3px 5px #535353;
    behavior: url(scripts/PIE.htc);
}
.roundbox
{
    -moz-border-radius: 6px 6px 6px 6px;
    -webkit-border-radius: 6px;
    border-radius: 6px 6px 6px 6px;
    behavior: url(scripts/PIE.htc);
}

CSS Pie requires the PIE.htc file on your server, referenced from each CSS style that needs it. Note that the url() for IE behaviors is NOT CSS-file relative like other CSS resources, but rather PAGE relative, so if you have more than one folder you probably need to reference the HTC file with a fixed path like this:

behavior: url(/MyApp/scripts/PIE.htc);

in the style. Small price to pay, but a royal pain if you have a common CSS file you use in many applications.

Once the PIE.htc file has been copied and you have applied the behavior to each style that uses these new features Internet Explorer will render rounded corners and box shadows! Yay!

Hurray for box-shadow and border-radius

All of this functionality is very welcome natively in the browser. If you think this is all frivolous visual candy, you might be right :-), but if you search the Web for rounded corner solutions that predate these CSS attributes you’ll find a boatload of stuff, from image files to custom-drawn content to JavaScript solutions that play tricks with a few images. It’s sooooo much easier to have this functionality built in, and I for one am glad to see it finally becoming standard in the box.

Still, remember that these new CSS features are not universal, and won’t be for some time. Legacy browsers, especially old versions of Internet Explorer that can’t be updated, will continue to be around and won’t work with this shiny new stuff. I say screw ‘em: let them get a decent recent browser or see a degraded and ugly UI. The luxury with this functionality is that it doesn’t typically affect usability – it just doesn’t look as nice.

Resources

  • Download the Sample
    The sample includes the styles and images and sample page as well as ww.jquery.js for the draggable/closable example.
  • Online Sample
    Check out the sample described in this post online.
  • Closable and Draggable Documentation
    Documentation for the closeable and draggable plug-ins in ww.jquery.js. You can also check out
    the full documentation for all the plug-ins contained in ww.jquery.js here.
© Rick Strahl, West Wind Technologies, 2005-2011
Posted in HTML  CSS  

WinInet Apps failing when Internet Explorer is set to Offline Mode


Ran into a nasty issue last week when all of a sudden many of my old applications that use WinInet for HTTP access started failing. Specifically, the WinInet HttpSendRequest() call started failing with an error code of 2, which when looked up boils down to:

WinInet Error 2: The system cannot find the file specified

Now this error can pop up in many legitimate scenarios with WinInet, such as when no Internet connection is available or when the HTTP configuration (usually managed via Internet Explorer’s options) is invalid. The error typically means that the server in question cannot be found or, more specifically, that an Internet connection can’t be established.

In this case the problem started suddenly and was causing some of my own applications (old Visual FoxPro apps using my own wwHttp library), all Adobe Air applications (which apparently use WinInet for their basic HTTP stack), and a few more oddball applications to fail instantly when trying to connect via HTTP. Most other applications (all of my installed browsers, email clients, various social network updaters) worked just fine. It seems it was only WinInet apps that were failing. Yet oddly Internet Explorer appeared to be working.

So the problem seemed to be isolated to those 'classic' applications using WinInet. WinInet's base configuration comes from the Internet Explorer options dialog. To check this out I typically go to the Internet Explorer options, find the Connections tab, and check out the LAN Setup to make sure there are no rogue proxy settings or invalid configuration scripts. Trying with auto-configuration on and off can also often fix 'real' configuration errors. This time however this wasn't the problem: nothing in the LAN configuration was set (all default). I also played with automatic detection of settings, which had no effect either.

I also tried to use Fiddler to see if that would tell me something. Fiddler has a few additional WinInet configuration options in its settings. But running Fiddler and firing an HTTP request using WinInet would never actually hit Fiddler; the failure occurred before WinInet ever opened the HTTP connection to go through the Fiddler HTTP proxy.

And the Culprit is: Internet Explorer’s Work Offline Option

The culprit in this situation was Internet Explorer, which at some point, unknown to me, had switched into Offline mode and was then shut down:

[Screenshot: Internet Explorer's menu with the Work Offline option checked]

When this Offline mode is checked while IE is running *or* if IE gets shut down with this flag set, all applications using WinInet by default assume they're running in offline mode. Depending on your caching HTTP headers and whether the page was cached previously you may or may not get a response or an error. For an independent non-browser application this will be highly unpredictable and likely result in failures getting online, especially if the application forces requests to always reload by disabling HTTP caching (as I do on most of my dynamic HTTP clients).

What makes this especially tricky is that even when IE is in offline mode in the browser, you can still browse around the Web *if* you have a connection. IE will try to load anything it has cached from the local cache, but as soon as you hit a URL that isn't cached it will automatically try to access that URL and uncheck the Work Offline option. Conversely if you get knocked off the Internet and browse in IE 9, IE will automatically go into offline mode. I never explicitly set offline mode; it just sets itself on and off depending on the connection. The problem is that if you're not using IE all the time (I rarely use it, mostly just for testing a few commonly used URLs) and you left it in offline mode when you exited, offline mode stays set, which results in the above head scratcher. Ack.

This isn’t new behavior in IE 9 BTW – this behavior has always been there, but I think what’s different is that IE now automatically switches between online and offline modes without notifying you at all, so it’s hard to tell when you are offline.

Fixing the Issue in your Code

If you have an application that is using WinInet, there's a WinInet option called INTERNET_OPTION_IGNORE_OFFLINE. I just checked this out in my own applications against Internet Explorer 9 and it works, but apparently it's been broken for some older releases (I can't confirm how far back though); lots of posts seem to suggest the flag doesn't work. However, in IE 9 at least it does seem to work if you call InternetSetOption on the HTTP session handle before you call HttpOpenRequest.

In FoxPro code I use:

DECLARE INTEGER InternetSetOption ;
   IN WININET.DLL ;
   INTEGER HINTERNET,;
   INTEGER dwFlags,;
   INTEGER @dwValue,;
   INTEGER cbSize

lnOptionValue = 1   && BOOL TRUE pass by reference

*** Tell WinInet to ignore IE's offline mode for this session
lnResult=InternetSetOption(this.hHttpSession,;
   INTERNET_OPTION_IGNORE_OFFLINE,;  && 77
   @lnOptionValue,4)

DECLARE INTEGER HttpOpenRequest ;
   IN WININET.DLL ;
   INTEGER hHTTPHandle,;
   STRING lpzReqMethod,;
   STRING lpzPage,;
   STRING lpzVersion,;
   STRING lpzReferer,;
   STRING lpzAcceptTypes,;
   INTEGER dwFlags,;
   INTEGER dwContext

hHTTPResult=HttpOpenRequest(THIS.hHttpsession,;
   lcVerb,;
   tcPage,;
   NULL,NULL,NULL,;
   INTERNET_FLAG_RELOAD + ;
   IIF(THIS.lsecurelink,INTERNET_FLAG_SECURE,0) + ;
   this.nHTTPServiceFlags,0)

…

And this fixes the issue at least for IE 9…
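
If you're calling WinInet from .NET via P/Invoke, the same fix should apply there. Here's a minimal C# sketch, assuming the session handle comes from your own InternetOpen call:

using System;
using System.Runtime.InteropServices;

public static class WinInetOfflineFix
{
    // Option constant from WinInet.h
    const int INTERNET_OPTION_IGNORE_OFFLINE = 77;

    [DllImport("wininet.dll", SetLastError = true)]
    static extern bool InternetSetOption(IntPtr hInternet, int dwOption,
                                         ref int lpBuffer, int dwBufferLength);

    // Tells the WinInet session to ignore IE's global offline mode.
    // Call this on the session handle before HttpOpenRequest/HttpSendRequest.
    public static bool IgnoreOfflineMode(IntPtr hSession)
    {
        int enable = 1;  // BOOL TRUE, passed by reference
        return InternetSetOption(hSession, INTERNET_OPTION_IGNORE_OFFLINE,
                                 ref enable, sizeof(int));
    }
}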

In my FoxPro wwHttp class I now call this by default so I never get bitten by this again… This solves the problem permanently for my HTTP client. I never want to see offline operation in an HTTP client API: it's just too unpredictable in handling errors, and the last thing you want is stale data coming back unexpectedly. Problem solved, but this behavior is, well, ugly. But then that's to be expected from an API that's based on Internet Explorer, eh?

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in HTTP  Windows  

Restricting Input in HTML Textboxes to Numeric Values


Ok, here's a fairly basic one: how to force a textbox to accept only numeric input. Somebody asked me this today on a support call so I did a few quick lookups online and found the solutions listed rather unsatisfying. The main problem with most of the examples I could dig up was that they allow only the numeric keys, which makes for a rather lame user experience. You still need the basic operational keys for a textbox (navigation keys, backspace and delete, tab/shift-tab and the Enter key) to work, or else the textbox will feel very different from a standard text box.

Yes, there are plug-ins that handle masked input easily enough, but most are fixed-width, which is difficult to make work with plain number input. So I took a few minutes to write a small reusable plug-in that handles this scenario. Imagine you have a couple of textboxes on a form like this:

    <div class="containercontent">
    
         <div class="label">Enter a number:</div>
        <input type="text" name="txtNumber1" id="txtNumber1" value="" class="numberinput" />

         <div class="label">Enter a number:</div>
        <input type="text" name="txtNumber2" id="txtNumber2" value="" class="numberinput" />
    </div>

and you want to restrict input to numbers. Here’s a small .forceNumeric() jQuery plug-in that does what I like to see in this case:

[Updated thanks to Elijah Manor for a couple of small tweaks for additional keys to check for]

    <script type="text/javascript">
        $(document).ready(function () {
            $(".numberinput").forceNumeric();
        });


        // forceNumeric() plug-in implementation
        jQuery.fn.forceNumeric = function () {
            return this.each(function () {
                $(this).keydown(function (e) {
                    var key = e.which;

                    // digits are ok, but only without shift/alt/ctrl held down
                    if ((!e.shiftKey && !e.altKey && !e.ctrlKey &&
                        key >= 48 && key <= 57) ||
                    // Backspace, Tab and Enter
                        key == 8 || key == 9 || key == 13 ||
                    // End and Home
                        key == 35 || key == 36 ||
                    // left and right arrows
                        key == 37 || key == 39 ||
                    // Del and Ins
                        key == 46 || key == 45)
                        return true;

                    return false;
                });
            });
        }
    </script>

With the plug-in in place in your page or an external .js file you can now simply use a selector to apply it:

$(".numberinput").forceNumeric();

The plug-in basically goes through each selected element and hooks up a keydown() event handler. When a key is pressed the handler fires and the key code, which jQuery normalizes into the event's which property across browsers, is checked. The code basically white-lists a few key codes and rejects all others. It returns true to let the key press go through, or false to eat the keystroke and not process it, which effectively suppresses the input.

Simple and low tech, and it works without too much change of typical text box behavior.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in JavaScript  jQuery  HTML  

ASP.NET GZip Encoding Caveats


GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property. While applying GZip and Deflate behavior is pretty easy there are a few caveats that you have to watch out for, as I found out today myself with an application that was throwing up some garbage data. But before looking at caveats let's review GZip implementation for ASP.NET.

ASP.NET GZip/Deflate Basics

Response filters basically are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream and, if a filter is in place, the data is also written through the filter stream's interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages .NET includes the GZipStream and DeflateStream stream classes which can be readily assigned to the Response.Filter property.

With these two stream classes in place it's almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind Web Toolkit) I created two static utility methods, IsGZipSupported and GZipEncodePage, that check whether the client supports GZip encoding and then actually encode the current output (note that although the method includes 'Page' in its name this code will work with any ASP.NET output).

/// <summary>
/// Determines if GZip is supported
/// </summary>
/// <returns></returns>
public static bool IsGZipSupported()
{
    string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
    if (!string.IsNullOrEmpty(AcceptEncoding) &&
            (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
        return true;
    return false;
}

/// <summary>
/// Sets up the current page or handler to use GZip through a Response.Filter
/// IMPORTANT:  
/// You have to call this method before any output is generated!
/// </summary>
public static void GZipEncodePage()
{
    HttpResponse Response = HttpContext.Current.Response;

    if (IsGZipSupported())
    {
        string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
        if (AcceptEncoding.Contains("deflate"))
        {
            Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                        System.IO.Compression.CompressionMode.Compress);
            Response.AppendHeader("Content-Encoding", "deflate");
        }
        else
        {
            Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                        System.IO.Compression.CompressionMode.Compress);
            Response.AppendHeader("Content-Encoding", "gzip");                    
        }
    }            

    // Allow proxy servers to cache encoded and unencoded versions separately
    Response.AppendHeader("Vary", "Accept-Encoding");
}

As you can see the actual assignment of the Filter is as simple as:

Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress);

which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding and ensure that any pages that are cached in Proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding).

Using this utility function is now trivially easy: in any ASP.NET code that wants to compress its Response output you simply use:

protected void Page_Load(object sender, EventArgs e)
{            
    WebUtils.GZipEncodePage();

    Entry = WebLogFactory.GetEntry();

    var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount, "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries");
    if (entries == null)
        throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage);

    this.repEntries.DataSource = entries;
    this.repEntries.DataBind();

}

Here I use an ASP.NET page, but the above WebUtils.GZipEncode() method call will work in any ASP.NET application type including HTTP Handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation by default output over a certain size is GZip encoded. The output that is generated is JSON or XML and if the output is over 5k in size I apply WebUtils.GZipEncode():

if (sbOutput.Length > GZIP_ENCODE_TRESHOLD)
    WebUtils.GZipEncodePage();

Response.ContentType = ControlResources.STR_JsonContentType;
HttpContext.Current.Response.Write(sbOutput.ToString());

Ok, so you probably get the idea: Encoding GZip/Deflate content is pretty easy.

Hold on there Hoss – Watch your Caching

Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The first issue is that you need to deal with the fact that some clients don't support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic HTTP client accessing your content GZip/Deflate support is by no means guaranteed. For example, WinInet HTTP clients don't support GZip out of the box; it has to be explicitly implemented. Many low-level HTTP clients on other platforms don't support GZip out of the box either.
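
As an aside, if you control the client, explicit GZip support is usually just an opt-in away. For .NET's HttpWebRequest, for example, the AutomaticDecompression property both sends the Accept-Encoding header and transparently decompresses the response. A quick sketch (the URL is just a placeholder):

using System.IO;
using System.Net;

// Opt a .NET HTTP client into GZip/Deflate explicitly: this sends
// Accept-Encoding: gzip, deflate and decompresses the response stream.
var request = (HttpWebRequest)WebRequest.Create("http://example.com/");
request.AutomaticDecompression = DecompressionMethods.GZip |
                                 DecompressionMethods.Deflate;

using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string content = reader.ReadToEnd();
}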

The problem is that your application, your Web Server and Proxy Servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or worse the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients* which is also very hard to debug and fix once in production.

You already saw the issue of Proxy servers addressed in the GZipEncodePage() function:

// Allow proxy servers to cache encoded and unencoded versions separately
Response.AppendHeader("Vary", "Accept-Encoding");

This ensures that any proxy servers check the client's Accept-Encoding HTTP header when deciding which cached version of the content to serve, not just the URL. (Note that the Vary header names the request header the response varies by, which for compression is Accept-Encoding.)

The same thing applies if you do OutputCaching in your own ASP.NET code. If you generate output for GZip on an OutputCached page the GZipped content will be cached (either by ASP.NET's cache or in some cases by the IIS Kernel Cache). But what if the next client doesn't support GZip? She'll get served a cached GZip page that won't decode and she'll get a page full of garbage. Wholly undesirable. To fix this you need to add some custom OutputCache rules by way of the GetVaryByCustomString() HttpApplication method in your global.asax file:

public override string GetVaryByCustomString(HttpContext context, string custom)
{
    if (custom == "GZIP")
    {
        string acceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
        if (string.IsNullOrEmpty(acceptEncoding))
            return "";
        if (acceptEncoding.Contains("deflate"))
            return "Deflate";
        if (acceptEncoding.Contains("gzip"))
            return "GZip";

        return "";
    }

    return base.GetVaryByCustomString(context, custom);
}

In a page that uses Output caching you then specify:

<%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %>

To use that custom rule.

It’s all Fun and Games until ASP.NET throws an Error

Ok, so you're up and running with GZip, you have your caching squared away and the pages you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page that looks like this:

[Screenshot: the browser rendering the raw GZip-encoded response as garbage characters]

Lovely isn’t it?

What’s happened here is that I have WebUtils.GZipEncode() applied to my page, but there’s an error in the page. The error falls back to the ASP.NET error handler and the error handler removes all existing output (good) and removes all the custom HTTP headers I’ve set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncode) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here’s what Fiddler says about the raw HTTP header output when an error occurs when GZip encoding was applied:

HTTP/1.1 500 Internal Server Error
Cache-Control: private
Content-Type: text/html; charset=utf-8
Date: Sat, 30 Apr 2011 22:21:08 GMT
Content-Length: 2138
Connection: close

�`I�%&/m�{J�J��t��` … binary output stripped here

Notice: no Content-Encoding header and that’s why we’re seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact.

So how do we fix this? In my applications I typically have a global Application_Error handler set up and in this case I’ve been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top:

protected void Application_Error(object sender, EventArgs e)
{
        // Remove any special filtering especially GZip filtering
        Response.Filter = null;
}

And voila I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for Page level errors handled in Page_Error or ASP.NET MVC Error handling methods in a controller.

Another and possibly even better solution is to check whether a filter is attached just before the headers are sent to the client as pointed out by Adam Schroeder in the comments:

 protected void Application_PreSendRequestHeaders()
{
    // ensure that if GZip/Deflate Encoding is applied that headers are set
    // also works when error occurs if filters are still active
    HttpResponse response = HttpContext.Current.Response;
    if (response.Filter is GZipStream && response.Headers["Content-encoding"] != "gzip")
        response.AppendHeader("Content-encoding", "gzip");
    else if (response.Filter is DeflateStream && response.Headers["Content-encoding"] != "deflate")
        response.AppendHeader("Content-encoding", "deflate");
}

This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the headers accordingly. This is actually a better solution since it's generic: it'll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or a short error display might reset the response without clearing the filter, and this code handles that case as well. Sweet, thanks Adam.

It’s unfortunate that ASP.NET doesn’t natively clear out Response.Filters when an error occurs just as it clears the Response and Headers. I can’t see where leaving a Filter in place in an error situation would make any sense, but hey - this is what it is and it’s easy enough to fix as long as you know where to look. Riiiight!

IIS and GZip

I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding to let IIS perform it for you rather than doing it in your code, by all means you should! Especially any static or semi-dynamic content that can be made static should use IIS's built-in compression. Dynamic compression is also supported but is a bit more tricky to judge in terms of performance and footprint. Scott Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I'll post another entry next with some more info on IIS compression since information on it seems to be a bit hard to come by.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET   IIS7  

Built-in GZip/Deflate Compression on IIS 7.x


IIS 7 improves internal compression functionality dramatically, making it much easier than in previous versions to take advantage of compression that's built into the Web server. IIS 7 also supports dynamic compression, which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing and so it works with any kind of Web application framework.

While static compression on IIS 7 is super easy to set up and turned on by default for most text content (text/*, which includes HTML and CSS, as well as for JavaScript, Atom, XAML, XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS –> ASP.NET hierarchy.

Let’s take a look at each of the two approaches available:

  • Static Compression
    Compresses static content from the hard disk. IIS can cache this content by compressing the file once, storing the compressed file on disk, and serving that compressed file whenever the static content is requested and it hasn't changed. The overhead for this is minimal and it should be aggressively enabled.
  • Dynamic Compression
    Works against application-generated output from applications like your ASP.NET apps. Unlike static content, dynamic content must be compressed every time a page that requests it regenerates its content. As such dynamic compression has a much bigger performance impact than static compression.

How Compression is configured

Compression in IIS 7.x is configured with two .config file elements in the <system.webServer> space. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline all the way from ApplicationHost.config down to the local web.config file. The following is from the default settings in ApplicationHost.config (in the %windir%\System32\inetsrv\config folder) on IIS 7.5 with a couple of small adjustments (added json output and enabled dynamic compression):

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    
    <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
      <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />
      <dynamicTypes>
        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/x-javascript" enabled="true" />
        <add mimeType="application/json" enabled="true" />
        <add mimeType="*/*" enabled="false" />
      </dynamicTypes>
      <staticTypes>
        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/x-javascript" enabled="true" />
        <add mimeType="application/atom+xml" enabled="true" />
        <add mimeType="application/xaml+xml" enabled="true" />
        <add mimeType="*/*" enabled="false" />
      </staticTypes>
    </httpCompression>
    
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
    
  </system.webServer>
</configuration>

You can find documentation on the httpCompression and urlCompression keys here respectively:

http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx

http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx

The httpCompression Element – What and How to compress

Basically httpCompression configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are matched on mime types by looking at the Content-Type headers returned in HTTP responses. For example, I added the application/json mime type to my dynamic compression types above to allow that content to be compressed as well, since I have quite a bit of AJAX content that gets sent to the client.

The UrlCompression Element – Enables and Disables Compression

The urlCompression element is a quick way to turn compression on and off. By default static compression is enabled server wide, and dynamic compression is disabled server wide. This might be a bit confusing because the httpCompression element also has a doDynamicCompression attribute which is set to true by default, but the urlCompression attribute by the same name actually overrides it.

The urlCompression element only has three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. The do*Compression attributes are the final determining factor for whether compression is enabled, so it's a good idea to be explicit! The default is doDynamicCompression="false", but doStaticCompression="true"!

Static Compression is enabled by Default, Dynamic Compression is not

Because static compression is very efficient in IIS 7 it’s enabled by default server wide and there probably is no reason to ever change that setting. Dynamic compression however, since it’s more resource intensive, is turned off by default. If you want to enable dynamic compression there are a few quirks you have to deal with, namely that enabling it in ApplicationHost.config doesn’t work. Setting:

<urlCompression doDynamicCompression="true" />

in applicationhost.config appears to have no effect and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice because you’re not likely to want dynamic compression in every application on a server. Rather dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in applicationhost.config doesn’t work (or more likely is overridden somewhere and disabled lower in the configuration hierarchy).

So: remember to set doDynamicCompression="true" in web.config!!!

How Static Compression works

Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently – such as .js, .css and static HTML content – it’s fairly easy for IIS to compress and then cache the compressed content. The way this works is that IIS compresses the files into a special folder on the server’s hard disk and then reads the content from this location if already compressed content is requested and the underlying file resource has not changed. The semantics of serving an already compressed file are very efficient – IIS still checks for file changes, but otherwise just serves the already compressed file from the compression folder.

The compression folder is located at:

%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files\<ApplicationPoolName>\

If you look into the subfolders you’ll find compressed files:

[Screenshot: Explorer view of the IIS Temporary Compressed Files folder showing the pre-compressed files]

These files are pre-compressed and IIS serves them directly to the client until the underlying files are changed.

As I mentioned before – static compression is on by default and there’s very little reason to turn that functionality off as it is efficient and just works out of the box. The one tweak you might want to do is to set the compression level to maximum. Since IIS only compresses content very infrequently it would make sense to apply maximum compression. You can do this with the staticCompressionLevel setting on the scheme element:

<scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

Other than that the default settings are probably just fine.

Dynamic Compression – not so fast!

By default dynamic compression is disabled and that's actually quite sensible; you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn't make sense to compress *all* generated content as it would generate a significant amount of overhead. Scott Forsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is you can play around with compression and see what impact it has on your server's performance.

There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically the httpCompression key has a couple of CPU related keys that can help minimize the impact of Dynamic Compression on a busy server:

  • dynamicCompressionDisableCpuUsage
  • dynamicCompressionEnableCpuUsage

By default these are set to 90 and 50, which means that when CPU usage hits 90% compression will be disabled until utilization drops back down to 50%. Again this is actually quite sensible as it uses CPU power for compression when it's available and backs off when the threshold has been hit. It's a good way to use some of that extra CPU power on your big servers when utilization is low. These settings are something you likely have to play with. I would probably set the upper limit a little lower than 90%, maybe around 70%, to make this a feature that kicks in only if there's lots of power to spare. I'm also not really sure how accurate the CPU readings that IIS uses are, as CPU usage on Web servers can spike drastically even during low loads. Don't trust settings: do some load testing or monitor your server in a live environment to see what values make sense for your environment.
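
If you'd rather script these thresholds than edit the config files by hand, the Microsoft.Web.Administration API can set them. This is just a sketch, assuming you run it elevated on the server; the 70/50 values are illustrative, not a recommendation:

using Microsoft.Web.Administration;  // requires a reference to Microsoft.Web.Administration.dll

using (var serverManager = new ServerManager())
{
    var config = serverManager.GetApplicationHostConfiguration();
    var compression = config.GetSection("system.webServer/httpCompression");

    // Turn dynamic compression off above 70% CPU and back on below 50%
    compression["dynamicCompressionDisableCpuUsage"] = 70;
    compression["dynamicCompressionEnableCpuUsage"] = 50;

    serverManager.CommitChanges();
}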

Finally for dynamic compression I tend to add one Mime type for JSON data, since a lot of my applications send large chunks of JSON data over the wire. You can do that with the application/json content type:

<add mimeType="application/json" enabled="true" />

What about Deflate Compression?

The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions Deflate compression. And sure enough you can change the compression settings to:

<scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

to get deflate style compression. The deflate algorithm produces slightly more compact output so I tend to prefer it over GZip but more HTTP clients (other than browsers) support GZip than Deflate so be careful with this option if you build Web APIs.

I also had some issues with the above value actually being applied right away. Changing the scheme in applicationhost.config didn't show up on the site immediately; it required a full IISReset before I saw the change over to deflate compressed content. Content was slightly more compressed with deflate. I'm not sure if that's worth the slightly less common compression type, but the option at least is available.

IIS 7 finally makes GZip Easy

In summary IIS 7 finally makes GZip easy, even if the configuration settings are a bit obtuse and the documentation is seriously lacking. But once you know the basic settings I've described here, and that you can override all of this in your local web.config, it's pretty straightforward to configure GZip support and tweak it exactly to your needs.

Static compression is a total no brainer as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic compression is a little more tricky as it does add some overhead to servers, so it probably will require some tweaking to get the right balance of CPU load vs. compression ratios. Looking at large sites like Amazon, Yahoo, NewEgg etc., they all use it.


© Rick Strahl, West Wind Technologies, 2005-2011
Posted in IIS7   ASP.NET  


Web Browser Control – Specifying the IE Version


I use the Internet Explorer Web Browser Control in a lot of my applications to display document-style layouts. HTML happens to be one of the most common document formats, and displaying data in this format, even in desktop applications, is often way easier than using normal desktop technologies.

One issue the Web Browser Control has is that it's perpetually stuck in IE 7 rendering mode by default. Even though IE 8 and now 9 have significantly upgraded the IE rendering engine to be more CSS and HTML compliant, by default the Web Browser control will have none of it. IE 9 in particular, with its much improved CSS support and basic HTML 5 support, is a big improvement, and even though the IE control uses some of IE's internal rendering technology it's still stuck in the old IE 7 rendering by default.

This applies whether you’re using the Web Browser control in a WPF application, a WinForms app, a FoxPro or VB classic application using the ActiveX control. Behind the scenes all these UI platforms use the COM interfaces and so you’re stuck by those same rules.

Rendering Challenged

To see what I'm talking about, here are two screen shots rendering an HTML 5 doctype page that includes some CSS 3 functionality (rounded corners and border shadows) from an earlier post. One uses IE 9 as a standalone browser, and one uses a simple WPF form that includes the Web Browser control.

IE 9 Browser:

[Screenshot: the page in the standalone IE 9 browser, rounded corners and shadows rendered correctly]

Web Browser control in a WPF form:

[Screenshot: the same page in the Web Browser control hosted in a WPF form, rendered in quirks mode without the CSS 3 effects]

The IE 9 page displays this HTML correctly – you see the rounded corners and shadow displayed. Obviously the latter rendering using the Web Browser control in a WPF application is a bit lacking. Not only are the new CSS features missing but the page also renders in Internet Explorer’s quirks mode so all the margins, padding etc. behave differently by default, even though there’s a CSS reset applied on this page.

If you’re building an application that intends to use the Web Browser control for a live preview of some HTML this is clearly undesirable.

Feature Delegation via Registry Hacks

Fortunately, starting with Internet Explorer 8 there's a fix for this problem via a registry setting. You can add a registry value to specify which rendering mode and version of IE should be used by a given application. These are not global mind you; they have to be enabled for each application individually.

There are two different sets of keys for 32 bit and 64 bit applications.

32 bit:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION

Value Key: yourapplication.exe

64 bit:

HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION

Value Key: yourapplication.exe

The values to set this key to (taken from MSDN here), as decimals, are:

9999 (0x270F)
Internet Explorer 9. Webpages are displayed in IE9 Standards mode, regardless of the !DOCTYPE directive.

9000 (0x2328)
Internet Explorer 9. Webpages containing standards-based !DOCTYPE directives are displayed in IE9 mode.

8888 (0x22B8)
Webpages are displayed in IE8 Standards mode, regardless of the !DOCTYPE directive.

8000 (0x1F40)
Webpages containing standards-based !DOCTYPE directives are displayed in IE8 mode.

7000 (0x1B58)
Webpages containing standards-based !DOCTYPE directives are displayed in IE7 Standards mode.

 

The added key looks something like this in the Registry Editor:

[Screenshot: the Registry Editor showing the FEATURE_BROWSER_EMULATION key with a DWORD value for the application EXE]

With this in place my Html Help Builder application, which has wwhelp.exe as its main executable, now works with HTML 5 and CSS 3 documents in the same way that Internet Explorer 9 does.
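
If you'd rather set the key up from code when your application starts instead of (or in addition to) the installer, something along these lines should work. Note this is a sketch with a couple of assumptions: the HKCU hive also honors this feature (and avoids requiring admin rights), and the hardcoded 9000 value would need to be adapted to the installed IE version:

using System.Diagnostics;
using System.IO;
using Microsoft.Win32;

// Writes the FEATURE_BROWSER_EMULATION value for the current EXE
string exeName = Path.GetFileName(Process.GetCurrentProcess().MainModule.FileName);

using (var key = Registry.CurrentUser.CreateSubKey(
    @"Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION"))
{
    // 9000: IE9 mode for pages with a standards-based !DOCTYPE
    key.SetValue(exeName, 9000, RegistryValueKind.DWord);
}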

Incidentally I accidentally added an 'empty' DWORD value of 0 for my EXE name and that worked as well, giving me IE 9 rendering. Although it's not documented I suspect 0 (or an invalid value) defaults to the installed browser. I don't have a good way to test this, but if somebody could try it with IE 8 installed that would be great:

  • What happens when setting 9000 with IE 8 installed?
  • What happens when setting 0 with IE 8 installed?

Don’t forget to add Keys for Host Environments

If you’re developing your application in Visual Studio and you run the debugger you may find that your application is still not rendering right, but if you run the actual generated EXE from Explorer or the OS command prompt it works. That’s because when you run the debugger in Visual Studio it wraps your application into a debugging host container. For this reason you might want to also add another registry key for yourapp.vshost.exe on your development machine.

If you’re developing in Visual FoxPro make sure you add a key for vfp9.exe to see the rendering adjustments in the Visual FoxPro development environment.

Cleaner HTML - no more HTML mangling!

There are a number of additional benefits to setting up rendering of the Web Browser control to the IE 9 engine (or even the IE 8 engine) beyond the obvious rendering functionality. IE 9 actually returns your HTML in something that resembles the original HTML formatting, as opposed to the IE 7 default format which mangled the original HTML content.

If you do the following in the WPF application:

private void button2_Click(object sender, RoutedEventArgs e)
{
    dynamic doc = this.webBrowser.Document;

    MessageBox.Show(doc.body.outerHtml);
}

you get different output depending on the rendering mode active. With the default IE 7 rendering you get:


<BODY><DIV>
<H1>Rounded Corners and Shadows - Creating Dialogs in CSS</H1>
<DIV class=toolbarcontainer><A class=hoverbutton href="./"><IMG src="../../css/images/home.gif"> Home</A> <A class=hoverbutton href="RoundedCornersAndShadows.htm"><IMG src="../../css/images/refresh.gif"> Refresh</A> </DIV>
<DIV class=containercontent>
<FIELDSET><LEGEND>Plain Box</LEGEND><!-- Simple Box with rounded corners and shadow -->
<DIV style="BORDER-BOTTOM: steelblue 2px solid; BORDER-LEFT: steelblue 2px solid; WIDTH: 550px; BORDER-TOP: steelblue 2px solid; BORDER-RIGHT: steelblue 2px solid" class="roundbox boxshadow">
<DIV style="BACKGROUND: khaki" class="boxcontenttext roundbox">Simple Rounded Corner Box. </DIV></DIV></FIELDSET>
<FIELDSET><LEGEND>Box with Header</LEGEND>
<DIV style="BORDER-BOTTOM: steelblue 2px solid; BORDER-LEFT: steelblue 2px solid; WIDTH: 550px; BORDER-TOP: steelblue 2px solid; BORDER-RIGHT: steelblue 2px solid" class="roundbox boxshadow">
<DIV class="gridheaderleft roundbox-top">Box with a Header</DIV>
<DIV style="BACKGROUND: khaki" class="boxcontenttext roundbox-bottom">Simple Rounded Corner Box. </DIV></DIV></FIELDSET>
<FIELDSET><LEGEND>Dialog Style Window</LEGEND>
<DIV style="POSITION: relative; WIDTH: 450px" id=divDialog class="dialog boxshadow" jQuery16107208195684204002="2">
<DIV style="POSITION: relative" class=dialog-header>
<DIV class=closebox></DIV>User Sign-in
<DIV class=closebox jQuery16107208195684204002="3"></DIV></DIV>
<DIV class=descriptionheader>This dialog is draggable and closable</DIV>
<DIV class=dialog-content><LABEL>Username:</LABEL> <INPUT name=txtUsername value=" "> <LABEL>Password</LABEL> <INPUT name=txtPassword value=" ">
<HR>
<INPUT id=btnLogin value=Login type=button> </DIV>
<DIV class=dialog-statusbar>Ready</DIV></DIV></FIELDSET> </DIV>
<SCRIPT type=text/javascript>
    $(document).ready(function () {
        $("#divDialog")
            .draggable({ handle: ".dialog-header" })
            .closable({ handle: ".dialog-header",
                closeHandler: function () {
                    alert("Window about to be closed.");
                    return true;  // true closes - false leaves open
                }
            });
    });
</SCRIPT>
</DIV></BODY>

Now lest you think I'm out of my mind and create completely whacky HTML rooted in the last century, here's the IE 9 rendering mode output, which looks a heck of a lot cleaner and a lot closer to my original HTML for the page I'm accessing:

<body>
<div>
   
    <h1>Rounded Corners and Shadows - Creating Dialogs in CSS</h1>
    <div class="toolbarcontainer">
        <a class="hoverbutton" href="./"> <img src="../../css/images/home.gif"> Home</a>
        <a class="hoverbutton" href="RoundedCornersAndShadows.htm"> <img src="../../css/images/refresh.gif"> Refresh</a>
    </div>

   
    <div class="containercontent">

    <fieldset>
        <legend>Plain Box</legend>   
            <!-- Simple Box with rounded corners and shadow -->
            <div style="border: 2px solid steelblue; width: 550px;" class="roundbox boxshadow">             
                <div style="background: khaki;" class="boxcontenttext roundbox">
                    Simple Rounded Corner Box.
                </div>
            </div>
    </fieldset>

    <fieldset>
        <legend>Box with Header</legend>
        <div style="border: 2px solid steelblue; width: 550px;" class="roundbox boxshadow">             
            <div class="gridheaderleft roundbox-top">Box with a Header</div>
            <div style="background: khaki;" class="boxcontenttext roundbox-bottom">
                Simple Rounded Corner Box.
            </div>
        </div>
    </fieldset>

 

    <fieldset>
        <legend>Dialog Style Window</legend>


        <div style="width: 450px; position: relative;" id="divDialog" class="dialog boxshadow">
            <div style="position: relative;" class="dialog-header">
                <div class="closebox"></div>
                User Sign-in
            <div class="closebox"></div></div>
            <div class="descriptionheader">This dialog is draggable and closable</div>       
            <div class="dialog-content">
           
                <label>Username:</label>
                <input name="txtUsername" value=" " type="text">

                <label>Password</label>
                <input name="txtPassword" value=" " type="text">
               
                <hr/>
               
                <input id="btnLogin" value="Login" type="button">           
            </div>

            <div class="dialog-statusbar">Ready</div>
        </div>

    </fieldset>

    </div>


<script type="text/javascript">
    $(document).ready(function () {
        $("#divDialog")
            .draggable({ handle: ".dialog-header" })
            .closable({ handle: ".dialog-header",
                closeHandler: function () {
                    alert("Window about to be closed.");
                    return true;  // true closes - false leaves open
                }
            });
    });
</script>       

</div>
</body>

IOW, in IE 9 rendering mode the returned markup is much closer (but not identical) to the original HTML from the page on the Web that we're reading from.

As a side note: unfortunately, the browser feature emulation can't be applied to the Html Help (CHM) engine in Windows, which uses the Web Browser control (or its COM interfaces anyway) to render Html Help content. I tried setting up hh.exe, which is the help viewer, to use IE 9 rendering, but a help file generated with CSS 3 features will simply show in IE 7 mode. Bummer; this would have been a nice quick fix to make help content served from CHM files look better.

HTML Editing leaves HTML formatting intact

In the same vein, if you do any inline HTML editing in the control by setting content to be editable, IE 9's control does a much more reasonable job of creating usable and somewhat valid HTML. It also leaves the original content alone other than the text you are editing or adding. No longer is the HTML output stripped of excess spaces and reformatted in IE's format.

So if I do:

private void button3_Click(object sender, RoutedEventArgs e)
{
    dynamic doc = this.webBrowser.Document;
    doc.body.contentEditable = true;
}

and then make some changes to the document by typing into it using IE 9 mode, the document formatting stays intact and only the affected content is modified. The created HTML is reasonably clean (although it does lack proper XHTML formatting for things like <br/> <hr/>). This is very different from IE 7 mode which mangled the HTML as soon as the page was loaded into the control. Any editing you did stripped out all white space and lost all of your existing XHTML formatting. In IE 9 mode at least *most* of your original formatting stays intact.

This is huge! In Html Help Builder I have supported HTML editing for a long time but the HTML mangling by the Web Browser control made it very difficult to edit the HTML later. Previously IE would mangle the HTML by stripping out spaces, upper casing all tags and converting many XHTML safe tags to its HTML 3 tags. Now IE leaves most of my document alone while editing, and creates cleaner and more compliant markup (with exception of self-closing elements like BR/HR).

The end result is that I now have HTML editing in place that's much cleaner and actually capable of being manually edited.

Caveats, Caveats, Caveats

It wouldn't be Internet Explorer if there weren't some major compatibility issues involved in these browser version interactions. The biggest thing I ran into is that there are odd differences in some of the COM interfaces and what they return.

I specifically ran into a problem with the document.selection.createRange() function, which in IE 7 compatibility returns the expected text range object. When running in IE 8 or IE 9 mode however, I could not retrieve a valid text range with this code, where loEdit is the WebBrowser control:

loRange = loEdit.document.selection.CreateRange()

The loRange object returned (here in FoxPro) had a length property of 0 but none of the other properties of the TextRange or TextRangeCollection objects were available.

I figured this was due to some changed security settings but even after elevating the Intranet Security Zone and mucking with the other browser feature flags pertaining to security I had no luck.

In the end I relented and used a JavaScript function in my editor document that returns a selection range object:

function getselectionrange() {
    var range = document.selection.createRange();
    return range;
}

and call that JavaScript function from my host applications code:

*** Use a function in the document to get around HTML Editing issues
loRange = loEdit.document.parentWindow.getselectionrange(.f.)

and that does work correctly. This wasn't a big deal as I'm already loading a support script file into the editor page so all I had to do is add the function to this existing script file. You can find out more how to call script code in the Web Browser control from a host application in a previous post of mine.

The new cleaner text formatting is very welcome, but if you - like I do in one of my applications - actually post process your HTML to clean up the mess the IE 7 control creates when editing, you'll want to review that logic, since the markup that comes back now looks quite different.

 

IE 8 and 9 also clamp down the security environment a little more than the default IE 7 control, so there may be other issues you run into. Other than the createRange() problem above I haven't seen anything else that is breaking.

Registry Key Installation for your Application

It’s important to remember that this registry setting is made per application, so most likely this is something you want to set up with your installer. Also remember that 32 and 64 bit settings require separate settings in the registry so if you’re creating your installer you most likely will want to set both keys in the registry preemptively for your application.

I use Tarma Installer for all of my application installs and in Tarma I configure registry keys for both and set a flag to only install the latter key group in the 64 bit version:

[Screenshot: Tarma installer registry configuration showing both the 32 bit and 64 bit FEATURE_BROWSER_EMULATION keys]

Because this setting is application specific you have to do this for every application you install, unfortunately, but it also means that you can safely configure the setting in the registry since it is after all only applied to your application.

Another problem with installer-based setup is version detection. If IE 8 is installed I'd want 8000 for the value; if IE 9 is installed I want 9000. I can do this easily in code, but in the installer it's much more difficult. I don't have a good solution for this at the moment, but given that the app works with IE 7 mode now, IE 9 mode is just a bonus. If IE 9 is not installed and 9000 is used, the default rendering will remain in use.
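
For the in-code route, reading the installed IE version from the registry and deriving the emulation value is simple enough. A sketch; the Version value under HKLM\SOFTWARE\Microsoft\Internet Explorer is the conventional location, but verify it on your target systems:

using Microsoft.Win32;

// Maps the installed IE version to a FEATURE_BROWSER_EMULATION value,
// e.g. "9.0.8112.16421" -> 9000, "8.0.6001.18702" -> 8000
public static int GetBrowserEmulationValue()
{
    string version = Registry.GetValue(
        @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer",
        "Version", null) as string;

    int major;
    if (!string.IsNullOrEmpty(version) &&
        int.TryParse(version.Split('.')[0], out major) && major >= 8)
        return major * 1000;   // 8000, 9000, ...

    return 7000;  // fall back to IE7 rendering
}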

 

It sure would be nice if we could specify the IE rendering mode as a property, but I suspect the ActiveX container has to know before it loads what actual version to load up and once loaded can only load a single version of IE. This would account for this annoying application level configuration…

Summary

The registry feature emulation has been available for quite some time, but I just found out about it today and started experimenting with it. I'm stoked to see that this is available as I'd pretty much given up on ever seeing any better rendering in the Web Browser control. Now at least my apps can take advantage of newer HTML features.

Now if we could only get better HTML Editing support somehow <snicker>… ah can’t have everything.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in .NET  FoxPro  Windows  

Building a jQuery Plug-in to make an HTML Table scrollable


Today I got a call from a customer and we were looking over an older application that uses a lot of tables to display financial and other assorted data. The application is mostly meta-data driven, with lots of layout formatting automatically driven through meta data rather than through explicit hand-coded HTML layouts. One of the problems in this app are tables that display a non-fixed amount of data. The users of this app don't want to use paging to see more data, but instead want to display overflow data using a scrollbar. Many of the forms are very densely populated, often with multiple data tables that display only a few rows of data in the UI. This sort of layout does not lend itself well to paging, but works much better with scrollable data.

Unfortunately scrollable tables are not easily created. HTML tables are mangy beasts, as anybody who's done any sort of Web development knows. Tables are finicky when it comes to styling and layout, and they have many funky quirks, especially when it comes to scrolling the table rows themselves or even the child columns. There's no built-in way to make tables scroll and lock headers while you do, and while you can embed a table (or anything really) into a scrolling div with something like this:

<div style="position:relative; overflow: hidden; overflow-y: scroll; height: 200px; width: 400px;">
    <table id="table" style="width: 100%" class="blackborder" >
        <thead>
        <tr class="gridheader">
            <th>Column 1</th>
            <th>Column 2</th>
            <th>Column 3</th>
            <th >Column 4</th>
        </tr>
        </thead>

        <tbody>
        <tr>
            <td>Column 1 Content</td>
            <td>Column 2 Content</td>
            <td>Column 3 Content</td>
            <td>Column 4 Content</td>                    
        </tr>
        <tr>
            <td>Column 1 Content</td>
            <td>Column 2 Content</td>
            <td>Column 3 Content</td>
            <td>Column 4 Content</td>                    
        </tr>
        </tbody>
    </table>
</div>

that won't give a very satisfying visual experience:

[Screenshot: the table inside a plain scrolling div, header and body both scrolling out of view]

Both the header and body scroll, which looks odd. You lose context as soon as the header scrolls off the top, and when you reach the bottom of the list the bottom outline of the table shows, which also looks off. The scrollbar also runs the full length of the table, yet another visual miscue. In a pinch this will work, but it's ugly.

What's out there?

Before we go further here you should know that there are a few capable grid plug-ins out there already. But in the end none of them fit the bill of what I needed in this situation. All of them require custom CSS and some of them are fairly complex to restyle. Others are AJAX-only or work better with AJAX-loaded data. I needed to actually try (as much as possible) to maintain the original styling of the tables without requiring extensive restyling.

Building the makeTableScrollable() Plug-in

To make a table scrollable requires rearranging the table a bit. In the plug-in I built I create two <div> tags and split the table into two: one for the table header and one for the table body. The bottom <div> tag then contains only the table's row data and can be scrolled while the header stays fixed. Using jQuery the basic idea is pretty simple: you create the divs, copy the original table into the bottom one, then clone the table, clear all of its content, append the <thead> section into the new table, and finally copy that table into the header <div>. Easy as pie, right?

Unfortunately it's a bit more complicated than that as it's tricky to get the width of the table right to account for the scrollbar (by adding a small column) and making sure the borders properly line up for the two tables. A lot of style settings have to be made to ensure the table is a fixed size, to remove and reattach borders, to add extra space to allow for the scrollbar and so forth.

The end result of my plug-in is a table with a scrollbar. Using the same table I used earlier the result looks like this:

[Screenshot: the same table with a fixed header and a scrollbar on the body rows only]

To create it, I use the following jQuery plug-in logic to select my table and run the makeTableScrollable() plug-in against the selector:

$("#table").makeTableScrollable( { cssClass:"blackborder"} );

Without much further ado, here's the short code for the plug-in:

(function ($) {

$.fn.makeTableScrollable = function (options) {
    return this.each(function () {
        var $table = $(this);

        var opt = {
            // height of the table
            height: "250px",
            // right padding added to support the scrollbar
            rightPadding: "10px",
            // cssclass used for the wrapper div
            cssClass: ""
        }
        $.extend(opt, options);

        var $thead = $table.find("thead");
        var $ths = $thead.find("th");
        var id = $table.attr("id");
        var cssClass = $table.attr("class");

        if (!id)
            id = "_table_" + new Date().getMilliseconds().toString();

        $table.width("+=" + opt.rightPadding);
        $table.css("border-width", 0);

        // add a column to all rows of the table to make room for the scrollbar
        var first = true;
        $table.find("tr").each(function () {
            var row = $(this);
            if (first) {
                row.append($("<th>").width(opt.rightPadding));
                first = false;
            }
            else
                row.append($("<td>").width(opt.rightPadding));
        });

        // force full sizing on each of the th elements
        $ths.each(function () {
            var $th = $(this);
            $th.css("width", $th.width());
        });

        // Create the table wrapper div
        var $tblDiv = $("<div>").css({ position: "relative",
            overflow: "hidden",
            overflowY: "scroll"
        })
                                    .addClass(opt.cssClass);
        var width = $table.width();
        $tblDiv.width(width).height(opt.height)
                .attr("id", id + "_wrapper")
                .css("border-top", "none");
        // Insert the wrapper div before the table
        $tblDiv.insertBefore($table);
        // then move the table into it
        $table.appendTo($tblDiv);

        // Clone the div for header
        var $hdDiv = $tblDiv.clone();
        $hdDiv.empty();
        var width = $table.width();
        $hdDiv.attr("style", "")
                .css("border-bottom", "none")
                .width(width)
                .attr("id", id + "_wrapper_header");

        // create a copy of the table and remove all children
        var $newTable = $($table).clone();
        $newTable.empty()
                    .attr("id", $table.attr("id") + "_header");

        $thead.appendTo($newTable);
        $hdDiv.insertBefore($tblDiv);
        $newTable.appendTo($hdDiv);

        $table.css("border-width", 0);
    });
}
})(jQuery);

Oh sweet spaghetti code :-)

The code starts out by dealing with the parameters that can be passed in the options object map:

height

The height of the outside wrapper container that holds the full table structure. Defaults to 250px.

rightPadding

The padding that is added to the right of the table to account for the scrollbar.
Creates a column of this width and injects it into the table. If too small the rightmost
column might get truncated; if too large an empty column might show. Defaults to 10px.

cssClass

The CSS class applied to the wrapping containers around the table. If you want a border
around your table this class should probably provide it, since the plug-in removes the
table's own border.

The rest of the code is dense, but pretty straightforward. It starts by creating a new column in the table to accommodate the width of the scrollbar and avoid clipping of text in the rightmost column. The width of the columns is explicitly set in the header elements to force the size of the table to be fixed and to provide the same sizing when the THEAD section is moved to a new copied table later. The table wrapper div is created, formatted and the table is moved into it. The new wrapper div is cloned for the header wrapper and configured. Finally the actual table is cloned and cleared of all elements. The original table's THEAD section is then moved into the new table. At last the new table is added to the header <div>, and the header <div> is inserted before the table wrapper <div>.

I'm always amazed how easy jQuery makes it to do this sort of re-arranging, and given what's happening the amount of code is rather small.

Disclaimer: Your mileage may vary

A word of warning: I make no guarantees about the code above. It's a first cut and I provided this here mainly to demonstrate the concepts of decomposing and reassembling an HTML layout :-) which jQuery makes so nice and easy.

I tested this component against the typical scenarios we plan on using it for, which are tables that use a few well-known styles (or no styling at all). I suspect if you have complex styling on your <table> tag that things might not go so well. If you plan on using this plug-in you might want to minimize your styling of the table tag and defer any border formatting to the class passed in via the cssClass parameter, which ends up on the two wrapper divs that wrap the header and body rows.

There's also no explicit support for footers. I rarely if ever use footers (when not using paging that is), so I didn't feel the need to add footer support. However, if you need that it's not difficult to add - the logic is the same as adding the header.

The plug-in relies on a well-formatted table that has THEAD and TBODY sections along with TH tags in the header. Note that ASP.NET WebForm DataGrids and GridViews by default do not generate well-formatted table HTML. You can look at my Adding proper THEAD sections to a GridView post for more info on how to get a GridView to render properly.

The plug-in has no dependencies other than jQuery.

Even with the limitations in mind I hope this might be useful to some of you. I know I've already identified a number of places in my own existing applications where I will be plugging this in almost immediately.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in jQuery  HTML  ASP.NET  

ActiveX component can't create Object Error? Check 64 bit Status


If you're running on IIS 7 and a 64 bit operating system you might run into the following error using ASP classic or ASP.NET with COM interop. In classic ASP applications the error will show up as:

ActiveX component can't create object   (Error 429)

(actually, without error handling the error just shows up as a 500 error page)

In my case the code that's been giving me problems has been a FoxPro COM object I'd been using to serve banner ads to some of my pages. The code basically looks up banners from a database table and displays them at random. The ASP classic code that uses it looks like this:

<%
Set banner = Server.CreateObject("wwBanner.aspBanner")
banner.BannerFile = "wwsitebanners"
Response.Write(banner.GetBanner(-1))
%>

Originally this code had no specific error checking, as above, so the ASP pages just failed with 500 error pages from the Web server. To find out what the actual problem is, this code is more useful, at least for debugging:

<%
ON ERROR RESUME NEXT
Set banner = Server.CreateObject("wwBanner.aspBanner")

Response.Write(err.Number & " - " & err.Description)

banner.BannerFile = "wwsitebanners"
Response.Write(banner.GetBanner(-1))
%>

which results in:

429 - ActiveX component can't create object

which at least gives you a slight clue.

In ASP.NET invoking the same COM object with code like this:

<%
dynamic banner = wwUtils.CreateComInstance("wwBanner.aspBanner") as dynamic;
banner.cBANNERFILE = "wwsitebanners";
Response.Write(banner.getBanner(-1));
 %>    

results in:

Retrieving the COM class factory for component with CLSID {B5DCBB81-D5F5-11D2-B85E-00600889F23B} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).

The class is in fact registered though and the COM server loads fine from a command prompt or other COM client.

This error looks like a COM registration error, but it can also be caused by a COM server that simply fails to load. There are a number of traditional reasons why this error can crop up:

  • The server isn't registered (run regsvr32 to register a DLL server or use /regserver on an EXE server)
  • Access permissions aren't set on the COM server (Web account has to be able to read the DLL ie. Network service)
  • The COM server fails to load during initialization ie. failing during startup

One thing I always do to check for COM errors is to fire up the server in a COM client outside of IIS and ensure that it works there first - it's almost always easier to debug a server outside of the Web environment. In my case I tried the server in Visual FoxPro on the server with:

loBanners = CREATEOBJECT("wwBanner.aspBanner")
loBanners.cBannerFile = "wwsitebanners"
? loBanners.GetBanner(-1)

and it worked just fine. If you don't have a full dev environment on the server you can also use VBScript to do the same thing (note: plain CreateObject() here, since there's no Server object outside of ASP) and run the .vbs file from the command prompt:

Set banner = CreateObject("wwBanner.aspBanner")
banner.BannerFile = "wwsitebanners"
MsgBox(banner.GetBanner(-1))

Since both of these work, it tells me the server is registered and working properly. This leaves startup failures or permissions as the problem. I double checked permissions for the Application Pool and the permissions of the folder where the DLL lives and both are properly set to allow access by the Application Pool impersonated user. Just to be sure I assigned an Admin user to the Application Pool but still no go.

So now what?

64 bit Servers Ahoy

A couple of weeks back I had set up a few of my Application Pools in 64 bit mode. My server is Server 2008 64 bit and by default Application Pools run 64 bit. Originally when I installed the server I set up most of my Application Pools in 32 bit mode, mainly for backwards compatibility. But as more of my code migrates to 64 bit OSs I figured it'd be a good idea to see how well my code runs in 64 bit mode. The transition has been mostly painless.

Until today, that is, when I noticed the problem with the code above while scrolling through my IIS logs and seeing a lot of 500 errors on many of my ASP classic pages. The code in question in most of these pages deals with this single simple COM object.

It took a while to figure out that the problem is caused by the Application Pool running in 64 bit mode. The issue is that 32 bit COM objects (ie. my old Visual FoxPro COM component) cannot be loaded in a 64 bit Application Pool. The ASP pages using this COM component broke on the day I switched my main Application Pool into 64 bit mode but I didn't find the problem until I searched my logs for errors by pure chance.

Once you know what the problem is, the fix is easy enough: switch the Application Pool to Enable 32-bit Applications:

[Image: 32bitAppPool]

Once this was done the COM objects started working correctly again.
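If you need to flip this setting from code - say as part of an install script - here's a small sketch using the managed IIS 7 configuration API. This is just an illustration, not part of the original fix: it assumes a reference to Microsoft.Web.Administration.dll, administrative rights, and the pool name is a placeholder.

using Microsoft.Web.Administration;

public static class AppPoolHelper
{
    // Sketch: turn on 'Enable 32-bit Applications' for an Application Pool.
    // Assumes Microsoft.Web.Administration.dll (IIS 7 and later) is referenced
    // and the code runs with administrative rights.
    public static void Enable32BitMode(string appPoolName)
    {
        using (var serverManager = new ServerManager())
        {
            ApplicationPool pool = serverManager.ApplicationPools[appPoolName];
            pool.Enable32BitAppOnWin64 = true;
            serverManager.CommitChanges();
        }
    }
}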

64 bit ASP and ASP.NET with DCOM Servers

This is kind of off topic, but incidentally it's possible to load 32 bit DCOM (out of process) servers from ASP.NET and ASP classic even if those applications run in 64 bit application pools. In fact, in West Wind Web Connection I use this capability to run a 64 bit ASP.NET handler that talks to a 32 bit FoxPro COM server which allows West Wind Web Connection to run in native 64 bit mode without custom configuration (which is actually quite useful). It's probably not a common usage scenario but it's good to know that you can actually access 32 bit COM objects this way from ASP.NET. For West Wind Web Connection this works out well as the DCOM interface only makes one non-chatty call to the backend server that handles all the rest of the request processing.

Application Pool Isolation is your Friend

For me the recent incident of failure in the classic ASP pages has just been another reminder to be very careful with moving applications to 64 bit operation. There are many little traps when switching to 64 bit that are very difficult to track and test for. I described one issue I had a couple of months ago where one of the default ASP.NET filters was loading the wrong version (32 bit instead of 64 bit), which was extremely difficult to track down and was caused by a very sneaky configuration switch error (basically 3 different entries for the same ISAPI filter, all with different bitness settings). It took me almost a full day to track that down.

Recently I've taken to isolating individual applications into separate Application Pools rather than my past practice of combining many apps into shared AppPools. This is a good practice assuming you have enough memory to make it work. Application Pool isolation provides more modularity and allows me to selectively move applications to 64 bit. The error above came about precisely because I moved one of my most populous app pools to 64 bit and forgot about the minimal COM object use in some of my old pages. It's easy to forget.

To 64bit or Not

Is it worth it to move to 64 bit? Currently I'd say: not really. In my - admittedly limited - testing I don't see any significant performance increases. In fact 64 bit apps just seem to consume considerably more memory (30-50% more in my pools on average) and performance is minimally improved (less than 5% at the very best) in the load testing I've performed on a couple of sites in both modes. The only real incentive for 64 bit would be applications that require huge data spaces that exceed the 32 bit 4 gigabyte memory limit. However I have a hard time imagining an application that needs 4 gigs of memory in a single Application Pool :-). Curious to hear other opinions on the benefits of 64 bit operation.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in COM   ASP.NET  FoxPro  

Getting the innermost .NET Exception


Here's a trivial but quite useful function that I frequently need in dynamic execution of code: Finding the innermost exception when an exception occurs, because for many operations (for example Reflection invocations or Web Service calls) the top level errors returned can be rather generic.

A good example - common with errors in Reflection making a method invocation - is this generic error:

Exception has been thrown by the target of an invocation

In the debugger it looks like this:

[Image: VsException]

In this case this is an AJAX callback, which dynamically executes a method (ExecuteMethod code) which in turn calls into an Amazon Web Service using the old Amazon WSE101 Web service extensions for .NET. An error occurs in the Web Service call and the innermost exception holds the useful error information which in this case points at an invalid web.config key value related to the System.Net connection APIs.

The "Exception has been thrown by the target of an invocation" error is the Reflection APIs generic error message that gets fired when you execute a method dynamically and that method fails internally. The messages basically says: "Your code blew up in my face when I tried to run it!". Which of course is not very useful to tell you what actually happened. If you drill down the InnerExceptions eventually you'll get a more detailed exception that points at the original error and code that caused the exception. In the code above the actually useful exception is two innerExceptions down.

In most (but not all) cases when inner exceptions are returned, it's the innermost exception that has the information that is really useful.

It's of course a fairly trivial task to do this in code, but I do it so frequently that I use a small helper method for this:

/// <summary>
/// Returns the innermost Exception for an object
/// </summary>
/// <param name="ex"></param>
/// <returns></returns>
public static Exception GetInnerMostException(Exception ex)
{
    Exception currentEx = ex;
    while (currentEx.InnerException != null)
    {
        currentEx = currentEx.InnerException;
    }

    return currentEx;
}

Update:
As it turns out .NET already provides this functionality via Exception.GetBaseException() (see comments). Ah, EGG ON MY FACE - but it only shows how easy it is to miss useful functionality in the base framework when it is so rich :-)
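Just to illustrate the built-in method, here's a minimal sketch - the nested exceptions are obviously contrived:

using System;

class GetBaseExceptionDemo
{
    static void Main()
    {
        try
        {
            // contrived nesting similar to what Reflection produces
            throw new InvalidOperationException(
                "Exception has been thrown by the target of an invocation",
                new InvalidOperationException("Web Service call failed",
                    new ArgumentException("Invalid web.config key value")));
        }
        catch (Exception ex)
        {
            // GetBaseException() walks InnerException all the way down -
            // same result as GetInnerMostException() above
            Console.WriteLine(ex.GetBaseException().Message);
            // prints: Invalid web.config key value
        }
    }
}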

 

This code just loops through all the inner exceptions (if any) and assigns them to a temporary variable until there are no more inner exceptions. The end result is that you get the innermost exception returned from the original exception.

It's easy to use this code then in a try/catch handler like this (from the example above) to retrieve the more important innermost exception:

object result = null;
string stringResult = null;
try
{
    if (parameterList != null)
        // use the supplied parameter list
        result = helper.ExecuteMethod(methodToCall,target, parameterList.ToArray(),
                            CallbackMethodParameterType.Json,ref attr);
    else
        // grab the info out of QueryString Values or POST buffer during parameter parsing 
        // for optimization
        result = helper.ExecuteMethod(methodToCall, target, null, 
                                      CallbackMethodParameterType.Json, ref attr);
}
catch (Exception ex)
{
    Exception activeException = DebugUtils.GetInnerMostException(ex);
    WriteErrorResponse(activeException.Message,
                      ( HttpContext.Current.IsDebuggingEnabled ? ex.StackTrace : null ) );
    return;
}

Another function that is useful to me from time to time is one that returns all inner exceptions and the original exception as an array:

/// <summary>
/// Returns an array of the entire exception list in reverse order
/// (innermost to outermost exception)
/// </summary>
/// <param name="ex">The original exception to work off</param>
/// <returns>Array of Exceptions from innermost to outermost</returns>
public static Exception[] GetInnerExceptions(Exception ex)
{
    List<Exception> exceptions = new List<Exception>();
    exceptions.Add(ex);
 
    Exception currentEx = ex;
    while (currentEx.InnerException != null)
    {
        currentEx = currentEx.InnerException;
        exceptions.Add(currentEx);
    }
 
    // Reverse the order to the innermost is first
    exceptions.Reverse();
 
    return exceptions.ToArray();
}

This function loops through all the InnerExceptions, collects them and then reverses the order of the array, returning the innermost exception first. This can be useful in certain error scenarios where exceptions stack and you need to display information from more than one of the exceptions in order to create a useful error message. This is rare, but certain database exceptions bury their exception info in multiple inner exceptions and it's easier to parse through them in an array than to manually walk the exception stack. It's also useful if you need to log errors and want to see all of the error detail from all exceptions.
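For example, a hypothetical logging helper - the method name is made up, and System.Linq is assumed - could combine all the messages like this:

using System;
using System.Linq;

public static class ErrorHelper
{
    // Hypothetical helper: combines all exception messages from
    // innermost to outermost into a single log-friendly string
    public static string GetFullErrorMessage(Exception ex)
    {
        Exception[] exceptions = DebugUtils.GetInnerExceptions(ex);
        return string.Join(" --> ", exceptions.Select(e => e.Message).ToArray());
    }
}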

None of this is rocket science, but it's useful to have some helpers that make retrieval of the critical exception info trivial.

Resources

DebugUtils.cs utility class in the West Wind Web Toolkit

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in CSharp  .NET  

COM ByteArray and Dynamic type issues in .NET


In .NET 4.0 the dynamic type has made it A LOT easier to work with COM interop components. I'm still working quite a bit with customers that need to work with FoxPro COM objects in .NET and there are some extra fun issues to deal with when working with COM SafeArrays (byte arrays) returned from a COM server. Currently I'm working with a customer who needs to work with Web Services that publish and capture a lot of binary data. A couple of interesting things came up when returning byte arrays over COM to .NET via a dynamic type.

I've written about COM binary arrays before in a previous post. The issue is that when you make a COM call to retrieve a binary result from a COM object the result gets returned as byte[*] type rather than byte[].
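To see what byte[*] actually is, here's a small standalone sketch (not from the original example): the CLR can create such a non-zero lower bound array explicitly via Array.CreateInstance(), and it really is a different type than a regular byte[]:

using System;

class ByteArrayStarDemo
{
    static void Main()
    {
        // a 3 element byte array with a lower bound of 1 - the same
        // kind of non-SZ array type that COM SafeArrays can map to
        Array oddArray = Array.CreateInstance(typeof(byte),
                                              new int[] { 3 },   // lengths
                                              new int[] { 1 });  // lower bounds

        Console.WriteLine(oddArray.GetType());  // System.Byte[*]
        Console.WriteLine(oddArray is byte[]);  // False - not castable to byte[]
    }
}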

To demonstrate, imagine you have a simple COM component that returns a binary result as a SafeArray of bytes. The following dumbed-down example is a FoxPro method in a COM-published component where an object is returned that contains a ByteResult property that holds binary data:

************************************************************************
*  SendBinaryFile
****************************************
FUNCTION SendBinaryFile(lcPath) 
LOCAL lvBinary, lcBinary, loResult as ServiceResult

TRY
    *** Grab the file from disk into a string
    lcBinary = FILETOSTR(lcPath)

    *** Turn the string to binary - over COM this turns into a SAFEARRAY
    lvBinary = CREATEBINARY(lcBinary)  && OR CAST(lcBinary as Q)

    *** Return value is standard Service Response object
    loResult = CREATEOBJECT("ServiceResult")

    *** Attach the binary response to the ServiceResult
    loResult.ByteResult = lvBinary
CATCH TO loException
    loResult = this.ErrorToServiceResult(loException,20)
ENDTRY

RETURN loResult
ENDFUNC

The interesting part in relation to this post is the binary result, which in this case is the content of a file. Internally FoxPro treats binary values as strings since it's stuck with ANSI codepages, so a string can easily contain binary data. However FoxPro cannot return that string as a binary value over COM. In order to return a binary value the value needs to be explicitly converted using CREATEBINARY() or CAST(val as Q), which turns it into a SAFEARRAY when passed back over COM. This binary value is then assigned to a ServiceResult object that returns the actual result value(s) back to the .NET Web Service, which in turn publishes these values through the Service interface.

In .NET the sample Web Service method that handles publishing of this data looks like this. Note that this code as written does not work, although it probably looks like it should - there are two separate issues, one in the ComArrayToByteArray() call and one in the byte[] cast inside that method:

[WebMethod]
public ServiceResult SendBinaryFile()
{
    var serviceResult = new ServiceResult();

    try
    {
        // Returns a COM ServiceResult - result.ByteResult is a COM SafeArray
        dynamic result = ComProxy.SendBinaryFile(Server.MapPath("~/images/sailbig.jpg"));

        byte[] byteResult = ComArrayToByteArray(result.ByteResult);        

        //serviceResult.FromServiceResult(result);
    }
    catch (Exception ex)
    {
        // Assign the error info from the Exception to the ServiceResult
        serviceResult.FromException(ex);
    }
    return serviceResult;
}

byte[] ComArrayToByteArray(object comArray)
{
    byte[] content = comArray as byte[];
    return content;
}

Initially this code fails on the actual ComArrayToByteArray() method call itself - that is, while calling the method with the dynamic COM byte array as a parameter. The code fails with:

Unable to cast object of type 'System.Byte[*]' to type 'System.Byte[]'.

What's interesting - and different than my previous post and solution - is that the call here fails simply calling the method with a dynamic value that is a COM byte array. The error fires before any code in the method fires. The target method - ComArrayToByteArray() accepts an object parameter and yet it fails!!! It's not the code in the method that fails, it's the actual method call. Completely unexpected!

How to fix this? You have to explicitly cast the result.ByteResult - the COM byte array - to type object:       

byte[] byteResult = ComArrayToByteArray(result.ByteResult as object);    

and then the method call works. Unfortunately there's another problem and the method now fails on:

byte[] content = comArray as byte[];

The issue here is that the comArray parameter value, passed in as object from the dynamic byte array, is actually of type byte[*], which cannot just be cast to byte[]. A little more work is required to convert this value to a byte[] array. The ComArrayToByteArray() method needs to be rewritten like this:

byte[] ComArrayToByteArray(object comArray)
{
    Array ct = (Array) comArray;
    byte[] content = new byte[ct.Length];
    ct.CopyTo(content, 0);
    return content;
}

and now finally the code works as expected. The complete code that works in converting the COM byte array from a dynamic into byte[] looks like this:

[WebMethod]
public ServiceResult SendBinaryFile()
{
    var serviceResult = new ServiceResult();

    try
    {
        // Returns a COM ServiceResult - result.ByteResult is a COM SafeArray
        dynamic result = ComProxy.SendBinaryFile(Server.MapPath("~/images/sailbig.jpg"));

        byte[] byteResult = ComArrayToByteArray(result.ByteResult as object);        

        //serviceResult.FromServiceResult(result);
    }
    catch (Exception ex)
    {
        // Assign the error info from the Exception to the ServiceResult
        serviceResult.FromException(ex);
    }
    return serviceResult;
}

byte[] ComArrayToByteArray(object comArray)
{
    Array ct = (Array)comArray;
    byte[] content = new byte[ct.Length];
    ct.CopyTo(content, 0);
    return content;
}

Summary

As you might imagine this can bite you in many unexpected ways, especially when using dynamic types, since these are going to be runtime type conversion errors. Just be aware that when dealing with byte arrays returned over COM - and especially when returning them into dynamic types - there are additional things you need to check for.

In summary there are two issues here:

  1. A COM Byte array in a Dynamic instance cannot be passed as a parameter unless it's explicitly cast to object
  2. A COM Byte array cannot be cast directly to byte[] but requires conversion to an array and copying of each byte

It's not a super complicated workaround, but rather unexpected behavior. The difference between byte[*] and byte[] seems inconsequential until you end up having to cast between the two. Not sure why you'd ever want to have a type of byte[*] - since it appears you can't do anything with it other than copy the explicit bytes around. <shrug>

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in .NET  COM  FoxPro  

Opening the Internet Settings Dialog and using Windows Default Network Settings via Code


Ran into a question from a client the other day who asked how to deal with Internet Connection settings for running HTTP requests. In this case this is an old FoxPro app and it's using WinInet to handle the actual HTTP connection. Another client asked a similar question about using the IE Web Browser control and configuring connection properties.

Regardless of platform or tools used to make HTTP connections, you can probably configure custom connection and proxy settings in your application manually. However, this is a repetitive process for each application and requires you to track system information in your application, which is undesirable.

Often it's much easier to rely on the system wide proxy settings that Windows provides via the Internet Settings dialog. The dialog is a Control Panel applet (inetcpl.cpl) and is the same dialog that you see when you pop up Internet Explorer's Options dialog:

[Image: internetsettings]

This dialog controls the Windows connection properties that determine how the Windows HTTP stack connects to the Internet and how proxies are used if configured. Depending on how the HTTP client is configured, it can typically inherit and use these global settings.

Loading the Settings Dialog Programmatically

The settings dialog is a Control Panel applet with the name of:

inetcpl.cpl

and you can use any Shell execution mechanism (Run dialog, ShellExecute API, Process.Start() in .NET etc.) to invoke the dialog. Changes made there are immediately reflected in any applications that use the default connection settings.

In .NET you can simply do this to bring up the Internet Settings dialog with the Connection tab enabled:

Process.Start("inetcpl.cpl",",4");

In FoxPro you can simply use the RUN command to execute inetcpl.cpl:

lcCmd = "inetcpl.cpl ,4"
RUN &lcCmd

Using the Default Connection/Proxy Settings

When using WinInet you specify the Http connect type in the call to InternetOpen() like this (FoxPro code here):

hInetConnection=;
   InternetOpen(THIS.cUserAgent,0,;
   THIS.chttpproxyname,THIS.chttpproxybypass,0)

The second parameter of 0 specifies that the default system proxy settings should be used and it uses the settings from the Internet Settings Connections tab. Other connection options for HTTP connections include 1 - direct (no proxies and ignore system settings), 3 - explicit Proxy specification. In most situations a connection mode setting of 0 should work.

In .NET HTTP connections by default are direct connections and so you need to explicitly specify a default proxy or proxy configuration to use. The easiest way to do this is on the application level in the config file:

<configuration>
  <system.net>
    <defaultProxy>
      <proxy bypassonlocal="False" autoDetect="True" usesystemdefault="True" />
    </defaultProxy>
  </system.net>
</configuration>

You can do the same sort of thing in code specifying the proxy explicitly and using System.Net.WebProxy.GetDefaultProxy(). So when making HTTP calls to Web Services or using the HttpWebRequest class you can set the proxy with:

StoreService.Proxy = WebProxy.GetDefaultProxy();

All of this is pretty easy to deal with and in my opinion is a way better choice to managing connection settings than having to track this stuff in your own application. Plus if you use default settings, most of the time it's highly likely that the connection settings are already properly configured making further configuration rare.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in Windows  HTTP  .NET  FoxPro  

Translating with Google Translate without API and C# Code


Some time back I created a database driven ASP.NET Resource Provider along with some tools that make it easy to edit ASP.NET resources interactively in a Web application. One of the small helper features of the interactive resource admin tool is the ability to do simple translations using both Google Translate and Babelfish.

Here's what this looks like in the resource administration form:

[Image: LocalizationAdmin]

When a resource is displayed, the user can click a Translate button and it will show the current resource text and then let you set the source and target languages to translate. The Go button fires the translation for both Google and Babelfish and displays them - pressing Use then changes the language of the resource to the target language and sets the resource value to the newly translated value. It's a nice and quick way to get a translation going.

Ch… Ch… Changes

Originally, both implementations basically did some screen scraping of the interactive Web sites and retrieved translated text out of result HTML. Screen scraping is always kind of an iffy proposition as content can be changed easily, but surprisingly that code worked for many years without fail. Recently however, Google at least changed their input pages to use AJAX callbacks and the page updates no longer worked the same way. End result: The Google translate code was broken.

Now, Google does have an official API that you can access, but the API is being deprecated and you actually need to have an API key. Since I have public samples that people can download, the API key is an issue if I want the samples to work out of the box - the only way I could even do this is by sharing my API key (not allowed).

However, after a bit of spelunking and playing around with the public site I found that Google's interactive translate page actually makes callbacks using plain public access without an API key. By intercepting some of those AJAX calls and calling them directly from code I was able to get translation back up and working with minimal fuss, by parsing out the JSON these AJAX calls return. I don't think this particular usage is officially sanctioned, though, so treat it accordingly.

Warning: This is hacky code, but after a fair bit of testing I found this to work very well with all sorts of languages and accented and escaped text etc. as long as you stick to small blocks of translated text. I thought I'd share it in case anybody else had been relying on a screen scraping mechanism like I did and needed a non-API based replacement.

Here's the code:

/// <summary>
/// Translates a string into another language using Google's translate API JSON calls.
/// <seealso>Class TranslationServices</seealso>
/// </summary>
/// <param name="Text">Text to translate. Should be a single word or sentence.</param>
/// <param name="FromCulture">
/// Two letter culture (en of en-us, fr of fr-ca, de of de-ch)
/// </param>
/// <param name="ToCulture">
/// Two letter culture (as for FromCulture)
/// </param>
public string TranslateGoogle(string text, string fromCulture, string toCulture)
{
    fromCulture = fromCulture.ToLower();
    toCulture = toCulture.ToLower();

    // normalize the culture in case something like en-us was passed 
    // retrieve only en since Google doesn't support sub-locales
    string[] tokens = fromCulture.Split('-');
    if (tokens.Length > 1)
        fromCulture = tokens[0];
    
    // normalize ToCulture
    tokens = toCulture.Split('-');
    if (tokens.Length > 1)
        toCulture = tokens[0];
    
    string url = string.Format(@"http://translate.google.com/translate_a/t?client=j&text={0}&hl=en&sl={1}&tl={2}",                                     
                               HttpUtility.UrlEncode(text),fromCulture,toCulture);

    // Retrieve Translation with HTTP GET call
    string html = null;
    try
    {
        WebClient web = new WebClient();

        // MUST add a known browser user agent or else the response encoding doesn't return UTF-8 (WTF Google?)
        web.Headers.Add(HttpRequestHeader.UserAgent, "Mozilla/5.0");
        web.Headers.Add(HttpRequestHeader.AcceptCharset, "UTF-8");

        // Make sure we have response encoding to UTF-8
        web.Encoding = Encoding.UTF8;
        html = web.DownloadString(url);
    }
    catch (Exception ex)
    {
        this.ErrorMessage = Westwind.Globalization.Resources.Resources.ConnectionFailed + ": " +
                            ex.GetBaseException().Message;
        return null;
    }

    // Extract the "trans":"..." value out of the JSON string
    string result = Regex.Match(html, "trans\":(\".*?\"),\"", RegexOptions.IgnoreCase).Groups[1].Value;            

    if (string.IsNullOrEmpty(result))
    {
        this.ErrorMessage = Westwind.Globalization.Resources.Resources.InvalidSearchResult;
        return null;
    }

    //return WebUtils.DecodeJsString(result);

    // Result is a JavaScript string so we need to deserialize it properly
    JavaScriptSerializer ser = new JavaScriptSerializer();
    return ser.Deserialize(result, typeof(string)) as string;            
}

To use the code is straightforward enough - simply provide a string to translate and a pair of two letter source and target languages:

string result = service.TranslateGoogle("Life is great and one is spoiled when it goes on and on and on", "en", "de");
TestContext.WriteLine(result);

How it works

The code to translate is fairly straightforward. It basically uses the URL I snagged from the Google Translate Web Page slightly changed to return a JSON result (&client=j) instead of the funky nested PHP style JSON array that the default returns.

The JSON result returned looks like this:

{"sentences":[{"trans":"Das Leben ist großartig und man wird verwöhnt, wenn es weiter und weiter und weiter geht","orig":"Life is great and one is spoiled when it goes on and on and on","translit":"","src_translit":""}],"src":"en","server_time":24}

I use WebClient to make an HTTP GET call to retrieve the JSON data and strip out part of the full JSON response that contains the actual translated text. Since this is a JSON response I need to deserialize the JSON string in case it's encoded (for upper/lower ASCII chars or quotes etc.).

Couple of odd things to note in this code:

First note that a valid user agent string must be passed (or at least one starting with a common browser identification - I use Mozilla/5.0). Without this Google doesn't encode the result with UTF-8, but instead uses a ISO encoding that .NET can't easily decode. Google seems to ignore the character set header and use the user agent instead which is - odd to say the least.

The other is that the call returns a full JSON response. Rather than use the full response and decode it into a custom type that matches Google's result object, I just strip out the translated text. Yeah, I know that's hacky, but it avoids an extra type and firing up the JavaScript deserializer for the whole object. My internal version uses a small DecodeJsString() method to decode the JavaScript string without the overhead of a full JSON parser.
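If you did want to decode the full response into a type, a sketch might look like the following. The class shapes are inferred from the JSON sample above and are not an official Google contract:

using System.Collections.Generic;
using System.Web.Script.Serialization;

// hypothetical types matching the JSON shape shown above
public class GoogleTranslation
{
    public List<GoogleSentence> sentences { get; set; }
    public string src { get; set; }
}

public class GoogleSentence
{
    public string trans { get; set; }
    public string orig { get; set; }
}

// usage - html is the raw JSON string downloaded earlier:
// var ser = new JavaScriptSerializer();
// GoogleTranslation parsed = ser.Deserialize<GoogleTranslation>(html);
// string translated = parsed.sentences[0].trans;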

It's obviously not rocket science but as mentioned above what's nice about it is that it works without a Google API key. I can't vouch for how many translations you can do before there are cut-offs, but in my limited testing - running a few stress tests on a Web server under load - I didn't run into any problems.

Limitations

There are some restrictions with this: It only works on single words or single sentences - multiple sentences (delimited by .) are cut off at the ".". There is also a length limitation which appears to kick in at around 220 characters. While that may not sound like much, for typical word or phrase translations it is plenty of length.

Use with a grain of salt - Google seems to be trying to limit their exposure to usage of the Translate APIs so this code might break in the future, but for now at least it works.

FWIW, I also found that Google's translation is not as good as Babelfish's, especially for contextual content like sentences. Google is faster, but Babelfish tends to give better translations. This is why in my translation tool I show both the Google and Babelfish values retrieved. You can check out the code for this in the West Wind Web Toolkit's TranslationService.cs file, which contains both the Google and Babelfish translation code pieces. Ironically the Babelfish code has been working forever using screen scraping and continues to work just fine today. I think it's a good idea to have multiple translation providers in case one is down or changes its format, hence the dual display in my translation form above.

I hope this has been helpful to some of you - I've actually had many small uses for this code in a number of applications and it's sweet to have a simple routine that performs these operations for me easily.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in CSharp  HTTP  


FireFox 6 Super Slow? Cache Settings Corruption


For those of you that follow me on Twitter, you've probably seen some of my tweets regarding major performance problems I've seen with the install of FireFox 6.0. FireFox 6.0 was released a couple of weeks ago and is treated as a 'force feed' update for FireFox 5.0. I'm not sure what the deal is with this braindead versioning that Mozilla is doing with major version releases coming out, what now every other month? Seriously that's retarded especially given the limited number of new features these releases bring, and the upgrade pain for plug-ins that the major version release causes.

Anyway, after the FireFox updater bugged me long enough I finally gave in last week and updated to FireFox 6. Immediately after install I noticed terrible performance. Everything was running at a snail's pace, with Web pages loading slowly and most content visibly 'painting' the page - a typical sign of content downloading rather than coming from the cache. However these are pages that should be mostly cached on my system and even repeated accesses ran just as slow. Just for a reality check I ran the same sites in Chrome (blazing fast) and IE (fast enough :-)) but FireFox - dog on a stick.

Why so slow Boss?

While I was complaining, lots of people recommended ditching FireFox - use Chrome, yada yada yada. Yeah, Chrome is fast and getting better, but I have a number of plug-ins that I use in FF that I can't easily give up. So I suffered and started looking around more closely at what was happening.

The first thing I noticed when accessing pages was that I continually saw accesses to the Google CDN downloading jQuery and jQuery UI. UI especially is pretty heavy in size and currently I'm in a location with a fairly slow IP connection where large files are a bit of an issue. However, seeing the CDN urls pop up repeatedly raised a flag with me. That stuff should be caching and it looked like each and every hit was reloading these scripts and various images over and over again.

Fired up FireBug and sure enough I saw something like this on a repeated hit to my blog:

[Image: SlowLoads]

Those two highlights are jQuery and the main CSS file for the site, and both are being loaded fully and taking a while to load. However, since this page had been loaded before, these items should be cached and show 304 requests instead of full HTTP requests returning 200 result codes.

In short it looked like FireFox was not caching ANY content at all and constantly reloading all page resources. No wonder things were running dog slow.

Once I realized what the problem was I took a look in the about:config settings and lo and behold a bunch of the cache settings were set to not cache:

[Image: aboutconfigCache]

In my case ALL the main cache flags were set to false for some reason that I can't figure out. 

It appears that after the FireFox 6 update these flags somehow mysteriously changed and performance took a nose dive. Switching the .enable flags back to true and resetting all the cache settings to their defaults reverted performance back to the way it's supposed to be: reasonably fast and snappy as soon as content is cached and accessed again from cache.

I try not to muck with the about:config settings much (other than turning off the IPV6 option) but when there are problems access to these features can be really nice. However, I treat this as a last resort, so it took me quite some time before I started looking through ALL the settings. This took a while, since I didn't know exactly what I was looking for.

If Web load performance is slow it's a good idea to check the cache settings. I have no idea what hosed these settings for me - I certainly didn't explicitly set them in about:config and while in FireFox's Options dialog I didn't see any option that would affect global caching like this, so this remains a mystery to me.

Anyway, I hope that this is helpful to some, in case some of you end up running into a similar issue.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in FireFox  

Figuring out the IIS Version for a given OS in .NET Code


Here's an odd requirement: I need to figure out what version of IIS is available on a given machine in order to take specific configuration actions when installing an IIS based application. I build several configuration tools for application configuration and installation, and depending on which version of IIS is available on the machine, different configuration paths are taken. For example, when dealing with an XP machine you can't set up an Application Pool for an application because XP (IIS 5.1) didn't support Application Pools. Configuring 32 and 64 bit settings is easy in IIS 7 but this didn't work in prior versions, and so on.

Along the same lines I saw a question on the AspInsiders list today, regarding a similar issue where somebody needed to know the IIS version as part of an ASP.NET application prior to when the Request object is available.

So it's useful to know which version of IIS you can possibly expect. This should be easy, right? But it turns out there's no real easy way to detect the IIS version on a machine. There's no registry key that gives you the full version number - you can detect that IIS is installed but not which version is installed.

The easiest way: Request.ServerVariables["SERVER_SOFTWARE"]

The easiest way to determine the IIS version number is if you are already running inside of an ASP.NET request. You can look at Request.ServerVariables["SERVER_SOFTWARE"] to get a string like

Microsoft-IIS/7.5

returned to you. It's a cinch to parse this to retrieve the version number.
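For example, a quick parsing sketch (just an illustration - the helper name is made up):

using System.Globalization;

// Sketch: parse the version number out of the SERVER_SOFTWARE string
static decimal ParseIisVersion(string serverSoftware)
{
    // serverSoftware looks like "Microsoft-IIS/7.5"
    int slash = serverSoftware.LastIndexOf('/');
    return decimal.Parse(serverSoftware.Substring(slash + 1),
                         CultureInfo.InvariantCulture);
}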

This works in the limited scenario where you need to know the version number inside of a running ASP.NET application. Unfortunately this is not a likely use case, since most times you need to know a specific version of IIS when you are configuring or installing your application.

The messy way: Match Windows OS Versions to IIS Versions

Since version 5.x, IIS versions have always been tied very closely to the Operating System, meaning the only way to get a specific version of IIS is through the OS - you can't install another version of IIS on a given OS. Microsoft has a page that describes the OS version to IIS version relationship here:

http://support.microsoft.com/kb/224609

In .NET you can then sniff the OS version and based on that return the IIS version.

The following is a small utility function that accomplishes the task of returning an IIS version number for a given OS:

    /// <summary>
    /// Returns the IIS version for the given Operating System.
    /// Note this routine doesn't check to see if IIS is installed
    /// it just returns the version of IIS that should run on the OS.
    /// 
    /// Returns the value from Request.ServerVariables["Server_Software"]
    /// if available. Otherwise uses OS sniffing to determine OS version
    /// and returns IIS version instead.
    /// </summary>
    /// <returns>version number or -1 </returns>
    public static decimal GetIisVersion()
    {
        // if running inside of IIS parse the SERVER_SOFTWARE key
        // This would be most reliable
        if (HttpContext.Current != null && HttpContext.Current.Request != null)
        {
            string os = HttpContext.Current.Request.ServerVariables["SERVER_SOFTWARE"];
            if (!string.IsNullOrEmpty(os))
            {
                //Microsoft-IIS/7.5
                int dash = os.LastIndexOf("/");
                if (dash > 0)
                {
                    decimal iisVer = 0M;
                    if (Decimal.TryParse(os.Substring(dash + 1), out iisVer))
                        return iisVer;
                }
            }
        }

        decimal osVer = (decimal) Environment.OSVersion.Version.Major +
                ((decimal) Environment.OSVersion.Version.Minor / 10);

        // Windows 7 and Win2008 R2
        if (osVer == 6.1M)
            return 7.5M;
        // Windows Vista and Windows 2008
        else if (osVer == 6.0M)
            return 7.0M;
        // Windows 2003 and XP 64 bit
        else if (osVer == 5.2M)
            return 6.0M;
        // Windows XP
        else if (osVer == 5.1M)
            return 5.1M;
        // Windows 2000
        else if (osVer == 5.0M)
            return 5.0M;

        // error result
        return -1M;                
    }

Talk about a brute force approach, but it works.
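Usage then is just a matter of branching on the returned number - a hypothetical sketch of the kind of configuration decisions described above:

decimal iisVersion = GetIisVersion();

if (iisVersion >= 7.0M)
{
    // IIS 7 and later: Application Pools and 32/64 bit settings available
}
else if (iisVersion == 5.1M)
{
    // XP / IIS 5.1: no Application Pool support - plain virtual directories only
}
else if (iisVersion < 0)
{
    // version couldn't be determined - warn and bail
}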

This code only goes back to IIS 5 - anything before that is not something you'd want to have running anyway. :-) Note that this is updated through Windows 7/Windows Server 2008 R2. Later versions will need to be added as needed. Anybody know what the Windows version number of Windows 8 is?

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  IIS  

Show raw Text Code from a URL with CodePaste.NET


I introduced CodePaste.NET more than 2 years ago. In case you haven't checked it out it's a code-sharing site where you can post some code, assign a title and syntax scheme to it and then share it with others via a short URL. The idea is super simple and it's not the first time this has been done, but it's focused on Microsoft languages and caters to that crowd.

Show your own code from the Web

There's another feature that I tweeted about recently that's been there for some time, but is not used very much: CodePaste.NET can show raw text-based code from a URL on the Web in syntax colored format for any of the languages supported. I use this all the time with code links to my Subversion repository, which only displays code as plain text. Using CodePaste.NET allows me to show syntax colored versions of the same code.

For example I can go from this URL:

http://www.west-wind.com:8080/svn/WestwindWebToolkit/trunk/Westwind.Utilities/SupportClasses/PropertyBag.cs

[Image: PlainView]

To a nicely colored source code view at this Url:

http://codepaste.net/ShowUrl?url=http%3A%2F%2Fwww.west-wind.com%3A8080%2Fsvn%2FWestwindWebToolkit%2Ftrunk%2FWestwind.Utilities%2FSupportClasses%2FPropertyBag.cs&Language=C%23

which looks like this:

[Image: FormattedCode]

Use the Form or access URLs directly

To get there navigate to the Web Code icon on the CodePaste.NET site and paste your original URL and select a language to display:

[Image: WebCodeForm]

The form creates a link shown above which has two query string parameters:

  • url - The URL for the raw text on the Web
  • language -  The code language used for syntax highlighting

Note that the parameters must be URL encoded to work - especially the # in C# - because otherwise the # will be interpreted by the browser as a fragment identifier to jump to in the target URL.
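Here's a small sketch of building such a link from code - HttpUtility.UrlEncode() takes care of the # and other special characters (the source URL is the Subversion link from above):

using System.Web;

string sourceUrl = "http://www.west-wind.com:8080/svn/WestwindWebToolkit/trunk/" +
                   "Westwind.Utilities/SupportClasses/PropertyBag.cs";

string link = "http://codepaste.net/ShowUrl?url=" +
              HttpUtility.UrlEncode(sourceUrl) +
              "&Language=" + HttpUtility.UrlEncode("C#");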

The URL must be Web accessible so that CodePaste can download it and then apply the syntax coloring. It doesn't work with localhost URLs, for example. The code must also be returned as plain text - HTML based content doesn't work.

Hope some of you find this a useful feature. Enjoy…

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in .NET  

An Xml Serializable PropertyBag Dictionary Class for .NET


I don't know about you, but I frequently need property bags in my applications to store and possibly cache arbitrary data. Dictionary<T,V> works well for this, although I always seem to be hunting for a more specific generic type that provides a string key based dictionary. There's StringDictionary, but it only works with strings. There's HashSet<T>, but it uses the actual values as keys. In most key/value pair situations, for me, string is the key type to work off.

Dictionary<T,V> works well enough, but there are some issues with serialization of dictionaries in .NET. The .NET framework doesn't do well serializing IDictionary objects out of the box. The XmlSerializer doesn't support serialization of IDictionary via its default serialization, and while the DataContractSerializer does support IDictionary serialization it produces some pretty atrocious XML.

What doesn't work?

First off Dictionary serialization with the Xml Serializer doesn't work so the following fails:

[TestMethod]
public void DictionaryXmlSerializerTest()
{
    var bag = new Dictionary<string, object>();

    bag.Add("key", "Value");
    bag.Add("Key2", 100.10M);
    bag.Add("Key3", Guid.NewGuid());
    bag.Add("Key4", DateTime.Now);
    bag.Add("Key5", true);
    bag.Add("Key7", new byte[3] { 42, 45, 66 });
    TestContext.WriteLine(this.ToXml(bag));

}

public string ToXml(object obj)
{
    if (obj == null)
        return null;

    StringWriter sw = new StringWriter();
    XmlSerializer ser = new XmlSerializer(obj.GetType());
    ser.Serialize(sw, obj);
    return sw.ToString();
}

The error you get with this is:

System.NotSupportedException: The type System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.Object, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] is not supported because it implements IDictionary.

Got it! BTW, the same is true with binary serialization.

Running the same code above against the DataContractSerializer does work:

[TestMethod]
public void DictionaryDataContextSerializerTest()
{
    var bag = new Dictionary<string, object>();

    bag.Add("key", "Value");
    bag.Add("Key2", 100.10M);
    bag.Add("Key3", Guid.NewGuid());
    bag.Add("Key4", DateTime.Now);
    bag.Add("Key5", true);
    bag.Add("Key7", new byte[3] { 42, 45, 66 });

    TestContext.WriteLine(this.ToXmlDcs(bag));            
}

public string ToXmlDcs(object value, bool throwExceptions = false)
{
    var ser = new DataContractSerializer(value.GetType(), null, int.MaxValue, true, false, null);

    MemoryStream ms = new MemoryStream();
    ser.WriteObject(ms, value);
    return Encoding.UTF8.GetString(ms.ToArray(), 0, (int)ms.Length);
}

This DOES work but produces some pretty heinous XML (formatted with line breaks and indentation here):

<ArrayOfKeyValueOfstringanyType xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <KeyValueOfstringanyType>
    <Key>key</Key>
    <Value i:type="a:string" xmlns:a="http://www.w3.org/2001/XMLSchema">Value</Value>
  </KeyValueOfstringanyType>
  <KeyValueOfstringanyType>
    <Key>Key2</Key>
    <Value i:type="a:decimal" xmlns:a="http://www.w3.org/2001/XMLSchema">100.10</Value>
  </KeyValueOfstringanyType>
  <KeyValueOfstringanyType>
    <Key>Key3</Key>
    <Value i:type="a:guid" xmlns:a="http://schemas.microsoft.com/2003/10/Serialization/">2cd46d2a-a636-4af4-979b-e834d39b6d37</Value>
  </KeyValueOfstringanyType>
  <KeyValueOfstringanyType>
    <Key>Key4</Key>
    <Value i:type="a:dateTime" xmlns:a="http://www.w3.org/2001/XMLSchema">2011-09-19T17:17:05.4406999-07:00</Value>
  </KeyValueOfstringanyType>
  <KeyValueOfstringanyType>
    <Key>Key5</Key>
    <Value i:type="a:boolean" xmlns:a="http://www.w3.org/2001/XMLSchema">true</Value>
  </KeyValueOfstringanyType>
  <KeyValueOfstringanyType>
    <Key>Key7</Key>
    <Value i:type="a:base64Binary" xmlns:a="http://www.w3.org/2001/XMLSchema">Ki1C</Value>
  </KeyValueOfstringanyType>
</ArrayOfKeyValueOfstringanyType>

Ouch! That seriously hurts the eye! :-) Worse, though, it's extremely verbose with all those repetitive namespace declarations.

It's good to know that it works in a pinch, but for a human readable/editable solution or something lightweight to store in a database it's not quite ideal.

Why should I care?

As a little background, in one of my applications I have a need for a flexible property bag that is used on a free-form database field on an otherwise static entity. Basically what I have is a standard database record to which arbitrary properties can be added via an XML-based string field. I intend to expose those arbitrary properties as a collection from the field data stored as XML. The concept is pretty simple: when the data is saved, the collection is serialized into an XML string and stored in the database field; when the record is read back and the collection on the entity is accessed, the XML is automatically deserialized into the Dictionary. (I'll talk more about this in another post.)
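To give an idea of the pattern, here's a hypothetical entity sketch - the entity and field names are made up, but the ToXml() and CreateFromXml() methods are the PropertyBag features described below:

// Hypothetical entity: arbitrary properties stored in an XML string field
public class CustomerEntity
{
    // raw XML as stored in the free-form database field
    public string PropertyData { get; set; }

    private PropertyBag _properties;

    // lazily deserialized only when the collection is actually accessed
    public PropertyBag Properties
    {
        get
        {
            if (_properties == null)
                _properties = string.IsNullOrEmpty(PropertyData)
                                  ? new PropertyBag()
                                  : PropertyBag.CreateFromXml(PropertyData);
            return _properties;
        }
    }

    // call before saving to persist the bag back into the XML field
    public void SyncPropertyData()
    {
        if (_properties != null)
            PropertyData = _properties.ToXml();
    }
}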

While the DataContractSerializer would work, its verbosity is problematic both for the size of the generated XML strings and because users can manually edit this XML based property data in an advanced mode. A clean(er) layout certainly would be preferable and more user friendly.

Custom XMLSerialization with a PropertyBag Class

So… after a bunch of experimentation with different serialization formats I decided to create a custom PropertyBag class that provides a serializable Dictionary. It's basically a custom Dictionary<string,TValue> implementation with the keys always typed as string. The results are PropertyBag<TValue> and PropertyBag (which defaults to the object type for values).

The PropertyBag<TValue> and PropertyBag classes provide these features:

  • Subclassed from Dictionary<T,V>
  • Implements IXmlSerializable with a cleanish XML format
  • ToXml() and FromXml() methods to export and import to and from XML strings
  • Static CreateFromXml() method to create an instance

It's simple enough, as it's merely a Dictionary<string,object> subclass, but one that supports serialization to a - what I think at least is - cleaner XML format. The class is super simple to use:

 [TestMethod]
 public void PropertyBagTwoWayObjectSerializationTest()
 {
     var bag = new PropertyBag();

     bag.Add("key", "Value");
     bag.Add("Key2", 100.10M);
     bag.Add("Key3", Guid.NewGuid());
     bag.Add("Key4", DateTime.Now);
     bag.Add("Key5", true);
     bag.Add("Key7", new byte[3] { 42,45,66 } );
     bag.Add("Key8", null);
     bag.Add("Key9", new ComplexObject()
     {
         Name = "Rick",
         Entered = DateTime.Now,
         Count = 10
     });

     string xml = bag.ToXml();

     TestContext.WriteLine(bag.ToXml());

     bag.Clear();

     bag.FromXml(xml);

     Assert.IsTrue(bag["key"] as string == "Value");
     Assert.IsInstanceOfType( bag["Key3"], typeof(Guid));                        
     
     Assert.IsNull(bag["Key8"]);
     //Assert.IsNull(bag["Key10"]);

     Assert.IsInstanceOfType(bag["Key9"], typeof(ComplexObject));
}

This uses the PropertyBag class, which is a PropertyBag<object> (i.e. a Dictionary<string,object>) - which means it returns untyped values of type object. I suspect for me this will be the most common scenario, as I'd want to store arbitrary values in the PropertyBag rather than one specific type.
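In practice that just means a cast on retrieval - a quick usage sketch:

var bag = new PropertyBag();
bag.Add("Total", 100.10M);
bag.Add("Name", "Rick");

// values are stored as object, so retrieval requires a cast
decimal total = (decimal)bag["Total"];
string name = bag["Name"] as string;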

The same code with a strongly typed PropertyBag<decimal> looks like this:

[TestMethod]
public void PropertyBagTwoWayValueTypeSerializationTest()
{
    var bag = new PropertyBag<decimal>();

    bag.Add("key", 10M);
    bag.Add("Key1", 100.10M);
    bag.Add("Key2", 200.10M);
    bag.Add("Key3", 300.10M);
    
    string xml = bag.ToXml();

    TestContext.WriteLine(bag.ToXml());

    bag.Clear();

    bag.FromXml(xml);            

    Assert.IsTrue(bag.Get("Key1") == 100.10M);
    Assert.IsTrue(bag.Get("Key3") == 300.10M);            
}

and produces typed results of type decimal. The types can be either value or reference types, the combination of which actually proved to be a little more tricky than anticipated due to the null and string-specific value checks required - getting the generic typing right required use of default(T) and Convert.ChangeType() to get the compiler to play nice.

Of course the whole raison d'etre for this class is the XML serialization. You can see in the code above that we're doing a .ToXml() and .FromXml() to serialize to and from string. The XML produced for the first example looks like this:

<?xml version="1.0" encoding="utf-8"?>
<properties>
  <item>
    <key>key</key>
    <value>Value</value>
  </item>
  <item>
    <key>Key2</key>
    <value type="decimal">100.10</value>
  </item>
  <item>
    <key>Key3</key>
    <value type="___System.Guid">
      <guid>f7a92032-0c6d-4e9d-9950-b15ff7cd207d</guid>
    </value>
  </item>
  <item>
    <key>Key4</key>
    <value type="datetime">2011-09-26T17:45:58.5789578-10:00</value>
  </item>
  <item>
    <key>Key5</key>
    <value type="boolean">true</value>
  </item>
  <item>
    <key>Key7</key>
    <value type="base64Binary">Ki1C</value>
  </item>
  <item>
    <key>Key8</key>
    <value type="nil" />
  </item>
  <item>
    <key>Key9</key>
    <value type="___Westwind.Tools.Tests.PropertyBagTest+ComplexObject">
      <ComplexObject>
        <Name>Rick</Name>
        <Entered>2011-09-26T17:45:58.5789578-10:00</Entered>
        <Count>10</Count>
      </ComplexObject>
    </value>
  </item>
</properties>

 

The format is a bit cleaner than the DataContractSerializer's. Each item is serialized into <key> <value> pairs. If the value is a string no type information is written. Since string tends to be the most common type this saves space and serialization processing. All other types are attributed. Simple types are mapped to XML types, so things like decimal, datetime, boolean and base64Binary are encoded using their XML type values. All other types are embedded with a hokey format that describes the .NET type preceded by three underscores and are then encoded using the XmlSerializer. You can see this best above in the ComplexObject encoding.

For custom types this isn't pretty either, but it's more concise than the DataContractSerializer output, and it works - as long as you're serializing back and forth between .NET clients, at least.

The XML generated from the second example that uses PropertyBag<decimal> looks like this:

<?xml version="1.0" encoding="utf-8"?>
<properties>
  <item>
    <key>key</key>
    <value type="decimal">10</value>
  </item>
  <item>
    <key>Key1</key>
    <value type="decimal">100.10</value>
  </item>
  <item>
    <key>Key2</key>
    <value type="decimal">200.10</value>
  </item>
  <item>
    <key>Key3</key>
    <value type="decimal">300.10</value>
  </item>
</properties>

How does it work?

As I mentioned there's nothing fancy about this solution - it's little more than a subclass of Dictionary<string,TValue> that implements custom XML serialization via IXmlSerializable, plus a couple of helper methods that make it easier to get the XML in and out of the class. But it's proven very handy in a number of projects for me where dynamic data storage is required.

Here's the code:

    /// <summary>
    /// A serializable string/object dictionary that is XML serializable.
    /// Encodes each entry as an item element with key and value child
    /// elements. Values carry a type attribute that contains an XML type
    /// name; complex types encode the .NET type name in a
    /// type='___namespace.classname' attribute followed by standard
    /// XmlSerializer content. The latter serialization can be slow so it's
    /// not recommended to pass complex types if performance is critical.
    /// </summary>
    [XmlRoot("properties")]
    public class PropertyBag : PropertyBag<object>
    {
        /// <summary>
        /// Creates an instance of a propertybag from an Xml string
        /// </summary>
        /// <param name="xml">Xml string to deserialize from</param>
        /// <returns></returns>
        public static PropertyBag CreateFromXml(string xml)
        {
            var bag = new PropertyBag();
            bag.FromXml(xml);
            return bag;            
        }
    }

    /// <summary>
    /// A string-keyed dictionary with typed values that is XML serializable.
    /// 
    /// Encodes each entry as an item element with key and value child
    /// elements. Values carry a type attribute that contains an XML type
    /// name; complex types encode the .NET type name in a
    /// type='___namespace.classname' attribute followed by standard
    /// XmlSerializer content. The latter serialization can be slow so it's
    /// not recommended to pass complex types if performance is critical.
    /// </summary>
    /// <typeparam name="TValue">The value type. Both reference and value types work; use object to store mixed types.</typeparam>
    [XmlRoot("properties")]    
    public class PropertyBag<TValue> : Dictionary<string, TValue>, IXmlSerializable               
    {           
        /// <summary>
        /// Not implemented - this means no schema information is passed
        /// so this won't work with ASMX/WCF services.
        /// </summary>
        /// <returns></returns>       
        public System.Xml.Schema.XmlSchema GetSchema()
        {
            return null;
        }


        /// <summary>
        /// Serializes the dictionary to XML. Each entry is
        /// written as an item element containing key and
        /// value child elements. An xml type attribute is
        /// embedded for each non-string value; complex types
        /// embed a .NET type name prefixed with three
        /// underscores.
        /// </summary>
        /// <param name="writer"></param>
        public void WriteXml(System.Xml.XmlWriter writer)
        {
            foreach (string key in this.Keys)
            {
                TValue value = this[key];

                Type type = null;
                if (value != null)
                    type = value.GetType();

                writer.WriteStartElement("item");

                writer.WriteStartElement("key");
                writer.WriteString(key);
                writer.WriteEndElement();

                writer.WriteStartElement("value");
                string xmlType = XmlUtils.MapTypeToXmlType(type);
                bool isCustom = false;

                // Type information attribute if not string
                if (value == null)
                {
                    writer.WriteAttributeString("type", "nil");
                }
                else if (!string.IsNullOrEmpty(xmlType))
                {
                    if (xmlType != "string")
                    {
                        writer.WriteStartAttribute("type");
                        writer.WriteString(xmlType);
                        writer.WriteEndAttribute();
                    }
                }
                else
                {
                    isCustom = true;
                    xmlType = "___" + value.GetType().FullName;
                    writer.WriteStartAttribute("type");
                    writer.WriteString(xmlType);
                    writer.WriteEndAttribute();
                }

                // Actual value serialization
                if (!isCustom)
                {
                    if (value != null)
                        writer.WriteValue(value);
                }
                else
                {
                    XmlSerializer ser = new XmlSerializer(value.GetType());
                    ser.Serialize(writer, value);
                }
                writer.WriteEndElement(); // value

                writer.WriteEndElement(); // item
            }
        }
        

        /// <summary>
        /// Reads the custom serialized format
        /// </summary>
        /// <param name="reader"></param>
        public void ReadXml(System.Xml.XmlReader reader)
        {
            this.Clear();
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "key")
                {                    
                    string xmlType = null;
                    string name = reader.ReadElementContentAsString(); 

                    // item element
                    reader.ReadToNextSibling("value");
                    
                    if (reader.MoveToNextAttribute())
                        xmlType = reader.Value;
                    reader.MoveToContent();

                    TValue value;
                    if (xmlType == "nil")
                        value = default(TValue); // null
                    else if (string.IsNullOrEmpty(xmlType))
                    {
                        // value is a string or object and we can assign TValue to value
                        string strval = reader.ReadElementContentAsString();
                        value = (TValue) Convert.ChangeType(strval, typeof(TValue)); 
                    }
                    else if (xmlType.StartsWith("___"))
                    {
                        while (reader.Read() && reader.NodeType != XmlNodeType.Element)
                        { }

                        Type type = ReflectionUtils.GetTypeFromName(xmlType.Substring(3));
                        //value = reader.ReadElementContentAs(type,null);
                        XmlSerializer ser = new XmlSerializer(type);
                        value = (TValue)ser.Deserialize(reader);
                    }
                    else
                        value = (TValue)reader.ReadElementContentAs(XmlUtils.MapXmlTypeToType(xmlType), null);

                    this.Add(name, value);
                }
            }
        }


        /// <summary>
        /// Serializes this dictionary to an XML string
        /// </summary>
        /// <returns>XML String or Null if it fails</returns>
        public string ToXml()
        {
            string xml = null;
            SerializationUtils.SerializeObject(this, out xml);
            return xml;
        }

        /// <summary>
        /// Deserializes from an XML string
        /// </summary>
        /// <param name="xml"></param>
        /// <returns>true or false</returns>
        public bool FromXml(string xml)
        {
            this.Clear();

            // if xml string is empty we return an empty dictionary                        
            if (string.IsNullOrEmpty(xml))
                return true;

            var result = SerializationUtils.DeSerializeObject(xml, 
                                                 this.GetType()) as PropertyBag<TValue>;
            if (result != null)
            {
                foreach (var item in result)
                {
                    this.Add(item.Key, item.Value);
                }
            }
            else
                // null is a failure
                return false;

            return true;
        }


        /// <summary>
        /// Creates an instance of a propertybag from an Xml string
        /// </summary>
        /// <param name="xml"></param>
        /// <returns></returns>
        public static PropertyBag<TValue> CreateFromXml(string xml)
        {
            var bag = new PropertyBag<TValue>();
            bag.FromXml(xml);
            return bag;
        }
    }

The code uses a couple of small helper classes, SerializationUtils and XmlUtils, for serialization and for mapping XML types to and from .NET types. Both come from the Westwind.Utilities project (which is the same project where PropertyBag lives) in the West Wind Web Toolkit. The code implements ReadXml and WriteXml for the IXmlSerializable implementation using old school XmlReaders and XmlWriters (because it's pretty simple stuff - no need for XLinq here).
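Neither helper is shown here, but the XML type mapping amounts to little more than a lookup between .NET types and XML Schema type names. A rough sketch of what MapTypeToXmlType() might look like - this is my approximation, not the actual Westwind implementation:

// Hypothetical approximation of XmlUtils.MapTypeToXmlType()
static string MapTypeToXmlType(Type type)
{
    if (type == null) return null;
    if (type == typeof(string)) return "string";
    if (type == typeof(bool)) return "boolean";
    if (type == typeof(decimal)) return "decimal";
    if (type == typeof(DateTime)) return "datetime";
    if (type == typeof(byte[])) return "base64Binary";
    return null;  // unknown - WriteXml() falls back to the ___typename format
}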

Then there are two helper methods, .ToXml() and .FromXml(), that let your code easily convert between XML and a PropertyBag object. In my code that's what I use to persist to and from the entity XML property during .Load() and .Save() operations. It's sweet to have a string key dictionary and then be able to turn around and persist the whole thing to XML and back with one line of code.
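In a business object that might look something like this - Entity and its Properties string field are hypothetical placeholders for your own persistence code:

public PropertyBag Settings = new PropertyBag();

public void Save()
{
    // persist the whole bag into a single XML string column
    Entity.Properties = Settings.ToXml();
    // ... save Entity to the data store
}

public void Load()
{
    // ... load Entity from the data store
    Settings.FromXml(Entity.Properties);
}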

Hopefully some of you will find this class as useful as I have. It's a simple solution to a common requirement in my applications and I've used the hell out of it in the short time since I created it.

Resources

You can find the complete code for the two classes plus the helpers in the Subversion repository for Westwind.Utilities. You can grab the source files from there or download the whole project. You can also grab the full Westwind.Utilities assembly from NuGet and add it to your project if that's easier for you.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in .NET  CSharp  

Getting a Web Resource Url in non WebForms Applications

WebResources in ASP.NET are a pretty useful feature: resources that are embedded into a .NET assembly and can be loaded from the assembly via a special resource URL. WebForms includes a method on the ClientScriptManager (Page.ClientScript) and the ScriptManager object to retrieve URLs to these resources.

For example you can do:

ClientScript.GetWebResourceUrl(typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE);

GetWebResourceUrl() requires a type (which is used to look up the assembly in which to find the resource) and the resource id to look up. GetWebResourceUrl() then returns a nasty old long URL like this:

WebResource.axd?d=-b6oWzgbpGb8uTaHDrCMv59VSmGhilZP5_T_B8anpGx7X-PmW_1eu1KoHDvox-XHqA1EEb-Tl2YAP3bBeebGN65tv-7-yAimtG4ZnoWH633pExpJor8Qp1aKbk-KQWSoNfRC7rQJHXVP4tC0reYzVw2&t=634533278261362212

While excessive resource usage has lately been frowned upon - especially by MVC developers, who tend to opt for content distributed as files - I still think that Web Resources have their place even in non-WebForms applications. And if you have existing assemblies that include resources like scripts and common image links, it sure would be nice to access them from non-WebForms pages like MVC views or even plain old Razor Web Pages.

Where's my Page object Dude?

Unfortunately ASP.NET doesn't natively have a mechanism for retrieving WebResource Urls outside of the WebForms engine. It's a feature that's specifically baked into WebForms and that relies on the Page HttpHandler implementation. Both Page.ClientScript (obviously) and ScriptManager rely on a hosting Page object in order to work, and the various methods off these objects require control instances to be passed in. The reason for this is that the script managers can inject scripts and links into Page content (think RegisterXXXX methods), and for that a Page instance is required. However, for many other methods - like GetWebResourceUrl() - that simply return resources or resource links, the Page reference is really irrelevant.

While there's a separate ClientScriptManager class, it's marked as sealed and doesn't have any public constructors, so you can't create your own instance (without Reflection). And even if you could, the internal constructor it does have requires a Page reference. No good…

So, can we get access to a WebResourceUrl generically without running in a WebForms Page instance?

We just have to create a Page instance ourselves and use it internally. There's nothing intrinsic about the use of the Page class in ClientScript - at least for retrieving resources and resource Urls - so it's easy to create an instance of a Page, for example, in a static method.

For our needs of retrieving ResourceUrls or even actually retrieving script resources we can use a canned, non-configured Page instance we create on our own. The following works just fine:

public static string GetWebResourceUrl(Type type, string resource )
{
    Page page = new Page();            
    return page.ClientScript.GetWebResourceUrl(type, resource);
}

A slight optimization might be to cache the created Page instance. Page tends to be a pretty heavy object to create, so rather than constructing one each time a URL is required you might want to hang on to a single instance:

public class WebUtils
{
    private static Page CachedPage
    {
        get
        {
            if (_CachedPage == null)
                _CachedPage = new Page();
            return _CachedPage;
        }
    }
    private static Page _CachedPage;

    public static string GetWebResourceUrl(Type type, string resource)
    {
        return CachedPage.ClientScript.GetWebResourceUrl(type, resource);
    }
}
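Note that the lazy initialization above isn't synchronized, so under load a couple of extra Page instances might get created and discarded - harmless here, but if it bothers you a Lazy<T> field (available as of .NET 4.0) makes the initialization thread safe. A minimal sketch:

public class WebUtils
{
    // Lazy<T> guarantees the factory delegate runs only once,
    // even when multiple threads hit the property simultaneously
    private static readonly Lazy<Page> _CachedPage =
        new Lazy<Page>(() => new Page());

    public static string GetWebResourceUrl(Type type, string resource)
    {
        return _CachedPage.Value.ClientScript.GetWebResourceUrl(type, resource);
    }
}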

You can now use GetWebResourceUrl in a Razor page like this:

<!DOCTYPE html>
<html>
    <head>
        <script src="@WebUtils.GetWebResourceUrl(typeof(ControlResources),ControlResources.JQUERY_SCRIPT_RESOURCE)"></script>
    </head>
    <body>
        <div class="errordisplay">
            <img src="@WebUtils.GetWebResourceUrl(typeof(ControlResources),ControlResources.WARNING_ICON_RESOURCE)" />
            This is only a Test!
        </div>
    </body>
</html>

And voila - there you have WebResources served from a non-Page based application.

WebResources may be on the way out, but legacy apps have them embedded, and for some situations - like fallback scripts and some common image resources - I still like to use them. Being able to use them from non-WebForms applications should have been built into the core ASP.NET platform IMHO, but seeing that it's not, this workaround is easy enough to implement.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  MVC  