Channel: Rick Strahl's Web Log

An easy way to create Side by Side registrationless COM Manifests with Visual Studio


Here's something I didn't find out until today: You can use Visual Studio to easily create registrationless COM manifest files for you with just a couple of small steps. Registrationless COM lets you use COM components without them being registered in the registry. This means it's possible to deploy COM components along with another application using plain xcopy semantics. To be sure, it's rarely quite that easy - you need to watch out for dependencies - but if you know you have COM components that are lightweight and have no dependencies, or only known ones, it's easy to get everything into a single folder and off you go.

Registrationless COM works via manifest files which carry the same name as the executable plus a .manifest extension (i.e. yourapp.exe.manifest).

I'm going to use a Visual FoxPro COM object as an example and create a simple Windows Forms app that calls the component - without that component being registered. Let's take a walk down memory lane…

Create a COM Component

I start by creating a FoxPro COM component because that's what I know and am working with here in my legacy environment. You can use a VB classic or C++ ATL object if that's more to your liking. Here's a real simple Fox one:

DEFINE CLASS SimpleServer as Session OLEPUBLIC

FUNCTION HelloWorld(lcName)
RETURN "Hello " + lcName
ENDFUNC

ENDDEFINE

Compile it into a DLL COM component with:

BUILD MTDLL simpleserver FROM simpleserver RECOMPILE

And to make sure it works test it quickly from Visual FoxPro:

server = CREATEOBJECT("simpleServer.simpleserver")
MESSAGEBOX( server.HelloWorld("Rick") )

Using Visual Studio to create a Manifest File for a COM Component

Next open Visual Studio and create a new executable project - a Console App or WinForms or WPF application will all do.

  • Go to the References node
  • Select Add Reference
  • Use the Browse tab and find your compiled DLL to import
  • You'll now see your assembly in the project
  • Right-click the reference and select Properties
  • Set the Isolated dropdown to True

Compile and that's all there is to it. Visual Studio will create an App.exe.manifest file right alongside your application's EXE. The manifest file created looks like this:

<?xml version="1.0" encoding="utf-8"?>
<assembly xsi:schemaLocation="urn:schemas-microsoft-com:asm.v1 assembly.adaptive.xsd"
          manifestVersion="1.0"
          xmlns:asmv1="urn:schemas-microsoft-com:asm.v1"
          xmlns:asmv2="urn:schemas-microsoft-com:asm.v2"
          xmlns:asmv3="urn:schemas-microsoft-com:asm.v3"
          xmlns:dsig="http://www.w3.org/2000/09/xmldsig#"
          xmlns:co.v1="urn:schemas-microsoft-com:clickonce.v1"
          xmlns:co.v2="urn:schemas-microsoft-com:clickonce.v2"
          xmlns="urn:schemas-microsoft-com:asm.v1"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <assemblyIdentity name="App.exe"
                    version="1.0.0.0"
                    processorArchitecture="x86"
                    type="win32" />
  <file name="simpleserver.DLL"
        asmv2:size="27293">
    <hash xmlns="urn:schemas-microsoft-com:asm.v2">
      <dsig:Transforms>
        <dsig:Transform Algorithm="urn:schemas-microsoft-com:HashTransforms.Identity" />
      </dsig:Transforms>
      <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
      <dsig:DigestValue>puq+ua20bbidGOWhPOxfquztBCU=</dsig:DigestValue>
    </hash>
    <typelib tlbid="{f10346e2-c9d9-47f7-81d1-74059cc15c3c}"
             version="1.0"
             helpdir=""
             resourceid="0"
             flags="HASDISKIMAGE" />
    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}"
              threadingModel="Apartment"
              tlbid="{f10346e2-c9d9-47f7-81d1-74059cc15c3c}"
              progid="simpleserver.SimpleServer"
              description="simpleserver.SimpleServer" />
  </file>
</assembly>

Now let's finish our super complex console app to test with:

using System;
using System.Collections.Generic;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            Type type = Type.GetTypeFromProgID("simpleserver.simpleserver", true);
            dynamic server = Activator.CreateInstance(type);
            Console.WriteLine(server.HelloWorld("rick"));
            Console.ReadLine();
        }
    }
}

Now run the Console Application… As expected that should work. And why not? The COM component is still registered, right? :-) Nothing tricky about that.

Let's unregister the COM component and then re-run and see what happens.

  • Go to the Command Prompt
  • Change to the folder where the DLL is installed
  • Unregister with: RegSvr32 -u simpleserver.dll     

To be sure that the COM component no longer works, check it out with the same test you used earlier (i.e. o = CREATEOBJECT("SimpleServer.SimpleServer") in your development environment, VBScript etc.).

Make sure you run the EXE directly and don't re-compile the application, or else Visual Studio will complain that it can't find the COM component in the registry while compiling. In fact, now that we have our .manifest file you can remove the COM object reference from the project.

Run the EXE from Windows Explorer or a command prompt to avoid the recompile.

Watch out for embedded Manifest Files

Now recompile your .NET project and run it… and it will most likely fail!

The problem is that .NET applications by default embed a manifest file into the compiled EXE, which results in the externally created manifest file being completely ignored. Only one manifest can be applied at a time and the compiled manifest takes precedence. Uh, thanks Visual Studio - not very helpful…

Note that if you use another development tool like Visual FoxPro to create your EXE this won't be an issue as long as the tool doesn't automatically add a manifest file. Creating a Visual FoxPro EXE for example will work immediately with the generated manifest file as is.

If you are using .NET and Visual Studio you have a couple of options of getting around this:

  • Remove the embedded manifest file
  • Copy the contents of the generated manifest file into a project manifest file and compile that in

To remove an embedded manifest in a Visual Studio project:

  • Open the Project Properties (Alt-Enter on project node)
  • Go down to Resources | Manifest and select Create Application without a Manifest


You can now use the external manifest file and it will actually be respected when the app runs.

The other option is to let Visual Studio create the manifest file on disk and then explicitly add the manifest file into the project. Notice on the dialog above I did this for app.exe.manifest and the manifest actually shows up in the list. If I select this file it will be compiled into the EXE and be used in lieu of any external files and that works as well.

Remove the simpleserver.dll reference so you can compile your code and run the application. Now it should work without COM registration of the component.

Personally I prefer external manifests because they can be modified after the fact - compiled manifests are evil in my mind because they are immutable - once they are there they can't be overridden or changed. So I prefer an external manifest. However, if you are absolutely sure nothing needs to change and you don't want anybody messing with your manifest, you can also embed it. Either option is there.

Watch for Manifest Caching

While trying to get this to work I ran into some problems. Specifically, when it wasn't working at first (due to the embedded manifest) I played with various manifest layouts in different files. There are a number of different ways to represent manifest files, including offloading to a separate folder (more on that later).

A few times I made deliberate errors in the manifest file and I found that once the app failed or worked, no amount of changing the manifest file would make it behave differently. It appears that Windows caches the manifest data for a given EXE or DLL, and it takes a restart or a recompile of either the EXE or the DLL to clear the cache. Recompile your servers in order to see manifest changes, unless there's an outright failure from an invalid manifest file. If the app starts, the manifest is read and cached immediately.

This can be very confusing especially if you don't know that it's happening. I found myself always recompiling the exe after each run and before making any changes to the manifest file.

Don't forget about Runtimes of COM Objects

In the example above I used a Visual FoxPro COM component. Visual FoxPro is a runtime based environment, so if I'm going to distribute an application that uses a FoxPro COM object the runtimes need to be distributed as well. The same is true of classic Visual Basic applications. Assuming you don't know whether the runtimes are installed on the target machines, make sure to install all the additional files in the EXE's directory alongside the COM DLL.

In the case of Visual FoxPro the target folder should contain:

  • The EXE  App.exe
  • The Manifest file (unless it's compiled in) App.exe.manifest
  • The COM object DLL (simpleserver.dll)
  • Visual FoxPro Runtimes: VFP9t.dll (or VFP9r.dll for non-multithreaded dlls), vfp9rENU.dll, msvcr71.dll

All these files should be in the same folder.

Debugging Manifest load Errors

If you for some reason get your manifest loading wrong, there's a useful tool available - sxstrace, with its Trace and Parse commands. It can be a huge help in debugging manifest loading errors. Put the following into a batch file (SxS_Trace.bat for example):

sxstrace Trace -logfile:sxs.bin
sxstrace Parse -logfile:sxs.bin -outfile:sxs.txt

Then start the batch file before running your EXE. Make sure there's no caching happening as described in the previous section. For example, if I go into the manifest file and explicitly break the CLSID and/or ProgID I get a detailed report on where the EXE is looking for the manifest and what it's reading. Eventually the trace gives me an error like this:

INFO: Parsing Manifest File C:\wwapps\Conf\SideBySide\Code\app.EXE.
    INFO: Manifest Definition Identity is App.exe,processorArchitecture="x86",type="win32",version="1.0.0.0".
    ERROR: Line 13: The value {AAaf2c2811-0657-4264-a1f5-06d033a969ff} of attribute clsid in element comClass is invalid.
ERROR: Activation Context generation failed.
End Activation Context Generation.

pinpointing nicely where the error lies. Pay special attention to the various attributes - they have to match exactly in the different sections of the manifest file(s).
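Since attribute mismatches between manifest sections are such a common source of activation failures, a small script to pull the clsid/progid pairs out of a manifest for comparison can save time. Here's a quick sketch (not from the original post - just an illustration using Python's standard library XML parser):

```python
import xml.etree.ElementTree as ET

ASM_NS = "urn:schemas-microsoft-com:asm.v1"

def com_classes(manifest_xml):
    """Collect (clsid, progid) pairs from every comClass entry in a manifest."""
    root = ET.fromstring(manifest_xml)
    pairs = []
    # all elements live in the asm.v1 default namespace
    for com_class in root.iter("{%s}comClass" % ASM_NS):
        pairs.append((com_class.get("clsid"), com_class.get("progid")))
    return pairs

manifest = """<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <file name="simpleserver.DLL">
    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}"
              threadingModel="Apartment"
              progid="simpleserver.SimpleServer"
              description="simpleserver.SimpleServer" />
  </file>
</assembly>"""

print(com_classes(manifest))
# [('{af2c2811-0657-4264-a1f5-06d033a969ff}', 'simpleserver.SimpleServer')]
```

Running this over both the embedded and external manifests (and diffing against what the registry used to hold) makes exact-match checking mechanical instead of eyeball work.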

Multiple COM Objects

The manifest file that Visual Studio creates is actually quite a bit more complex than is required for basic registrationless COM object invocation. The manifest file can actually be simplified a lot by stripping off various namespaces and removing the type library references altogether. Here's an example of a simplified manifest file that includes references to 2 COM servers:

<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1"
          manifestVersion="1.0">
  <assemblyIdentity name="App.exe"
                    version="1.0.0.0"
                    processorArchitecture="x86"
                    type="win32"
                    />
  <file name="simpleserver.DLL">

    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}"
              threadingModel="Apartment"
              progid="simpleserver.SimpleServer"
              description="simpleserver.SimpleServer" />
  </file>

  <file name = "sidebysidedeploy.dll">
    <comClass
      clsid="{EF82B819-7963-4C36-9443-3978CD94F57C}"
      progid="sidebysidedeploy.SidebysidedeployServer"
      description="SidebySideDeploy Server"
      threadingModel="apartment"
     />
  </file>
</assembly>

Simple enough right?

Routing to separate Manifest Files and Folders

In the examples above all files ended up in the application's root folder - all the DLLs, support files and runtimes. Sometimes that's not so desirable and you can actually create separate manifest files. The easiest way to do this is to create a manifest file that 'routes' to another manifest file in a separate folder. Basically you create a new 'assembly identity' via a named id. You can then create a folder and another manifest with the id plus .manifest that points at the actual file.

In this example I create:

  • App.exe.manifest
  • A folder called App.deploy
  • A manifest file in App.deploy
  • All DLLs and runtimes in App.deploy

Let's start with that master manifest file. This file only holds a reference to another manifest file:

App.exe.manifest

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1"
          manifestVersion="1.0">

  <assemblyIdentity name="App.exe"
                    version="1.0.0.0"
                    processorArchitecture="x86"
                    type="win32" />

  <dependency>
    <dependentAssembly>
      <assemblyIdentity name="App.deploy"
                        version="1.0.0.0"
                        type="win32"
                        />

    </dependentAssembly>
  </dependency>

</assembly>


Note this file only contains a dependency to App.deploy which is another manifest id. I can then create App.deploy.manifest in the current folder or in an App.deploy folder. In this case I'll create App.deploy and in it copy the DLLs and support runtimes. I then create App.deploy.manifest.

App.deploy.manifest

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1"
          manifestVersion="1.0">

  <assemblyIdentity
    name="App.deploy"
    type="win32"
    version="1.0.0.0" />

  <file name="simpleserver.DLL">
    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}"
              threadingModel="Apartment"
              progid="simpleserver.SimpleServer"
              description="simpleserver.SimpleServer" />
  </file>

  <file name="sidebysidedeploy.dll">
    <comClass
      clsid="{EF82B819-7963-4C36-9443-3978CD94F57C}"
      threadingModel="Apartment"
      progid="sidebysidedeploy.SidebysidedeployServer"
      description="SidebySideDeploy Server" />
  </file>

</assembly>


In this manifest file I then host my COM DLLs and any support runtimes. This is quite useful if you have lots of DLLs you are referencing or if you need to have separate configuration and application files that are associated with the COM object. This way the operation of your main application and the COM objects it interacts with is somewhat separated.

You can see the two folders in the deployed file layout: the application's root folder and the App.deploy folder below it.

In theory registrationless COM should be pretty easy and painless - you've seen the configuration manifest files and it certainly doesn't look very complicated, right? But the devil's in the details. The Activation Context API (SxS - side by side activation) is very intolerant of small errors in the XML or formatting of the keys, so be really careful when setting up components, especially if you are manually editing these files. If you do run into trouble, SxsTrace/SxsParse are a huge help in tracking down the problems. And remember that if you do have problems you'll need to recompile your EXEs or DLLs for the SxS APIs to refresh themselves properly.

All of this gets even more fun if you want to do registrationless COM inside of IIS :-) But I'll leave that for another blog post…

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in COM  .NET  FoxPro  

Loading jQuery Consistently in a .NET Web App


One thing that frequently comes up in discussions around jQuery is how to best load the jQuery library (as well as other commonly used and updated libraries) in a Web application. Specifically, the issue is one of versioning: you want to be able to update or switch script versions via an application wide setting in one place, and have every page that uses the script reflect that setting. Although I use jQuery as an example here, the same concepts can be applied to any script library - in my Web libraries I use the same approach for jQuery.ui and my own internal jQuery support library. The concepts used here can be applied both in WebForms and MVC.

Loading jQuery Properly From CDN

Before we look at a generic way to load jQuery via some server logic, let me first point out my preferred way to embed jQuery into the page. I use the Google CDN to load jQuery and then use a fallback URL to handle the offline or no Internet connection scenario.

Why use a CDN? CDN links tend to load more quickly since they are very likely to be cached in users' browsers already, as the jQuery CDN is used by many, many sites on the Web. Using a CDN also removes load from your Web server and puts it on the CDN provider - in this case Google - rather than on your Web site. On the downside, CDN links give the provider (Google, Microsoft) yet another way to track users through their Web usage.

Here's how I use jQuery CDN plus a fallback link on my WebLog for example:

<!DOCTYPE HTML>
<html>
<head>
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script>
    <script>
        if (typeof (jQuery) == 'undefined')
            document.write(unescape("%3Cscript " +
                "src='/Weblog/wwSC.axd?r=Westwind.Web.Controls.Resources.jquery.js'%3E%3C/script%3E"));
    </script>
    <title>Rick Strahl's Web Log</title>
    ...
</head>

You can see that the CDN is referenced first, followed by a small script block that checks whether jQuery was loaded (i.e. whether the jQuery object exists). If it didn't load, another script reference is added to the document dynamically, pointing to a backup URL. In this case my backup URL points at a WebResource in my Westwind.Web assembly, but the URL can also be a local script like src="/scripts/jquery.min.js".

Important: Use the proper Protocol/Scheme for CDN Urls

[updated based on comments]

If you're using a CDN to load an external script resource you should always make sure that the script is loaded with the same protocol as the parent page to avoid mixed content warnings in the browser. You don't want to load a script from an http:// resource when you're on an https:// page. The easiest way to do this is to use a protocol-relative URL:

<script src="//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script>

which is an easy way to load resources from other domains. This URL syntax will automatically use the parent page's protocol (or more correctly scheme). As long as the remote domains support both http:// and https:// access this should work. BTW this also works in CSS (with some limitations) and links.
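The scheme-inheritance behavior of protocol-relative URLs follows the standard URL resolution rules (RFC 3986 "network-path references"), so you can see it with any conforming resolver. A quick illustration - not part of the original post - using Python's urllib.parse:

```python
from urllib.parse import urljoin

cdn = "//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"

# A protocol-relative URL inherits the scheme of the page that references it
print(urljoin("https://example.com/page.html", cdn))
# https://ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js

print(urljoin("http://example.com/page.html", cdn))
# http://ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js
```

The same resolution happens in the browser when it encounters the src="//…" reference, which is why the script request never mixes schemes with the hosting page.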

BTW, I didn't know about this until it was pointed out in the comments. This is a very useful feature for many things - ah the benefits of my blog to myself :-)

Version Numbers

When you use a CDN you'll notice that you have to reference a specific version of jQuery. When using local files you may not have to do this since you can rename your private copy of jQuery.js, but for CDN the references are always versioned. The version number is of course very important to ensure you get the version you have tested with, but it's also important to the provider because it ensures that cached content is always correct. If an existing file were updated in place, the update might take a very long time to propagate past locally cached content. The version number ensures you get the right version and not some cached content that has been changed but not refreshed in your cache.

On the other hand version numbers also mean that once you decide to use a new version of the script you now have to change all your script references in your pages.

Depending on whether you use some sort of master/layout page or not this may or may not be easy in your application. Even if you do use master/layout pages, chances are that you probably have a few of them and at the very least all of those have to be updated for the scripts. If you use individual pages for all content this issue then spreads to all of your pages. Search and Replace in Files will do the trick, but it's still something that's easy to forget and worry about.

Personally I think it makes sense to have a single place where you can specify the common script libraries that you want to load and, more importantly, which versions thereof and where they are loaded from.

Loading Scripts via Server Code

Script loading has always been important to me and as long as I can remember I've always built some custom script loading routines into my Web frameworks. WebForms makes this fairly easy because it has a reasonably useful script manager (ClientScriptManager and the ScriptManager) which allow injecting script into the page easily from anywhere in the Page cycle. What's nice about these components is that they allow scripts to be injected by controls so components can wrap up complex script/resource dependencies more easily without having to require long lists of CSS/Scripts/Image includes.

In MVC or pure script driven applications like Razor WebPages the process is more raw, requiring you to embed script references in the right place. But it's also more immediate - you know exactly which versions of scripts are used because you have to manually embed them. In WebForms, with different controls loading resources, this can often get confusing because it's quite possible to load multiple versions of the same script library into a page, the results of which are less than optimal…

In this post I look at a simple routine that embeds jQuery into the page based on a few application wide configuration settings. It returns only a string of the script tags that can be manually embedded into a page template.

It's a small function that merely returns a string of the script tags shown at the beginning of this post, along with some options on how that string is composed. You'll be able to specify in one place which version loads, and then all places where the helper function is used will automatically reflect this selection. Options allow specification of the jQuery CDN Url, the fallback Url and where jQuery should be loaded from (script folder, Resource or CDN in my case). While this is specific to jQuery you can apply it to other resources as well. For example, I use a similar approach with jQuery.ui using practically the same semantics.

Providing Resources in ControlResources

In my Westwind.Web Web utility library I have a class called ControlResources which is responsible for holding resource Urls, resource IDs and string constants that reference those resource IDs. The library also provides a few helper methods for loading common scripts into a Web page. There are specific versions for WebForms which use the ClientScriptManager/ScriptManager, and script link methods that can be used in any .NET technology that can embed an expression into the output template (or code for that matter).

The ControlResources class contains mostly static content - references to resources mostly. But it also contains a few static properties that configure script loading:

  • A Script LoadMode (CDN, Resource, or script url)
  • A default CDN Url
  • A fallback url

These are static properties in the ControlResources class:

public class ControlResources
{
    /// <summary>
    /// Determines what location jQuery is loaded from
    /// </summary>
    public static JQueryLoadModes jQueryLoadMode = JQueryLoadModes.ContentDeliveryNetwork;

    /// <summary>
    /// jQuery CDN Url on Google
    /// </summary>
    public static string jQueryCdnUrl = "//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js";

    /// <summary>
    /// jQuery UI CDN Url on Google
    /// </summary>
    public static string jQueryUiCdnUrl = "//ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js";

    /// <summary>
    /// jQuery UI fallback Url if CDN is unavailable or WebResource is used
    /// Note: The file needs to exist and hold the minimized version of jQuery ui
    /// </summary>
    public static string jQueryUiLocalFallbackUrl = "~/scripts/jquery-ui.min.js";
}

These static properties are fixed values that can be changed at application startup to reflect your preferences. Since they're static they are application wide settings and are respected across the entire Web application. It's best to set these defaults in Application_Start or similar startup code if you need to change them for your application:

protected void Application_Start(object sender, EventArgs e)
{
    // Force jQuery to be loaded off Google Content Network
    ControlResources.jQueryLoadMode = JQueryLoadModes.ContentDeliveryNetwork;

    // Allow overriding of the Cdn url
    ControlResources.jQueryCdnUrl = "http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js";


    // Route to our own internal handler
    App.OnApplicationStart();
} 

With these basic settings in place you can then embed expressions into a page easily.

In WebForms use:

<!DOCTYPE html>
<html>
<head runat="server">
    <%= ControlResources.jQueryLink() %>
    <script src="scripts/ww.jquery.min.js"></script>
</head>

In Razor use:

<!DOCTYPE html>
<html>
<head>
    @Html.Raw(ControlResources.jQueryLink())
    <script src="scripts/ww.jquery.min.js"></script>
</head>

Note that in Razor you need to use @Html.Raw() to force the string NOT to be escaped. Razor escapes string results by default, and Html.Raw() ensures that the HTML content is expanded as raw HTML text.
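The effect of default escaping is easy to see with any HTML-escaping helper. Here's a quick analogy (not Razor itself) using Python's standard html module - a script tag escaped this way would render as literal text on the page instead of loading the script, which is exactly why the raw output path is needed here:

```python
from html import escape

tag = '<script src="jquery.min.js"></script>'

# Escaping turns the markup into inert text: < > " become entities
print(escape(tag))
# &lt;script src=&quot;jquery.min.js&quot;&gt;&lt;/script&gt;
```

Html.Raw() bypasses the equivalent step in Razor so the generated script tags reach the browser as real markup.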

Both the WebForms and Razor output produce:

<!DOCTYPE html>
<html>
<head>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js" type="text/javascript"></script>
    <script type="text/javascript">
        if (typeof (jQuery) == 'undefined')
            document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=-b6oWzgbpGb8uTaHDrCMv59VSmGhilZP5_T_B8anpGx7X-PmW_1eu1KoHDvox-XHqA1EEb-Tl2YAP3bBeebGN65tv-7-yAimtG4ZnoWH633pExpJor8Qp1aKbk-KQWSoNfRC7rQJHXVP4tC0reYzVw2&t=634535391996872492' type='text/javascript'%3E%3C/script%3E"));
    </script>
    <script src="scripts/ww.jquery.min.js"></script>
</head>

which produces the desired effect for both CDN load and fallback URL.

The implementation of jQueryLink is pretty basic of course:

/// <summary>
/// Inserts a script link to load jQuery into the page based on the jQueryLoadModes settings
/// of this class. Default load is by CDN plus WebResource fallback
/// </summary>
/// <param name="url">
/// An optional explicit URL to load jQuery from. Url is resolved.
/// When specified no fallback is applied
/// </param>
/// <returns>full script tag and fallback script for jQuery to load</returns>
public static string jQueryLink(JQueryLoadModes jQueryLoadMode = JQueryLoadModes.Default, string url = null)
{
    string jQueryUrl = string.Empty;
    string fallbackScript = string.Empty;

    if (jQueryLoadMode == JQueryLoadModes.Default)
        jQueryLoadMode = ControlResources.jQueryLoadMode;

    if (!string.IsNullOrEmpty(url))
        jQueryUrl = WebUtils.ResolveUrl(url);
    else if (jQueryLoadMode == JQueryLoadModes.WebResource)
    {
        Page page = new Page();
        jQueryUrl = page.ClientScript.GetWebResourceUrl(typeof(ControlResources),
                                                        ControlResources.JQUERY_SCRIPT_RESOURCE);
    }
    else if (jQueryLoadMode == JQueryLoadModes.ContentDeliveryNetwork)
    {
        jQueryUrl = ControlResources.jQueryCdnUrl;

        if (!string.IsNullOrEmpty(jQueryCdnUrl))
        {
            // check if jquery loaded - if it didn't we're not online and use WebResource
            fallbackScript = @"<script type=""text/javascript"">if (typeof(jQuery) == 'undefined') document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));</script>";
            fallbackScript = string.Format(fallbackScript,
                                           WebUtils.ResolveUrl(ControlResources.jQueryCdnFallbackUrl));
        }
    }

    string output = "<script src=\"" + jQueryUrl + "\" type=\"text/javascript\"></script>";

    // add in the CDN fallback script code
    if (!string.IsNullOrEmpty(fallbackScript))
        output += "\r\n" + fallbackScript + "\r\n";

    return output;
}

There's one dependency here on WebUtils.ResolveUrl() which resolves Urls without access to a Page/Control (another one of those features that should be in the runtime, not in the WebForms or MVC engine).

You can see there's only a little bit of logic in this code that deals with potentially different load modes. I can load scripts from a Url, WebResources or - my preferred way - from CDN. Based on the static settings the scripts to embed are composed to be returned as simple string <script> tag(s).

I find this extremely useful especially when I'm not connected to the internet so that I can quickly swap in a local jQuery resource instead of loading from CDN. While CDN loading with the fallback works it can be a bit slow as the CDN is probed first before the fallback kicks in. Switching quickly in one place makes this trivial. It also makes it very easy once a new version of jQuery rolls around to move up to the new version and ensure that all pages are using the new version immediately.

I'm not trying to make this out as 'the' definitive way to load your resources, but rather provide it as a pointer so you can apply your own logic to determine where scripts come from and how they load. You could automate this further with configuration settings, or by reading the locations/preferences out of some sort of data/metadata store that can be updated dynamically instead of via recompilation.

FWIW, I use a very similar approach for loading jQuery UI and my own ww.jquery library - the same concept can be applied to any kind of script you might be loading from different locations. Hopefully some of you find this a useful addition to your toolset.


© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  jQuery  

SnagIt Live Writer Plug-in Updated


Ah, I love SnagIt from TechSmith and I use the heck out of it almost every day. So it's no surprise that I decided some time ago to integrate SnagIt into a few applications that require screen shots extensively. It's been a while since I've posted an update to my small SnagIt Windows Live Writer plug-in. A few nagging issues have crept up with changes in the way recent versions of SnagIt handle captures, and they have been addressed in this update of the plug-in.

Personally I love SnagIt and use it extensively, mostly for blogging, but also for writing documentation and articles etc. While there are many other (and also free) tools out there to do basic screen captures, SnagIt continues to be the most convenient tool for me with its nice built-in capture and effects editor that makes creating professional looking captures childishly simple. And maybe even more importantly: SnagIt has a COM interface that can be automated, which makes it super easy to embed into other applications. I've built SnagIt plug-ins for Live Writer as well as for one of my company's own tools, Html Help Builder.

If you use the Windows Live Writer offline WebLog Editor to write blog posts and have a copy of SnagIt it's probably worth your while to check this out if you haven't already.

In case you haven't, this plugin integrates SnagIt with Live Writer so you can easily capture and edit content and embed it into a post. Captures are shown in the SnagIt Preview editor where you can edit the image and apply image markup or effects, before selecting Finish (or Cancel). The final image can then be pasted directly into your Live Writer post.

When installed the SnagIt plug-in shows up on the PlugIn list or in the Plug-Ins toolbar shortcut:

SnagItCaptureDropDown

Once you select the Plug in you get the capture window that allows you to customize the capture process which includes most of the useful SnagIt capture options:

SnagItLiveWriterCapture

Once you're done capturing the image shows up in the SnagIt Image Editor and you can crop, mark up and apply effects.

CaptureWindowEditing

When done you click the Finish button and the image is embedded right into your blog post. Easy - how do you think the images in this blog entry got in here? The beauty of SnagIt is that it's all easily integrated - Capturing, editing and embedding, it only takes a few seconds to do it all especially if you save image effect presets in SnagIt.

What's updated

The main issue addressed in this update has to do with how the plug-in updates the Live Writer window. When a capture starts, Live Writer gets minimized to get out of the way and let you pick your capture source. When the capture is complete and the image has been embedded, Live Writer is activated once again. Recent versions of SnagIt however changed SnagIt's window positioning so that Live Writer ended up popping back up behind the SnagIt window, which was pretty annoying. This update pushes Live Writer back to the top of the window stack using some delaying tactics in the code.

There have also been a few small changes to the way the code interacts with the COM object which is more reliable if a capture fails or SnagIt blows up or is locked because it's already in a capture outside of the automation interface.

Source Code

SnagIt Automation is something I actually use a lot. As mentioned I've integrated this automation into Live Writer as well as my documentation tool Html Help Builder, which I use just about daily. The SnagIt integration has a similar interface in that application and provides similar functionality. It's quite useful to integrate SnagIt into other applications.

Because it's quite useful to embed SnagIt into other apps there's source code that you can download and embed into your own applications. The code includes both the dialog class that is automated from Live Writer, as well as the basic capture component that captures images to a disk file.

Resources

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in Live Writer  WebLog  

Using the West Wind Web Toolkit to set up AJAX and REST Services


I frequently get questions about which option to use for creating AJAX and REST backends for ASP.NET applications. There are many solutions out there to do this actually, but when I have a choice - not surprisingly - I fall back to my own tools in the West Wind Web Toolkit. I've talked a bunch about the 'in-the-box' solutions in the past, so for a change in this post I'll talk about the tools that I use in my own and customer applications to handle AJAX and REST based access to service resources using the West Wind Web Toolkit.

Let me preface this by saying that I like things to be easy. Yes, flexibility is very important as well, but not at the expense of over-complexity. The goal I've had with my tools is to make things drop dead easy, with good performance, while providing the core features that I'm after, which are:

  • Easy AJAX/JSON Callbacks
  • Ability to return any kind of non JSON content (string, stream, byte[], images)
  • Ability to work with both XML and JSON interchangeably for input/output
  • Access endpoints via POST data, RPC JSON calls, GET QueryString values or Routing interface
  • Easy to use generic JavaScript client to make RPC calls (same syntax, just what you need)
  • Ability to create clean URLS with Routing
  • Ability to use standard ASP.NET HTTP Stack for HTTP semantics

It's all about options!

In this post I'll demonstrate most of these features (except XML) in a few simple and short samples which you can download. So let's take a look and see how you can build an AJAX callback solution with the West Wind Web Toolkit.

Installing the Toolkit Assemblies

The easiest and leanest way of using the Toolkit in your Web project is to grab it via NuGet:

West Wind Web and AJAX Utilities (Westwind.Web)

and drop it into the project by right-clicking the project and choosing Manage NuGet Packages.

InsertNuget

 

When done you end up with your project looking like this:

NewProjectAfterNuget

What just happened?

NuGet added two assemblies - Westwind.Web and Westwind.Utilities - and the client ww.jquery.js library. It also added a couple of entries to web.config: the default namespaces so they can be accessed in pages/views, and a ScriptCompressionModule that the toolkit optionally uses to compress script resources served from within the assembly (namely ww.jquery.js and optionally jquery.js).

Creating a new Service

The West Wind Web Toolkit supports several ways of creating and accessing AJAX services, but for this post I'll stick to the lower level approach that works from any plain HTML page or of course MVC, WebForms, WebPages. There's also a WebForms specific control that makes this even easier but I'll leave that for another post.

So, to create a new standalone AJAX/REST service we can create a new HttpHandler in the new project either as a pure class based handler or as a generic .ASHX handler. Both work equally well, but generic handlers don't require any web.config configuration so I'll use that here.

In the root of the project add a Generic Handler. I'm going to call this one SampleService.ashx. Once the handler has been created, edit the code and remove all of the handler body code. Then change the base class to CallbackHandler and add methods that have a [CallbackMethod] attribute.

Here's what the modified handler implementation looks like with an added HelloWorld method:

using System;
using Westwind.Web;

namespace WestWindWebAjax
{
    /// <summary>
    /// Handler implements CallbackHandler to provide REST/AJAX services
    /// </summary>
    public class SampleService : CallbackHandler
    {

        [CallbackMethod]
        public string HelloWorld(string name)
        { 
            return "Hello " + name + ". Time is: " + DateTime.Now.ToString();
        }
    }
}


Notice that the class inherits from CallbackHandler and that the HelloWorld service method is marked up with [CallbackMethod]. We're done here.

URL-based Syntax

Once you compile, the 'service' is live and can respond to requests. All CallbackHandlers support input in GET and POST formats, and can return results as JSON or XML. To check our fancy HelloWorld method we can now access the service like this:

http://localhost/WestWindWebAjax/SampleService.ashx?Method=HelloWorld&name=Rick

which produces a default JSON response - in this case a string (wrapped in quotes as it's JSON):

JSONOutput

(note that by default most browsers will download JSON rather than display it - various options are available to view JSON right in the browser)

If I want to return the same data as XML I can tack on a &format=xml at the end of the querystring which produces:

<string>Hello Rick. Time is: 11/1/2011 12:11:13 PM</string>

Cleaner URLs with Routing Syntax

If you want cleaner URLs for each operation you can also configure custom routes on a per URL basis similar to the way that WCF REST does. To do this you need to add a new RouteHandler to your application's startup code in global.asax.cs one for each CallbackHandler based service you create:

protected void Application_Start(object sender, EventArgs e)
{
    CallbackHandlerRouteHandler.RegisterRoutes<SampleService>(RouteTable.Routes);
}

With this code in place you can now add RouteUrl properties to any of your service methods. For the HelloWorld method that doesn't make a ton of sense but here is what a routed clean URL might look like in definition:

[CallbackMethod(RouteUrl="stocks/HelloWorld/{name}")]
public string HelloWorld(string name)
{ 
    return "Hello " + name + ". Time is: " + DateTime.Now.ToString();
}

The same URL I previously used now becomes a bit shorter and more readable with:

http://localhost/WestWindWebAjax/stocks/HelloWorld/Rick

It's an easy way to create cleaner URLs and still get the same functionality.

Calling the Service with $.getJSON()

Since the result produced is JSON you can now easily consume this data using jQuery's getJSON method. First we need a couple of scripts - jquery.js and ww.jquery.js in the page:

<!DOCTYPE html>
<html>
<head>
    <link href="Css/Westwind.css" rel="stylesheet" type="text/css" />
    <script src="scripts/jquery.min.js" type="text/javascript"></script>
    <script src="scripts/ww.jquery.min.js" type="text/javascript"></script>
</head>
<body>

Next let's add a small HelloWorld example form (what else) that has a single textbox to type a name, a button and a div tag to receive the result:

        <fieldset>
        <legend>Hello World</legend>
        
            Please enter a name: 
            <input type="text" name="txtHello" id="txtHello" value="" />
            <input type="button" id="btnSayHello" value="Say Hello (POST)"  />
            <input type="button" id="btnSayHelloGet" value="Say Hello (GET)"  />

            <div id="divHelloMessage" class="errordisplay" 
                 style="display:none;width: 450px;" >
            </div>

        </fieldset>

Then to call the HelloWorld method a little jQuery is used to hook the document startup and the button click followed by the $.getJSON call to retrieve the data from the server.

<script type="text/javascript">
    $(document).ready(function () {

        $("#btnSayHelloGet").click(function () {
            $.getJSON("SampleService.ashx",
                        { Method: "HelloWorld", name: $("#txtHello").val() },
                        function (result) {
                            $("#divHelloMessage")
                                    .text(result)
                                    .fadeIn(1000);
                        });

        });
    });
</script>

$.getJSON() expects a full URL to the endpoint of our service, which is the ASHX file. We can either provide a full URL (SampleService.ashx?Method=HelloWorld&name=Rick) or we can provide just the base URL plus an object map that encodes the query string parameters for us, with a property matching each parameter of the server method. We can also use the clean URL routing syntax, but the object parameter encoding is actually safer since the parameters get properly encoded by jQuery.
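To see why the object map form is safer, here's a rough sketch of the kind of query string encoding jQuery performs for a flat parameter map - a simplified stand-in for illustration, not jQuery's actual $.param() implementation:

```javascript
// Simplified sketch of flat parameter map encoding - not jQuery's
// actual $.param() code, just the core idea.
function encodeParams(map) {
    var pairs = [];
    for (var key in map) {
        if (map.hasOwnProperty(key)) {
            pairs.push(encodeURIComponent(key) + "=" +
                       encodeURIComponent(map[key]));
        }
    }
    return pairs.join("&");
}

// Characters like & or spaces would break a hand-built URL,
// but survive intact when properly encoded:
var qs = encodeParams({ Method: "HelloWorld", name: "Rick & Co" });
// qs: "Method=HelloWorld&name=Rick%20%26%20Co"
```

Hand-concatenating query strings breaks as soon as a value contains reserved characters; letting jQuery encode the object map avoids that class of bug entirely.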

The result returned is whatever the result on the server method is - in this case a string. The string is applied to the divHelloMessage element and we're done. Obviously this is a trivial example, but it demonstrates the basics of getting a JSON response back to the browser.

AJAX Post Syntax - using ajaxCallMethod()

The previous example gives you basic control over the data that you send to the server via querystring parameters. This works OK for simple values like short strings, numbers and boolean values, but doesn't really work if you need to pass something more complex like an object or an array up to the server. To handle traditional RPC style messaging - where server side functions and results are mapped to a client side invocation - POST operations can be used.

The easiest way to use this functionality is to use ww.jquery.js and the ajaxCallMethod() function. ww.jquery wraps jQuery's AJAX functions and knows implicitly how to call a CallbackServer method with parameters and parse the result.

Let's look at another simple example that posts a simple value but returns something more interesting. Let's start with the service method:

[CallbackMethod(RouteUrl="stocks/{symbol}")]
public StockQuote GetStockQuote(string symbol)
{
    Response.Cache.SetExpires(DateTime.UtcNow.Add(new TimeSpan(0, 2, 0)));

    StockServer server = new StockServer();
    var quote = server.GetStockQuote(symbol);
    if (quote == null)
        throw new ApplicationException("Invalid Symbol passed.");

    return quote;
}

This sample utilizes a small StockServer helper class (included in the sample) that downloads a stock quote from Yahoo's financial site via plain HTTP GET requests and formats it into a StockQuote object. Let's create a small HTML block that lets us query for the quote and display it:

<fieldset>
    <legend>Single Stock Quote</legend>

    Please enter a stock symbol:
    <input type="text" name="txtSymbol" id="txtSymbol" value="msft" />
    <input type="button" id="btnStockQuote" value="Get Quote" />

    <div id="divStockDisplay" class="errordisplay" style="display:none; width: 450px;">
        <div class="label-left">Company:</div>
        <div id="stockCompany"></div>
        <div class="label-left">Last Price:</div>
        <div id="stockLastPrice"></div>      
        <div class="label-left">Quote Time:</div>          
        <div id="stockQuoteTime"></div>
    </div>
</fieldset>

The final result looks something like this:

SingleStockQuote 

Let's hook up the button handler to fire the request and fill in the data as shown:

$("#btnStockQuote").click(function () {
    ajaxCallMethod("SampleService.ashx", "GetStockQuote",
                    [$("#txtSymbol").val()],
                    function (quote) {
                        $("#divStockDisplay").show().fadeIn(1000);
                        $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")");
                        $("#stockLastPrice").text(quote.LastPrice);
                        $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, HH:mm EST"));
                    }, onPageError);
});

So we point at SampleService.ashx and the GetStockQuote method, passing a single parameter: the input symbol value. Then there are two handlers for the success and failure callbacks.

The success handler is the interesting part - it receives the stock quote as a result and assigns its values to various 'holes' in the stock display elements.

The data that comes back over the wire is JSON and it looks like this:

{ "Symbol":"MSFT",
"Company":"Microsoft Corpora",
"OpenPrice":26.11,
"LastPrice":26.01,
"NetChange":0.02,
"LastQuoteTime":"2011-11-03T02:00:00Z",
"LastQuoteTimeString":"Nov. 11, 2011 4:20pm" }

which is an object representation of the data. JavaScript can evaluate this JSON string back into an object easily, and that's the result that gets passed to the success function. The quote data is then applied to existing page content by manually selecting items and applying them. There are other ways to do this more elegantly, like using templates, but here we're only interested in seeing how the data is returned. The data in the object is typed - LastPrice is a number and LastQuoteTime is a date.

Note about the date value: JavaScript doesn't have a date literal, although the embedded ISO string format used above ("2011-11-03T02:00:00Z") is becoming fairly standard for JSON serializers. However, JSON parsers don't deserialize dates by default and return them as strings. This is why the StockQuote also returns a string value of LastQuoteTimeString for the same date. ajaxCallMethod() always converts dates properly into 'real' dates, and the example above uses the real date value along with a .formatDate() date extension (also in ww.jquery.js) to display the raw date properly.
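The conversion can be approximated with a small helper that recognizes ISO 8601 strings and turns them into real Date objects - a sketch of the idea only, not ww.jquery's actual implementation:

```javascript
// Sketch: turn an ISO 8601 JSON date string into a real Date object.
// Anything that doesn't look like a date passes through unchanged.
var isoDateRE = /^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})(?:\.\d+)?Z$/;

function parseIsoDate(value) {
    var m = isoDateRE.exec(value);
    if (!m) return value;
    return new Date(Date.UTC(+m[1], +m[2] - 1, +m[3],   // year, month (0-based), day
                             +m[4], +m[5], +m[6]));     // hour, minute, second
}

var date = parseIsoDate("2011-11-03T02:00:00Z");
// date.getUTCFullYear() === 2011, date.getUTCMonth() === 10 (November)
```

A helper like this would typically be applied as a post-processing pass over the deserialized result object, walking its string properties.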

Errors and Exceptions

So what happens if your code fails? For example, if I pass an invalid stock symbol to the GetStockQuote() method, notice that the code does this:

if (quote == null) throw new ApplicationException("Invalid Symbol passed.");

CallbackHandler automatically pushes the exception message back to the client so it's easy to pick up the error message. Regardless of what kind of error occurs: Server side, client side, protocol errors - any error will fire the failure handler with an error object parameter. The error is returned to the client via a JSON response in the error callback. In the previous examples I called onPageError which is a generic routine in ww.jquery that displays a status message on the bottom of the screen. But of course you can also take over the error handling yourself:

$("#btnStockQuote").click(function () {
    ajaxCallMethod("SampleService.ashx", "GetStockQuote",
                    [$("#txtSymbol").val()],
                    function (quote) {
                        $("#divStockDisplay").fadeIn(1000);
                        $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")");
                        $("#stockLastPrice").text(quote.LastPrice);
                        $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt"));
                    }, 
function (error, xhr) { $("#divErrorDisplay").text(error.message).fadeIn(1000); }); });

The error object has isCallbackError, message and stackTrace properties, the latter of which is only populated when running in Debug mode, and this object is returned for all errors: client side, transport and server side errors. Regardless of which type of error occurs, the same object is passed (along with the XHR instance optionally), which makes for a consistent error retrieval mechanism.
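A custom failure handler can branch on those properties. Here's a minimal sketch of such a handler - the property names follow the error object described above, but the message formatting is just an illustration:

```javascript
// Sketch of a failure callback helper for ajaxCallMethod().
// error is expected to look like: { isCallbackError, message, stackTrace }
function formatServiceError(error) {
    var msg = error.message || "Unknown error";
    if (error.isCallbackError && error.stackTrace) {
        // stackTrace is only populated when the server runs in Debug mode
        msg += "\n" + error.stackTrace;
    }
    return msg;  // in a real page you'd display this in a status element
}
```

You'd then use it inside the error callback, e.g. `function (error, xhr) { $("#divErrorDisplay").text(formatServiceError(error)).fadeIn(1000); }`.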

Specifying HttpVerbs

You can also specify HTTP Verbs that are allowed using the AllowedHttpVerbs option on the CallbackMethod attribute:

[CallbackMethod(AllowedHttpVerbs=HttpVerbs.GET | HttpVerbs.POST)]
public string HelloWorld(string name)  { … }

If you're building REST style APIs this might be useful to force certain request semantics onto the calling client. For the above, if called with a non-allowed HTTP verb the request returns a 405 error response along with a JSON (or XML) error object result. The default behavior is to allow all verbs access (HttpVerbs.All).

Passing in object Parameters

Up to now the parameters I passed were very simple. But what if you need to send something more complex like an object or an array? Let's look at another example that passes an object from the client to the server. Keeping with the stock theme, let's add a method called BuyStock that lets us buy some shares of a stock.

Consider the following service method that receives a StockBuyOrder object as a parameter:

[CallbackMethod]
public string BuyStock(StockBuyOrder buyOrder)
{
    var server = new StockServer();
    var quote = server.GetStockQuote(buyOrder.Symbol);
    if (quote == null)
        throw new ApplicationException("Invalid or missing stock symbol.");

    return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.",
                         buyOrder.Quantity,
                         quote.Company,
                         quote.Symbol,
                         quote.LastPrice.ToString("c"),                                 
                         (quote.LastPrice * buyOrder.Quantity).ToString("c"),
                         buyOrder.BuyOn.ToString("MMM d"));
}

public class StockBuyOrder
{
    public string Symbol { get; set; }
    public int Quantity { get; set; }
    public DateTime BuyOn { get; set; }

    public StockBuyOrder()
    {
        BuyOn = DateTime.Now;
    }
}

This is a contrived do-nothing example that simply echoes back what was passed in, but it demonstrates how you can pass complex data to a callback method.

On the client side we now have a very simple form that captures the three values on a form:

<fieldset>
    <legend>Post a Stock Buy Order</legend>

    Enter a symbol: <input type="text" name="txtBuySymbol" id="txtBuySymbol" value="GLD" />&nbsp;&nbsp;
    Qty: <input type="text" name="txtBuyQty" id="txtBuyQty" value="10"  style="width: 50px" />&nbsp;&nbsp;
    Buy on: <input type="text" name="txtBuyOn" id="txtBuyOn" value="<%= DateTime.Now.ToString("d") %>" style="width: 70px;" />
    <input type="button" id="btnBuyStock" value="Buy Stock"  />

    <div id="divStockBuyMessage" class="errordisplay" style="display:none"></div>
</fieldset>

The completed form and demo then looks something like this:

BuyStockSample 

The client side code that picks up the input values and assigns them to object properties and sends the AJAX request looks like this:

$("#btnBuyStock").click(function () {
    // create an object map that matches StockBuyOrder signature
    var buyOrder =
    {
        Symbol: $("#txtBuySymbol").val(),
        Quantity: $("#txtBuyQty").val() * 1,  // number
        BuyOn: new Date()                     // date
    }
               
    ajaxCallMethod("SampleService.ashx", "BuyStock",
                        [buyOrder],
                        function (result) {
                            $("#divStockBuyMessage").text(result).fadeIn(1000);
                        }, onPageError);

});

The code creates an object and attaches the properties that match the server side object passed to the BuyStock method. Each property that you want to update needs to be included and the type must match (ie. string, number, date in this case). Any missing properties will not be set but also not cause any errors.

Pass POST data instead of Objects

In the last example I collected a bunch of values from form variables and stuffed them into object variables in JavaScript code. While that works, often times this isn't really helping - I end up converting my types on the client and then doing another conversion on the server. If lots of input controls are on a page and you just want to pick up the values on the server via plain POST variables - that can be done too - and it makes sense especially if you're creating and filling the client side object only to push data to the server.

Let's add another method to the server that once again lets us buy a stock. But this time let's not accept a parameter but rather send POST data to the server. Here's the server method receiving POST data:

[CallbackMethod]
public string BuyStockPost()
{
    StockBuyOrder buyOrder = new StockBuyOrder();

    buyOrder.Symbol = Request.Form["txtBuySymbol"];
    int qty;
    int.TryParse(Request.Form["txtBuyQuantity"], out qty);
    buyOrder.Quantity = qty;
    DateTime time;
    DateTime.TryParse(Request.Form["txtBuyBuyOn"], out time);
    buyOrder.BuyOn = time;

    // Or easier way yet                        
    //FormVariableBinder.Unbind(buyOrder,null,"txtBuy");

    var server = new StockServer();
    var quote = server.GetStockQuote(buyOrder.Symbol);
    if (quote == null)
        throw new ApplicationException("Invalid or missing stock symbol.");


    return string.Format("You're buying {0} shares of {1} ({2}) stock at {3} for a total of {4} on {5}.",
                         buyOrder.Quantity,
                         quote.Company,
                         quote.Symbol,
                         quote.LastPrice.ToString("c"),
                         (quote.LastPrice * buyOrder.Quantity).ToString("c"),
                         buyOrder.BuyOn.ToString("MMM d"));
}

Clearly we've made this server method take more code than it did with the object parameter. We've basically moved the parameter assignment logic from the client to the server. As a result the client code to call this method is now a bit shorter since there's no client side shuffling of values from the controls to an object.

$("#btnBuyStockPost").click(function () {
    ajaxCallMethod("SampleService.ashx", "BuyStockPost",
    [],  // Note: No parameters -
    function (result) {
        $("#divStockBuyMessage").text(result).fadeIn(1000);
    }, onPageError, 
    // Force all page Form Variables to be posted
    { postbackMode: "Post" });
});

The client simply calls the BuyStockPost method and pushes all the form variables from the page up to the server, which parses them instead. The feature that makes this work is one of the options you can pass to the ajaxCallMethod() function:

{ postbackMode: "Post" });

which directs the function to include form variable POST data when making the service call. Other options include PostNoViewState (for WebForms, to strip out the WebForms crap vars), PostParametersOnly (the default) and None. If you pass parameters, those are always posted to the server except when None is set.

The above code can be simplified a bit by using the FormVariableBinder helper, which can unbind form variables directly into an object:

FormVariableBinder.Unbind(buyOrder,null,"txtBuy");

which replaces the manual Request.Form[] reading code. It receives the object to unbind into, a string of properties to skip, and an optional prefix which is stripped off form variables to match property names. The component is similar to the MVC model binder but it's independent of MVC.
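To illustrate the prefix-stripping idea - sketched in JavaScript here since the sample's client code is JavaScript; the real FormVariableBinder is C# and also performs type conversion via reflection - a conceptual version might look like this:

```javascript
// Conceptual sketch of prefix-based form variable binding -
// not the actual FormVariableBinder code, which also converts types.
function unbindFormVars(target, formVars, prefix) {
    for (var field in formVars) {
        if (field.indexOf(prefix) !== 0) continue;    // skip non-prefixed fields
        var prop = field.substring(prefix.length);    // txtBuySymbol -> Symbol
        if (prop in target) {
            target[prop] = formVars[field];
        }
    }
    return target;
}

var order = { Symbol: "", Quantity: 0 };
unbindFormVars(order, { txtBuySymbol: "GLD", txtBuyQuantity: "10" }, "txtBuy");
// order.Symbol === "GLD", order.Quantity === "10" (no type conversion here)
```

The real component additionally converts each value to the property's declared type, which is what the TryParse calls in the manual version above were doing by hand.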

Returning non-JSON Data

CallbackHandler also supports returning non-JSON/XML data via special return types. You can return raw non-JSON encoded strings like this:

[CallbackMethod(ReturnAsRawString=true,ContentType="text/plain")]
public string HelloWorldNoJSON(string name)
{
    return "Hello " + name + ". Time is: " + DateTime.Now.ToString();
}

Calling this method results in just a plain string - no JSON encoding with quotes around the result. This can be useful if your server code needs to return a string or HTML result that doesn't fit well into a page or other UI component. Any string output can be returned.

You can also return binary data. Stream, byte[] and Bitmap/Image results are automatically streamed back to the client. Note that you should set the ContentType of the response either on the CallbackMethod attribute or using Response.ContentType. This ensures the client knows how to display your binary response. Using a stream response makes it possible to return any kind of data.

Streamed data can be pretty handy to return bitmap data from a method. The following is a method that returns a stock history graph for a particular stock over a provided number of years:

[CallbackMethod(ContentType="image/png",RouteUrl="stocks/history/graph/{symbol}/{years}")]
public Stream GetStockHistoryGraph(string symbol, int years = 2,int width = 500, int height=350)
{
    if (width == 0)
        width = 500;
    if (height == 0)
        height = 350;
    StockServer server = new StockServer();
    return server.GetStockHistoryGraph(symbol,"Stock History for " + symbol,width,height,years);
}

I can now hook this up in the JavaScript code when I get a stock quote. At the end of the process I assign the service URL that returns the image to the src property of an image element, which forces the image to display. Here's the changed code:

$("#btnStockQuote").click(function () {
    var symbol = $("#txtSymbol").val();
    ajaxCallMethod("SampleService.ashx", "GetStockQuote",
                    [symbol],
                    function (quote) {
                        $("#divStockDisplay").fadeIn(1000);
                        $("#stockCompany").text(quote.Company + " (" + quote.Symbol + ")");
                        $("#stockLastPrice").text(quote.LastPrice);
                        $("#stockQuoteTime").text(quote.LastQuoteTime.formatDate("MMM dd, hh:mmt"));

                        // display a stock chart
                        $("#imgStockHistory").attr("src", "stocks/history/graph/" + symbol + "/2");
                    },onPageError);
});

The resulting output then looks like this:

ImageDisplayFromService

The charting code uses the new ASP.NET 4.0 Chart components via code to display a bar chart of the 2 year stock data as part of the StockServer class which you can find in the sample download.

The ability to return arbitrary data from a service is useful as you can see - in this case the chart is clearly associated with the service, and it's nice that the graph generation can happen off a handler rather than through a page. Images are common resources, but output can also be PDF reports, zip files for downloads etc., which are becoming increasingly common to return from REST endpoints and other applications.

Why reinvent?

Obviously the examples I've shown here are pretty basic in terms of functionality. But I hope they demonstrate the core features of AJAX callbacks that you need in most applications, which are simple: returning data, sending data back and retrieving it in various formats.

While there are other solutions when it comes down to making AJAX callbacks and servicing REST like requests, I like the flexibility my home grown solution provides. Simply put it's still the easiest solution that I've found that addresses my common use cases:

  • AJAX JSON RPC style callbacks
  • Url based access
  • XML and JSON Output from single method endpoint
  • XML and JSON POST support, querystring input, routing parameter mapping
  • UrlEncoded POST data support on callbacks
  • Ability to return stream/raw string data
  • Essentially ability to return ANYTHING from Service and pass anything

All these features are available in various solutions but not together in one place. I've been using this code base for over 4 years now in a number of projects both for myself and commercial work and it's served me extremely well. Besides the AJAX functionality CallbackHandler provides, it's also an easy way to create any kind of output endpoint I need to create. Need to create a few simple routines that spit back some data, but don't want to create a Page or View or full blown handler for it? Create a CallbackHandler and add a method or multiple methods and you have your generic endpoints.  It's a quick and easy way to add small code pieces that are pretty efficient as they're running through a pretty small handler implementation. I can have this up and running in a couple of minutes literally without any setup and returning just about any kind of data.

Resources

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  jQuery  AJAX  

Dynamically creating a Generic Type at Runtime


I learned something new today. Not uncommon, but it's a core .NET runtime feature I simply did not know although I know I've run into this issue a few times and worked around it in other ways. Today there was no working around it and a few folks on Twitter pointed me in the right direction.

The question I ran into is:

How do I create a type instance of a generic type when I have dynamically acquired the type at runtime?

Yup it's not something that you do everyday, but when you're writing code that parses objects dynamically at runtime it comes up from time to time. In my case it's in the bowels of a custom JSON parser. After some thought triggered by a comment today I realized it would be fairly easy to implement two-way Dictionary parsing for most concrete dictionary types. I could use a custom Dictionary serialization format that serializes as an array of key/value objects. Basically I can use a custom type (that matches the JSON signature) to hold my parsed dictionary data and then add it to the actual dictionary when parsing is complete.

Figuring out Generic Parameters of  a Type at Runtime

One issue that came up in the process was how to figure out what type the Dictionary<K,V> generic parameters take. For the following code assume that arrayType is a known type instance of an array-like object - an IList, IDictionary or a plain array. In this case I'm looking specifically for types that implement IDictionary.

Reflection actually makes it fairly easy to figure out generic parameter types (ie. what concrete types K and V are) at runtime with code like this:

if (arrayType.GetInterface("IDictionary") != null)
{
    if (arrayType.IsGenericType)
    {
        var keyType = arrayType.GetGenericArguments()[0];
        var valueType = arrayType.GetGenericArguments()[1];
        …
    }
}

The GetArrayType method gets passed a type instance that is the array or array-like object that is rendered in JSON as an array (which includes IList, IDictionary, IDataReader and a few others). In my case the type passed would be something like Dictionary<string, CustomerEntity>. So I know what the parent container class type (the IDictionary type) is. Based on the container type it's then possible to use GetGenericArguments() to retrieve all the generic types in sequential order of definition (ie. string, CustomerEntity).

That's the easy part.

Creating a Generic Type and Providing Generic Parameters at Runtime

The next problem is how do I get a concrete type instance for the generic type? I know the type name and I have a type instance, but it's generic, so how do I get a type reference to keyvalue<K,V> that is specific to the keyType and valueType above?

Here are a couple of things that come to mind but that don't work (and yes I tried that unsuccessfully first):

Type elementType = typeof(keyvalue<keyType, valueType>);
Type elementType = typeof(keyvalue<typeof(keyType), typeof(valueType)>);

The problem is that this explicit syntax expects a type literal not some dynamic runtime value, so both of the above won't even compile.

It turns out the way to create a generic type at runtime is using a fancy bit of syntax that until today I was completely unaware of:

Type elementType = typeof(keyvalue<,>).MakeGenericType(keyType, valueType);

The key is the typeof(keyvalue<,>) bit, which looks weird at best. It works however: it returns the open generic type definition, which MakeGenericType() then closes over the concrete key and value types. You can see the difference between the closed generic type and the open generic type definition in the debugger:

TypeDifferences

The nonGenericType doesn't show any type specialization, while elementType shows string, CustomerEntity (truncated above) in the type name.

Once the full type reference exists (elementType) it's then easy to create an instance of the element type.

// Objects start out null until we find the opening tag
resultObject = Activator.CreateInstance(elementType);

In my case the parser parses through the JSON and when it completes parsing the value/object it creates a new keyvalue<T,V> instance. keyvalue<T,V> is a custom type I created that only contains key, value properties that match the JSON signature of the JSON serializer exactly so when the object is deserialized the signature matches and it just all works using the stock object deserialization. I use a List<keyvalue<T,V>> to hold these items as they are parsed and only when done parsing do I turn that list into the proper kind of dictionary. This way the parsing code works essentially the same regardless of the type of list interface used.

Parsing through a Generic type when you only have Runtime Type Information

This brings up yet another generic type issue. At the end of the parsing sequence I now have a List<> of generic items.

When parsing of the JSON array is done, the List needs to be turned into the actual Dictionary<K,V>. This should be easy since I know that I'm dealing with an IDictionary, and I know the generic types for the key and value. But now I need to call dict.Add(key,value) and both key and value need to be of the proper type for these calls to succeed. Even though my elements are of the correct type, the compiler doesn't know it because the type was created dynamically.

One - ugly - way to do this would be to use Convert.ChangeType() to convert both the key and value types.

In the end I decided the easier and probably only slightly slower way to do this is to use the dynamic type to collect the items and assign them, avoiding all the dynamic casting madness:

else if (IsIDictionary)
{
    IDictionary dict = Activator.CreateInstance(arrayType) as IDictionary;
    foreach (dynamic item in items)
    {                                                        
        dict.Add(item.key, item.value);
    }

    return dict;
}

This code creates an instance of the final generic dictionary type first, then loops through all of my custom keyvalue<K,V> items and assigns them to the actual dictionary. By using dynamic here I can sidestep all the explicit type conversions that would otherwise be required (not to mention that this nested method doesn't have access to the dictionary item generic types here). Dynamic makes this code a lot cleaner than it would have otherwise been.

Static <- -> Dynamic

Dynamic casting in a static language like C# is a bitch to say the least. This is one of the few times when I've cursed static typing and the arcane syntax that's required to coax types into the right format. It works but it's pretty nasty code. If it weren't for dynamic that last bit of code would have been pretty ugly as well, with a bunch of Convert.ChangeType() calls to litter the code.

Fortunately this type of type convulsion is rather rare and reserved for system level code. It's not every day that you create a string to object parser after all :-)

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in .NET  CSharp  

Creating a Dynamic DataRow for easier DataRow Syntax


I've been thrown back into an older project that uses DataSets and DataRows as their entity storage model. I have several applications internally that I still maintain that run just fine (and I sometimes wonder if this wasn't easier than all the ORM crap we deal with using 'newer' improved technology today - but I digress) but use this older code. For the most part DataSets/DataTables/DataRows are abstracted away in a pseudo entity model, but in some situations like queries DataTables and DataRows are still surfaced to the business layer.

Here's an example. Here's a business object method that runs a dynamic query, and the code ends up looping over the result set using the ugly DataRow array syntax:

public int UpdateAllSafeTitles()
{
    int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks");
    if (result < 0)            
        return result;

    result = 0;

    foreach (DataRow row in this.DataSet.Tables["TPks"].Rows)
    {
        string title = row["title"] as string;
        string safeTitle = row["safeTitle"] as string;
        int pk = (int)row["pk"];

        string newSafeTitle = this.GetSafeTitle(title);
        if (newSafeTitle != safeTitle)
        {
            this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk",
                                 this.CreateParameter("@safeTitle",newSafeTitle),
                                 this.CreateParameter("@pk",pk) );
            result++;
        }
    }

    return result;
}

The problem with looping over DataRow objects is twofold: the array syntax is tedious to type and not real clear to look at, and explicit casting is required in order to do anything useful with the values.

Using the DynamicDataRow class I'll show in a minute this code can be changed to look like this:

public int UpdateAllSafeTitles()
{
    int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks");
    if (result < 0)            
        return result;

    result = 0;

    foreach (DataRow row in this.DataSet.Tables["TPks"].Rows)
    {
        dynamic entry = new DynamicDataRow(row);

        string newSafeTitle = this.GetSafeTitle(entry.title);
        if (newSafeTitle != entry.safeTitle)
        {
            this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk",
                                 this.CreateParameter("@safeTitle",newSafeTitle),
                                 this.CreateParameter("@pk",entry.pk) );
            result++;
        }
    }

    return result;
}

The code looks a bit more natural and describes what's happening a little more clearly as well.

Well, using the new dynamic features in .NET it's actually quite easy to implement the DynamicDataRow class.

Creating your own custom Dynamic Objects

.NET 4.0 introduced the Dynamic Language Runtime (DLR) and opened up a whole bunch of new capabilities for .NET applications. The dynamic type is an easy way to avoid Reflection and directly access members of 'dynamic' or 'late bound' objects at runtime. There's a lot of very subtle but extremely useful stuff that dynamic does (especially for COM Interop scenarios) but in its simplest form it often allows you to do away with manual Reflection at runtime.

In addition you can create DynamicObject implementations that can perform custom interception of member accesses and so allow you to provide more natural access to more complex or awkward data structures like the DataRow that I use as an example here.

Basically you can subclass DynamicObject and then implement a few methods (TryGetMember, TrySetMember, TryInvokeMember) to provide the ability to return dynamic results from just about any data structure using simple property/method access.

In the code above, I created a custom DynamicDataRow class which inherits from DynamicObject and implements only TryGetMember and TrySetMember. Here's what the simple class looks like:

/// <summary>
/// This class provides an easy way to turn a DataRow 
/// into a Dynamic object that supports direct property
/// access to the DataRow fields.
/// 
/// The class also automatically fixes up DbNull values
/// (DbNull to null when reading, null to DbNull when writing)
/// </summary>
public class DynamicDataRow : DynamicObject
{
    /// <summary>
    /// Instance of object passed in
    /// </summary>
    DataRow DataRow;
    
    /// <summary>
    /// Pass in a DataRow to work off
    /// </summary>
    /// <param name="instance"></param>
    public DynamicDataRow(DataRow dataRow)
    {
        DataRow = dataRow;
    }

   /// <summary>
   /// Returns a value from a DataRow items array.
   /// If the field doesn't exist null is returned.
   /// DbNull values are turned into .NET nulls.
   /// 
   /// </summary>
   /// <param name="binder"></param>
   /// <param name="result"></param>
   /// <returns></returns>
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        result = null;

        try
        {
            result = DataRow[binder.Name];

            if (result == DBNull.Value)
                result = null;
            
            return true;
        }
        catch { }

        result = null;
        return false;
    }


    /// <summary>
    /// Property setter implementation sets the matching DataRow
    /// field value. Null values are converted to DbNull.
    /// </summary>
    /// <param name="binder"></param>
    /// <param name="value"></param>
    /// <returns></returns>
    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        try
        {
            if (value == null)
                value = DBNull.Value;

            DataRow[binder.Name] = value;
            return true;
        }
        catch {}

        return false;
    }
}

To demonstrate the basic features here's a short test:

[TestMethod]
[ExpectedException(typeof(RuntimeBinderException))]
public void BasicDataRowTests()
{
    DataTable table = new DataTable("table");
    table.Columns.Add( new DataColumn() { ColumnName = "Name", DataType=typeof(string) });
    table.Columns.Add( new DataColumn() { ColumnName = "Entered", DataType=typeof(DateTime) });
    table.Columns.Add(new DataColumn() { ColumnName = "NullValue", DataType = typeof(string) });

    DataRow row = table.NewRow();

    DateTime now = DateTime.Now;

    row["Name"] = "Rick";
    row["Entered"] = now;
    row["NullValue"] = null; // converted to DbNull

    dynamic drow = new DynamicDataRow(row);

    string name = drow.Name;
    DateTime entered = drow.Entered;
    string nulled = drow.NullValue;

    Assert.AreEqual(name, "Rick");
    Assert.AreEqual(entered,now);
    Assert.IsNull(nulled);
    
    // this should throw a RuntimeBinderException
    Assert.AreEqual(entered,drow.enteredd);
                
}

The DynamicDataRow is created with a constructor that accepts the DataRow to work off as its single parameter. Once that's done you can access property values that match the field names. Note that values come back in their underlying types - no explicit casting is needed in the code you write. The class also automatically converts DbNulls to regular nulls and vice versa, which makes it much easier to deal with data returned from a database.

What's cool here isn't so much the functionality - even if I'd prefer to leave DataRow behind ASAP - but the fact that we can create a dynamic type that uses a DataRow as its 'DataSource' to serve member values. It's a pretty useful feature if you think about it, especially given how little code it takes to implement.

By implementing these two simple methods we get to provide two features I was complaining about at the beginning that are missing from the DataRow:

  • Direct Property Syntax
  • Automatic Type Casting so no explicit casts are required

Caveats

As cool and easy as this functionality is, it's important to understand that it doesn't come for free. The dynamic features in .NET are - well - dynamic, which means they are essentially evaluated at runtime (late bound). Rather than static typing where everything is compiled and linked by the compiler/linker, member invocations are looked up at runtime and essentially call into your custom code. There's some overhead in this. Direct invocation - the original code I showed - is going to be faster than the equivalent dynamic code.

However, in the above code the difference between running the dynamic code and the original data access code was very minor. The loop running over 1500 result records took on average 13ms with the original code and 14ms with the dynamic code. Not exactly a serious performance bottleneck. One thing to remember is that Microsoft optimized the DLR code significantly so that repeated calls to the same operations are routed very efficiently, which makes for very fast evaluation.

The bottom line for performance with dynamic code is: make sure you test and profile your code if you think there might be a performance issue. In my experience with dynamic types so far, though, performance is pretty good for repeated operations (ie. in loops). While usually a little slower, the perf hit is typically a lot less than equivalent Reflection work.

Although the code in the second example looks like standard object syntax, dynamic is not static code. It's evaluated at runtime and so there's no type recognition until runtime. This means no Intellisense at development time, and any invalid references that call into 'properties' (ie. fields in the DataRow) that don't exist still cause runtime errors. So in the case of the data row you still get a runtime error if you mistype a column name:

// this should throw a RuntimeBinderException
Assert.AreEqual(entered,drow.enteredd);

Dynamic - Lots of uses

The arrival of Dynamic types in .NET has been met with mixed emotions. Die hard .NET developers decry dynamic types as an abomination to the language. After all what dynamic accomplishes goes against all that a static language is supposed to provide. On the other hand there are clearly scenarios when dynamic can make life much easier (COM Interop being one place).

Think of the possibilities. What other data structures would you like to expose to a simple property interface rather than some sort of collection or dictionary? And beyond what I showed here you can also implement 'method missing' behavior on objects with TryInvokeMember, which essentially allows you to create dynamic methods. It's all very flexible and maybe just as important: it's easy to do.

There's a lot of power hidden in this seemingly simple interface. Your move…

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in CSharp  .NET  

A Closable jQuery Plug-in


In my client side development I deal a lot with content that pops over the main page. Be it data entry ‘windows’ or dialogs or simple pop up notes. In most cases this behavior goes with draggable windows, but sometimes it’s also useful to have closable behavior on static page content that the user can choose to hide or otherwise make invisible or fade out.

Here’s a small jQuery plug-in that provides .closable() behavior to most elements by using either an explicitly provided image or – more appropriately – a CSS class to define the close box layout.

/*
* 
* Closable
*
* Makes selected DOM elements closable by making them 
* invisible when close icon is clicked
*
* Version 1.01
* @requires jQuery v1.3 or later
* 
* Copyright (c) 2007-2010 Rick Strahl 
* http://www.west-wind.com/
*
* Licensed under the MIT license:
* http://www.opensource.org/licenses/mit-license.php

Support CSS:

.closebox
{
    position: absolute;        
    right: 4px;
    top: 4px;
    background-image: url(images/close.gif);
    background-repeat: no-repeat;
    width: 14px;
    height: 14px;
    cursor: pointer;        
    opacity: 0.60;
    filter: alpha(opacity=60);
} 
.closebox:hover 
{
    opacity: 0.95;
    filter: alpha(opacity=95);
}

Options:

* handle
Element to place closebox into (like say a header). Use if main element 
and closebox container are two different elements.

* closeHandler
Function called when the close box is clicked. Return true to close the box
return false to keep it visible.

* cssClass
The CSS class to apply to the close box DIV or IMG tag.

* imageUrl
Allows you to specify an explicit IMG url that displays the close icon. If used bypasses CSS image styling.

* fadeOut
Optional provide fadeOut speed. Default no fade out occurs
*/
(function ($) {

    $.fn.closable = function (options) {
        var opt = { handle: null,
            closeHandler: null,
            cssClass: "closebox",
            imageUrl: null,
            fadeOut: null
        };
        $.extend(opt, options);

        return this.each(function (i) {
            var el = $(this);
            var pos = el.css("position");
            if (!pos || pos == "static")
                el.css("position", "relative");
            var h = opt.handle ? $(opt.handle).css({ position: "relative" }) : el;

            var div = opt.imageUrl ?
                    $("<img>").attr("src", opt.imageUrl).css("cursor", "pointer") :
                    $("<div>");
            div.addClass(opt.cssClass)
                    .click(function (e) {
                        if (opt.closeHandler)
                            if (!opt.closeHandler.call(this, e))
                                return;
                        if (opt.fadeOut)
                            $(el).fadeOut(opt.fadeOut);
                        else $(el).hide();
                    });
            if (opt.imageUrl) div.css("background-image", "none");
            h.append(div);
        });
    }

})(jQuery);

The plugin can be applied against any selector that is a container (typically a div tag). The close image or close box is provided typically by way of a CssClass - .closebox by default – which supplies the image as part of the CSS styling. The default styling for the box looks something like this:

.closebox
{
    position: absolute;        
    right: 4px;
    top: 4px;
    background-image: url(images/close.gif);
    background-repeat: no-repeat;
    width: 14px;
    height: 14px;
    cursor: pointer;        
    opacity: 0.60;
    filter: alpha(opacity=60);
} 
.closebox:hover 
{
    opacity: 0.95;
    filter: alpha(opacity=95);
}

Alternately you can also supply an image URL which overrides the background image in the style sheet. I use this plug-in mostly on pop up windows that can be closed, but it’s also quite handy for remove/delete behavior in list displays like this:

Closable

You can find this sample here if you want to play along: http://www.west-wind.com/WestwindWebToolkit/Samples/Ajax/AmazonBooks/BooksAdmin.aspx

For closable windows it’s nice to have something reusable because in my client framework there are lots of different kinds of windows that can be created: Draggables, Modal Dialogs, HoverPanels etc. and they all use the client .closable plug-in to provide the closable operation in the same way with a few options. Plug-ins are great for this sort of thing because they can also be aggregated and so different components can pick and choose the behavior they want. The window here is a draggable, that’s closable and has shadow behavior and the server control can simply generate the appropriate plug-ins to apply to the main <div> tag:

$().ready(function() {
    $('#ctl00_MainContent_panEditBook')
        .closable({ handle: $('#divEditBook_Header') })
        .draggable({ dragDelay: 100, handle: '#divEditBook_Header' })
        .shadow({ opacity: 0.25, offset: 6 });
})

The window is using the default .closebox style and has its handle set to the header bar (Book Information). The window simply closes when clicked so no event handler is applied. Actually I cheated – the actual page’s .closable is a bit uglier in the sample as it uses an image from a resources file:

.closable({ imageUrl: '/WestWindWebToolkit/Samples/WebResource.axd?d=TooLongAndNastyToPrint',
handle: $('#divEditBook_Header')})

so you can see how to apply a custom image, which in this case is generated by the server control wrapping the client DragPanel.

More interesting maybe is to apply the .closable behavior to list scenarios. For example, each of the individual items in the list display is also .closable using this plug-in. Rather than having to define each item with HTML for an image, event handler and link, the closable behavior is attached to the items when the client template is rendered. Here I’m using client templating, and the code that does this looks like this:

function loadBooks() {

    showProgress();
    
    // Clear the content
    $("#divBookListWrapper").empty();    
    
    var filter = $("#" + scriptVars.lstFiltersId).val();
    
    Proxy.GetBooks(filter, function(books) {
        $(books).each(function(i) {
            updateBook(this); 
            showProgress(true); 
        });
    }, onPageError);    
}
function updateBook(book,highlight)
{    
    // try to retrieve the single item in the list by tag attribute id
    var item = $(".bookitem[tag=" +book.Pk +"]");

    // grab and evaluate the template
    
    var html = parseTemplate(template, book);

    var newItem = $(html)
                    .attr("tag", book.Pk.toString())
                    .click(function() {
                        var pk = $(this).attr("tag");
                        editBook(this, parseInt(pk));
                    })
                    .closable({ closeHandler: function(e) {
                            removeBook(this, e);
                        },
                        imageUrl: "../../images/remove.gif"
                    });
                    

    if (item.length > 0) 
        item.after(newItem).remove();        
    else 
        newItem.appendTo($("#divBookListWrapper"));
    
    if (highlight) {
        newItem
            .addClass("pulse")
            .effect("bounce", { distance: 15, times: 3 }, 400);
        setTimeout(function() { newItem.removeClass("pulse"); }, 1200);            
    }    
}

Here the closable behavior is applied to each of the items along with an event handler, which is nice and easy compared to having to embed the right HTML and click handling into each item in the list individually via markup. Ideally though (and these posts make me realize this often a little late) I probably should set up a custom cssClass to handle the rendering – maybe a CSS class called .removebox that only changes the image from the default box image.

This example also hooks up an event handler that is fired in response to the close. In the list I need to know when the remove button is clicked so I can fire off a service call to the server to actually remove the item from the database. The handler can optionally return false to indicate that the window should not be closed; returning true closes the window.
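
The veto mechanic itself is trivial and framework independent: if a handler exists and returns a falsy value, the hide step is skipped. A minimal sketch of the pattern (hypothetical function names, not part of the plug-in itself):

```javascript
// Sketch of the close-veto pattern behind the closeHandler option:
// the handler returns false to cancel the close, truthy to proceed.
function tryClose(closeHandler, hideFn) {
    if (closeHandler && !closeHandler())
        return false;  // handler vetoed - leave the element visible
    hideFn();          // in the plug-in this is .hide() or .fadeOut()
    return true;
}
```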

You can find more information about the .closable class behavior and options here:
.closable Documentation

Plug-ins make Server Control JavaScript much easier

I find this plug-in immensely useful especially as part of server control code, because it simplifies the code that has to be generated server side tremendously. This is true of plug-ins in general, which make it so much easier to create simple server code that only generates plug-in options rather than full blocks of JavaScript code. For example, here’s the relevant code from the DragPanel server control which generates the .closable() behavior:

if (this.Closable && !string.IsNullOrEmpty(DragHandleID) )
{
    string imageUrl = this.CloseBoxImage;
    if (imageUrl == "WebResource" )
        imageUrl = ScriptProxy.GetWebResourceUrl(this, this.GetType(), ControlResources.CLOSE_ICON_RESOURCE);
    
    StringBuilder closableOptions = new StringBuilder("imageUrl: '" + imageUrl + "'");

    if (!string.IsNullOrEmpty(this.DragHandleID))
        closableOptions.Append(",handle: $('#" + this.DragHandleID + "')");

    if (!string.IsNullOrEmpty(this.ClientDialogHandler))
        closableOptions.Append(",handler: " + this.ClientDialogHandler);
       
    if (this.FadeOnClose)
        closableOptions.Append(",fadeOut: 'slow'");
    
    startupScript.Append(@"   .closable({ " + closableOptions + "})");
}

The same sort of block is then used for .draggable and .shadow which simply sets options. Compared to the code I used to have in pre-jQuery versions of my JavaScript toolkit this is a walk in the park. In those days there was a bunch of JS generation which was ugly to say the least.

I know a lot of folks frown on using server controls, especially when the UI is client centric as this example is. However, I do feel that server controls can greatly simplify the process of getting the right behavior attached more easily and with the help of IntelliSense. Often this is easier than hand-written script, especially if you are dealing with complex, multiple plug-in associations that express more easily as property values on a control.

Regardless of whether server controls are your thing or not this plug-in can be useful in many scenarios. Even in simple client-only scenarios using a plug-in with a few simple parameters is nicer and more consistent than creating the HTML markup over and over again. I hope some of you find this even a small bit as useful as I have.


© Rick Strahl, West Wind Technologies, 2005-2010
Posted in jQuery   ASP.NET  JavaScript  

jQuery Time Entry with Time Navigation Keys


So, how do you display time values in your Web applications? Displaying date AND time values in applications is a lot less standardized than date display only. While date input has become fairly universal with various date picker controls available, time entry continues to be a bit non-standardized. In my own applications I tend to use the jQuery UI DatePicker control for date entries and it works well for that. Here's an example:

TimeEntry

The date entry portion is well defined and it makes perfect sense to have a calendar pop up so you can pick a date from a rich UI when necessary. However, time values are much less obvious when it comes to displaying a UI or even just making time entries more useful. There are a slew of time picker controls available but other than adding some visual glitz, they are not really making time entry any easier.

Part of the reason for this is that time entry is usually pretty simple. Clicking on a dropdown of any sort and selecting a value from a long scrolling list tends to take more user interaction than just typing 5 characters (7 if am/pm is used).

Keystrokes can make Time Entry easier

Time entry may be pretty simple, but I find that adding a few hotkeys to handle time navigation can make it much easier. Specifically it'd be nice to have keys to:

  • Jump to the current time (Now)
  • Increase/decrease minutes
  • Increase/decrease hours
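
Under the hood the increase/decrease operations are simple date arithmetic - adding minutes or hours rolls hour and day boundaries over automatically. A minimal sketch of the idea (the plugin's Time class shown later wraps the same operations):

```javascript
// Sketch of the minute/hour navigation behind the hotkeys.
// Working off the millisecond timestamp means hour and day
// boundaries roll over automatically.
function addMinutes(date, minutes) {
    return new Date(date.getTime() + minutes * 60 * 1000);
}
function addHours(date, hours) {
    return new Date(date.getTime() + hours * 60 * 60 * 1000);
}
```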

The timeKeys jQuery PlugIn

Some time ago I created a small plugin to handle this scenario. It's non-visual other than a tooltip that pops up when you press ? to display the hotkeys that are available:

HelpDropdown

Try it Online

The keys loosely follow the ancient Quicken convention of using the first and last letters of what you're increasing or decreasing (ie. H to decrease and R to increase hours, and + and - for the base unit, minutes here). All navigation happens via the keystrokes shown above, so it's all non-visual, which I think is the most efficient way to deal with time entry.

To hook up the plug-in, start with the textbox:

<input type="text" id="txtTime" name="txtTime" value="12:05 pm"  title="press ? for time options" />

Note the title which might be useful to alert people using the field that additional functionality is available.

To hook up the plugin code is as simple as:

$("#txtTime").timeKeys();

You essentially tie the plugin to any text box control.

Options
The syntax for timeKeys allows for an options map parameter:

$(selector).timeKeys(options);

Options are passed as a parameter map object which can have the following properties:

timeFormat
You can pass in a format string that allows you to format the time. The default is "hh:mm t" which is the US time format showing a 12 hour clock with am/pm. Alternately you can pass in "HH:mm" which uses 24 hour time. HH, hh, mm and t are translated in the format string - you can arrange the format as you see fit.

callback
You can also specify a callback function that is called when the time value has been set. This allows you to either re-format the time or perform post processing (such as displaying a highlight if it's after a certain hour, for example).

Here's another example that uses both options:

$("#txtTime").timeKeys({ 
    timeFormat: "HH:mm",
    callback: function (time) {
        showStatus("new time is: " + time.toString() + " " + $(this).val() );
    }
});
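
To make the format tokens concrete, here's a minimal sketch of a formatter for the HH, hh, mm and t tokens described above (an illustration only - the plugin's own Time class does the actual formatting):

```javascript
// Minimal formatter for the timeFormat tokens:
// HH = 24 hour, hh = 12 hour, mm = minutes, t = am/pm
function formatTime(date, format) {
    var hours24 = date.getHours();
    var hours12 = hours24 % 12 || 12;   // map 0 to 12 for the 12 hour clock
    var pad = function (n) { return n < 10 ? "0" + n : "" + n; };

    return format.replace("HH", pad(hours24))
                 .replace("hh", pad(hours12))
                 .replace("mm", pad(date.getMinutes()))
                 .replace("t", hours24 < 12 ? "am" : "pm");
}
```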

The plugin code itself is fairly simple. It hooks the keydown event and checks for the various keys that affect time navigation, which is straightforward. The bulk of the code deals with parsing the time value and formatting the output using a Time class that implements parsing, formatting and time navigation methods.

Here's the code for the timeKeys jQuery plug-in:

/// <reference path="jquery.js" />
/// <reference path="ww.jquery.js" />
(function ($) {

    $.fn.timeKeys = function (options) {
        /// <summary>
        /// Attaches a set of hotkeys to time fields
        /// + Add minute - subtract minute
        /// H Subtract Hour R Add houR
        /// ? Show keys
        /// </summary>
        /// <param name="options" type="object">
        /// Options:
        /// timeFormat: "hh:mm t" by default HH:mm alternate
        /// callback: callback handler after time assignment
        /// </param>
        /// <example>
        /// $("#txtTime").timeKeys({ timeFormat: "HH:mm" });
        ///</example>
        if (this.length < 1) return this;

        var opt = {
            timeFormat: "hh:mm t",
            callback: null
        }
        $.extend(opt, options);

        return this.keydown(function (e) {
            var $el = $(this);

            var time = new Time($el.val());

            //alert($(this).val() + " " + time.toString() + " " + time.date.toString());

            switch (e.keyCode) {
                case 78: // [N]ow                        
                    time = new Time(new Date()); break;
                case 109: case 189:  // - 
                    time.addMinutes(-1);
                    break;
                case 107: case 187: // +
                    time.addMinutes(1);
                    break;
                case 72: //H
                    time.addHours(-1);
                    break;
                case 82: //R
                    time.addHours(1);
                    break;
                case 191: // ?
                    if (e.shiftKey)
                        $(this).tooltip("<b>N</b> Now<br/><b>+</b> add minute<br /><b>-</b> subtract minute<br /><b>H</b> Subtract Hour<br /><b>R</b> add hour", 4000, { isHtml: true });
                    return false;
                default:
                    return true;
            }

            $el.val(time.toString(opt.timeFormat));

            if (opt.callback) {
                // call async and set context in this element
                setTimeout(function () { opt.callback.call($el.get(0), time) }, 1);
            }

            return false;
        });
    }



    Time = function (time, format) {
        /// <summary>
        /// Time object that can parse and format
        /// a time value.
        /// </summary>
        /// <param name="time" type="object">
        /// A time value as a string (12:15pm or 23:01), a Date object
        /// or time value.
        /// </param>
        /// <param name="format" type="string">
        /// Time format string: 
        /// HH:mm   (23:01)
        /// hh:mm t (11:01 pm)        
        /// </param>
        /// <example>
        /// var time = new Time( new Date());
        /// time.addHours(5);
        /// time.addMinutes(10);
        /// var s = time.toString();
        ///
        /// var time2 = new Time(s);  // parse with constructor
        /// var t = time2.parse("10:15 pm");  // parse with .parse() method
        /// alert( t.hours + " " + t.mins + " " + t.ampm + " " + t.hours25)
        ///</example>

        var _I = this;

        this.date = new Date();
        this.timeFormat = "hh:mm t";
        if (format)
            this.timeFormat = format;

        this.parse = function (time) {
            /// <summary>
            /// Parses time value from a Date object, or string in format of:
            /// 12:12pm or 23:01
            /// </summary>
            /// <param name="time" type="any">
            /// A time value as a string (12:15pm or 23:01), a Date object
            /// or time value.
            /// </param>
            if (!time)
                return null;

            // Date
            if (time.getDate) {
                var t = {};
                var d = time;
                t.hours24 = d.getHours();
                t.hours = t.hours24;
                t.mins = d.getMinutes();
                t.ampm = "am";
                if (t.hours24 > 11) {
                    t.ampm = "pm";
                    if (t.hours24 > 12)
                        t.hours = t.hours24 - 12;
                }
                if (t.hours == 0)
                    t.hours = 12;  // midnight displays as 12 am
                time = t;
            }

            if (typeof (time) == "string") {
                var parts = time.split(":");

                if (parts.length < 2)
                    return null;
                var time = {};
                time.hours = parts[0] * 1;
                time.hours24 = time.hours;

                time.mins = parts[1].toLowerCase();
                if (time.mins.indexOf("am") > -1) {
                    time.ampm = "am";
                    time.mins = time.mins.replace("am", "");
                    if (time.hours == 12)
                        time.hours24 = 0;
                }
                else if (time.mins.indexOf("pm") > -1) {
                    time.ampm = "pm";
                    time.mins = time.mins.replace("pm", "");
                    if (time.hours < 12)
                        time.hours24 = time.hours + 12;
                }
                time.mins = time.mins * 1;
            }
            _I.date.setMinutes(time.mins);
            _I.date.setHours(time.hours24);

            return time;
        };
        this.addMinutes = function (mins) {
            /// <summary>
            /// adds minutes to the internally stored time value.       
            /// </summary>
            /// <param name="mins" type="number">
            /// number of minutes to add to the date
            /// </param>
            _I.date.setMinutes(_I.date.getMinutes() + mins);
        }
        this.addHours = function (hours) {
            /// <summary>
            /// adds hours the internally stored time value.       
            /// </summary>
            /// <param name="hours" type="number">
            /// number of hours to add to the date
            /// </param>
            _I.date.setHours(_I.date.getHours() + hours);
        }
        this.getTime = function () {
            /// <summary>
            /// returns a time structure from the currently
            /// stored time value.
            /// Properties: hours, hours24, mins, ampm
            /// </summary>
            return _I.parse(_I.date);
        }
        this.toString = function (format) {
            /// <summary>
            /// returns a short time string for the internal date
            /// formats: 12:12 pm or 23:12
            /// </summary>
            /// <param name="format" type="string">
            /// optional format string for date
            /// HH:mm, hh:mm t
            /// </param>
            if (!format)
                format = _I.timeFormat;

            var hours = _I.date.getHours();

            if (format.indexOf("t") > -1) {
                if (hours > 11)
                    format = format.replace("t", "pm")
                else
                    format = format.replace("t", "am")
            }
            if (format.indexOf("HH") > -1)
                format = format.replace("HH", hours.toString().padL(2, "0"));
            if (format.indexOf("hh") > -1) {
                if (hours > 12) hours -= 12;
                if (hours == 0) hours = 12;
                format = format.replace("hh", hours.toString().padL(2, "0"));
            }
            if (format.indexOf("mm") > -1)
                format = format.replace("mm", _I.date.getMinutes().toString().padL(2, "0"));

            return format;
        }

        // construction
        if (time)
            this.time = this.parse(time);
    }


    String.prototype.padL = function (width, pad) {
        if (!width || width < 1)
            return this;

        if (!pad) pad = " ";
        var length = width - this.length;
        if (length < 1) return this.substr(0, width);

        return (String.repeat(pad, length) + this).substr(0, width);
    }
    String.repeat = function (chr, count) {
        var str = "";
        for (var x = 0; x < count; x++) { str += chr };
        return str;
    }

})(jQuery);

The plugin consists of the actual plugin function and the Time class, which handles parsing and formatting of the time value via the .parse() and .toString() methods. Code like this always ends up taking more effort than the actual logic, unfortunately. There are libraries out there that can handle this, like datejs or even ww.jquery.js (which is what I use), but to keep the code self-contained for this post the plugin doesn't rely on external code.
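To see what the string-parsing branch of .parse() does in isolation, here's a minimal standalone sketch with no jQuery dependency (parseTime is a hypothetical helper name used for illustration only):

```javascript
// Standalone sketch of Time.parse()'s string branch:
// accepts "hh:mm am/pm" or 24-hour "HH:mm" strings.
function parseTime(s) {
    var parts = s.split(":");
    if (parts.length < 2) return null;

    var t = { hours: parts[0] * 1, mins: parts[1].toLowerCase() };
    t.hours24 = t.hours;

    if (t.mins.indexOf("am") > -1) {
        t.ampm = "am";
        t.mins = t.mins.replace("am", "");
        if (t.hours == 12) t.hours24 = 0;        // 12:xx am is 00:xx
    }
    else if (t.mins.indexOf("pm") > -1) {
        t.ampm = "pm";
        t.mins = t.mins.replace("pm", "");
        if (t.hours < 12) t.hours24 = t.hours + 12;
    }
    t.mins = t.mins * 1;
    return t;
}

console.log(parseTime("10:15 pm"));  // { hours: 10, mins: 15, hours24: 22, ampm: "pm" }
```

The same structure with hours, hours24, mins and ampm properties is what the plugin's Time class works with internally.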

There's one optional exception: The code as is has one dependency on ww.jquery.js  for the tooltip plugin that provides the small popup for all the hotkeys available. You can replace that code with some other mechanism to display hotkeys or simply remove it since that behavior is optional.

While we're at it: A jQuery dateKeys plugIn

Although date entry tends to be much better served with drop down calendars to pick dates from, it's often also easier to pick dates using a few simple hotkeys: + and - for day navigation, M and H for MontH navigation, and Y and R for YeaR navigation. This is a quick way to enter dates without having to resort to using a mouse and clicking around to find what you want.

Note that this plugin does have a dependency on ww.jquery.js for the date formatting functionality.

$.fn.dateKeys = function (options) {
    /// <summary>
    /// Attaches a set of hotkeys to date 'fields'
    /// + Add day - subtract day
    /// M Subtract Month H Add montH
    /// Y Subtract Year R Add yeaR
    /// ? Show keys
    /// </summary>
    /// <param name="options" type="object">
    /// Options:
    /// dateFormat: "MM/dd/yyyy" by default, "MMM dd, yyyy" alternate
    /// callback: callback handler after date assignment
    /// </param>
    /// <example>
    /// $("#txtDate").dateKeys({
    ///     callback: function (date) { alert(date.formatDate("MMM dd, yyyy")); }
    /// });
    ///</example>
    if (this.length < 1) return this;

    var opt = {
        dateFormat: "MM/dd/yyyy",
        callback: null
    };
    $.extend(opt, options);

    return this.keydown(function (e) {
        var $el = $(this);
        var d = new Date($el.val());
        if (isNaN(d.getTime()))   // new Date() never returns a falsy value
            d = new Date(1900, 0, 1, 1, 1);

        var month = d.getMonth();
        var year = d.getFullYear();
        var day = d.getDate();

        switch (e.keyCode) {
            case 84: // [T]oday
                d = new Date(); break;
            case 109: case 189:
                d = new Date(year, month, day - 1); break;
            case 107: case 187:
                d = new Date(year, month, day + 1); break;
            case 77: //M
                d = new Date(year, month - 1, day); break;
            case 72: //H
                d = new Date(year, month + 1, day); break;
            case 89: //Y
                d = new Date(year - 1, month, day); break;
            case 82: //R
                d = new Date(year + 1, month, day); break;
            case 191: // ?
                if (e.shiftKey)
                    $el.tooltip("<b>T</b> Today<br/><b>+</b> add day<br /><b>-</b> subtract day<br /><b>M</b> subtract Month<br /><b>H</b> add montH<br/><b>Y</b> subtract Year<br/><b>R</b> add yeaR", 5000, { isHtml: true });
                return false;

            default:
                return true;
        }

        $el.val(d.formatDate(opt.dateFormat));

        if (opt.callback)
        // call async
            setTimeout(function () { opt.callback.call($el.get(0),d); }, 10);

        return false;
    });
}

The logic for this plugin is similar to the timeKeys plugin, but it's a little simpler as it tries to directly parse the date value from a string via new Date(inputString). As mentioned, it also uses a helper function from ww.jquery.js to format dates, which removes the need to perform date formatting manually and again reduces the size of the code.
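The reason simple arithmetic like new Date(year, month, day - 1) works correctly at month and year boundaries is that the JavaScript Date constructor normalizes out-of-range values:

```javascript
// The Date constructor rolls out-of-range day/month values over
// into the adjacent month/year automatically:
var d = new Date(2011, 0, 31);          // Jan 31, 2011
var next = new Date(2011, 0, 31 + 1);   // Jan 32 rolls into February
var prev = new Date(2011, 0, 1 - 1);    // Jan 0 rolls back to Dec 31, 2010

console.log(next.getMonth(), next.getDate());                      // 1 1
console.log(prev.getFullYear(), prev.getMonth(), prev.getDate());  // 2010 11 31
```

This is why the plugin can get away without any explicit month-length or leap-year logic.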

And the Key is…

I've been using both of these plugins in combination with the jQuery UI datepicker for datetime values and I've found that I rarely actually pop up the date picker any more. It's just so much more efficient to use the hotkeys to navigate dates. It's still nice to have the picker around though - it provides the expected behavior for date entry. For time values however I can't justify the UI overhead of a picker that doesn't make it any easier to pick a time. Most people know how to type in a time value and if they want shortcuts, keystrokes easily beat any pop-up UI. Hopefully you'll find this as useful as I have for my code.


© Rick Strahl, West Wind Technologies, 2005-2011
Posted in jQuery  HTML  


Creating a Dynamic DataReader for easier Property Access


I've been thrown back to using plain old ADO.NET for a bit in a legacy project I'm helping one of my customers with and in the process am finding a few new ways to take advantage of .NET 4 language features to make a number of operations easier. Specifically I'm finding that the new Dynamic type in .NET 4.0 can make a number of operations easier to use and considerably cleaner to read and type. A couple of weeks ago I posted an example of a DynamicDataRow class that uses dynamic to expose DataRow column values as properties.

In this post I do something similar for the ADO.NET DataReader by exposing a custom dynamic type that retrieves values from the DataReader field collection and exposes them as properties.

Why hack the DataReader?

DbDataReader is one of the lowest level ADO.NET data structures - it returns raw firehose data from the database and exposes it via a simple reader interface that relies on simple looping and an internal collection of field keys and values.

Here's a small example that demonstrates the basics using a DataAccess helper SqlDataAccess from Westwind.Utilities from the West Wind Web Toolkit to keep this example small.

[TestMethod]
public void BasicDataReaderTimerTests()
{
    var data = new SqlDataAccess("WebStore_ConnectionString");
    var reader = data.ExecuteReader("select * from wws_items");
    Assert.IsNotNull(reader, "Query Failure: " + data.ErrorMessage);
    
    StringBuilder sb = new StringBuilder();
    
    Stopwatch watch = new Stopwatch();
    watch.Start();
    
    while (reader.Read())
    {
        string sku  = reader["sku"] as string;
        string descript = reader["descript"] as string;

        decimal? price;
        object t = reader["Price"];
        if (t == DBNull.Value)
            price = null;
        else
            price = (decimal)t;
        
        
        sb.AppendLine(sku + " " + descript + " " + (price.HasValue ? price.Value.ToString("n2") : "n/a"));
    }

    watch.Stop();

    reader.Close();

    Console.WriteLine(watch.ElapsedMilliseconds.ToString());
    Console.WriteLine(sb.ToString());                                
}

The code is pretty straightforward. SqlDataAccess takes a connection string or connection string name from the config file as a constructor parameter to initialize the DAL. Once instantiated you can run a number of data access commands to create a DataReader (or DataSet/DataTable) or execute commands in various ways on this connection. Here I use ExecuteReader() to produce a DataReader I can loop through. The code then loops through the records using reader.Read(), which returns false when the end of the result set is reached.

Inside of the loop I can then access each of the fields. Notice that each field has to be explicitly cast to a specific type (as string or (decimal) here). In addition if you support NULL values in the database you also have to explicitly check for DBNull values showing up in the DataReader fields which is messy.

It's not terribly complicated to do any of this - just a bit of extra typing and - aesthetically - the code looks a bit messy.

Using DynamicDataReader

Personally I prefer standard object.property syntax when dealing with data and a custom dynamic type can actually make that easy. While it won't give me a full strongly typed .NET type, I can at least get the standard object syntax with this implementation.

Here's the code that does exactly the same thing using the DynamicDataReader:

[TestMethod]
public void BasicDynamicDataReaderTimerTest()
{
    var data = new SqlDataAccess("WebStore_ConnectionString");
    var reader = data.ExecuteReader("select * from wws_items");

    Assert.IsNotNull(reader, "Query Failure: " + data.ErrorMessage);

    dynamic dreader = new DynamicDataReader(reader);

    Stopwatch watch = new Stopwatch();
    watch.Start();


    while (reader.Read())
    {
        string sku = dreader.Sku;
        string descript = dreader.Descript;
        decimal? price = dreader.Price;

        sb.AppendLine(sku + " " + descript + " " + (price.HasValue ? price.Value.ToString("n2") : "n/a"));
    }

    watch.Stop();
    reader.Close();

    Console.WriteLine(watch.ElapsedMilliseconds.ToString());
    Console.WriteLine(sb.ToString());
}

The code is nearly the same except in two places: a new DynamicDataReader() instance is created, and the values assigned are read from this dreader instance as property values.

Even though this is a simple example that only uses 3 fields, it still is quite a bit cleaner than the first example:

  • It uses cleaner object.property syntax
  • All the type casting is gone
  • The DBNull to .NET NULL assignment is automatic

Good News, Bad News

It's important to understand that what you're seeing is a dynamic type, not a strongly typed .NET type. Dynamic means you get to use object.property syntax in this case, and you get automatic casting, but you do not get strong typing and so no compiler type checking or Intellisense on those properties. They are dynamic and essentially syntactic sugar around dynamic invocation and Reflection, implemented through the Dynamic Language Runtime (DLR).

Because the type is dynamic there's also a performance penalty. Specifically, first time access of the dynamic properties tends to be slow. Once the DLR is spun up, a dynamic type has been created from the DataReader, and you've iterated over each property once, access is fairly swift on repeated calls/conversions. In informal testing it looks like the dynamic code takes roughly three times as long as the raw code from a cold start, and is a little over 1.5 times slower once the dynamic type has been created. Not sure why that is, because the implementation just does lookups into the DataReader field collection (no Reflection caching for PropertyInfo data), but nevertheless repeated requests are significantly faster than first time access.

Even though performance was nearly twice as slow using the dynamic type, the numbers were still very fast, taking less than 8 milliseconds for rendering 500 records compared to 4-5 with raw DataReader access. Hardly a deal breaker in all but the most critical scenarios, especially when you figure in the cost of data access (which the example code doesn't for the timings).

How does DynamicDataReader work?

The DLR makes it very easy to abstract data structures and wrap them into an object based syntax. Using DynamicObject as a base class to implement custom types, you can basically implement 'method missing' or 'property missing' functionality by simply overriding the TryGetMember() method and TryInvokeMember() methods.

Here's the implementation of DynamicDataReader:

/// <summary>
/// This class provides an easy way to use object.property
/// syntax with a DataReader by wrapping a DataReader into
/// a dynamic object.
/// 
/// The class also automatically fixes up DbNull values
/// (DbNull is converted to .NET null)
/// </summary>
public class DynamicDataReader : DynamicObject
{
    /// <summary>
    /// Cached Instance of DataReader passed in
    /// </summary>
    IDataReader DataReader;
    
    /// <summary>
    /// Pass in a loaded DataReader
    /// </summary>
    /// <param name="dataReader">DataReader instance to work off</param>
    public DynamicDataReader(IDataReader dataReader)
    {
        DataReader = dataReader;
    }

   /// <summary>
   /// Returns a value from the current DataReader record
   /// If the field doesn't exist null is returned.
   /// DbNull values are turned into .NET nulls.
   /// </summary>
   /// <param name="binder"></param>
   /// <param name="result"></param>
   /// <returns></returns>
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        result = null;

        // 'Implement' common reader properties directly
        if (binder.Name == "IsClosed")            
            result = DataReader.IsClosed;                            
        else if (binder.Name == "RecordsAffected")            
            result = DataReader.RecordsAffected;                         
        // lookup column names as fields
        else
        {
            try
            {
                result = DataReader[binder.Name];
                if (result == DBNull.Value)
                    result = null;                    
            }
            catch 
            {
                result = null;
                return false;
            }
        }

        return true;
    }

    public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
    {
        // Implement most commonly used method
        if (binder.Name == "Read")
            result = DataReader.Read();
        else if (binder.Name == "Close")
        {
            DataReader.Close();
            result = null;
        }
        else
            // call other DataReader methods using Reflection (slow - not recommended)
            // recommend you use full DataReader instance
            result = ReflectionUtils.CallMethod(DataReader, binder.Name, args);

        return true;            
    }
}

As you can see the implementation is super simple. The implementation inherits from DynamicObject and overrides TryGetMember() and TryInvokeMember(). The constructor is set up so it passes in a DataReader instance which is stored internally. TryGetMember() is then called when an 'unknown' property is accessed on the dynamic type and tries to find a matching value in the DataReader field collection based on the binder.Name property.

The class also implements a couple of DataReader's common properties (IsClosed, RecordsAffected) and methods (Read, Close) so it can behave like a full DataReader instance, and you can write code like this:

dynamic reader = new DynamicDataReader(data.ExecuteReader("select * from wws_items"));
while (reader.Read())
{ … }

The key feature however is that values from the DataReader fields collection are turned into properties which is handled by

try
{
    result = DataReader[binder.Name];
    if (result == DBNull.Value)
        result = null;
}
catch
{
    result = null;
    return false;
}

Note that the code handles the null conversion and assigns the result value from the DataReader field. TryGetMember() expects an out parameter for result, and the value set is what effectively becomes the property value when accessing the object.property syntax.

It's neat how easy it is to implement custom behavior in TryGetMember(). Note that I can check for explicit values (like IsClosed and RecordsAffected) as well as checking the fields collection for matching values. IOW, you can have a single dynamic type return property values from any number of 'data sources' easily, be it from static translations, an internal data structure like a DataReader or DataRow, an XML document or even via Reflection from additional properties on the object. You have full control over this implementation which is very powerful and opens up many more avenues to simplify structured data access.

Even easier: Get a DynamicDataReader directly from the DAL

With DynamicDataReader available, it's now a cinch to extend my DAL to directly return a dynamic data reader instance. We earlier saw the SqlDataAccess.ExecuteReader() method which returned a DataReader. In the SqlDataAccessBase class I can now implement a dynamic version of ExecuteReader that directly returns a DynamicDataReader instance as a dynamic type:

/// <summary>
/// Executes a Sql statement and returns a dynamic DataReader instance 
/// that exposes each field as a property
/// </summary>
/// <param name="sql">Sql String to execute</param>
/// <param name="parameters">Array of DbParameters to pass</param>
/// <returns></returns>
public virtual dynamic ExecuteDynamicDataReader(string sql, params DbParameter[] parameters)
{
    var reader = ExecuteReader(sql, parameters);
    return new DynamicDataReader(reader);
}

which directly returns me a dynamic DataReader instance. Note that the type returned from ExecuteDynamicDataReader() is not DynamicDataReader but dynamic!

I can now fire off a query simply like this and use my object.property syntax without any conversion:

var data = new SqlDataAccess("WebStore_ConnectionString");
dynamic reader = data.ExecuteDynamicDataReader("select * from wws_items");

StringBuilder sb = new StringBuilder();

while (reader.Read())
{
    string sku = reader.Sku;
    string descript = reader.Descript;
    decimal? price = reader.Price;

    sb.AppendLine(sku + " " + descript + " " + (price.HasValue ? price.Value.ToString("n2") : "n/a"));
}
reader.Close();

Note that Read() and Close() work on the dynamic because I explicitly implemented them in TryInvokeMember() based on the method name.

Summary

Sweet. This makes it super easy and transparent to access data with clean syntax! Personally I much prefer object.property syntax over collection syntax plus type casting and so I'm sold on this concept of using custom dynamic types for wrapping non object data structures into object syntax dynamic types.

Clearly this is not something you want to use for all occasions. Where performance is of utmost importance, raw DataReader access is still a better choice. But for smallish result sets or one-off queries in an application (especially admin interfaces) this can be a nice enhancement to make code easier to read and maintain.

Also, direct DataReader access - for me at least - seems to be going the way of the Dodo, with ORM based data access mostly replacing raw ADO.NET data access. But there are occasions even with an ORM where I fall back to DataReaders for complex queries or maintenance tasks where mapping a one-off query to a type is simply overkill. Using this class though I can at least have ORM-like syntax in my code even if strong typing is not available.

And as I can attest to at the moment - old code dies only slowly - I still find myself digging around in 10-year-old code from time to time that uses low-level ADO.NET data access, and it's nice to have some tools to modernize that old code with minimal effort. This tool fits the bill for me.


© Rick Strahl, West Wind Technologies, 2005-2011
Posted in .NET  CSharp   ADO.NET  

A Key Code Checker for DOM Keyboard Events


If you've ever written some code that needs to deal with individual keystrokes entered and to 'translate' or parse them, you've probably figured out that while on the surface it all looks pretty easy with DOM event processing, it's actually quite tricky to get accurate key information. There are several keyboard event properties available and - even with jQuery - various browsers handle these event properties slightly differently.

In fact, I realized that almost every time I run into a problem with keyCodes in event handling code I add a bunch of console.log statements to figure out exactly what keyCodes are coming back for each of the available events. So tonight I sat down and threw together a small form that lets me see all this information at a glance on one page.

You can check out my Key Code Tester and play around with it in various browsers:

KeyCodeChecker

Try the Key Code Checker Example

If you try this in various browsers, you'll find that you get a rather large divergence of values for the various events.

But before we look at the actual results and some suggestions, let's back up for a second and explain how key handling in the DOM works.

Key Event Handling for the DOM

The HTML DOM has support for keyboard events that fire when you press keys anywhere in the document. The most common place where the event trapping matters is on input controls like a textbox, but it also can work in any other DOM element like a <div>.

The DOM keyboard input event model has a number of events to handle key events that you can handle:

  1. keydown
  2. keypress
  3. keyup

These events are fairly self-explanatory on the surface and they fire in the order shown above. Each event can be hooked up to an event handler that receives a DOM event parameter which contains information like keyCode, charCode and which, as well as shift, ctrl and alt states you can look at. Most modern browsers support the keyCode, charCode and which properties. which is a special property that returns the 'significant' value from an event, which in the case of the key events tends to be the keyCode or, if that is empty, the charCode. charCode and which are not available on old versions of IE pre version 9. This can get tricky to check for quickly, and this is where jQuery comes in and provides at least a minimal bit of normalization of these event values.

jQuery Event Normalization of Key Event Properties

jQuery's Event Object supports keyCode, charCode and which properties as well, but you typically should use the which property with it. jQuery normalizes which based on keyCode or charCode. It checks keyCode first and if that's empty reads the charCode which should yield a valid keyboard code.

Back to the keyboard events: keydown fires as a key is pressed down. keypress is fired immediately after keydown, but unlike keydown and keyup it fires only on 'visible' characters that are printable (FireFox and Opera don't follow the spec and always fire). keyup fires after keypress when the keys are released.

At any point during the event handler processing for the keyboard events you can return false which causes the event bubbling to stop. When false is returned further processing stops and key events following the current key event won't fire. If you return false from keyDown, keypress and keyup will not be fired.

Using jQuery it's very easy to handle keyboard events. Here's an example for keydown handling:

$("#txtKey")
    .keydown(function (e) {
        var keyCode = e.keyCode;
        var which = e.which;
        var charCode = e.charCode;

        // .. do something for key handling

        // pass through key stroke with true
        // keep keystroke from processing further (or get entered) with false
        return true;
    });

Sample Implementation

The above form was implemented using a little bit of script code that basically handles all three key events. It then calls a common function and displays the various key code values in the appropriate box in the page.

The code to do this is pretty simple, but I post it here as it gives a little bit of insight on how the various key event properties work:

    <script type="text/javascript">
        $( function () {
            $("#txtKey")
                .focus()
                .keypress(handleKey)
                .keydown(handleKey)                
                .keyup(handleKey);
        });

        function handleKey(e) {            
            var keyCode = e.keyCode;
            var charCode = e.charCode;
            var which = e.which;

            var type = e.handleObj.origType;            
            var orig = e.originalEvent;

            if (!keyCode)
                keyCode = "0";
            if (!which)
                which = "0";

            if (type == "keydown") {
                $("#divDown .mainvalue").text(keyCode);
                $("#divCharCodeDown .mainvalue").text(charCode);
                $("#divWhichDown .mainvalue").text(which);

                $("#divCharTyped .mainvalue:eq(1)")
                    .text("")
                    .text(String.fromCharCode(keyCode));

                // also clear out key press values in case it doesn't fire                
                $("#divCharTyped .mainvalue:eq(0)")
                    .html("&nbsp;");
                $("#divPress .mainvalue").text("");
                $("#divCharCodePress .mainvalue").text("");
                $("#divWhichPress .mainvalue").text("");
            }
            else if (type == "keypress") {                
                $("#divPress .mainvalue").text(keyCode);
                $("#divCharCodePress .mainvalue").text(charCode);
                $("#divWhichPress .mainvalue").text(which);
                
                $("#divCharTyped .mainvalue:eq(0)")
                    .text(String.fromCharCode(which));
            }
            else if (type == "keyup") {
                $("#divUp .mainvalue").text(keyCode);
                $("#divCharCodeUp .mainvalue").text(charCode);
                $("#divWhichUp .mainvalue").text(which);

                $("#divCharTyped .mainvalue:eq(2)")
                    .text("")
                    .text(String.fromCharCode(keyCode));                
            }

            

            // must return true in order for keypress to fire
            // in all browsers. Remove character using setTimeout 
            // to delay and clear text after the fact.
            setTimeout(function () { $("#txtKey").val(""); }, 2);

            // always return true so keypress fires
            return true;
        }        
    </script>

Browser Divergence

I mentioned earlier that various browsers handle the various key codes differently. Check out the following when pressing the 'a' key in FireFox, Chrome, IE (8 standards mode), Opera respectively:

browserDifferences

You can see the divergence here. FireFox doesn't get a good keyCode in the keyPress event, and instead gives a charCode. All other browsers tested return a valid keycode in all events. Notice that all browsers return consistent results with jQuery using the e.which property. The which property is clearly what you should use to get a reliable and consistent keyCode value that works across all browsers.

Suggestions

  • Use e.which when checking for Key Codes
    When using jQuery it's best to use e.which in key events to check for key codes. e.which normalizes between all key combinations that I saw and provides the most consistent value across browsers.
  • If you need a Character Value from a Key Code use keypress Event
    When you need to get a character value from a key event handle the keypress event as it's the only one that can receive a translated key code that reflects the actual character that the user typed and that would appear in the text box. You can use String.fromCharCode(e.which) to get the printable character.
  • Watch out for keypress Event Differences
Keep in mind that keypress only fires on printable characters and that the key code value for e.which will be different than those in keydown and keyup. If you need to handle both printable and special keys you might have to implement both keydown and keypress.
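The suggestions above can be sketched as a couple of small helpers. This is a minimal example with illustrative function names (not code from the utility itself): use e.which for key codes, and translate a keypress code into the typed character with String.fromCharCode().

```javascript
// Minimal sketch of the suggestions above; names are illustrative.

// keypress delivers a translated character code in e.which, so
// String.fromCharCode() yields the character the user actually typed.
function charFromKeypress(which) {
    return String.fromCharCode(which);
}

// keydown/keyup report raw key codes (e.g. 65 for the 'A' key),
// while keypress reports the printable character code.
function describeKeyEvent(type, which) {
    if (type === "keypress") {
        return "typed: " + charFromKeypress(which);
    }
    return type + " key code: " + which;
}
```

With jQuery you might wire this up as `$("#txtKey").on("keydown keypress keyup", function (e) { console.log(describeKeyEvent(e.type, e.which)); });`.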

Hopefully this utility plus some of these suggestions will prove useful to some of you. I know it will be to me the next time I have a need to manipulate keystrokes in my JavaScript code - information like this has a half life of about 2 days with me before it flies out of my brain again :-). This blog post should help me remember next time…

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in JavaScript  jQuery  HTML  

HTML 5 Input Types on WebForms Controls


Did you know that you can use HTML5 input types with ASP.NET WebForms controls? I wasn't sure until I tried it today:

<asp:TextBox runat="server" ID="Username" Width="250px"  type="email" />

which properly produces this HTML5 compliant HTML output:

<input type="email" style="width:250px;" id="Username" name="Username">

That this works shouldn't come as a big surprise since ASP.NET always has supported 'extra' attributes on Web Controls to render into the HTML. However, the input type seems like a pretty core feature in input controls and the control does have a TextMode property that already partially addresses this task. Nevertheless, explicitly specifying the type attribute allows you to override the type and so provide custom HTML 5 input types.

In short - nice! HTML5 input types are supported in WebForms without any framework changes or updates.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  

HTML 5 Input Types - How useful is this really going to be?


HTML5 introduces a number of new input types that are meant to provide a richer user input experience. It seems like this should be a good thing, given that we've basically been stuck with a very small and limited set of stock input controls in HTML.

HTML5 aims to change that with input type extensions that allow you to edit and validate special types of text input. The new input types that are available in HTML5 are the following:

  • email
  • url
  • number
  • range
  • Date pickers (date, month, week, time, datetime, datetime-local)
  • search
  • color

Some of these act as validators (email, url, number) while others are meant as helpers that provide richer functionality like date pickers, range sliders, and color and search inputs.

What do you get?

A number of newer browsers now support some of the new HTML 5 input types and provide limited user hints and validation on the client. For example in FireFox 8 if I type in an invalid email address into my login dialog and tab off I get this display:

FireFoxEmail_thumb

<input type="email" style="width:250px;" id="Username" name="Username">

and when I submit the form I get:

ffErrorOnSubmit_thumb

Chrome does something similar although frankly it looks a lot nicer than the klunky FireFox message.

ChromeUnSubmitEmail_thumb

Note that in both cases the form won't submit until a valid email address has been entered.
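If you want to mimic this kind of check in script - say, to show your own styled error message instead of the browser's - a rough approximation can be done with a regular expression. To be clear, this is a simplification for illustration: the actual constraint browsers enforce is defined in the HTML spec and is more permissive (it allows dot-less domains, for instance).

```javascript
// Rough approximation of the type="email" constraint - a sketch, not
// the exact rule any browser uses. Requires a dot in the domain part.
function looksLikeEmail(value) {
    return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}
```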

Rich Interfaces? Maybe not

In theory browsers are supposed to implement rich interfaces to handle some of these interface types. For example, dates should be shown as a date picker. Current implementations in this department however are really sad at best. Check out the date field displays here in Chrome:

DateInput_thumb

The 'user interface' consists of little more than an up-down button which increases or decreases the date by a day. Additionally the date is displayed in universal format rather than the locale-adjusted format for the current browser or language. AFAIK there's no real clean way to style any of this either.

Opera on the other hand provides a number of editors, but boy are they ugly:

OperaInput_thumb

The date picker that actually pops up also isn't any better looking. Even here the date and time are displayed in universal format. Again there appears to be no way to style this stuff so it's bound to clash hard with any UI layout you have.

Good News Bad News

This existing browser behavior of HTML 5 input tags raises all sorts of questions. I already mentioned the styling is a big issue - how do you get the UI to match your HTML's existing layout. But more importantly it seems that the new HTML input element behaviors are going to conflict with existing validation and custom input control types. If you're already using a different date picker like say jQuery UI date picker using code like this:

$().ready(function () {
    $("input[type=date]")
            .datepicker({ showOn: 'button',
                buttonImageOnly: true,
                buttonImage: '../../images/calendar.png',
                buttonText: 'Select date',
                showButtonPanel: true                        
            });
});

You can end up with lovely stuff like this:

DateError_thumb

Ouch! The date box doesn't support my local American date and I can't even submit the form. Major FAIL there.

There are other similar scenarios - the url input type in FireFox will complain about a protocol-less URL, while Chrome automatically appends http:// in front of the URL. Either of these behaviors can be problematic depending on what you're trying to achieve in your form.

To be fair you can turn off the form validation on the form submission level:

<input type="submit" id="btnSubmit" name="btnSubmit" value="Save" 
class="submitbutton" formnovalidate="formnovalidate" />

to avoid frivolous validation. But then you might as well use a input type=text control.

So in the end the new HTML5 input elements may not be as useful as I had hoped. Lack of control over the input becomes a big problem and lack of styling makes these controls look out of place. Worst of all, the validation can seriously get in the way, and disabling it diminishes the value significantly. It certainly isn't going to replace existing client side validation schemes - you most likely need more sophisticated validation than just these general types. And if you leave the base validations on and use custom validation for your own logic, the two are likely to look very different, resulting in a very disjointed experience.

I definitely think that it's high time HTML gets better input support, but the current HTML 5 state of affairs isn't going to help much. In fact, I think it's going to get in the way more than it actually helps… It seems to me it wouldn't be rocket science to build basic user input controls like a date picker that actually address the 90%+ scenario. Pick a date from a dropdown and display the date in the local browser's date format. How hard could that possibly be? Or have a validator that understands both full URLs and partial URLs. I mean that's a problem that's been solved many times over in just about any application that accepts URLs as user input, right? But no, we get everything in a standards-compliant geek format that you can't possibly push on to users. :-( Let's hope that going forward things will improve and more thought about the end result goes into the actual browser implementations.

Your Turn

What do you think? Have you played with HTML 5 input controls? Do you think these new input control features would help you? What would you need to make them useful?

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in HTML   ASP.NET  HTML5  

Debugging Application_Start and Module Initialization with IIS and Visual Studio


Recently I've seen quite a few questions pop up in regards to debugging ASP.NET application initialization. Most commonly I see something along these lines:

I'm trying to debug my ASP.NET application and am starting the Debugger, but it won't stop on code in my Application_Start event

Yup, been there done that.

What does Application_Start do?

Application_Start is a pseudo handler that ASP.NET checks for during initialization of an ASP.NET application - if one exists, it's fired during the application's initialization process, which occurs when the HttpRuntime instance is started. Effectively Application_Start fires exactly once when the Web application's AppDomain starts. Application_Start is simply a shortcut that makes it consistent with the other Application_ events available in global.asax.cs, even though it is not actually an HttpApplication event like most other Application_ handlers. Remember that Global.asax is your application's subclass of the HttpApplication object:

public class Global : System.Web.HttpApplication

so Global.asax.cs inherits from HttpApplication and this instance is what ASP.NET uses to run your application. You can always access this instance with HttpContext.Current.ApplicationInstance. ASP.NET creates a pool of multiple HttpApplication instances that are assigned to each request being processed, but Application_Start fires exactly once during the Web application's lifetime - not once for each HttpApplication instance started.

Because Application_Start fires only when the HttpRuntime is initialized, it's a common hook point for application configuration tasks that need to happen only once during the lifetime of the application. It's perfect for setting static values that affect the entire running application or set up global configuration settings that get reused throughout the application. Application_Start can also be used to dynamically add new modules to the processing chain at runtime.

Application initialization extends beyond Application_Start. There's other code that fires during the one time initialization as well. HttpModules and their Init() handlers that fire to hook up the actual module events, also fire during this initialization stage. Likewise you can't debug these Init() handlers when debugging your code in the full version of IIS. I'm sure there's other code in ASP.NET that fires during application initialization but these two are the most common scenarios you might run into with user code.

IIS and Application_Start Debugging from Visual Studio

Problems with debugging Application_Start come up when you try to debug this initialization code when running the full version of IIS. It doesn't happen when you run the Visual Studio Web Server or IIS Express - it only occurs when you run the full version of IIS.

The problem is caused by the way that Visual Studio attaches the debugger to IIS when running ASP.NET applications, which occurs AFTER the application initialization phase and so is too late to actually stop on a breakpoint in Application_Start.

The code of course still executes, but the debugger doesn't trigger any breakpoints if you simply press F5 to debug in Visual Studio. If you set a break point like this (when running in full IIS):

Application_StartDebugger

the breakpoint won't trigger.

Likewise if you have code in an HttpModule's Init() handler which also fires as part of the runtime initialization code, the debugger won't trigger there either:

HttpModule_init

If you don't know what's happening it's easy to assume that this code is never firing, but that's not the case. The code fires, only the debugger is not triggered when the code is executed by IIS.

Easiest Solution: Use the Visual Studio Web Server or IIS Express

The problem debugging Application_Start - and also HttpModule.Init() handlers - occurs only when running the full version of IIS. When running the Visual Studio built-in Web Server or IIS Express there's no problem debugging application initialization. So one easy solution to debugging Application_Start and module initialization code is to simply switch to using the built-in Visual Studio Web Server (or IIS Express) as opposed to the full version of IIS:

DebuggingInVSServer

Now if you press F5 to debug from within Visual Studio the debugger happily triggers on your breakpoint. This works because Visual Studio actually launches the local server EXE manually and so can easily attach the debugger on startup. Inside of IIS the application pool is actually loaded from the IIS Admin service and Visual Studio has to attach to the Application Pool Exe after it's been started.

If you're already running one of these local servers for all your debugging then you shouldn't really have run into this issue in the first place and this is certainly a good way to go. In many environments it makes perfect sense to run the local servers.

Even if you're like me and you prefer to run the full version of IIS, and you happen to need to debug your Application_Start or other initialization code it might be worthwhile to switch temporarily to one of the local servers to debug the issue. It's the easiest way to get past any issues you might have during application initialization.

Debugging IIS Initialization in Visual Studio

If you definitely need to debug your startup code in the full version of IIS there are other ways to get the debugger attached as well. It won't work with the F5 run menu and it requires a little more work.

You can use the System.Diagnostics.Debugger.Break() method to force the Web application to break into the debugger:

protected void Application_Start(object sender, EventArgs e)
{
    Debugger.Break();

}

Add this code to your Application_Start or other initialization code like Module.Init() code where you want to break.

Before you do anything else do:

IISReset

from the Windows RUN box to ensure that IIS has been shut down and your Application Pool is in fact going to start up fresh.

Then open a page on your Web site.

When you hit this page, Visual Studio will pop up a debugger dialog like this:

DebuggerDialog

Click yes, and Visual Studio will then start up a new instance and display your code. If your code is set for debugging (ie. the PDB file exists) you'll be able to step through your code and see watch and locals information.

Here's what the debugger session looks like:

VSw3wpDebug

You can see that now we're debugging the w3wp process directly. The debugger has attached using a different route bypassing the Visual Studio F5 startup and instead directly attaching the debugger to the process.

One downside of this setup is that you can't see your entire project while debugging this way, but you can step into any code that symbols (PDB files) are available for. So if you need to step into a module that resides in a separate assembly, for example, you can do that as long as that assembly and its .pdb file are available.

Summary

Hopefully you don't have a lot of code in your Web application's initialization code and hopefully you never actually need to debug it :-) Even if you do have to debug your startup code it's not likely to be a frequent task.

But it does come in handy at times to be able to do so directly in IIS so you can see the entire IIS environment including parent hierarchy and modules that are loading up in your application. In the past I've had some nasty module configuration conflicts between my virtual and the parent directories and being able to step into the live environment directly in IIS was a lot easier than trying to write debug log information out to a log file.

It's one of those edge case issues that comes up very rarely, but when you need it, you'll be glad you know how to debug your code in the live environment…

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  IIS7  IIS  

Changing the default HTML Templates to HTML5 in Visual Studio


If you're using Visual Studio 2010 to create Web applications, you probably have found out that the default Web templates for ASP.NET Web Forms and Master pages and plain HTML pages all create HTML 4 XHTML headers like this:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="$fileinputname$.aspx.cs" Inherits="$rootnamespace$.$classname$" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">

Now I don't know about you, but I'm happy to use HTML5's simple DOCTYPE definition. The first thing I tend to do is manually change my document header so that it looks like this:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="$fileinputname$.aspx.cs" Inherits="$rootnamespace$.$classname$" %>
<!DOCTYPE html>
<html>

Wouldn't it be nice if this was the default?

If you have a few minutes it's easy to change the stock templates in Visual Studio, or if you prefer you can create your own custom templates to exactly suit your needs.

Stock templates in Visual Studio 2010

All the default document types in Visual Studio are based on templates which are stored conveniently in .zip files. The folder where Visual Studio stores its HTML templates for Web Application Projects lives in this location:

C:\Program Files (x86)\Visual Studio 2010\Common7\IDE\ItemTemplates\CSharp\Web\1033

If you're using Web Site project the location is:

C:\Program Files (x86)\Visual Studio 2010\Common7\IDE\ItemTemplates\Web\CSharp\1033

There are a ton of templates in both folders. For WebForms the one we want is - not surprisingly - WebForms.zip. If you open up WebForms.zip with your favorite zip tool (or Explorer) you'll see something like this for Web Application Projects:

WebForm_Template[6]

You can see the three templates - the ASPX, .cs and designer files in the zip file. You can edit all of these, but for our purpose here I'll just change the HTML header.

If I open the default.aspx page you see the following default markup:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="$fileinputname$.aspx.cs" Inherits="$rootnamespace$.$classname$" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
    
    </div>
    </form>
</body>
</html>


It's a template so you see a few template expressions like $fileinputname$ and $rootnamespace$ in the document. Visual Studio fills those values in when the template is loaded and a new item added to a project. However, the rest of the document can be changed to your heart's delight. For the basic WebForms template I simply added the HTML 5 doctype header like this:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="$fileinputname$.aspx.cs" Inherits="$rootnamespace$.$classname$" %>
<!DOCTYPE html>
<html>
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
    
    </div>
    </form>
</body>
</html>

Save the file and make sure the zip file gets updated with your changes (to verify, open the zip file again and check that your edits are there). Any recent zip tool (or even Explorer) will let you simply edit the file and save it to write the changes back into the zip file.

Rebuild the TemplateCache

You're not quite done yet, unfortunately. Once you've updated your zip template you need to override the Cache that Visual Studio creates from templates. Visual Studio internally unzips these template zip files and stores them in a TemplateCache folder which lives in:

C:\Program Files (x86)\Visual Studio 2010\Common7\IDE\ItemTemplatesCache\CSharp\Web\1033\WebForms.zip

C:\Program Files (x86)\Visual Studio 2010\Common7\IDE\ItemTemplatesCache\Web\CSharp\1033\WebForms.zip

Basically there are folders that mimic the .zip files and hold the unzipped content on disk. Updating the Zip file in the ItemTemplates folder on its own will not yet give you the new template until the TemplateCache has been updated.

To do this you need to run:

DevEnv /InstallVsTemplates

from the Visual Studio Command Prompt. This recreates the template cache with the updated templates you modified.

Alternately I also found that you can delete all the folders in the ItemTemplatesCache folder, which effectively clears the cache. Visual Studio will then use the templates directly from the zip files. This can be useful if you're mucking around with the templates a bit or you're trying out multiple templates all at once, as it bypasses the template registration step. When you're all done though it's a good idea to run DevEnv /InstallVsTemplates just to be 'correct' about it.

While you're at it you probably also want to change:

  • HtmlPage.zip
  • MasterPage.zip

along the same lines. Note that the various Razor templates already use HTML5 doc headers, so no need to update them for HTML 5.

Creating new Templates

Changing the stock templates is useful because you probably use them every day. But if you want to make sweeping changes to templates, or you want to have multiple templates for various specific tasks, it's probably better to create brand spanking new templates instead. One of the easiest ways to do that is actually built right into Visual Studio via the Template Export mechanism.

Let's look at an example. I'll use a Web Application project here, but the same works for any kind of project: Web Site, MVC w/ Razor, Web Pages.

Let's start with a page that acts as my default template I'm going to create. This template includes a few basic setups for some base layout that is common on sample pages I create:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="WebForm1.aspx.cs" Inherits="WebApplication10.WebForm1" %>
<!DOCTYPE html>
<html>
<head id="Head" runat="server">
    <title></title>
    <link href="css/westwind.css" rel="stylesheet" type="text/css" />
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js" type="text/javascript"></script>
    <script type="text/javascript">
        if (typeof (jQuery) == 'undefined')
            document.write(unescape("%3Cscript src='scripts/jquery.min.js' type='text/javascript'%3E%3C/script%3E"));
    </script>
</head>
<body>
    <form id="form1" runat="server">
    
    <h1></h1>

    <div class="toolbarcontainer">
        <a href="" class="hoverpanel"><img src="css/images/home.gif" /> Home</a>
        <a href="" class="hoverpanel"><img src="css/images/refresh.gif" /> Refresh</a>
    </div>
    
    <div class="containercontent">
    
    
    </div>
    </form>
</body>
</html>

Now to create a template from this page:

  • Select the Web project or any file within it
  • Click on File | Export Template and select Item Template

    WizardStep1

  • Click Next and select the file or files in the project to export

    WizardStep2
  • Fill in the info for the template:

    WizardStep3

This ends up creating a new template in your My Documents folder:
<MyDocuments>\Visual Studio 2010\My Exported Templates

and

<MyDocuments>\Visual Studio 2010\Templates\ItemTemplates\Visual C#

You might want to move the latter file into the

C:\Users\rstrahl\Documents\Visual Studio 2010\Templates\ItemTemplates\Visual C#\Web

folder, so the template properly shows up in the Web folder which then looks like this in the Add New Item dialog:

TemplateInVs

When you select the template it now produces your custom HTML for the template you created.

Templates A-GoGo

Templates are a nice way to create pre-fab content. I've found it useful for certain kinds of projects to create project-specific templates just so some common content can be loaded into the page. While WebForms master pages and Razor content pages remove some of the need to build large custom headers, for some situations having custom content pumped directly into pages is still useful. Templates make this task easy and save you from repetitive typing. It's worth the effort to spend a little time customizing those templates you use daily to fit your needs. Whether it's changing the existing templates or creating brand new ones, you now have the tools to customize to your heart's content. Go for it!

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  .NET  HTML5  

XmlWriter and lower ASCII characters


Ran into an interesting problem today on my CodePaste.net site: The main RSS and ATOM feeds on the site were broken because one code snippet on the site contained a lower ASCII character (CHR(3)). I don't think this was done on purpose but it was enough to make the feeds fail.

After quite a bit of debugging - and throwing a custom error handler into my actual feed generation code that just spit out the raw error instead of running it through ASP.NET MVC's and my own error pipeline - I found the actual error.

The lovely base exception and error trace I got looked like this:

Error: '', hexadecimal value 0x03, is an invalid character.


at System.Xml.XmlUtf8RawTextWriter.InvalidXmlChar(Int32 ch, Byte* pDst, Boolean entitize)
at System.Xml.XmlUtf8RawTextWriter.WriteElementTextBlock(Char* pSrc, Char* pSrcEnd)
at System.Xml.XmlUtf8RawTextWriter.WriteString(String text)
at System.Xml.XmlWellFormedWriter.WriteString(String text)
at System.Xml.XmlWriter.WriteElementString(String localName, String ns, String value)
at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItemContents(XmlWriter writer, SyndicationItem item, Uri feedBaseUri)
at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItem(XmlWriter writer, SyndicationItem item, Uri feedBaseUri)
at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteItems(XmlWriter writer, IEnumerable`1 items, Uri feedBaseUri)
at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteFeed(XmlWriter writer)
at System.ServiceModel.Syndication.Rss20FeedFormatter.WriteTo(XmlWriter writer)
at CodePasteMvc.Controllers.ApiControllerBase.GetFeed(Object instance) in C:\Projects2010\CodePaste\CodePasteMvc\Controllers\ApiControllerBase.cs:line 131

XML doesn't like extended ASCII Characters

It turns out the issue is that XML in general does not deal well with lower ASCII characters. According to the XML spec, control characters below 0x20 - except for tab (0x09), line feed (0x0A) and carriage return (0x0D) - are invalid. If you generate an XML document in .NET with an embedded &#x3; entity (as mine did to create the error above), you tend to get an XML document error when displaying it in a viewer. For example, here's what my feed output with the invalid character embedded looks like in Chrome, which displays RSS feeds as raw XML by default:

ChromeError

Other browsers show similar error messages. The nice thing about Chrome is that you can actually view source and jump down to see the line that causes the error which allowed me to track down the actual message that failed.

If you create an XML document that contains a 0x03 character the XML writer fails outright with the error:

'', hexadecimal value 0x03, is an invalid character.

The good news is that this behavior is overridable so XML output can at least be created by using the XmlSettings object when configuring the XmlWriter instance. In my RSS configuration code this looks something like this:

MemoryStream ms = new MemoryStream();
var settings = new XmlWriterSettings()
{
    CheckCharacters = false
};
XmlWriter writer = XmlWriter.Create(ms,settings);

and voila the feed now generates.

Now generally this is probably NOT a good idea, because as mentioned above these characters are illegal and if you view a raw XML document you'll get validation errors. Luckily most RSS feed readers don't care and happily accept and display the feed correctly, which is good because it got me over an embarrassing hump until I figured out a better solution.

How to handle extended Characters?

I was glad to get the feed fixed for the time being, but now I was still stuck with an interesting dilemma. CodePaste.net accepts user input for code snippets and those code snippets can contain just about anything. This means that ASP.NET's standard request filtering cannot be applied to this content. The code content displayed is encoded before display so for the HTML end the CHR(3) input is not really an issue.

While invisible characters are hardly useful in user input, it's not uncommon for odd characters to show up in code snippets. You know the old fat-fingering that happens in the middle of a coding session - invisible characters sometimes end up in code editors and are then pasted into the HTML textbox as a CodePaste.net snippet.

The question is how to filter this text? Looking back at the XML Charset Spec it looks like all characters below 0x20 (space) except for 0x09 (tab), 0x0A (LF), 0x0D (CR) are illegal. So applying the following filter with a RegEx should work to remove invalid characters:

string code = Regex.Replace(item.Code, @"[\u0000-\u0008\u000B\u000C\u000E-\u001F]", "");

Applying this RegEx to the code snippet (and title) eliminates the problems and the feed renders cleanly.
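For reference, the same character filter translated to JavaScript (just an illustrative equivalent - the actual server-side code above is C#). Note that inside the character class the ranges sit back to back: putting comma separators between them would make the class also match literal commas and strip them from the code.

```javascript
// Remove characters that are illegal in XML 1.0: everything below
// 0x20 except tab (0x09), line feed (0x0A) and carriage return (0x0D).
function stripInvalidXmlChars(text) {
    return text.replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F]/g, "");
}
```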

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in .NET  XML  


IE9 not rendering box-shadow Elements inside of Table Cells


Ran into an annoying problem today with IE 9. I've been slowly updating some older sites with CSS 3 tags, and for the most part IE9 does a reasonably decent job of working with the new CSS 3 features. Not all of them by a long shot, but at least some of the more useful ones like border-radius and box-shadow are supported.

Until today I was happy to see that IE supported box-shadow just fine, but I ran into a problem with some old markup that uses tables for its main layout sections. I found that inside of a table cell IE fails to render a box-shadow.

Below are images from Chrome (left) and IE 9 (right) of the same content:

ChromeAndIe

The download and purchase images are rendered with:

<a href="download.asp" style="display:block;margin: 10px;"><img src="../images/download.gif" class="boxshadow roundbox" /></a>

where the .boxshadow and .roundbox styles look like this:

.boxshadow 
{
  -moz-box-shadow: 3px 3px 5px #535353;
  -webkit-box-shadow: 3px 3px 5px #535353;       
  box-shadow: 3px 3px 5px #535353;
}
.roundbox
{  
  -moz-border-radius: 6px 6px 6px 6px;
  -webkit-border-radius: 6px;  
  border-radius: 6px 6px 6px 6px;
}

And the Problem is… collapsed Table Borders

Now normally these two styles work just fine in IE 9 when applied to elements. But the box-shadow doesn't work inside of this markup - because the parent container is a table cell.

<td class="sidebar" style="border-collapse: collapse">
   <a href="download.asp" style="display:block;margin: 10px;"><img src="../images/download.gif" class="boxshadow roundbox" /></a>

</td>

This HTML causes the image to not show a shadow. In actuality I'm not styling inline, but as part of my browser Reset I have the following in my master .css file:

table 
{
    border-collapse: collapse;
    border-spacing: 0;
}

which has the same effect as the inline style. border-collapse by default inherits from the parent and so the TD inherits from table and tr - so TD tags are effectively collapsed.

You can check out a test document that demonstrates this behavior here in this CodePaste.net snippet or run it here.

How to work around this Issue

To get IE9 to render the shadows inside of the TD tag correctly, I can just change the style explicitly NOT to use border-collapse:

<td class="sidebar" style="border-collapse: separate; border-width: 0;">

And now IE renders the shadows correctly. Note that I explicitly change the border-width just in case there is a border in use.

Do you really need border-collapse?

Should you bother with border-collapse? I think so! Collapsed borders render as a single flat line when a border-width and border-color are assigned, while separated borders render as a thin line with a bunch of weird white space around it - or worse, as an old-school 3D raised border, which is terribly ugly as well. So as a matter of course in any app my browser Reset includes the above code to make sure all tables with borders render the same flat borders.

As you probably know, IE has all sorts of rendering issues in tables and on backgrounds (opacity backgrounds or image backgrounds), most of which are caused by the way IE internally uses ActiveX filters to apply these effects. Apparently collapsed borders are yet one more item that causes rendering problems.

There you have it. Another crappy failure in IE we have to check for now, just one more reason to hate Internet Explorer. Luckily this one has a reasonably easy workaround. I hope this helps out somebody and saves them the hour I spent trying to figure out what caused this problem in the first place.

Resources

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in HTML  Internet Explorer  

Problems with opening CHM Help files from Network or Internet


As a publisher of a Help Creation tool called Html Help Builder, I've seen a lot of problems with help files that won't display their actual topic content and show an error message for topics instead. Here's the scenario: You go ahead and happily build your fancy, schmanzy Help File for your application and deploy it to your customer. Or alternately you've created a help file and you let your customers download it off the Internet directly or in a zip file.

The customer downloads the file, opens the zip file and copies the help file contained in the zip file to disk. She then opens the help file and finds the following unfortunate result:


 

 

The help file comes up with all topics in the tree on the left, but a Navigation to the WebPage was cancelled or Operation Aborted error shows in the Help Viewer's content window whenever you try to open a topic. The CHM file obviously opened since the topic list is there, but the Help Viewer refuses to display the content. Looks like a broken help file, right? But it's not - it's merely a Windows security 'feature' that tries to be overly helpful in protecting you.


The reason this happens is that files downloaded off the Internet - including ZIP files and CHM files contained in those zip files - are marked as coming from the Internet and thus as potentially malicious, so they do not get browsing rights on the local machine - they can't access local Web content, which is exactly what help topics are. If you look at the URL of a help topic you see something like this:

 
mk:@MSITStore:C:\wwapps\wwIPStuff\wwipstuff.chm::/indexpage.htm

which points at a special Microsoft URL moniker that in turn points at the CHM file and a relative path within that HTML Help file. Try pasting a URL like this into Internet Explorer and you'll see the help topic pop up in your browser (along with a warning most likely). Although the URL looks weird, this still equates to a call into the local computer zone - the same as if you had navigated to a local file in IE - which by default is not allowed.

Unfortunately, unlike Internet Explorer where you have the option of clicking a security toolbar, the CHM viewer simply refuses to load the page and you get an error page as shown above.

How to Fix This - Unblock the Help File

There's a workaround that lets you explicitly 'unblock' a CHM help file. To do this:

  • Open Windows Explorer
  • Find your CHM file
  • Right click and select Properties
  • Click the Unblock button on the General tab

Here's what the dialog looks like:

 

Clicking the Unblock button basically tells Windows that you approve this Help File and allows topics to be viewed.
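Under the covers, the "downloaded from the Internet" flag that the Unblock button clears is stored in an NTFS alternate data stream named Zone.Identifier attached to the file. If you need to unblock files programmatically - say from a support tool - removing that stream has the same effect. Here's a minimal sketch; the CHM path in Main is hypothetical, and I'm calling the Win32 DeleteFile API directly because the classic .NET Framework's File.Delete rejects the ':' in stream paths:

```csharp
using System;
using System.Runtime.InteropServices;

class UnblockFile
{
    // The mark-of-the-web lives in an NTFS alternate data stream named
    // "Zone.Identifier". Deleting that stream is effectively what the
    // Unblock button does.
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool DeleteFile(string name);

    public static bool Unblock(string fileName)
    {
        // returns false if there was no zone stream to remove
        return DeleteFile(fileName + ":Zone.Identifier");
    }

    static void Main()
    {
        // hypothetical path - point this at your downloaded CHM file
        bool removed = Unblock(@"C:\temp\wwipstuff.chm");
        Console.WriteLine(removed ? "Unblocked" : "No zone stream found");
    }
}
```

This only works on NTFS volumes, where alternate data streams exist in the first place.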

 

Is this insecure? Not unless you're running a really old version of Windows (XP pre-SP1). In recent versions of Windows Internet Explorer pops up various security dialogs or fires script errors when potentially malicious operations are accessed (like loading ActiveX controls), so it's relatively safe to run local content in the CHM viewer. Since most help files contain no script at all, or only pure JavaScript that accesses web resources, this works fine without issues.

How to avoid this Problem

As an application developer there's a simple solution to this problem: Always install your Help Files with an installer. The above security warnings pop up because Windows can't validate the source of the CHM file. However, if the help file is installed as part of an installation, the installation and all files associated with it - including the help file - are trusted. A fully installed Help File of an application works just fine because it is trusted by Windows.

Summary


It's annoying as all hell that this sort of obtrusive marking is necessary, but it's admittedly a necessary evil because of Microsoft's use of the insecure Internet Explorer engine that drives the CHM viewer's topic display. Because help files view local content and script is allowed to execute in CHM files, there's potential for malicious code hiding in CHM files, and the above precautions are supposed to avoid any issues.

© Rick Strahl, West Wind Technologies, 2005-2012

Unable to cast transparent proxy to type <type>


This is not the first time I've run into this wonderful error while creating new AppDomains in .NET and then trying to load types and access them across App Domains.

In almost all cases the problem behind this error comes from the two AppDomains involved loading different copies of the same type. Unless the types match exactly and come from exactly the same assembly, the typecast will fail. The most common scenario is that the types are loaded from different assemblies - as unlikely as that sounds.

An Example of Failure

To give some context, I'm working on some old code in Html Help Builder that creates a new AppDomain in order to parse assembly information for documentation purposes. I create a new AppDomain in order to load up an assembly, process it, and then immediately unload it along with the AppDomain. The AppDomain allows for unloading that otherwise wouldn't be possible, as well as isolating my code from the assembly that's being loaded.

The process to accomplish this is fairly established and I use it for lots of applications that use add-in like functionality - basically anywhere where code needs to be isolated and have the ability to be unloaded. My pattern for this is:

  • Create a new AppDomain
  • Load a Factory Class into the AppDomain
  • Use the Factory Class to load additional types from the remote domain

Here's the relevant code from my TypeParserFactory that creates a domain and then loads a specific type - TypeParser - that is accessed cross-AppDomain in the parent domain:

public class TypeParserFactory : System.MarshalByRefObject,IDisposable    
{
/// <summary>
/// TypeParser Factory method that loads the TypeParser
/// object into a new AppDomain so it can be unloaded.
/// Creates AppDomain and creates type.
/// </summary>
/// <returns></returns>
public TypeParser CreateTypeParser() 
{
    if (!CreateAppDomain(null))
        return null;

    // Create the instance inside of the new AppDomain
    // Note: remote domain uses local EXE's AppBasePath!!!
    TypeParser parser = null;

    try 
    {
       Assembly assembly = Assembly.GetExecutingAssembly();               
       string assemblyPath = Assembly.GetExecutingAssembly().Location;
       parser = (TypeParser) this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                                              typeof(TypeParser).FullName).Unwrap();                              
    }
    catch (Exception ex)
    {
        this.ErrorMessage = ex.GetBaseException().Message;
        return null;
    }

    return parser;
}

private bool CreateAppDomain(string lcAppDomain) 
{
    if (lcAppDomain == null)
        lcAppDomain = "wwReflection" + Guid.NewGuid().ToString().GetHashCode().ToString("x");

    AppDomainSetup setup = new AppDomainSetup();

    // *** Point at current directory
    setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory;
    //setup.PrivateBinPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin");

    this.LocalAppDomain = AppDomain.CreateDomain(lcAppDomain,null,setup);

    // Need a custom resolver so we can load assembly from non current path
    AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve);
    
    return true;
}
   …
}

Note that the classes must be either [Serializable] (by value) or inherit from MarshalByRefObject in order to be accessible remotely. Here I need to call methods on the remote object so all classes are MarshalByRefObject.
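The distinction matters because the two options marshal very differently: a [Serializable] class crosses the AppDomain boundary by value, so the caller receives a copy, while a MarshalByRefObject stays in the remote domain and the caller talks to it through a transparent proxy. A simplified sketch - these are stand-ins for the article's classes, not the actual implementation:

```csharp
using System;

// Copied across the AppDomain boundary - the caller gets a clone,
// and the remote AppDomain can be unloaded without invalidating it.
[Serializable]
public class ParseResult
{
    public string TypeName;
}

// Stays in the remote AppDomain - the caller receives a transparent
// proxy, and every method call travels across the domain boundary.
public class TypeParser : MarshalByRefObject
{
    public ParseResult Parse(string typeName)
    {
        return new ParseResult { TypeName = typeName };
    }
}
```

Because the proxy is typed, the cast on the calling side only succeeds if both domains agree on exactly which assembly defines TypeParser - which is precisely what goes wrong below.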

The specific problem is the code that loads up a new type: it points at an assembly that is visible in both the current domain and the remote domain and then instantiates a type from it. This is the code in question:

Assembly assembly = Assembly.GetExecutingAssembly();               
string assemblyPath = Assembly.GetExecutingAssembly().Location;
parser = (TypeParser) this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                                       typeof(TypeParser).FullName).Unwrap();  

The last line of code is what blows up with the Unable to cast transparent proxy to type <type> error. Without the cast the code actually returns a TransparentProxy instance, but the cast is what blows up. In other words I AM in fact getting a TypeParser instance back but it can't be cast to the TypeParser type that is loaded in the current AppDomain.

Finding the Problem

To see what's going on I tried using the .NET 4.0 dynamic type on the result and lo and behold it worked with dynamic - the value returned is actually a TypeParser instance:

Assembly assembly = Assembly.GetExecutingAssembly();               
string assemblyPath = Assembly.GetExecutingAssembly().Location;
object objparser = this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                                      typeof(TypeParser).FullName).Unwrap();


// dynamic works
dynamic dynParser = objparser;
string info = dynParser.GetVersionInfo(); // method call works

// casting fails
parser = (TypeParser)objparser; 

So clearly a TypeParser type is coming back, but nevertheless it's not the right one. Hmmm… mysterious.
Another couple of tries reveal the problem however:

// works
dynamic dynParser = objparser;
string info = dynParser.GetVersionInfo(); // method call works

// c:\wwapps\wwhelp\wwReflection20.dll   (Current Execution Folder)
string info3 = typeof(TypeParser).Assembly.CodeBase;

// c:\program files\vfp9\wwReflection20.dll   (my COM client EXE's folder)
string info4 = dynParser.GetType().Assembly.CodeBase;

// fails
parser = (TypeParser)objparser; 

As you can see the second value is coming from a totally different assembly. Note that this is even though I EXPLICITLY SPECIFIED an assembly path to load the assembly from! Instead .NET decided to load the assembly from the original ApplicationBase folder. Ouch!

How I actually tracked this down was a little more tedious: I added a method like this to both the factory and the instance types and then compared notes:

public string GetVersionInfo()
{
    return ".NET Version: " + Environment.Version.ToString() + "\r\n" +
    "wwReflection Assembly: " + typeof(TypeParserFactory).Assembly.CodeBase.Replace("file:///", "").Replace("/", "\\") + "\r\n" +
    "Assembly Cur Dir: " + Directory.GetCurrentDirectory() + "\r\n" +
    "ApplicationBase: " + AppDomain.CurrentDomain.SetupInformation.ApplicationBase + "\r\n" +
    "App Domain: " + AppDomain.CurrentDomain.FriendlyName + "\r\n";
}

For the factory I got:

.NET Version: 4.0.30319.239
wwReflection Assembly: c:\wwapps\wwhelp\bin\wwreflection20.dll
Assembly Cur Dir: c:\wwapps\wwhelp
ApplicationBase: C:\Programs\vfp9\
App Domain: wwReflection534cfa1f

For the instance type I got:

.NET Version: 4.0.30319.239
wwReflection Assembly: C:\Programs\vfp9\wwreflection20.dll
Assembly Cur Dir: c:\wwapps\wwhelp
ApplicationBase: C:\Programs\vfp9\
App Domain: wwDotNetBridge_56006605

which clearly shows the problem. You can see that the two instances live in different AppDomains and that each loads the assembly from a different location.

Probably a better solution yet (for ANY kind of assembly loading problem) is to use the .NET Fusion Log Viewer to trace assembly loads. The Fusion viewer will show a load trace for each assembly loaded and where it's looking to find it. Here's what the viewer looks like:

FusionLogViewer

The last trace above that I found for the second wwReflection20 load (the one that is wonky) looks like this:

*** Assembly Binder Log Entry  (1/13/2012 @ 3:06:49 AM) ***

The operation was successful.
Bind result: hr = 0x0. The operation completed successfully.

Assembly manager loaded from:  C:\Windows\Microsoft.NET\Framework\V4.0.30319\clr.dll
Running under executable  c:\programs\vfp9\vfp9.exe
--- A detailed error log follows. 

=== Pre-bind state information ===
LOG: User = Ras\ricks
LOG: DisplayName = wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
 (Fully-specified)
LOG: Appbase = file:///C:/Programs/vfp9/
LOG: Initial PrivatePath = NULL
LOG: Dynamic Base = NULL
LOG: Cache Base = NULL
LOG: AppName = vfp9.exe
Calling assembly : (Unknown).
===
LOG: This bind starts in default load context.
LOG: Using application configuration file: C:\Programs\vfp9\vfp9.exe.Config
LOG: Using host configuration file: 
LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\V4.0.30319\config\machine.config.
LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind).
LOG: Attempting download of new URL file:///C:/Programs/vfp9/wwReflection20.DLL.
LOG: Assembly download was successful. Attempting setup of file: C:\Programs\vfp9\wwReflection20.dll
LOG: Entering run-from-source setup phase.
LOG: Assembly Name is: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
LOG: Binding succeeds. Returns assembly from C:\Programs\vfp9\wwReflection20.dll.
LOG: Assembly is loaded in default load context.
WRN: The same assembly was loaded into multiple contexts of an application domain:
WRN: Context: Default | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
WRN: Context: LoadFrom | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
WRN: This might lead to runtime failures.
WRN: It is recommended to inspect your application on whether this is intentional or not.
WRN: See whitepaper http://go.microsoft.com/fwlink/?LinkId=109270 for more information and common solutions to this issue.

Notice that the fusion log clearly shows that the .NET loader makes no attempt to even load the assembly from the path I explicitly specified.
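If you need the remote domain to honor a specific folder, one option is to hook its AssemblyResolve event from code running inside that domain and load explicitly from a known path. This is only a sketch, under the assumption that the resolver type itself is already resolvable in the remote domain (for example because it lives in an assembly under the ApplicationBase):

```csharp
using System;
using System.IO;
using System.Reflection;

public class RemoteResolver : MarshalByRefObject
{
    // Call this on a proxy to an instance created in the remote domain,
    // so the handler is attached to *that* domain's CurrentDomain.
    public void Register(string privatePath)
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            // args.Name is a full display name; extract the simple name
            string file = new AssemblyName(args.Name).Name + ".dll";
            string path = Path.Combine(privatePath, file);

            // Fall back to the folder we actually want to load from
            return File.Exists(path) ? Assembly.LoadFrom(path) : null;
        };
    }
}
```

Note that the handler only fires when the default binding fails - it won't prevent the loader from finding a stray copy under the ApplicationBase first, which is exactly the trap described above.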

Remember your Assembly Locations

As mentioned earlier all failures I've seen like this ultimately resulted from different versions of the same type being available in the two AppDomains. At first sight that seems ridiculous - how could the types be different and why would you have multiple assemblies - but there are actually a number of scenarios where it's quite possible to have multiple copies of the same assembly floating around in multiple places.
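That two same-named types really are distinct can be demonstrated without any AppDomains at all: loading a second copy of the same assembly through the LoadFrom context produces a second, distinct Type. A minimal sketch:

```csharp
using System;
using System.IO;
using System.Reflection;

class TypeIdentityDemo
{
    static void Main()
    {
        // Load the currently executing assembly a second time through
        // the LoadFrom context by copying it to another folder first.
        string original = Assembly.GetExecutingAssembly().Location;
        string copy = Path.Combine(Path.GetTempPath(),
                                   Path.GetFileName(original));
        File.Copy(original, copy, overwrite: true);

        Assembly second = Assembly.LoadFrom(copy);

        Type t1 = typeof(TypeIdentityDemo);
        Type t2 = second.GetType("TypeIdentityDemo");

        // Same name, same code - but two distinct runtime types,
        // so casting an instance of one to the other would fail.
        Console.WriteLine(t1.FullName == t2.FullName); // True
        Console.WriteLine(t1 == t2);                   // False
    }
}
```

To the CLR, type identity includes the assembly (and load context) the type came from - not just its name.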

If you're hosting different environments (like hosting the Razor Engine, or ASP.NET Runtime for example) it's common to create a private BIN folder and it's important to make sure that there's no overlap of assemblies.

In my case with Html Help Builder the problem started because I'm using COM interop to access the .NET assembly through the code above. COM Interop has very specific requirements on where assemblies can be found, and because I was mucking around with the loader code today, I ended up moving assemblies to a new location for explicit loading. The explicit load works in the main AppDomain, but failed in the remote domain as I showed. The solution here was simple enough: delete the extraneous assembly that was left around by accident.

Not a common problem, but one that when it bites is pretty nasty to figure out because it seems so unlikely that types wouldn't match. I know I've run into this a few times and writing this down hopefully will make me remember in the future rather than poking around again for an hour trying to debug the issue as I did today. Hopefully it'll save some of you some time as well in the future.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in .NET  COM  

Dynamic Types and DynamicObject References in C#


I've been working a bit with C# custom dynamic types for several customers recently and I've seen some confusion in understanding how dynamic types are referenced. This discussion specifically centers around types that implement IDynamicObject or subclass from DynamicObject, as opposed to arbitrary type casts of standard .NET types. IDynamicObject types are treated specially when they are cast to the dynamic type.

Assume for a second that I've created my own implementation of a custom dynamic type called DynamicFoo, which is about as simple a dynamic class as I can think of:

public class DynamicFoo : DynamicObject
{
    Dictionary<string, object> properties = new Dictionary<string, object>();

    public string Bar { get; set; }
    public DateTime Entered { get; set; }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        result = null;
        if (!properties.ContainsKey(binder.Name))
            return false;

        result = properties[binder.Name];
        return true;
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        properties[binder.Name] = value;
        return true;
    }
}

This class has an internal dictionary member and I'm exposing this dictionary member through a dynamic by implementing DynamicObject. This implementation exposes the properties dictionary so the dictionary keys can be referenced like properties (foo.NewProperty = "Cool!"). I override TryGetMember() and TrySetMember() which are fired at runtime every time you access a 'property' on a dynamic instance of this DynamicFoo type.
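DynamicObject exposes a similar hook for method calls: TryInvokeMember fires when a method that doesn't exist on the static type is invoked. Here's a minimal sketch extending the same dictionary idea so that stored delegates become callable 'methods' - this is my own illustration, not part of the article's class:

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

public class DynamicFooWithMethods : DynamicObject
{
    Dictionary<string, object> properties = new Dictionary<string, object>();

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        properties[binder.Name] = value;
        return true;
    }

    // Fires for calls like foo.Shout("hi") when no such method exists
    // on the static type. If the 'method' name holds a delegate in the
    // dictionary, invoke it with the supplied arguments.
    public override bool TryInvokeMember(InvokeMemberBinder binder,
                                         object[] args, out object result)
    {
        object member;
        if (properties.TryGetValue(binder.Name, out member) &&
            member is Delegate)
        {
            result = ((Delegate)member).DynamicInvoke(args);
            return true;
        }
        result = null;
        return false;
    }
}

// usage:
// dynamic foo = new DynamicFooWithMethods();
// foo.Shout = (Func<string, string>)(s => s.ToUpper() + "!");
// Console.WriteLine(foo.Shout("hello"));
```
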

Strong Typing and Dynamic Casting

I now can instantiate and use DynamicFoo in a couple of different ways:

Strong Typing

DynamicFoo fooExplicit = new DynamicFoo();
var fooVar = new DynamicFoo();

These two commands are essentially identical and use strong typing. The compiler generates identical code for both of them. The var statement is merely a compiler directive to infer the type of fooVar at compile time, and so the type of fooVar is DynamicFoo, just like fooExplicit. This is very static - nothing dynamic about it - and it completely ignores the IDynamicObject implementation of my class above, as it's never used.

Using either of these I can access the native properties:

DynamicFoo fooExplicit = new DynamicFoo();
var fooVar = new DynamicFoo();

// static typing assignments
fooVar.Bar = "Barred!"; 
fooExplicit.Entered = DateTime.Now;

// echo back static values
Console.WriteLine(fooVar.Bar);
Console.WriteLine(fooExplicit.Entered);

but I have no access whatsoever to the properties dictionary. Basically this creates a strongly typed instance of the type with access only to the strongly typed interface. You get no dynamic behavior at all. The IDynamicObject features don't kick in until you cast the type to dynamic.

If I try to access a non-existing property on fooExplicit I get a compilation error that tells me that the property doesn't exist. Again, it's clearly and utterly non-dynamic.

Dynamic

dynamic fooDynamic = new DynamicFoo();

fooDynamic on the other hand is created as a dynamic type and it's a completely different beast. I can also create a dynamic by simply casting any type to dynamic like this:

DynamicFoo fooExplicit = new DynamicFoo();
dynamic fooDynamic = fooExplicit;

Note that dynamic typically doesn't require an explicit cast - the compiler performs the conversion automatically, so there's no need for an explicit as dynamic cast.

Dynamic functionality works at runtime and allows for the dynamic wrapper to look up and call members dynamically. A dynamic type will look for members to access or call in two places:

  • Using the strongly typed members of the object
  • Using the IDynamicObject Interface methods to access members

So rather than statically linking and calling a method or retrieving a property, the dynamic type looks up - at runtime - where the value actually comes from. It's essentially late binding, which lets the runtime determine what action to take when a member is accessed *if* that member does not exist on the object. Class members are checked first, before the IDynamicObject interface methods kick in.

All of the following works with the dynamic type:

dynamic fooDynamic = new DynamicFoo();
// dynamic typing assignments
fooDynamic.NewProperty = "Something new!";
fooDynamic.LastAccess = DateTime.Now;

// dynamic assigning static properties
fooDynamic.Bar = "dynamic barred";
fooDynamic.Entered = DateTime.Now;

// echo back dynamic values
Console.WriteLine(fooDynamic.NewProperty);
Console.WriteLine(fooDynamic.LastAccess);
Console.WriteLine(fooDynamic.Bar);
Console.WriteLine(fooDynamic.Entered);

The dynamic type can access the native class properties (Bar and Entered) and create and read new ones (NewProperty, LastAccess) all through a single type instance, which is pretty cool. As you can see it's pretty easy to create an extensible type this way that can add members at runtime.

The Alter Ego of IDynamicObject

The key point here is that all three statements - explicit, var and dynamic - declare a new DynamicFoo(), but the dynamic declaration results in completely different behavior than the first two simply because the type has been cast to dynamic.

Dynamic binding means that the type loses its typical strong-typing, compile-time features. You can see this easily in the Visual Studio code editor. As soon as you assign a value to a dynamic you lose IntelliSense and you see:

DynamicInDebugger

which means there's no Intellisense and no compiler type checking on any members you apply to this instance.

If you're new to the dynamic type it might seem really confusing that a single type can behave differently depending on how it is cast, but that's exactly what happens when you use a type that implements IDynamicObject. Declare the type as its strong type name and you only get to access the native instance members of the type. Declare or cast it to dynamic and you get dynamic behavior, which accesses native members plus uses the IDynamicObject implementation to handle any missing member definitions by running custom code.

You can easily cast objects back and forth between dynamic and the original type:

dynamic fooDynamic = new DynamicFoo();
fooDynamic.NewProperty = "New Property Value";             
DynamicFoo foo = fooDynamic;
foo.Bar = "Barred";

Here the code starts out with a dynamic cast and a dynamic assignment. The code then casts the value back to DynamicFoo. Notice that when casting from dynamic to DynamicFoo and back we typically do not have to specify the cast explicitly - the compiler can infer the type, so I don't need to write as dynamic or as DynamicFoo.

Moral of the Story

This easy interchange between dynamic and the underlying type is actually super useful, because it allows you to create extensible objects that can expose non-member data stores and expose them as an object interface. You can create an object that hosts a number of strongly typed properties and then cast the object to dynamic and add additional dynamic properties to the same type at runtime. You can easily switch back and forth between the strongly typed instance to access the well-known strongly typed properties and to dynamic for the dynamic properties added at runtime.

Keep in mind that dynamic object access has quite a bit of overhead and is definitely slower than strongly typed binding, so if you're accessing the strongly typed parts of your objects you definitely want to use a strongly typed reference. Reserve dynamic for the dynamic members to optimize your code.
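The overhead is easy to see for yourself with a quick Stopwatch comparison. This is just a rough sketch - exact numbers vary wildly by machine and runtime, so I'm deliberately not claiming any - but dynamic member access is typically several times slower than a direct property access:

```csharp
using System;
using System.Diagnostics;

class DynamicOverheadCheck
{
    public class Foo { public string Bar { get; set; } }

    static void Main()
    {
        var typed = new Foo { Bar = "Barred!" };
        dynamic dyn = typed;
        const int iterations = 1000000;

        // direct, compile-time bound property access
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) { var x = typed.Bar; }
        sw.Stop();
        Console.WriteLine("static:  " + sw.ElapsedMilliseconds + "ms");

        // late-bound access through the DLR call site
        sw.Restart();
        for (int i = 0; i < iterations; i++) { var x = dyn.Bar; }
        sw.Stop();
        Console.WriteLine("dynamic: " + sw.ElapsedMilliseconds + "ms");
    }
}
```

The DLR does cache call sites, so repeated dynamic access is cheaper than raw reflection - but it's still no match for a statically bound call.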

The real beauty of dynamic is that with very little effort you can build expandable objects or objects that expose different data stores to an object interface. I'll have more on this in my next post when I create a customized and extensible Expando object based on DynamicObject.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in CSharp  .NET  

Creating a dynamic, extensible C# Expando Object


I love dynamic functionality in a strongly typed language because it offers us the best of both worlds. In C# (or any of the main .NET languages) we now have the dynamic type that provides a host of dynamic features for the static C# language.

One place where I've found dynamic to be incredibly useful is in building extensible types or types that expose traditionally non-object data (like dictionaries) in easier to use and more readable syntax. I wrote about a couple of these for accessing old school ADO.NET DataRows and DataReaders more easily for example. These classes are dynamic wrappers that provide easier syntax and auto-type conversions which greatly simplifies code clutter and increases clarity in existing code.

ExpandoObject in .NET 4.0

Another great use case for dynamic objects is the ability to create extensible objects - objects that start out with a set of static members and then can add additional properties and even methods dynamically. The .NET 4.0 framework actually includes an ExpandoObject class which provides a very dynamic object that allows you to add properties and methods on the fly and then access them again.

For example with ExpandoObject you can do stuff like this:

dynamic expand = new ExpandoObject();

expand.Name = "Rick";
expand.HelloWorld = (Func<string, string>) ((string name) => 
{ 
    return "Hello " + name; 
});

Console.WriteLine(expand.Name);
Console.WriteLine(expand.HelloWorld("Dufus"));

Internally ExpandoObject uses a dictionary-like structure and interface to store properties and methods and then allows you to add and access properties and methods easily. As cool as ExpandoObject is, it has a few shortcomings too:

  • It's a sealed type so you can't use it as a base class
  • It only works off 'properties' in the internal Dictionary - you can't expose existing type data
  • It doesn't serialize to XML or with DataContractSerializer/DataContractJsonSerializer
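That internal dictionary nature is actually part of ExpandoObject's public surface: it implements IDictionary&lt;string, object&gt; (explicitly), which is also how you enumerate, test for, or remove members. A quick sketch:

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

class ExpandoDictionaryDemo
{
    static void Main()
    {
        dynamic expand = new ExpandoObject();
        expand.Name = "Rick";
        expand.Company = "West Wind";

        // ExpandoObject explicitly implements IDictionary<string, object>,
        // so the same members are reachable as key/value pairs.
        var dict = (IDictionary<string, object>)expand;
        Console.WriteLine(dict["Name"]);      // Rick

        dict.Remove("Company");               // members can be removed too

        foreach (var kvp in dict)
            Console.WriteLine(kvp.Key + ": " + kvp.Value);
    }
}
```
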

Expando - A truly extensible Object

ExpandoObject is nice if you just need a dynamic container for a dictionary like structure. However, if you want to build an extensible object that starts out with a set of strongly typed properties and then allows you to extend it, ExpandoObject does not work because it's a sealed class that can't be inherited.

I started thinking about this very scenario for one of the applications I'm building for a customer. In this system we are connecting to various different user stores. Each user store has the same basic requirements for username, password, name etc. But then each store also has a number of extended properties that are available to each application. In the real world scenario the data is loaded from the database into a data reader and the known properties are assigned from the known fields in the database. All unknown fields are then 'added' to the expando object dynamically.

In the past I've done this very thing with a separate property - Properties - just like I do for this class. But the property-plus-dictionary syntax is not ideal and is tedious to work with.

I started thinking about how to represent these extra property structures. One way certainly would be to add a Dictionary or an ExpandoObject to hold all those extra properties. But wouldn't it be nice if the application could actually extend an existing object - as you can with the Expando object - that looks something like this:

public class User : Westwind.Utilities.Dynamic.Expando
{
    public string Email { get; set; }
    public string Password { get; set; }
    public string Name { get; set; }
    public bool Active { get; set; }
    public DateTime? ExpiresOn { get; set; }
}

and then simply start extending the properties of this object dynamically? Using the Expando object I describe later you can now do the following:

[TestMethod]
public void UserExampleTest()
{            
    var user = new User();

    // Set strongly typed properties
    user.Email = "rick@west-wind.com";
    user.Password = "nonya123";
    user.Name = "Rickochet";
    user.Active = true;

    // Now add dynamic properties
    dynamic duser = user;
    duser.Entered = DateTime.Now;
    duser.Accesses = 1;

    // you can also add dynamic props via indexer
    user["NickName"] = "AntiSocialX";
    duser["WebSite"] = "http://www.west-wind.com/weblog";


    // Access strong type through dynamic ref
    Assert.AreEqual(user.Name,duser.Name);

    // Access strong type through indexer 
    Assert.AreEqual(user.Password,user["Password"]);
    

    // access dynamically added value through indexer
    Assert.AreEqual(duser.Entered,user["Entered"]);
    
    // access index added value through dynamic
    Assert.AreEqual(user["NickName"],duser.NickName);
    

    // loop through all properties dynamic AND strong type properties (true)
    foreach (var prop in user.GetProperties(true))
    { 
        object val = prop.Value;
        if (val == null)
            val = "null";

        Console.WriteLine(prop.Key + ": " + val.ToString());
    }
}

As you can see this code somewhat blurs the line between a static and dynamic type. You start with a strongly typed object that has a fixed set of properties. You can then cast the object to dynamic (as I discussed in my last post) and add additional properties to the object. You can also use an indexer to add dynamic properties to the object.

To access the strongly typed properties you can use either the strongly typed instance, the indexer or the dynamic cast of the object. Personally I think it's kinda cool to have an easy way to access strongly typed properties by string which can make some data scenarios much easier.

To access the 'dynamically added' properties you can use either the indexer on the strongly typed object, or property syntax on the dynamic cast.

Using the dynamic type allows all three modes to work on both strongly typed and dynamic properties.

Finally you can iterate over all properties, both dynamic and strongly typed if you chose. Lots of flexibility.

Note also that by default the Expando object works against the (this) instance, meaning it extends the current object. You can also pass a separate instance to the constructor, in which case that instance - rather than this - is iterated over to find properties.

Using this approach provides some really interesting functionality when you use the dynamic type. To use this we have to add an explicit constructor to the Expando subclass:

public class User : Westwind.Utilities.Dynamic.Expando
{
    public string Email { get; set; }
    public string Password { get; set; }
    public string Name { get; set; }
    public bool Active { get; set; }
    public DateTime? ExpiresOn { get; set; }

    public User() : base()
    { }

    // only required if you want to mix in a separate instance
    public User(object instance)
        : base(instance)
    {
    }
}

to allow the instance to be passed. When you do you can now do:

[TestMethod]
public void ExpandoMixinTest()
{
    // have Expando work on Addresses
    var user = new User( new Address() );

    // cast to dynamic
    dynamic duser = user;

    // Set strongly typed properties
    duser.Email = "rick@west-wind.com";
    user.Password = "nonya123";
    
    // Set properties on address object
    duser.Address = "32 Kaiea";
    //duser.Phone = "808-123-2131";

    // set dynamic properties
    duser.NonExistantProperty = "This works too";

    // shows default value Address.Phone value
    Console.WriteLine(duser.Phone);

}


Using the dynamic cast in this case allows you to access *three* different 'objects': The strong type properties, the dynamically added properties in the dictionary and the properties of the instance passed in! Effectively this gives you a way to simulate multiple inheritance (which is scary - so be very careful with this, but you can do it).

How Expando works

Behind the scenes Expando is a DynamicObject subclass as I discussed in my last post. By implementing a few of DynamicObject's methods you can basically create a type that can trap 'property missing' and 'method missing' operations. When you access a non-existent property a known method is fired that our code can intercept to provide a value. Internally Expando uses a custom dictionary implementation to hold the dynamic properties you add to your expandable object.
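As a minimal sketch of that trap mechanism - the MiniBag class here is a made-up name for illustration, not part of the library - overriding just TryGetMember() and TrySetMember() is enough to route unknown members into a dictionary:

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

// Minimal sketch of the 'property missing' trap Expando builds on:
// members the type doesn't declare are routed to a dictionary.
public class MiniBag : DynamicObject
{
    readonly Dictionary<string, object> _values = new Dictionary<string, object>();

    // fired when a property the type doesn't declare is read
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return _values.TryGetValue(binder.Name, out result);
    }

    // fired when a property the type doesn't declare is assigned
    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        _values[binder.Name] = value;
        return true;
    }
}

public class Program
{
    public static void Main()
    {
        dynamic bag = new MiniBag();
        bag.Company = "West Wind";       // routed to TrySetMember
        Console.WriteLine(bag.Company);  // routed to TryGetMember: West Wind
    }
}
```

Expando does the same thing, but adds a Reflection fallback against the passed-in instance before giving up on a member.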

Let's look at the code first. The code for the Expando type is straightforward and, given what it provides, relatively short. Here it is.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Dynamic;
using System.Reflection;

namespace Westwind.Utilities.Dynamic
{
    /// <summary>
    /// Class that provides extensible properties and methods. This
    /// dynamic object stores 'extra' properties in a dictionary or
    /// checks the actual properties of the instance.
    /// 
    /// This means you can subclass this expando and retrieve either
    /// native properties or properties from values in the dictionary.
    /// 
    /// This type allows you three ways to access its properties:
    /// 
    /// Directly: any explicitly declared properties are accessible
    /// Dynamic: dynamic cast allows access to dictionary and native properties/methods
    /// Dictionary: Any of the extended properties are accessible via IDictionary interface
    /// </summary>
    [Serializable]
    public class Expando : DynamicObject, IDynamicMetaObjectProvider
    {
        /// <summary>
        /// Instance of object passed in
        /// </summary>
        object Instance;

        /// <summary>
        /// Cached type of the instance
        /// </summary>
        Type InstanceType;

        PropertyInfo[] InstancePropertyInfo
        {
            get
            {
                if (_InstancePropertyInfo == null && Instance != null)                
                    _InstancePropertyInfo = Instance.GetType().GetProperties(BindingFlags.Instance | 
                                                          BindingFlags.Public | BindingFlags.DeclaredOnly);
                return _InstancePropertyInfo;                
            }
        }
        PropertyInfo[] _InstancePropertyInfo;


        /// <summary>
        /// String Dictionary that contains the extra dynamic values
        /// stored on this object/instance
        /// </summary>        
        /// <remarks>Using PropertyBag to support XML Serialization of the dictionary</remarks>
        public PropertyBag Properties = new PropertyBag();

        //public Dictionary<string,object> Properties = new Dictionary<string, object>();

        /// <summary>
        /// This constructor just works off the internal dictionary and any 
        /// public properties of this object.
        /// 
        /// Note you can subclass Expando.
        /// </summary>
        public Expando() 
        {
            Initialize(this);            
        }

        /// <summary>
        /// Allows passing in an existing instance variable to 'extend'.        
        /// </summary>
        /// <remarks>
        /// You can pass in null here if you don't want to 
        /// check native properties and only check the Dictionary!
        /// </remarks>
        /// <param name="instance"></param>
        public Expando(object instance)
        {
            Initialize(instance);
        }


        protected virtual void Initialize(object instance)
        {
            Instance = instance;
            if (instance != null)
                InstanceType = instance.GetType();           
        }


       /// <summary>
       /// Try to retrieve a member by name first from instance properties
       /// followed by the collection entries.
       /// </summary>
       /// <param name="binder"></param>
       /// <param name="result"></param>
       /// <returns></returns>
        public override bool TryGetMember(GetMemberBinder binder, out object result)
        {
            result = null;

            // first check the Properties collection for member
            if (Properties.Keys.Contains(binder.Name))
            {
                result = Properties[binder.Name];
                return true;
            }


            // Next check for Public properties via Reflection
            if (Instance != null)
            {
                try
                {
                    return GetProperty(Instance, binder.Name, out result);                    
                }
                catch { }
            }

            // failed to retrieve a property
            result = null;
            return false;
        }


        /// <summary>
        /// Property setter implementation: tries to set the value on the
        /// instance's native properties first, then falls back to the
        /// dictionary on this object
        /// </summary>
        /// <param name="binder"></param>
        /// <param name="value"></param>
        /// <returns></returns>
        public override bool TrySetMember(SetMemberBinder binder, object value)
        {

            // first check to see if there's a native property to set
            if (Instance != null)
            {
                try
                {
                    bool result = SetProperty(Instance, binder.Name, value);
                    if (result)
                        return true;
                }
                catch { }
            }
            
            // no match - set or add to dictionary
            Properties[binder.Name] = value;
            return true;
        }

        /// <summary>
        /// Dynamic invocation method. Currently allows only for Reflection based
        /// operation (no ability to add methods dynamically).
        /// </summary>
        /// <param name="binder"></param>
        /// <param name="args"></param>
        /// <param name="result"></param>
        /// <returns></returns>
        public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
        {
            if (Instance != null)
            {
                try
                {
                    // check instance passed in for methods to invoke
                    if (InvokeMethod(Instance, binder.Name, args, out result))
                        return true;                    
                }
                catch { }
            }

            result = null;
            return false;
        }
        

        /// <summary>
        /// Reflection Helper method to retrieve a property
        /// </summary>
        /// <param name="instance"></param>
        /// <param name="name"></param>
        /// <param name="result"></param>
        /// <returns></returns>
        protected bool GetProperty(object instance, string name, out object result)
        {
            if (instance == null)
                instance = this;

            var miArray = InstanceType.GetMember(name, BindingFlags.Public | 
                                      BindingFlags.GetProperty | BindingFlags.Instance);
            if (miArray != null && miArray.Length > 0)
            {
                var mi = miArray[0];
                if (mi.MemberType == MemberTypes.Property)
                {
                    result = ((PropertyInfo)mi).GetValue(instance,null);
                    return true;
                }
            }

            result = null;
            return false;                
        }

        /// <summary>
        /// Reflection helper method to set a property value
        /// </summary>
        /// <param name="instance"></param>
        /// <param name="name"></param>
        /// <param name="value"></param>
        /// <returns></returns>
        protected bool SetProperty(object instance, string name, object value)
        {
            if (instance == null)
                instance = this;

            var miArray = InstanceType.GetMember(name, BindingFlags.Public | 
                                                       BindingFlags.SetProperty | 
                                                       BindingFlags.Instance);
            if (miArray != null && miArray.Length > 0)
            {
                var mi = miArray[0];
                if (mi.MemberType == MemberTypes.Property)
                {
                    ((PropertyInfo)mi).SetValue(Instance, value, null);
                    return true;
                }
            }
            return false;                
        }

        /// <summary>
        /// Reflection helper method to invoke a method
        /// </summary>
        /// <param name="instance"></param>
        /// <param name="name"></param>
        /// <param name="args"></param>
        /// <param name="result"></param>
        /// <returns></returns>
        protected bool InvokeMethod(object instance, string name, object[] args, out object result)
        {
            if (instance == null)
                instance = this;

            // Look at the instanceType
            var miArray = InstanceType.GetMember(name,
                                    BindingFlags.InvokeMethod |
                                    BindingFlags.Public | BindingFlags.Instance);

            if (miArray != null && miArray.Length > 0)
            {
                var mi = miArray[0] as MethodInfo;
                result = mi.Invoke(Instance, args);
                return true;
            }

            result = null;
            return false;
        }



        /// <summary>
        /// Convenience method that provides a string Indexer 
        /// to the Properties collection AND the strongly typed
        /// properties of the object by name.
        /// 
        /// // dynamic
        /// exp["Address"] = "112 nowhere lane"; 
        /// // strong
        /// var name = exp["StronglyTypedProperty"] as string; 
        /// </summary>
        /// <remarks>
        /// The getter checks the Properties dictionary first
        /// then looks in PropertyInfo for properties.
        /// The setter checks the instance properties before
        /// checking the Properties dictionary.
        /// </remarks>
        /// <param name="key"></param>
        /// 
        /// <returns></returns>
        public object this[string key]
        {
            get
            {
                try
                {
                    // try to get from properties collection first
                    return Properties[key];
                }
                catch (KeyNotFoundException)
                {
                    // try reflection on instanceType
                    object result = null;
                    if (GetProperty(Instance, key, out result))
                        return result;

                    // nope doesn't exist
                    throw;
                }
            }
            set
            {
                if (Properties.ContainsKey(key))
                {
                    Properties[key] = value;
                    return;
                }

                // check instance for existence of the property first
                var miArray = InstanceType.GetMember(key, BindingFlags.Public | BindingFlags.GetProperty);
                if (miArray != null && miArray.Length > 0)
                    SetProperty(Instance, key, value);
                else
                    Properties[key] = value;
            }
        }


        /// <summary>
        /// Returns all properties of the object - the dynamic dictionary
        /// entries and, optionally, the instance's declared properties
        /// </summary>
        /// <param name="includeInstanceProperties"></param>
        /// <returns></returns>
        public IEnumerable<KeyValuePair<string,object>> GetProperties(bool includeInstanceProperties = false)
        {
            if (includeInstanceProperties && Instance != null)
            {
                foreach (var prop in this.InstancePropertyInfo)
                    yield return new KeyValuePair<string, object>(prop.Name, prop.GetValue(Instance, null));
            }

            foreach (var key in this.Properties.Keys)
               yield return new KeyValuePair<string, object>(key, this.Properties[key]);

        }
  

        /// <summary>
        /// Checks whether a property exists in the Property collection
        /// or as a property on the instance
        /// </summary>
        /// <param name="item"></param>
        /// <returns></returns>
        public bool Contains(KeyValuePair<string, object> item, bool includeInstanceProperties = false)
        {
            bool res = Properties.ContainsKey(item.Key);
            if (res)
                return true;

            if (includeInstanceProperties && Instance != null)
            {
                foreach (var prop in this.InstancePropertyInfo)
                {
                    if (prop.Name == item.Key)
                        return true;
                }
            }

            return false;
        }        

    }
}

Although the Expando class supports an indexer, it doesn't actually implement IDictionary or even IEnumerable. It only provides the indexer plus the Contains() and GetProperties() methods, which work against the Properties dictionary AND the internal instance.

The reason for not implementing IDictionary is that a) it doesn't add much value since you can access the Properties dictionary directly and b) I wanted to keep the interface to the class very lean so that it can serve as an entity type if desired. Implementing IDictionary (or even IEnumerable) causes LINQ extension methods to pop up on the type, which obscures the property interface and would only confuse the purpose of the type. IDictionary and IEnumerable are also problematic for XML and JSON Serialization - the XML Serializer doesn't serialize IDictionary<string,object>, nor does the DataContractSerializer. The JavaScriptSerializer does serialize, but it treats the entire object like a dictionary and doesn't serialize the strongly typed properties of the type, only the dictionary values, which is also not desirable. Hence the decision to stick with only implementing the indexer to support the user["CustomProperty"] functionality and to leave iteration to the publicly exposed Properties dictionary.

Note that the Dictionary used here is a custom PropertyBag class I created to allow serialization to work. One important aspect for my apps is that whatever custom properties get added have to be accessible to AJAX clients, since the particular app I'm working on is a Single Page Web app where most of the Web access happens through JSON AJAX calls. PropertyBag can serialize to XML and one-way serialize to JSON using the JavaScript serializer (though not the DataContract serializers).

The key components that make Expando work in this code are the Properties dictionary and the TryGetMember() and TrySetMember() methods. The Properties collection is public, so if you choose you can access the collection explicitly to get better performance or to manipulate the members in internal code (like loading up dynamic values from a database).

Notice that TryGetMember() and TrySetMember() both work against the dictionary AND the internal instance to retrieve and set properties. The same is true of the indexer: user["Name"] reads a native property of the object, just as user["Name"] = "RogaDugDog" writes one.
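Here's a standalone sketch of that indexer getter's lookup order - the Program and User types are made up for illustration - checking the dictionary first, then falling back to Reflection against the instance:

```csharp
using System;
using System.Collections.Generic;

public class Program
{
    public class User
    {
        public string Name { get; set; }
    }

    public static readonly Dictionary<string, object> Extras = new Dictionary<string, object>();
    public static readonly User Instance = new User { Name = "Rick" };

    // mirrors the getter logic of Expando's string indexer:
    // the dictionary wins, then Reflection against the instance
    public static object Get(string key)
    {
        object value;
        if (Extras.TryGetValue(key, out value))
            return value;

        var prop = Instance.GetType().GetProperty(key);
        if (prop != null)
            return prop.GetValue(Instance, null);

        // neither dynamic nor native - same as the indexer rethrowing
        throw new KeyNotFoundException(key);
    }

    public static void Main()
    {
        Extras["Color"] = "Green";
        Console.WriteLine(Get("Color")); // dictionary value: Green
        Console.WriteLine(Get("Name"));  // native property: Rick
    }
}
```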

What's your Use Case?

This is still an early prototype but I've plugged it into one of my customer's applications and so far it's working very well. The key features for me were the ability to easily extend the type with values coming from a database and exposing those values in a nice and easy to use manner. I'm also finding that using this type of object for ViewModels works very well to add custom properties to view models. I suspect there will be lots of uses for this - I've been using the extra dictionary approach to extensibility for years - using a dynamic type to make the syntax cleaner is just a bonus here.

What can you think of to use this for?

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in CSharp  .NET  Dynamic Types  