
A free standing ASP.NET Pager Web Control


Paging in ASP.NET has been relatively easy with stock controls supporting basic paging functionality. However, recently I built an MVC application and one of the things I ran into was that I had to build manual paging support into a few of my pages. Dealing with list controls and rendering markup is easy enough, but doing paging is a little more involved. I ended up with a small but flexible component that can be dropped anywhere. As it turns out, the task of creating a semi-generic Pager control for MVC was fairly easy.

Now I’m back to working in Web Forms and thought to myself that the way I created the pager in MVC would also work in ASP.NET – in fact quite a bit easier, since the whole thing can be conveniently wrapped up into an easily reusable control. A standalone pager provides easier reuse across pages and a more consistent pager display regardless of what kind of control the pager is associated with.

Why a Pager Control?

At first blush it might sound silly to create a new pager control – after all Web Forms has pretty decent paging support, doesn’t it? Well, sort of. Yes the GridView control has automatic paging built in and the ListView control has the related DataPager control.

The built in ASP.NET paging has several issues though:

  • Postback and JavaScript requirements
    If you look at paging links in ASP.NET they are always postback links with javascript:__doPostback() calls that go back to the server. While that works fine and actually has some benefit – like the fact that paging preserves changes to the page and posts them back – it’s not very SEO friendly. Basically, if you use javascript based navigation no search engine will follow the paging links, which effectively cuts off list content after the first page. The DataPager control does support GET based links via the QueryStringParameter property, but the control is effectively tied to the ListView control (which is the only control that implements IPageableItemContainer).
  • DataSource Controls required for Efficient Data Paging Retrieval
    The only way to get paging to work efficiently – where only the few records displayed on the page are queried for and retrieved from the database – is to use a DataSource control, and only the Linq and Entity DataSource controls support this natively. While you can retrieve this data yourself manually, there’s no way to just assign the page number and render the pager based on this custom subset. Otherwise, default paging requires a full resultset for ASP.NET to filter the data and display only a subset, which can be very resource intensive and wasteful if you’re dealing with largish resultsets (although I’m a firm believer in returning actually usable sets :-}). If you use your own business layer that doesn’t fit an ObjectDataSource you’re SOL. That’s a real shame too, because with LINQ based querying it’s real easy to retrieve just the subset of data you want to display, but the native pager functionality doesn’t support setting properties to display just that subset AFAIK.
  • DataPager is not Free Standing
    The DataPager control is the closest thing to a decent Pager implementation that ASP.NET has, but alas it’s not a free standing component – it works off a related control and the only one that it effectively supports from the stock ASP.NET controls is the ListView control. This means you can’t use the same data pager formatting for a grid and a list view or vice versa and you’re always tied to the control.
  • Paging Events
    In order to handle paging you have to deal with paging events. The events fire at specific points in the page pipeline, and because of this you often have to handle data binding in a way that works around the paging events or else you end up double-binding your data sources. Yuk.
  • Styling
    The GridView pager is a royal pain to beat into submission for styled rendering. The DataPager control has many more options and supports template layout, and it renders somewhat cleaner, but it too is not exactly easy to get a decent display out of.
  • Not a Generic Solution
    The problem with the stock ASP.NET controls is also that they’re not generic. GridView and DataGrid use their own internal paging, ListView can use a DataPager, and if you want to create a data layout manually – well, you’re on your own. IOW, depending on what you use you likely end up with very different looking paging experiences.

So, I figured I’ve struggled with this once too many and finally sat down and built a Pager control.

The Pager Control

My goal was to create a totally free standing control that has no dependencies on other controls and certainly no requirements for using DataSource controls. The idea is that you should be able to use this pager control without any sort of data requirements at all – you should just be able to set properties and be able to display a pager.

The Pager control I ended up with has the following features:

  • Completely free standing Pager control – no control or data dependencies
  • Complete manual control – Pager can render without any data dependency
  • Easy to use: Only need to set PageSize, ActivePage and TotalItems
  • Supports optional filtering of IQueryable for efficient queries and Pager rendering
  • Supports optional full set filtering of IEnumerable<T> and DataTable
  • Page links are plain HTTP GET href Links
  • Control automatically picks up Page links on the URL and assigns them
    (automatic page detection – no page index changing events to hook up)
  • Full CSS Styling support

On the downside there’s no templating support for the control, so the layout of the pager is relatively fixed. All elements however are stylable, and there are options to control the text as well as layout options such as whether to display the first and last pages and the previous/next buttons and so on.

To give you an idea what the pager looks like, here are two differently styled examples (all via CSS):

[Figure: two differently styled pagers rendered purely via CSS]

The markup for these two pagers looks like this:

<ww:Pager runat="server" id="ItemPager" 
          PageSize="5" 
          PageLinkCssClass="gridpagerbutton" 
          SelectedPageCssClass="gridpagerbutton-selected"
          PagesTextCssClass="gridpagertext"
          CssClass="gridpager" 
          RenderContainerDiv="true"
          ContainerDivCssClass="gridpagercontainer"
          MaxPagesToDisplay="6" 
          PagesText="Item Pages:" 
          NextText="next"
          PreviousText="previous"
          />
<ww:Pager runat="server" id="ItemPager2" 
          PageSize="5" 
          RenderContainerDiv="true"
          MaxPagesToDisplay="6" />

The latter example uses default style settings, so there’s not much to set. The first example on the other hand explicitly assigns custom styles and overrides a few of the formatting options.

Styling

The styling is based on a number of CSS classes, of which the main pager, pagerbutton and pagerbutton-selected classes are the important ones. Other styles like pagerbutton-next/prev/first/last are based on the pagerbutton style.

The default styling shown for the red outlined pager looks like this:

.pagercontainer
{
    margin: 20px 0;
    background: whitesmoke;    
    padding: 5px;    
}
.pager
{
    float: right;
    font-size: 10pt;
    text-align: left;
}
.pagerbutton,.pagerbutton-selected,.pagertext
{
    display: block;        
    float: left;    
    text-align: center;
    border: solid 2px maroon;        
    min-width: 18px;      
    margin-left: 3px;    
    text-decoration: none;        
    padding: 4px;
}
.pagerbutton-selected
{
    font-size: 130%;
    font-weight: bold;        
    color: maroon;
    border-width: 0px;
    background: khaki;
}
.pagerbutton-first
{
    margin-right: 12px;        
}
.pagerbutton-last,.pagerbutton-prev
{
    margin-left: 12px;        
}
.pagertext
{
    border: none;
    margin-left: 30px;
    font-weight: bold;
}
.pagerbutton a
{
    text-decoration: none;
}
.pagerbutton:hover
{
    background-color: maroon;
    color: cornsilk;
}
.pagerbutton-prev
{
    background-image: url(images/prev.png);
    background-position: 2px center;
    background-repeat: no-repeat;
    width: 35px;
    padding-left: 20px;
}
.pagerbutton-next
{
    background-image: url(images/next.png);
    background-position: 40px center;
    background-repeat: no-repeat;
    width: 35px;
    padding-right: 20px;
    margin-right: 0px;
}

Yup, that’s a lot of styling settings, although not all of them are required. The key ones are pagerbutton, pager and pagerbutton-selected. The others (which are implicitly created by the control based on the pagerbutton style) are for custom markup of the ‘special’ buttons.

In my apps I tend to have two kinds of pagers: those associated with typical ‘grid’ displays that show purely tabular data, and those that have a looser, list-like layout. The two pagers shown above represent these two views, and the pager and gridpager styles in my standard style sheet reflect them.

Configuring the Pager with Code

Finally let’s look at what it takes to hook up the pager. As mentioned in the highlights, the Pager control is completely independent of other controls, so if you just want to display a pager on its own it’s as simple as dropping the control and assigning the PageSize, ActivePage and either TotalPages or TotalItems.

So for this markup:

<ww:Pager runat="server" id="ItemPagerManual" 
          PageSize="5" 
          MaxPagesToDisplay="6" 
/>

I can use code as simple as:

ItemPagerManual.PageSize = 3;
ItemPagerManual.ActivePage = 4;
ItemPagerManual.TotalItems = 20;

Note that ActivePage is not required – the control automatically picks up any Page=x query string value and assigns it, although you can override it as I did above. TotalItems can be any value that you retrieve from a result set or assign manually as I did above.
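For reference, the automatic detection presumably boils down to parsing the query string during control initialization – a minimal sketch, not the control’s actual code (QueryStringPageField is the control property that names the query variable):

// Hedged sketch of automatic page detection. Running in OnInit means a later,
// explicit ActivePage assignment in Page_Load still wins.
protected override void OnInit(EventArgs e)
{
    base.OnInit(e);

    int page;
    if (int.TryParse(Context.Request.QueryString[QueryStringPageField], out page) && page > 0)
        ActivePage = page;
}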

A more realistic scenario based on a LINQ to SQL IQueryable result is even easier. In this example, I have a UserControl that contains a ListView control that renders IQueryable data. I use a User Control here because there are different views the user can choose from with each view being a different user control. This incidentally also highlights one of the nice features of the pager: Because the pager is independent of the control I can put the pager on the host page instead of into each of the user controls. IOW, there’s only one Pager control, but there are potentially many user controls/listviews that hold the actual display data.

The following code demonstrates how to use the Pager with an IQueryable that loads only the records it displays:

protected void Page_Load(object sender, EventArgs e)
{
    Category = Request.Params["Category"] ?? string.Empty;

    IQueryable<wws_Item> ItemList = ItemRepository.GetItemsByCategory(Category);

    // Update the page and filter the list down
    ItemList = ItemPager.FilterIQueryable<wws_Item>(ItemList);

    // Render user control with a list view
    Control ulItemList = LoadControl("~/usercontrols/" + App.Configuration.ItemListType + ".ascx");
    ((IInventoryItemListControl)ulItemList).InventoryItemList = ItemList;

    // add to the placeholder on the page
    phItemList.Controls.Add(ulItemList);
}

The code uses a business object to retrieve items by category as an IQueryable, which means the result is only an expression tree that hasn’t executed any SQL yet and can be filtered further. I then pass this IQueryable to the FilterIQueryable() helper method of the control, which does two main things:

  • Filters the IQueryable to retrieve only the data displayed on the active page
  • Sets the TotalItems property and calculates TotalPages on the Pager

and that’s it! When the Pager renders it uses those values, plus the PageSize and ActivePage properties to render the Pager.

In addition to IQueryable there are also filter methods for IEnumerable<T> and DataTable, but these versions just filter the data by removing rows/items from the entire already retrieved data.

Output Generated and Paging Links

The output generated creates pager links as plain href links. Here’s what the output looks like:

<div id="ItemPager" class="pagercontainer">
    <div class="pager">
        <span class="pagertext">Pages: </span><a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=1" class="pagerbutton" />1</a>
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=2" class="pagerbutton" />2</a>
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton" />3</a>
        <span class="pagerbutton-selected">4</span>
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton" />5</a>
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=6" class="pagerbutton" />6</a>
        <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=20" class="pagerbutton pagerbutton-last" />20</a>&nbsp;<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton pagerbutton-prev" />Prev</a>&nbsp;<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton pagerbutton-next" />Next</a></div>
        <br clear="all" />
    </div>
</div>

The links point back to the current page and simply append a Page= query string value to the URL. When the page gets reloaded with the new page number, the pager automatically detects the page number and assigns the ActivePage property, which results in the appropriate page being displayed. The code shown in the previous section is all that’s needed to handle paging.

Note that HTTP GET based paging is different from the postback paging ASP.NET uses by default. Postback paging preserves modified page content when clicking on pager buttons, but this control will simply load a new page – no page preservation at this time.

The advantage of not using Postback paging is that the URLs generated are plain HTML links that a search engine can follow where __doPostback() links are not.

Pager with a Grid

The pager also works in combination with grid controls, so it’s easy to bypass the grid control’s own paging features if desired. In the following example I use a DataGrid control and bind it to a DataTable result, which is also filterable by the Pager control.

The very basic plain vanilla ASP.NET grid markup looks like this:

   <div style="width: 600px; margin: 0 auto;padding: 20px; ">
        <asp:DataGrid runat="server" AutoGenerateColumns="True" 
                      ID="gdItems" CssClass="blackborder" style="width: 600px;">
        <AlternatingItemStyle CssClass="gridalternate" /> 
        <HeaderStyle CssClass="gridheader" />
        </asp:DataGrid>
        
        <ww:Pager runat="server" ID="Pager" 
              CssClass="gridpager"
              ContainerDivCssClass="gridpagercontainer"
              PageLinkCssClass="gridpagerbutton"
              SelectedPageCssClass="gridpagerbutton-selected"
              PageSize="8" 
              RenderContainerDiv="true"
              MaxPagesToDisplay="6"  />    
    </div>

and looks like this when rendered:
[Figure: the DataGrid and pager rendered using a custom set of CSS styles]

The code behind for this page is also very simple:

protected void Page_Load(object sender, EventArgs e)
{
    string category = Request.Params["category"] ?? "";

    busItem itemRep = WebStoreFactory.GetItem();
    var items = itemRep.GetItemsByCategory(category)
                       .Select(itm => new {Sku = itm.Sku, Description = itm.Description});           

    // run query into a DataTable for demonstration
    DataTable dt = itemRep.Converter.ToDataTable(items,"TItems");

    // Remove all items not on the current page
    dt = Pager.FilterDataTable(dt,0);
    
    // bind and display
    gdItems.DataSource = dt;
    gdItems.DataBind();
}

A little contrived I suppose, since the grid could have been bound directly from the list of items, but this demonstrates that you can also bind against a DataTable if your business layer returns those.

Unfortunately there’s no way to filter a DataReader, as it’s a forward-only reader and the reader is required by the DataSource to perform the binding. However, you can still use a DataReader as long as your business logic filters the data prior to rendering and provides a total item count (most likely via a second query).
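For example, a business layer method might page at the database level and return the count as a second result set. A rough sketch only, with hypothetical table and column names and an assumed open SqlConnection (ROW_NUMBER() requires SQL Server 2005 or later; types from System.Data.SqlClient):

// Hypothetical sketch: page the data in SQL and return the total count
// as a second result set so Pager.TotalItems can be assigned manually.
int page = Pager.ActivePage;      // page to fetch
int pageSize = Pager.PageSize;

string sql =
    @"SELECT Sku, Description
      FROM (SELECT Sku, Description,
                   ROW_NUMBER() OVER (ORDER BY Sku) AS RowNum
            FROM wws_Items) AS Paged
      WHERE RowNum BETWEEN @First AND @Last;
      SELECT COUNT(*) FROM wws_Items;";

using (SqlCommand cmd = new SqlCommand(sql, connection))
{
    cmd.Parameters.AddWithValue("@First", (page - 1) * pageSize + 1);
    cmd.Parameters.AddWithValue("@Last", page * pageSize);

    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // bind the single page of records
        gdItems.DataSource = reader;
        gdItems.DataBind();

        // advance to the COUNT result and hand it to the pager
        reader.NextResult();
        reader.Read();
        Pager.TotalItems = reader.GetInt32(0);
    }
}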

Control Creation

The control itself is a pretty brute force ASP.NET control. Nothing clever about this other than some basic rendering logic and some simple calculations and update routines to determine which buttons need to be shown. You can take a look at the full code from the West Wind Web Toolkit’s Repository (note there are a few dependencies).

To give you an idea how the control works here is the Render() method:

/// <summary>
/// overridden to handle custom pager rendering for runtime and design time
/// </summary>
/// <param name="writer"></param>
protected override void Render(HtmlTextWriter writer)
{
    base.Render(writer);

    if (TotalPages == 0 && TotalItems > 0)                            
       TotalPages = CalculateTotalPagesFromTotalItems();  

    if (DesignMode)
        TotalPages = 10;

    // don't render pager if there's only one page
    if (TotalPages < 2)
        return;

    if (RenderContainerDiv)
    {
        if (!string.IsNullOrEmpty(ContainerDivCssClass))
            writer.AddAttribute("class", ContainerDivCssClass);
        writer.RenderBeginTag("div");
    }

    // main pager wrapper
    writer.WriteBeginTag("div");
    writer.AddAttribute("id", this.ClientID);
    if (!string.IsNullOrEmpty(CssClass))
        writer.WriteAttribute("class", this.CssClass);
    writer.Write(HtmlTextWriter.TagRightChar + "\r\n");


    // Pages Text
    writer.WriteBeginTag("span");
    if (!string.IsNullOrEmpty(PagesTextCssClass))
        writer.WriteAttribute("class", PagesTextCssClass);
    writer.Write(HtmlTextWriter.TagRightChar);
    writer.Write(this.PagesText);
    writer.WriteEndTag("span");

    // if the base url is empty use the current URL
    FixupBaseUrl();        

    // set _startPage and _endPage
    ConfigurePagesToRender();

    // write out first page link
    if (ShowFirstAndLastPageLinks && _startPage != 1)
    {
        writer.WriteBeginTag("a");
        string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (1).ToString());
        writer.WriteAttribute("href", pageUrl);
        if (!string.IsNullOrEmpty(PageLinkCssClass))
            writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-first");
        writer.Write(HtmlTextWriter.SelfClosingTagEnd);
        writer.Write("1");
        writer.WriteEndTag("a");
        writer.Write("&nbsp;");
    }

    // write out all the page links
    for (int i = _startPage; i < _endPage + 1; i++)
    {
        if (i == ActivePage)
        {
            writer.WriteBeginTag("span");
            if (!string.IsNullOrEmpty(SelectedPageCssClass))
                writer.WriteAttribute("class", SelectedPageCssClass);
            writer.Write(HtmlTextWriter.TagRightChar);
            writer.Write(i.ToString());
            writer.WriteEndTag("span");
        }
        else
        {
            writer.WriteBeginTag("a");
            string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, i.ToString()).TrimEnd('&');
            writer.WriteAttribute("href", pageUrl);
            if (!string.IsNullOrEmpty(PageLinkCssClass))
                writer.WriteAttribute("class", PageLinkCssClass);
            writer.Write(HtmlTextWriter.SelfClosingTagEnd);
            writer.Write(i.ToString());
            writer.WriteEndTag("a");
        }

        writer.Write("\r\n");
    }

    // write out last page link
    if (ShowFirstAndLastPageLinks && _endPage < TotalPages)
    {
        writer.WriteBeginTag("a");
        string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, TotalPages.ToString());
        writer.WriteAttribute("href", pageUrl);
        if (!string.IsNullOrEmpty(PageLinkCssClass))
            writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-last");
        writer.Write(HtmlTextWriter.SelfClosingTagEnd);
        writer.Write(TotalPages.ToString());
        writer.WriteEndTag("a");
    }


    // Previous link
    if (ShowPreviousNextLinks && !string.IsNullOrEmpty(PreviousText) && ActivePage > 1)
    {
        writer.Write("&nbsp;");
        writer.WriteBeginTag("a");
        string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage - 1).ToString());
        writer.WriteAttribute("href", pageUrl);
        if (!string.IsNullOrEmpty(PageLinkCssClass))
            writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-prev");
        writer.Write(HtmlTextWriter.SelfClosingTagEnd);
        writer.Write(PreviousText);
        writer.WriteEndTag("a");
    }

    // Next link
    if (ShowPreviousNextLinks && !string.IsNullOrEmpty(NextText) && ActivePage < TotalPages)
    {
        writer.Write("&nbsp;");
        writer.WriteBeginTag("a");
        string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage + 1).ToString());
        writer.WriteAttribute("href", pageUrl);
        if (!string.IsNullOrEmpty(PageLinkCssClass))
            writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-next");
        writer.Write(HtmlTextWriter.SelfClosingTagEnd);
        writer.Write(NextText);
        writer.WriteEndTag("a");
    }

    writer.WriteEndTag("div");

    if (RenderContainerDiv)
    {
        if (RenderContainerDivBreak)
writer.Write("<br clear=\"all\" />\r\n"); writer.WriteEndTag("div"); } }

As I said, it’s pretty much brute force rendering based on the control’s property settings, of which there are quite a few:

[Figure: the Pager control and its property sheet in the Visual Studio designer]

You can also see the pager in the designer above. Unfortunately the VS designer (both 2010 and 2008) fails to render the float: left CSS styles properly and starts wrapping after margins are applied to the special buttons. Not a big deal since VS does at least respect the spacing (the floated elements overlay). Then again I’m not using the designer anyway :-}.

Filtering Data

What makes the Pager easy to use is the filter methods built into the control. While this functionality is clearly not the most politically correct design choice as it violates separation of concerns, it’s very useful for typical pager operation. While I actually have filter methods that do something similar in my business layer, having them exposed on the control makes the control a lot more useful for typical databinding scenarios. Of course these methods are optional – if you have a business layer that can provide filtered page queries you can use that instead and assign the TotalItems property manually.

There are three filter methods available – for IQueryable, IEnumerable and DataTable – which tend to be the most common use cases in my apps, old and new. The IQueryable version is pretty simple as it can rely on .Skip() and .Take() with LINQ:

 /// <summary>
 /// Queries the database for the ActivePage applied manually
 /// or from the Request["page"] variable. This routine
 /// figures out and sets TotalPages, ActivePage and
 /// returns a filtered subset IQueryable that contains
 /// only the items from the ActivePage.
 /// </summary>
 /// <param name="query"></param>
 /// <param name="activePage">
 /// The page you want to display. Sets the ActivePage property when passed. 
 /// Pass 0 or smaller to use ActivePage setting.
 /// </param>
 /// <returns></returns>
 public IQueryable<T> FilterIQueryable<T>(IQueryable<T> query, int activePage)
       where T : class, new()
 {
     ActivePage = activePage < 1 ? ActivePage : activePage;
     if (ActivePage < 1)
         ActivePage = 1;

     TotalItems = query.Count(); 

     if (TotalItems <= PageSize)
     {
         ActivePage = 1;
         TotalPages = 1;
         return query;
     }

     int skip = ActivePage - 1;
     if (skip > 0)
         query = query.Skip(skip * PageSize);

     _TotalPages = CalculateTotalPagesFromTotalItems();

     return query.Take(PageSize);
 }
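
CalculateTotalPagesFromTotalItems() isn’t shown here, but the math is just a rounded-up division – a minimal sketch of what it presumably does:

private int CalculateTotalPagesFromTotalItems()
{
    if (PageSize < 1)
        return 1;

    // round up so a partial last page still counts as a page
    return (int)Math.Ceiling((decimal)TotalItems / PageSize);
}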


The IEnumerable<T> version simply converts the IEnumerable to an IQueryable and calls back into this method for filtering.
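Based on that description, the overload is presumably little more than this (signature assumed):

public IQueryable<T> FilterIEnumerable<T>(IEnumerable<T> items, int activePage)
    where T : class, new()
{
    // AsQueryable() wraps the in-memory set so the IQueryable filter applies
    return FilterIQueryable<T>(items.AsQueryable(), activePage);
}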

The DataTable version requires a little more work to manually parse and filter records (I didn’t want to add the Linq DataSetExtensions assembly just for this):

/// <summary>
/// Filters a data table for an ActivePage.
/// 
/// Note: Modifies the DataTable permanently by removing DataRows
/// </summary>
/// <param name="dt">Full result DataTable</param>
/// <param name="activePage">Page to display. 0 to use ActivePage property </param>
/// <returns></returns>
public DataTable FilterDataTable(DataTable dt, int activePage)
{
    ActivePage = activePage < 1 ? ActivePage : activePage;
    if (ActivePage < 1)
        ActivePage = 1;

    TotalItems = dt.Rows.Count;

    if (TotalItems <= PageSize)
    {
        ActivePage = 1;
        TotalPages = 1;
        return dt;
    }
    
    int skip = ActivePage - 1;            
    if (skip > 0)
    {
        for (int i = 0; i < skip * PageSize; i++ )
            dt.Rows.RemoveAt(0);
    }
    while(dt.Rows.Count > PageSize)
        dt.Rows.RemoveAt(PageSize);

    return dt;
}

Using the Pager Control

The pager as it stands is a first cut I built a couple of weeks ago and have since been tweaking a little as part of an internal project I’m working on. I’ve replaced a bunch of pagers on various older pages with this pager without any issues and now have what feels like a more consistent user interface, where paging looks and feels the same across different controls. As a bonus I’m only loading the data from the database that I need to display a single page. With the preset class tags applied, adding a pager is now as easy as dropping the control and adding the style sheet so the styling is consistent – no fuss, no muss. Schweet.

Hopefully some of you may find this as useful as I have, or at least as a baseline to build on top of…

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET  

SmtpClient and Locked File Attachments


Got a note a couple of days ago from a client using one of my generic routines that wraps SmtpClient. Apparently whenever a file has been attached to a message and emailed with SmtpClient the file remains locked after the message has been sent. Oddly this particular issue hasn’t cropped up before for me although these routines are in use in a number of applications I’ve built.

The wrapper I use was built mainly to backfit an old pre-.NET 2.0 email client I had built using sockets to avoid the CDO nightmares of the .NET 1.x mail client. The current class retains the same class interface but now internally uses SmtpClient; the flat property interface makes it less verbose to send off email messages.

File attachments in this interface are handled by providing a comma delimited list for files in an Attachments string property which is then collected along with the other flat property settings and eventually passed on to SmtpClient in the form of a MailMessage structure.

The gist of the code is something like this:

/// <summary>
/// Fully self contained mail sending method. Sends an email message by connecting 
/// and disconnecting from the email server.
/// </summary>
/// <returns>true or false</returns>
public bool SendMail()
{
    if (!this.Connect())
        return false;

    try
    {
        // Create and configure the message 
        MailMessage msg = this.GetMessage();

        smtp.Send(msg);

        this.OnSendComplete(this);

    }
    catch (Exception ex)
    {
        string msg = ex.Message;
        if (ex.InnerException != null)                
            msg = ex.InnerException.Message;
        
        this.SetError(msg);
        this.OnSendError(this);

        return false;
    }
    finally
    {
        // close connection and clear out headers
        // SmtpClient instance nulled out
        this.Close();
    }

    return true;
}
/// <summary>
/// Configures the message interface
/// </summary>
/// <param name="msg"></param>
protected virtual MailMessage GetMessage()
{
    MailMessage msg = new MailMessage();            

    msg.Body = this.Message;
    msg.Subject = this.Subject;
    msg.From = new MailAddress(this.SenderEmail, this.SenderName);

    if (!string.IsNullOrEmpty(this.ReplyTo))
        msg.ReplyTo = new MailAddress(this.ReplyTo);

    // Send all the different recipients
    this.AssignMailAddresses(msg.To, this.Recipient);
    this.AssignMailAddresses(msg.CC, this.CC);
    this.AssignMailAddresses(msg.Bcc, this.BCC);

    if (!string.IsNullOrEmpty(this.Attachments))
    {
        string[] files = this.Attachments.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries);
        foreach (string file in files)
        {
            msg.Attachments.Add(new Attachment(file));
        }
    }

    if (this.ContentType.StartsWith("text/html"))
        msg.IsBodyHtml = true;
    else
        msg.IsBodyHtml = false;

    msg.BodyEncoding = this.Encoding;
    … additional code  omitted

    return msg;
}

Basically this code collects all the property settings of the wrapper object and applies them to the SmtpClient and, in GetMessage(), to the individual MailMessage properties. Specifically, notice that the attachment list is converted from a comma-delimited string into individual filenames from which new attachments are created.

The code as written, however, causes the problem of file attachments not being released properly. Internally .NET opens stream handles and reads the files from disk to dump them into the email send stream. The attachments are always sent correctly, but the local files are not immediately closed.

As you probably guessed, the issue is simply that some resources are not automatically disposed when sending is complete, and sure enough the following code change fixes the problem:

// Create and configure the message 
using (MailMessage msg = this.GetMessage())
{
    smtp.Send(msg);

    if (this.SendComplete != null)
        this.OnSendComplete(this);
    // or use an explicit msg.Dispose() here
}

The Message object requires an explicit call to Dispose() (or a using() block as I have here) to force the attachment files to get closed.
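If you need to keep the MailMessage around after sending, you can alternatively release just the file handles by disposing the attachments themselves, since Attachment also implements IDisposable:

// Dispose each attachment to close its underlying file stream,
// then clear the collection so the message can be reused safely
foreach (Attachment att in msg.Attachments)
    att.Dispose();
msg.Attachments.Clear();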

I think this is rather odd behavior for this scenario, however. The code I use passes in filenames, and my expectation of an API that accepts file names is that it opens and streams the files and then closes them when done. Why keep the streams open and require an explicit .Dispose() by the calling code, which is bound to lead to unexpected behavior just as my customer ran into? Any API level code should clean up as much as possible, and that’s clearly not happening here. Apparently lots of other folks have run into this before, judging by a few Twitter comments on this topic.

Odd to me too is that SmtpClient doesn’t implement IDisposable – it’s only the MailMessage (and Attachments) that implement it and require it to clean up leftover resources like open file handles. This means you can’t even wrap a using() statement around the SmtpClient code to resolve this – instead you have to wrap it around the message object, which again is rather unexpected.

Well, chalk that one up to another small unexpected behavior that wasted half an hour of my time – hopefully this post will help someone avoid that same half hour of hunting and searching.


© Rick Strahl, West Wind Technologies, 2005-2010
Posted in .NET  

Making Sense of ASP.NET Paths


ASP.NET includes quite a plethora of properties to retrieve path information about the current request, control and application. There’s a ton of information available about paths on the Request object, some of it appearing to overlap and some of it buried several levels down, and it can be confusing to find just the right path that you are looking for.

To keep things straight I thought it a good idea to summarize the path options along with descriptions and example paths. I wrote a post about this a long time ago in 2004 and I find myself frequently going back to that page to quickly figure out which path I’m looking for when processing the current URL. Apparently a lot of people are doing the same, because the original post is to this date the second most visited on this blog, to the tune of nearly 500 hits per day. So, I decided to update and expand a bit on the original post with a little more information and clarification based on the original comments.

Request Object Paths Available

Here's a list of the path-related properties on the Request object (and the Page object). Assume a path like http://www.west-wind.com/webstore/admin/paths.aspx for the examples below, where webstore is the name of the virtual directory.

  • ApplicationPath
    Returns the web root-relative logical path to the virtual root of this app.
    /webstore/
  • PhysicalApplicationPath
    Returns the local file system path of the virtual root for this app.
    c:\inetpub\wwwroot\webstore
  • PhysicalPath
    Returns the local file system path to the current script or path.
    c:\inetpub\wwwroot\webstore\admin\paths.aspx
  • Path, FilePath, CurrentExecutionFilePath
    All of these return the full root-relative logical path to the script page including path and script name. CurrentExecutionFilePath returns the ‘current’ request path after a Transfer/Execute call, while FilePath always returns the original request’s path.
    /webstore/admin/paths.aspx
  • AppRelativeCurrentExecutionFilePath
    Returns an ASP.NET root-relative virtual path to the script or path for the current request. If in a Transfer/Execute call the transferred path is returned.
    ~/admin/paths.aspx
  • PathInfo
    Returns any extra path following the script name (the /ExtraPathInfo portion in the example below), or string.Empty if no extra path is available.
    /webstore/admin/paths.aspx/ExtraPathInfo
  • RawUrl
    Returns the full root-relative URL including query string and extra path as a string.
    /webstore/admin/paths.aspx?sku=wwhelp40
  • Url
    Returns a fully qualified URL including query string and extra path. Note this is a Uri instance rather than a string.
    http://www.west-wind.com/webstore/admin/paths.aspx?sku=wwhelp40
  • UrlReferrer
    The fully qualified URL of the page that sent the request. This is also a Uri instance, and it is null if the page was accessed directly by typing the URL into the address bar or if the client didn’t send a Referer HTTP header.
    http://www.west-wind.com/webstore/default.aspx?Info
  • Control.TemplateSourceDirectory
    Returns the logical path to the folder of the page, master or user control on which it is called. This is useful if you need to know the path to a page or control from within that control. For non-file controls this returns the Page path.
    /webstore/admin/

As you can see there’s a ton of information available there for each of the three common path formats:

  • Physical Path
    is an OS type path that points to a path or file on disk.
  • Logical Path
    is a Web path that is relative to the Web server’s root. It includes the virtual directory plus the application-relative path.
  • ~/ (Root-relative) Path
    is an ASP.NET specific path that uses ~/ to indicate the virtual root Web path. ASP.NET can convert virtual paths into either logical paths using Control.ResolveUrl(), or physical paths using Server.MapPath(). Root-relative paths are useful for specifying portable URLs that don’t rely on relative directory structures, and they are very useful from within control or component code.

You should be able to get any necessary format from ASP.NET from just about any path or script using these mechanisms.
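For example, converting between the three formats for the sample request above (result paths shown in the comments):

// virtual path to physical path
string physical = Server.MapPath("~/admin/paths.aspx");
// -> c:\inetpub\wwwroot\webstore\admin\paths.aspx

// virtual path to logical (root-relative) path
string logical = VirtualPathUtility.ToAbsolute("~/admin/paths.aspx");
// -> /webstore/admin/paths.aspx

// logical path back to an ASP.NET virtual path
string appRelative = VirtualPathUtility.ToAppRelative("/webstore/admin/paths.aspx");
// -> ~/admin/paths.aspx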

~/ Root Relative Paths and ResolveUrl() and ResolveClientUrl()

ASP.NET supports root-relative virtual path syntax in most of its URL properties in Web Forms. So you can easily specify a root relative path in a control rather than a location relative path:

<asp:Image runat="server" ID="imgHelp"  ImageUrl="~/images/help.gif" />

ASP.NET internally resolves this URL by using ResolveUrl("~/images/help.gif") to arrive at the root-relative URL of /webstore/images/help.gif which uses the Request.ApplicationPath as the basepath to replace the ~. By convention any custom Web controls also should use ResolveUrl() on URL properties to provide the same functionality.

In your own code you can use Page.ResolveUrl() or Control.ResolveUrl() to accomplish the same thing:

string imgPath = this.ResolveUrl("~/images/help.gif");
imgHelp.ImageUrl = imgPath;

Unfortunately ResolveUrl() is limited to WebForm pages, so if you’re in an HttpHandler or Module it’s not available.

ASP.NET MVC also has its own more generic version of ResolveUrl in Url.Content:

<script src="<%= Url.Content("~/scripts/new.js") %>" type="text/javascript"></script> 

which is part of the UrlHelper class. In ASP.NET MVC the above sort of syntax is actually even more crucial than in WebForms, because views don’t reference specific pages but rather are often path based, which can lead to variations in how a particular view is referenced.

In Module or Handler code Control.ResolveUrl() unfortunately is not available, which in retrospect seems like an odd design choice – URL resolution really should happen on a Request basis, not as part of the Page framework. Luckily you can also rely on the static VirtualPathUtility class:

string path = VirtualPathUtility.ToAbsolute("~/admin/paths.aspx");

VirtualPathUtility also has many other quite useful methods for dealing with paths and converting between the various kinds of paths supported. One thing to watch out for is that ToAbsolute() will throw an exception if a query string is provided, and it doesn’t work on fully qualified URLs. I wrote about this topic with a custom solution that works with fully qualified URLs and query strings here (check the comments for some interesting discussions too).

Similar to ResolveUrl() is ResolveClientUrl(), which creates a URL relative to the folder of the current page or user control so the browser can resolve it no matter where the control lives. It’s rare that this page-relative resolution is needed explicitly, but it can be useful in some scenarios.
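For example, from a control on a page in the /webstore/admin/ folder (result shown for illustration):

// resolves relative to the page's folder rather than the site root
string imgUrl = this.ResolveClientUrl("~/images/help.gif");
// -> ../images/help.gif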

Mapping Virtual Paths to Physical Paths with Server.MapPath()

If you need to map root-relative or current-folder-relative URLs to physical paths you can use HttpContext.Current.Server.MapPath(). Inside of a Page you can do the following:

string physicalPath = Server.MapPath("~/scripts/ww.jquery.js"));

MapPath is pretty flexible and it understands both ASP.NET style virtual paths as well as plain relative paths, so the following also works.

string physicalPath = Server.MapPath("scripts/silverlight.js");

as well as dot relative syntax:

string physicalPath = Server.MapPath("../scripts/jquery.js");

Once you have the physical path you can perform standard System.IO Path and File operations on the file. Remember that with physical paths and IO or copy operations you need to make sure the Web server’s user account (NETWORK SERVICE or ASPNET typically) has permissions to access the files and folders.

Note that Server.MapPath will not map beyond the virtual root of the application, for security reasons.

Server and Host Information

Between these settings you can get all the information you need to figure out where you are and to build a new URL if necessary. If you need to build a URL completely from scratch you can get access to information about the server you are accessing:

  • SERVER_NAME
    The name of the domain or the IP address.
    www.west-wind.com or 127.0.0.1
  • SERVER_PORT
    The port that the request runs under.
    80
  • SERVER_PORT_SECURE
    Determines whether https: was used.
    0 or 1
  • APPL_MD_PATH
    ADSI DirectoryServices path to the virtual root directory. Note that LM typically doesn’t work for ADSI access, so you should replace that with LOCALHOST or the machine’s NetBios name.
    /LM/W3SVC/1/ROOT/webstore
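
For example, these variables are enough to piece together a base URL for the current request – a small sketch (SERVER_PORT_SECURE comes back as "0" or "1"):

string secure = Request.ServerVariables["SERVER_PORT_SECURE"];
string scheme = secure == "1" ? "https" : "http";
string server = Request.ServerVariables["SERVER_NAME"];
string port = Request.ServerVariables["SERVER_PORT"];

// omit default ports so the URL stays clean
string hostAndPort = (port == "80" || port == "443") ? server : server + ":" + port;

string url = scheme + "://" + hostAndPort + Request.RawUrl;
// -> http://www.west-wind.com/webstore/admin/paths.aspx?sku=wwhelp40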

Request.Url and Uri Parsing

If you still need more control over the current request URL, or you need to create new URLs from an existing one, the current Request.Url Uri property offers a lot of control. The Uri class and UriBuilder make it easy to retrieve parts of a URL and create new URLs based on an existing URL. The UriBuilder class is the preferred way to create URLs – much preferable over creating URIs via string concatenation.

  • Scheme
    The URL scheme or protocol prefix.
    http or https
  • Port
    The port, if specifically specified.
  • DnsSafeHost
    The domain name or local host NetBios machine name.
    www.west-wind.com or rasnote
  • LocalPath
    The full path of the URL including script name and extra PathInfo.
    /webstore/admin/paths.aspx
  • Query
    The query string, if any.
    ?id=1

The Uri class itself is great for retrieving Uri parts, but most of its properties are read only. If you need to modify a URL you can use the UriBuilder class to load up an existing URL and modify it to create a new one.

Here are a few common operations I’ve needed to do to get specific URLs:

Convert the Request URL to an SSL/HTTPS link

For example, converting the current request URL to a secure URL can be done like this:

UriBuilder build = new UriBuilder(Request.Url);
build.Scheme = "https";
build.Port = -1;  // don't inject port

Uri newUri = build.Uri;
string newUrl = build.ToString();

Retrieve the fully qualified URL without a QueryString
AFAIK, there’s no native routine to retrieve the current request URL without the query string. It’s easy to do with UriBuilder however:

UriBuilder builder = new UriBuilder(Request.Url);
builder.Query = "";
string logicalPathWithoutQuery = builder.ToString();
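
As an aside, the Uri class does offer GetLeftPart(), which strips the query string in a single call if you don’t need to modify anything else:

// everything up to and including the path, no query string
string urlWithoutQuery = Request.Url.GetLeftPart(UriPartial.Path);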

What else?

I took a look through the old post’s comments and addressed as many of the questions and comments that came up there as I could. With a few small and silly exceptions this updated post handles most of them.

But I’m sure there are more things that could go in here. What else would be useful to put onto this post so it serves as a nice all-in-one place to go for path references? If you think of something leave a comment and I’ll try to update the post with it in the future.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET  

LINQ to SQL and missing Many to Many EntityRefs


Ran into an odd behavior today with a many to many mapping on one of my tables in LINQ to SQL. Many to many mappings aren’t transparent in LINQ to SQL – it maps the link table exactly the way the SQL schema has it. In other words, LINQ to SQL isn’t smart about many to many mappings and just treats the relationship like the 3 underlying tables that make it up. Iain Galloway has a nice blog entry about many to many relationships in LINQ to SQL.

I can live with that – it’s not really difficult to deal with this arrangement once mapped, especially when reading data back. Writing is a little more difficult as you do have to insert into two entities for new records, but nothing that can’t be handled in a small business object method with a few lines of code.
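For example, linking an item to a category just means inserting a row into the link entity – a rough sketch using the entity and member names from the model below (the data context variable is assumed):

public void AddItemToCategory(Guid itemId, Guid categoryId)
{
    // the link row carries just the two foreign keys
    ItemCategory link = new ItemCategory
    {
        ItemId = itemId,
        CategoryId = categoryId
    };

    context.ItemCategories.InsertOnSubmit(link);
    context.SubmitChanges();
}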

In a database I’ve been using recently to experiment with various OR/Ms, I found that for some reason LINQ to SQL was completely failing to generate the mappings for the link table. As it turns out there’s a good reason why it fails – can you spot it below? (read on :-})

Here is the original database layout:

[Figure: database schema showing the Items and Categories tables plus the Item_Categories link table]

There’s an items table, a category table and a link table that holds only the foreign keys to the Items and Category tables for a typical M->M relationship.

When these three tables are imported into the model they *look* correct – I do get the relationships added (after modifying the entity names to strip the prefix):

[Figure: LINQ to SQL model showing the Item, ItemCategory and Category entities with their associations]

The relationship looks perfectly fine, both in the designer as well as in the XML document:

  <Table Name="dbo.wws_Item_Categories" Member="ItemCategories">
    <Type Name="ItemCategory">
      <Column Name="ItemId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" />
      <Column Name="CategoryId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" />
      <Association Name="ItemCategory_Category" Member="Categories" ThisKey="CategoryId" OtherKey="Id" Type="Category" />
      <Association Name="Item_ItemCategory" Member="Item" ThisKey="ItemId" OtherKey="Id" Type="Item" IsForeignKey="true" />
    </Type>
  </Table>
  <Table Name="dbo.wws_Categories" Member="Categories">
    <Type Name="Category">
      <Column Name="Id" Type="System.Guid" DbType="UniqueIdentifier NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" />
      <Column Name="ParentId" Type="System.Guid" DbType="UniqueIdentifier" CanBeNull="true" />
      <Column Name="CategoryName" Type="System.String" DbType="NVarChar(150)" CanBeNull="true" />
      <Column Name="CategoryDescription" Type="System.String" DbType="NVarChar(MAX)" CanBeNull="true" />
      <Column Name="tstamp" AccessModifier="Internal" Type="System.Data.Linq.Binary" DbType="rowversion" CanBeNull="true" IsVersion="true" />
      <Association Name="ItemCategory_Category" Member="ItemCategory" ThisKey="Id" OtherKey="CategoryId" Type="ItemCategory" IsForeignKey="true" />
    </Type>
  </Table>

However, when looking at the generated code these navigation properties (also on Item) are completely missing:

[global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.wws_Item_Categories")]
[global::System.Runtime.Serialization.DataContractAttribute()]
public partial class ItemCategory : Westwind.BusinessFramework.EntityBase
{
    private System.Guid _ItemId;
    private System.Guid _CategoryId;
    
    public ItemCategory()
    {
    }
    
    [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_ItemId", DbType="uniqueidentifier NOT NULL")]
    [global::System.Runtime.Serialization.DataMemberAttribute(Order=1)]
    public System.Guid ItemId
    {
        get
        {
            return this._ItemId;
        }
        set
        {
            if ((this._ItemId != value))
            {
                this._ItemId = value;
            }
        }
    }
    
    [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_CategoryId", DbType="uniqueidentifier NOT NULL")]
    [global::System.Runtime.Serialization.DataMemberAttribute(Order=2)]
    public System.Guid CategoryId
    {
        get
        {
            return this._CategoryId;
        }
        set
        {
            if ((this._CategoryId != value))
            {
                this._CategoryId = value;
            }
        }
    }
}

Notice that the Item and Category association properties which should be EntityRef properties are completely missing. They’re there in the model, but the generated code – not so much.

So what’s the problem here?

The problem – it appears – is that LINQ to SQL requires primary keys on all entities it tracks. In order to support tracking – even of the link table entity – the link table requires a primary key. Real obvious, ain’t it? Especially since the designer happily lets you import the table and even shows the relationship and, implicitly, the related properties.

Adding an Id field as a PK to the database and then re-importing results in this model layout:

[Figure: model with an Id primary key added to the link table]

which properly generates the Item and Category properties into the link entity.

It’s ironic that LINQ to SQL *requires* the PK in the middle – the Entity Framework requires that a link table have *only* the two foreign key fields in order to recognize a many to many relation. EF actually handles the M->M relation directly, without the intermediate link entity, unlike LINQ to SQL.

[updated from comments – 12/24/2009]

Another approach is to make ItemId and CategoryId a compound primary key in the database, which shows up in LINQ to SQL like this:

[Figure: link table modeled with a compound primary key of ItemId and CategoryId]

This also works for creating the Category and Item fields in the ItemCategory entity. Ultimately this is probably the best approach, as it also guarantees uniqueness of the keys and so helps with database integrity.

It took me a while to figure out WTF was going on here – lulled by the designer into thinking that the properties should be there when they were not. It’s actually a well documented requirement of L2S that each entity in the model needs a PK, but of course that’s easy to miss when the model viewer shows the associations to you and even the underlying XML model contains them.

This is one of the issues with L2S of course – you have to play by its rules, and once you hit one of those rules there’s no way around them – you’re stuck with what it requires, which in this case meant changing the database.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ADO.NET  LINQ  

Rendering ASP.NET Script References into the Html Header


One thing that I’ve come to appreciate in developing ASP.NET controls that use JavaScript is the ability to have more control over script and script include placement than ASP.NET provides natively. Specifically, in ASP.NET you can use either the ClientScriptManager or ScriptManager to embed scripts and script references into pages via code.

This works reasonably well, but the script references that get generated are rendered into the HTML body and there’s very little control over script placement. If you have multiple controls, or several of the same control, that need to place the same scripts onto the page it’s not difficult to end up with scripts that render in the wrong order and stop working correctly. This is especially critical if you load script libraries with dependencies, either via resources or even when rendering references to CDN resources.

Natively ASP.NET provides a host of methods that help embedding scripts into the page via either Page.ClientScript or the ASP.NET ScriptManager control (both with slightly different syntax):

  • RegisterClientScriptBlock
    Renders a script block at the top of the HTML body and should be used for embedding callable functions/classes.
  • RegisterStartupScript
    Renders a script block just prior to the </form> tag and should be used for embedding code that should execute when the page is first loaded. Not recommended – use jQuery.ready() or equivalent load-time routines.
  • RegisterClientScriptInclude
    Embeds a reference to a script from a url into the page.
  • RegisterClientScriptResource
    Embeds a reference to a Script from a resource file generating a long resource file string

All 4 of these methods render their <script> tags into the HTML body. The script blocks give you a little bit of control by having a ‘top’ and ‘bottom’ of the document location, which gives you some flexibility over script placement and precedence. Script includes and resource URLs unfortunately do not even get that much control – references are simply rendered into the page in the order of declaration.
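For reference, the native calls for the list above look like this via Page.ClientScript (the type and resource names in the last call are made up for illustration):

// script block at the top of the HTML body
ClientScript.RegisterClientScriptBlock(typeof(Page), "helloFunc",
    "function hello() { alert('hello'); }", true);

// script block just before the </form> tag
ClientScript.RegisterStartupScript(typeof(Page), "callHello", "hello();", true);

// script include from a URL
ClientScript.RegisterClientScriptInclude("jquery",
    ResolveUrl("~/scripts/jquery.min.js"));

// script include from an embedded resource (hypothetical names)
ClientScript.RegisterClientScriptResource(typeof(MyControl), "MyAssembly.script.js");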

The ASP.NET ScriptManager control facilitates this task a little with the ability to specify scripts in code and to programmatically check which scripts have already been registered, but it doesn’t provide any more control over the script rendering process itself. Further, the ScriptManager is a bear to deal with generically, because generic code always has to check whether it is actually present.

Some time ago I posted a ClientScriptProxy class that helps with the latter process of sending script references either to ClientScript or to ScriptManager if it’s available. Since I last posted about this there have been a number of improvements in this API, one of which is the ability to control placement of scripts and script includes in the page – which I think is rather important and a missing feature in ASP.NET’s native functionality.

Handling ScriptRenderModes

One of the big enhancements that I’ve come to rely on is the ability of the various script rendering functions described above to support rendering in multiple locations:

/// <summary>
/// Determines how scripts are included into the page
/// </summary>
public enum ScriptRenderModes
{
    /// <summary>
    /// Inherits the setting from the control or from the ClientScript.DefaultScriptRenderMode
    /// </summary>
    Inherit,
    /// <summary>
    /// Renders the script include at the location of the control
    /// </summary>
    Inline,
    /// <summary>
    /// Renders the script include into the bottom of the header of the page
    /// </summary>
    Header,
    /// <summary>
    /// Renders the script include into the top of the header of the page
    /// </summary>
    HeaderTop,
    /// <summary>
    /// Uses ClientScript or ScriptManager to embed the script include to
    /// provide standard ASP.NET style rendering in the HTML body.
    /// </summary>
    Script,
    /// <summary>
    /// Renders script at the bottom of the page before the last Page.Controls
    /// literal control. Note this may result in unexpected behavior 
    /// if /body and /html are not the last thing in the markup page.
    /// </summary>
    BottomOfPage        
}

This enum is then applied to the various Register functions to allow more control over where scripts actually show up. Why is this useful? I often render scripts out of control resources, and these scripts often include things like a JavaScript library (jQuery) and a few plug-ins. The order in which these load is critical, so that jQuery.js always loads before any plug-in, for example.

Typically I end up with a general script layout like this:

  • Core Libraries- HeaderTop
  • Plug-ins: Header
  • ScriptBlocks: Header or Script depending on other dependencies

There’s also an option to render scripts and CSS at the very bottom of the page before the last Page control on the page which can be useful for speeding up page load when lots of scripts are loaded.

The API syntax of the ClientScriptProxy methods is closely compatible with ScriptManager’s using static methods and control references to gain access to the page and embedding scripts.

For example, to render some script into the current page in the header:

// Create script block in header
ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
                "hello_function", "function helloWorld() { alert('hello'); }", true,
                ScriptRenderModes.Header);

// Same again - shouldn't be rendered because it's the same id
ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
         "hello_function", "function helloWorld() { alert('hello'); }", true,
         ScriptRenderModes.Header);

// Create a second script block in header
ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
    "hello_function2", "function helloWorld2() { alert('hello2'); }", true,
    ScriptRenderModes.Header);


// This just calls ClientScript and renders into bottom of document
ClientScriptProxy.Current.RegisterStartupScript(this,typeof(ControlResources),
                "call_hello", "helloWorld();helloWorld2();", true);

which generates:

<html xmlns="http://www.w3.org/1999/xhtml" >
<head><title>
</title>
<script type="text/javascript">
function helloWorld() { alert('hello'); }
</script>

<script type="text/javascript">
function helloWorld2() { alert('hello2'); }
</script>
</head>
<body>
…    
<script type="text/javascript">
//<![CDATA[
helloWorld();helloWorld2();//]]>
</script>
</form>
</body>
</html>

Note that the scripts are generated into the header rather than the body, except for the last script block, which is the call to RegisterStartupScript. In general I wouldn’t recommend using RegisterStartupScript – ever. It’s a much better practice to use a script-based load event to handle ‘startup’ code that should fire when the page first loads. So instead of the code above I’d actually recommend doing:

ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
    "call_hello", "$().ready( function() { alert('hello2'); });", true,
    ScriptRenderModes.Header);

assuming you’re using jQuery on the page.

For script includes from a URL, the following demonstrates how to embed scripts into the header. This example injects jQuery and jQuery UI script references from the Google CDN, then checks each with a script block to ensure that it loaded and, if not, loads it from a local server location:

// load jquery from CDN
ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources),
                "http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js",
                ScriptRenderModes.HeaderTop);

// check if jquery loaded - if it didn't we're not online
string scriptCheck =
    @"if (typeof jQuery != 'object')  
        document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));";

string jQueryUrl = ClientScriptProxy.Current.GetWebResourceUrl(this, typeof(ControlResources),
                ControlResources.JQUERY_SCRIPT_RESOURCE);            
ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
                "jquery_register", string.Format(scriptCheck,jQueryUrl),true,
                ScriptRenderModes.HeaderTop);                            

// Load jquery-ui from cdn
ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources),
                "http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js",
                ScriptRenderModes.Header);

// check if we need to load from local
string jQueryUiUrl = ResolveUrl("~/scripts/jquery-ui-custom.min.js"); 
ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
    "jqueryui_register", string.Format(scriptCheck, jQueryUiUrl), true,
    ScriptRenderModes.Header); 


// Create script block in header
ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources),
                "hello_function", "$().ready( function() { alert('hello'); });", true,
                ScriptRenderModes.Header);

which in turn generates this HTML:

<html xmlns="http://www.w3.org/1999/xhtml" >
<head>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript"></script>
<script type="text/javascript">
    if (typeof jQuery != 'object')
        document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=DIykvYhJ_oXCr-TA_dr35i4AayJoV1mgnQAQGPaZsoPM2LCdvoD3cIsRRitHKlKJfV5K_jQvylK7tsqO3lQIFw2&t=633979863959332352' type='text/javascript'%3E%3C/script%3E"));
</script>
<title>
</title>
<script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js" type="text/javascript"></script>
<script type="text/javascript">
    if (typeof jQuery != 'object')
        document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/scripts/jquery-ui-custom.min.js' type='text/javascript'%3E%3C/script%3E"));
</script>

<script type="text/javascript">
    $().ready(function() { alert('hello'); });
</script>
</head>
<body>
…
</body>
</html>

As you can see there’s a bit more control in this process: you can inject both script includes and script blocks at the top or bottom of the header, plus, if necessary, at the usual body locations. This is especially useful if you create custom server controls that interoperate with script and have certain dependencies. The above is also a good example of a switchable load routine – it pulls from the Google CDN by default, but a configuration switch could automatically fall back to the local development copies while you’re doing development, for example.

How does it work?

As mentioned, the ClientScriptProxy object mimics many of the ScriptManager script-related methods and so provides close API compatibility with it. It works against ScriptManager if one is available on the page, or against Page.ClientScript if not, so it provides a single unified front end for script access. In addition there are many overloads of the original ScriptManager methods – like the ones above – that provide extra functionality.

The implementation of script header rendering is straightforward – as long as a server header (i.e., a <head> tag with runat="server") is available. Otherwise these routines fall back to the default document-level insertions of ScriptManager/ClientScript. Given a server header, it’s relatively easy to generate the script tags and code and append them to the header at either the top or the bottom. I suspect Microsoft didn’t provide header rendering functionality precisely because a runat="server" header is not required by ASP.NET, so the behavior would be slightly unpredictable. That’s not really a problem for a custom implementation, however.

Here’s the RegisterClientScriptBlock implementation that takes a ScriptRenderModes parameter to allow header rendering:

/// <summary>
/// Renders client script block with the option of rendering the script block in
/// the Html header
/// 
/// For this to work Header must be defined as runat="server"
/// </summary>
/// <param name="control">any control that instance typically page</param>
/// <param name="type">Type that identifies this rendering</param>
/// <param name="key">unique script block id</param>
/// <param name="script">The script code to render</param>
/// <param name="addScriptTags">Ignored for header rendering used for all other insertions</param>
/// <param name="renderMode">Where the block is rendered</param>
public void RegisterClientScriptBlock(Control control, Type type, string key, string script, bool addScriptTags, ScriptRenderModes renderMode)
{
    if (renderMode == ScriptRenderModes.Inherit)
        renderMode = DefaultScriptRenderMode;

    if (control.Page.Header == null || 
        renderMode != ScriptRenderModes.HeaderTop && 
        renderMode != ScriptRenderModes.Header &&
        renderMode != ScriptRenderModes.BottomOfPage)
    {
        RegisterClientScriptBlock(control, type, key, script, addScriptTags);
        return;
    }

    // No dupes - ref script include only once
    const string identifier = "scriptblock_";
    if (HttpContext.Current.Items.Contains(identifier + key))
        return;
    HttpContext.Current.Items.Add(identifier + key, string.Empty);

    StringBuilder sb = new StringBuilder();

    // Embed in header
    sb.AppendLine("\r\n<script type=\"text/javascript\">");
    sb.AppendLine(script);            
    sb.AppendLine("</script>");

    int? index = HttpContext.Current.Items["__ScriptResourceIndex"] as int?;
    if (index == null)
        index = 0;

    if (renderMode == ScriptRenderModes.HeaderTop)
    {
        control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString()));
        index++;
    }
    else if(renderMode == ScriptRenderModes.Header)
        control.Page.Header.Controls.Add(new LiteralControl(sb.ToString()));
    else if (renderMode == ScriptRenderModes.BottomOfPage)
        control.Page.Controls.AddAt(control.Page.Controls.Count-1,new LiteralControl(sb.ToString()));

    
    HttpContext.Current.Items["__ScriptResourceIndex"] = index;            
}

Note that the routine has to keep track of items inserted by id so that if the same item is added again with the same key it won’t generate two script entries. Additionally the code has to keep track of how many insertions have been made at the top of the document so that entries are added in the proper order.

The RegisterClientScriptInclude method is similar, but there’s some additional logic to deal with script file references and ClientScriptProxy’s (optional) custom resource handler that provides script compression:

/// <summary>
/// Registers a client script reference into the page with the option to specify
/// the script location in the page
/// </summary>
/// <param name="control">Any control instance - typically page</param>
/// <param name="type">Type that acts as qualifier (uniqueness)</param>
/// <param name="url">the Url to the script resource</param>
/// <param name="ScriptRenderModes">Determines where the script is rendered</param>
public void RegisterClientScriptInclude(Control control, Type type, string url, ScriptRenderModes renderMode)
{
    const string STR_ScriptResourceIndex = "__ScriptResourceIndex";

    if (string.IsNullOrEmpty(url))
        return;

    if (renderMode == ScriptRenderModes.Inherit)
        renderMode = DefaultScriptRenderMode;

    // Extract just the script filename
    string fileId = null;
    
    
    // Check resource IDs and try to match to mapped file resources
    // Used to allow scripts not to be loaded more than once whether
    // embedded manually (script tag) or via resources with ClientScriptProxy
    if (url.Contains(".axd?r="))
    {
        string res = HttpUtility.UrlDecode( StringUtils.ExtractString(url, "?r=", "&", false, true) );
        foreach (ScriptResourceAlias item in ScriptResourceAliases)
        {
            if (item.Resource == res)
            {
                fileId = item.Alias + ".js";
                break;
            }
        }
        if (fileId == null)
            fileId = url.ToLower();
    }
    else
        fileId = Path.GetFileName(url).ToLower();

    // No dupes - ref script include only once
    const string identifier = "script_";
    if (HttpContext.Current.Items.Contains( identifier + fileId ) )
        return;
    
    HttpContext.Current.Items.Add(identifier + fileId, string.Empty);

    // just use script manager or ClientScriptManager
    if (control.Page.Header == null || renderMode == ScriptRenderModes.Script || renderMode == ScriptRenderModes.Inline)
    {
        RegisterClientScriptInclude(control, type,url, url);
        return;
    }

    // Retrieve script index in header            
    int? index = HttpContext.Current.Items[STR_ScriptResourceIndex] as int?;
    if (index == null)
        index = 0;

    StringBuilder sb = new StringBuilder(256);

    url = WebUtils.ResolveUrl(url);

    // Embed in header
    sb.AppendLine("\r\n<script src=\"" + url + "\" type=\"text/javascript\"></script>");

    if (renderMode == ScriptRenderModes.HeaderTop)
    {
        control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString()));
        index++;
    }
    else if (renderMode == ScriptRenderModes.Header)
        control.Page.Header.Controls.Add(new LiteralControl(sb.ToString()));
    else if (renderMode == ScriptRenderModes.BottomOfPage)
        control.Page.Controls.AddAt(control.Page.Controls.Count-1, new LiteralControl(sb.ToString()));                

    HttpContext.Current.Items[STR_ScriptResourceIndex] = index;
}

There’s a little more code here that cleans up the passed-in Url, plus some custom handling of script resources that run through the ScriptCompressionModule – any script resources loaded in this fashion are tracked by their resource id, while raw urls are tracked by just the filename extracted from the URL. All of this avoids doubling up scripts when the method is called multiple times – by multiple instances of the same control, for example, or by several controls that all load the same resources/includes.

Finally RegisterClientScriptResource utilizes the previous method to wrap the WebResourceUrl as well as some custom functionality for the resource compression module:

/// <summary>
/// Returns a WebResource or ScriptResource URL for script resources that are to be
/// embedded as script includes.
/// </summary>
/// <param name="control">Any control</param>
/// <param name="type">A type in assembly where resources are located</param>
/// <param name="resourceName">Name of the resource to load</param>
/// <param name="renderMode">Determines where in the document the link is rendered</param>
public void RegisterClientScriptResource(Control control, Type type, 
                                         string resourceName, 
                                         ScriptRenderModes renderMode)
{ 
    string resourceUrl = GetClientScriptResourceUrl(control, type, resourceName);
    RegisterClientScriptInclude(control, type, resourceUrl, renderMode);
}
/// <summary>
/// Works like GetWebResourceUrl but can be used with javascript resources
/// to allow using of resource compression (if the module is loaded).
/// </summary>
/// <param name="control"></param>
/// <param name="type"></param>
/// <param name="resourceName"></param>
/// <returns></returns>
public string GetClientScriptResourceUrl(Control control, Type type, string resourceName)
{            
    #if IncludeScriptCompressionModuleSupport

    // If wwScriptCompression Module through Web.config is loaded use it to compress 
    // script resources by using wcSC.axd Url the module intercepts
    if (ScriptCompressionModule.ScriptCompressionModuleActive) 
    {
        string url = "~/wwSC.axd?r=" + HttpUtility.UrlEncode(resourceName);
        if (type.Assembly != GetType().Assembly)
            url += "&t=" + HttpUtility.UrlEncode(type.FullName);
        
        return WebUtils.ResolveUrl(url);
    }
    
    #endif

    return control.Page.ClientScript.GetWebResourceUrl(type, resourceName);
}

This code merely retrieves the resource URL and then calls back to RegisterClientScriptInclude with the URL to embed, so there’s nothing extra to deal with other than the custom compression module logic – nice and easy.

What else is there in ClientScriptProxy?

ClientScriptProxy also provides a few other useful services beyond what I’ve already covered here:

Transparent ScriptManager and ClientScript calls

ClientScriptProxy includes a host of routines that help figure out whether a ScriptManager is available or not, and all functions in this class call whichever object – ScriptManager or ClientScript – is available in the current page, to ensure that scripts get embedded into pages properly. This is especially useful for control development, where controls have no say over the scripting environment in place on the page.
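The detection itself isn’t toolkit specific. A minimal sketch of the idea – not the actual toolkit code – can lean on ScriptManager.GetCurrent():

// Minimal sketch: detect whether a ScriptManager is active on the page
public static bool IsScriptManagerOnPage(Page page)
{
    // GetCurrent() returns the ScriptManager registered on the
    // page, or null if none is present
    return ScriptManager.GetCurrent(page) != null;
}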

RegisterCssLink and RegisterCssResource

Much like the script embedding functions, these two methods allow embedding of CSS links. CSS links are appended to the header or to a form declared with runat="server".
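The CSS methods aren’t shown here, but a stripped-down sketch of the approach – using the same per-request duplication check as the script routines, and purely illustrative rather than the toolkit’s actual implementation – might look like this:

public void RegisterCssLink(Page page, string key, string url)
{
    // embed each CSS link only once per request
    const string identifier = "css_";
    if (HttpContext.Current.Items.Contains(identifier + key))
        return;
    HttpContext.Current.Items.Add(identifier + key, string.Empty);

    string link = "<link href=\"" + page.ResolveUrl(url) +
                  "\" rel=\"stylesheet\" type=\"text/css\" />\r\n";

    if (page.Header != null)
        page.Header.Controls.Add(new LiteralControl(link));
    else if (page.Form != null)
        page.Form.Controls.AddAt(0, new LiteralControl(link));
}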

LoadControlScript

A high-level resource loading routine that makes it easy to switch between different script linking modes. It supports loading from a WebResource, from a url, or not loading anything at all. This is very useful if you build controls that specify resource urls/ids in a standard way.

Check out the full Code

You can check out the full code to the ClientScriptProxyClass here:

ClientScriptProxy.cs

ClientScriptProxy Documentation (class reference)

Note that ClientScriptProxy has a few dependencies within the West Wind Web Toolkit, of which it is part: ControlResources, which holds a few standard constants and script resource links, and the ScriptCompressionModule, which is referenced in a few of the script inclusion methods.

There’s also a useful companion control to ClientScriptProxy – ScriptContainer – that allows scripts to be placed into the page’s markup, including the ability to specify the script location and script minification options.

You can find all the dependencies in the West Wind Web Toolkit repository:

West Wind Web Toolkit Repository

West Wind Web Toolkit Home Page

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET  JavaScript  

HttpContext.Items and Server.Transfer/Execute

A few days ago my buddy Ben Jones pointed out that he ran into a bug in the ScriptContainer control in the West Wind Web and Ajax Toolkit. The problem was basically that when a Server.Transfer call was involved, the ScriptContainer (and also various ClientScriptProxy script embedding routines) could fail to load the specified scripts.

It turns out the problem is due to the fact that the various components in the toolkit use request-specific singletons exposed via a Current property. I use a static Current property tied to a Context.Items[] entry to handle this type of operation, which looks something like this:

/// <summary>
/// Current instance of this class which should always be used to 
/// access this object. There are no public constructors to
/// ensure the reference is used as a Singleton to further
/// ensure that all scripts are written to the same clientscript
/// manager.
/// </summary>        
public static ClientScriptProxy Current
{
    get
    {
        if (HttpContext.Current == null)
            return new ClientScriptProxy();

        ClientScriptProxy proxy = null;
        if (HttpContext.Current.Items.Contains(STR_CONTEXTID))
            proxy = HttpContext.Current.Items[STR_CONTEXTID] as ClientScriptProxy;
        else
        {

            proxy = new ClientScriptProxy();
            HttpContext.Current.Items[STR_CONTEXTID] = proxy;
        }
        
        return proxy;
    }
}

The proxy is attached to a Context.Items[] entry, which makes the instance request-specific. This works perfectly fine in most situations EXCEPT when you’re dealing with Server.Transfer/Execute requests: Server.Transfer doesn’t clear Context.Items, so the transferred page sees everything the original request already stored in the collection.
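A quick way to see this behavior for yourself (page names are hypothetical):

// In Page1.aspx - store a value and transfer
Context.Items["marker"] = "set by Page1";
Server.Transfer("Page2.aspx");

// In Page2.aspx Page_Load - the entry set by Page1 is still present
string marker = Context.Items["marker"] as string;   // "set by Page1"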

For the ClientScriptProxy this causes a problem because script references are tracked on a per-request basis in Context.Items to check for script duplication. Once a script is rendered, an ID is written into the Context collection and the script is considered ‘rendered’:

// No dupes - ref script include only once
if (HttpContext.Current.Items.Contains( STR_SCRIPTITEM_IDENTITIFIER + fileId ) )
    return;

HttpContext.Current.Items.Add(STR_SCRIPTITEM_IDENTITIFIER + fileId, string.Empty);


where fileId is the script name or unique identifier. The problem is that on the transferred page the item already exists in Context, so the script fails to render – the proxy thinks it has already been rendered. Bummer.

The workaround for this is simple once you know what’s going on, but in this case it was a bitch to track down because the context items are used in many places throughout this class. The trick is to determine when a request is transferred and then remove the specific keys.

The first task is to determine whether a request is in a Transfer or Execute call:

if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler)

Context.Handler is the original handler and CurrentHandler is the actual currently executing handler that is running when a Transfer/Execute is active. You can also use Context.PreviousHandler to get the last handler and chain through the whole list of handlers applied if Transfer calls are nested (dog help us all for the person debugging that).

For the ClientScriptProxy the full logic to check for a transfer and remove the code looks like this:

/// <summary>
/// Clears all the request specific context items which are script references
/// and the script placement index.
/// </summary>
public void ClearContextItemsOnTransfer()
{
    if (HttpContext.Current != null)
    {
        // Check for Server.Transfer/Execute calls - we need to clear out Context.Items
        if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler)
        {
            List<string> keys = HttpContext.Current.Items.Keys
                .Cast<string>()
                .Where(s => s.StartsWith(STR_SCRIPTITEM_IDENTITIFIER) ||
                            s == STR_ScriptResourceIndex)
                .ToList();
            foreach (string key in keys)
            {
                HttpContext.Current.Items.Remove(key);
            }

        }
    }
}

along with a small update to the Current property getter that sets a global flag to indicate whether the request was transferred:

if (!proxy.IsTransferred && HttpContext.Current.Handler != HttpContext.Current.CurrentHandler)
{
    proxy.ClearContextItemsOnTransfer();
    proxy.IsTransferred = true;
}

return proxy;

I know this is pretty ugly, but it works, and it’s minimal fuss without affecting the behavior of the rest of the class. Ben had a different solution that involved explicitly clearing out the Context items and replacing the collection with a manually maintained list of items, which also works but required changes throughout the code.

In hindsight, it would have been better to use a single object that encapsulates all the ‘persisted’ values and store that object in Context instead of all these individual small morsels. Hindsight is always 20/20 though :-}.
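Something along these lines – just a sketch of the idea, with made-up names:

// One state object that holds all request-specific proxy values
internal class ScriptProxyState
{
    public HashSet<string> RenderedScriptIds = new HashSet<string>();
    public int ScriptResourceIndex = 0;
    public bool IsTransferred = false;
}

// stored - and cleared on transfer - as a single Context entry:
// HttpContext.Current.Items["__ScriptProxyState"] = new ScriptProxyState();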

If possible use Page.Items

ClientScriptProxy is a generic component that can be used from anywhere in ASP.NET, and various methods on it are not Page specific, which is why I used Context.Items rather than the Page.Items collection. Page.Items would otherwise be the better choice since it sidesteps the Server.Transfer nightmares above – the Page is reloaded completely, so any new Page gets a fresh Items collection. No fuss there.

So for the ScriptContainer control, which has to live on the page, the behavior is a little different: it is attached to Page.Items (since it’s a control):

/// <summary>
/// Returns a current instance of this control if an instance
/// is already loaded on the page. Otherwise a new instance is
/// created, added to the Form and returned.
/// 
/// It's important this function is not called too early in the
/// page cycle - it should not be called before Page.OnInit().
/// 
/// This property is the preferred way to get a reference to a
/// ScriptContainer control that is either already on a page
/// or needs to be created. Controls in particular should always
/// use this property.
/// </summary>
public static ScriptContainer Current
{
    get
    {
        // We need a context for this to work!
        if (HttpContext.Current == null)
            return null;

        Page page = HttpContext.Current.CurrentHandler as Page;
        if (page == null)
            throw new InvalidOperationException(Resources.ERROR_ScriptContainer_OnlyWorks_With_PageBasedHandlers); 

        ScriptContainer ctl = null;

        // Retrieve the current instance
        ctl = page.Items[STR_CONTEXTID] as ScriptContainer;
        if (ctl != null)
            return ctl;
        
        ctl = new ScriptContainer();
        page.Items[STR_CONTEXTID] = ctl;   // store so subsequent Current calls find this instance
        page.Form.Controls.Add(ctl);
        return ctl;
    }
}

The biggest issue with this approach is that you have to explicitly retrieve the page in the static Current property. Notice again the use of CurrentHandler (rather than Handler, which was my original implementation) to ensure you get the currently executing page – including one that a Server.Transfer fired up.

Server.Transfer and Server.Execute are Evil

All that said – this fix is probably for the 2 people who are crazy enough to rely on Server.Transfer/Execute. :-} There are so many weird behavior problems with these commands that I avoid them at all costs. I don’t think I have a single application that uses either of these commands…

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET  

jQuery 1.4 Opacity and IE Filters

Ran into a small problem today with my client-side jQuery library after switching to jQuery 1.4. The trouble showed up in a shadow plug-in that I use to provide drop shadows for absolutely positioned elements – for Mozilla and WebKit browsers the -moz-box-shadow and -webkit-box-shadow CSS attributes are used, but for IE a separate element is created to provide the shadow that underlays the original element, along with a Blur filter to provide the fuzziness in the shadow. Some of the key pieces are:

// from the plug-in code: el is the element to shadow, sh is the shadow
// element (a jQuery object), shEl is sh.get(0), opt holds the plug-in options
var vis = el.is(":visible");
if (!vis)
    el.show();  // must be visible to get .position

var pos = el.position();
if (typeof shEl.style.filter == "string")
    sh.css("filter", 'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' + opt.opacity.toString() + ')');

sh.show()
  .css({ position: "absolute",
      width: el.outerWidth(),
      height: el.outerHeight(),
      opacity: opt.opacity,
      background: opt.color,
      left: pos.left + opt.offset,
      top: pos.top + opt.offset
  });

This has always worked in previous versions of jQuery, but with 1.4 the original filter no longer works. It appears that applying the opacity after the original filter wipes the original filter out. IOW, the opacity filter is not applied incrementally but absolutely, which is a real bummer.

Luckily the workaround is relatively easy: just switch the order in which the opacity and the filter are applied. If I apply the blur after the opacity I get the correct behavior back, with both the opacity and the blur filter active:

sh.show()
  .css({ position: "absolute",
      width: el.outerWidth(),
      height: el.outerHeight(),
      opacity: opt.opacity,
      background: opt.color,
      left: pos.left + opt.offset,
      top: pos.top + opt.offset
  });

  if (typeof shEl.style.filter == "string")
      sh.css("filter", 'progid:DXImageTransform.Microsoft.Blur(makeShadow=true, pixelradius=3, shadowOpacity=' + opt.opacity.toString() + ')');

While this works, it still causes problems in other areas where opacity is set implicitly in code – for fade operations, for example, or in the case of my shadow component the style/property watcher that keeps the shadow and the main object linked. Both of these may set the opacity explicitly, and that is still broken as it will effectively kill the blur filter.

This seems like a strange design decision by the jQuery team, since the jQuery .css() function clearly does the right thing when setting filters. Internally, however, the opacity setter doesn’t use .css() but hardcodes the filter instead, which, given jQuery’s usual flexibility and smart code, seems really inappropriate.

The following is from jQuery.js 1.4:

var style = elem.style || elem, set = value !== undefined;

// IE uses filters for opacity
if ( !jQuery.support.opacity && name === "opacity" ) {
    if ( set ) {
        // IE has trouble with opacity if it does not have layout
        // Force it by setting the zoom level
        style.zoom = 1;

        // Set the alpha filter to set the opacity
        var opacity = parseInt( value, 10 ) + "" === "NaN" ? "" : "alpha(opacity=" + value * 100 + ")";
        var filter = style.filter || jQuery.curCSS( elem, "filter" ) || "";
        style.filter = ralpha.test(filter) ? filter.replace(ralpha, opacity) : opacity;
    }

    return style.filter && style.filter.indexOf("opacity=") >= 0 ?
        (parseFloat( ropacity.exec(style.filter)[1] ) / 100) + "":
        "";
}

You can see here that the style is explicitly set in code rather than relying on $.css() to assign the value resulting in the old filter getting wiped out.

jQuery 1.3.2 looks a little different:

// IE uses filters for opacity
if ( !jQuery.support.opacity && name == "opacity" ) {
    if ( set ) {
        // IE has trouble with opacity if it does not have layout
        // Force it by setting the zoom level
        elem.zoom = 1;

        // Set the alpha filter to set the opacity
        elem.filter = (elem.filter || "").replace( /alpha\([^)]*\)/, "" ) +
            (parseInt( value ) + '' == "NaN" ? "" : "alpha(opacity=" + value * 100 + ")");
    }

    return elem.filter && elem.filter.indexOf("opacity=") >= 0 ?
        (parseFloat( elem.filter.match(/opacity=([^)]*)/)[1] ) / 100) + '':
        "";
}

Offhand I’m not sure why the latter works better since it too is assigning the filter. However, when checking with the IE script debugger I can see that there are actually a couple of filter tags assigned when using jQuery 1.3.2, but only one when I use jQuery 1.4.

Note also that the jQuery 1.3 compatibility plugin for jQuery 1.4 doesn’t address this issue either.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in jQuery  

.NET WebRequest.PreAuthenticate – not quite what it sounds like

I’ve run into this problem a few times now: how do you pre-authenticate .NET WebRequest calls – essentially send authentication credentials on the very first request instead of waiting for a server challenge first? At first glance this sounds like it should be easy: the .NET WebRequest object has a PreAuthenticate property which sounds like it should force authentication credentials to be sent on the first request. Looking at the MSDN example certainly makes it look that way:

http://msdn.microsoft.com/en-us/library/system.net.webrequest.preauthenticate.aspx

Unfortunately the MSDN sample is wrong, as is the text of the Help topic, which incorrectly leads you to believe that PreAuthenticate… wait for it… pre-authenticates. It doesn’t: there’s no way to make it send credentials on the first request.

What this property actually does is quite different: it doesn’t send credentials on the first request, but rather caches the credentials once you have authenticated. HTTP authentication is typically based on a challenge/response mechanism where the client sends a request and the server responds with a 401 status requesting authentication.

So the client sends a request like this:

GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1
Host: rasnote
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en,de;q=0.7,en-us;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive

and the server responds with:

HTTP/1.1 401 Unauthorized
Cache-Control: private
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/7.5
WWW-Authenticate: Basic realm="rasnote"
X-AspNet-Version: 2.0.50727
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
WWW-Authenticate: Basic realm="rasnote"
X-Powered-By: ASP.NET
Date: Tue, 27 Oct 2009 00:58:20 GMT
Content-Length: 5163

plus the actual error message body.

The client then is responsible for re-sending the current request with the authentication token information provided (in this case Basic Auth):

GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1
Host: rasnote
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en,de;q=0.7,en-us;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Cookie: TimeTrakker=2HJ1998WH06696; WebLogCommentUser=Rick Strahl|http://www.west-wind.com/|rstrahl@west-wind.com; WebStoreUser=b8bd0ed9
Authorization: Basic cgsf12aDpkc2ZhZG1zMA==

Once the authorization info is sent the server responds with the actual page result.

Now if you use WebRequest (or WebClient) the default behavior is to re-authenticate on every request that requires authorization. This means that if you look in Fiddler or some other HTTP proxy that captures requests, you’ll see that each request re-authenticates. Here are two requests fired back to back:

[Figure: Fiddler trace showing two back-to-back requests, each with a 401 challenge followed by a 200 response]

and you can see the 401 challenge and the 200 response for both requests.

If you watch this same conversation between a browser and a server you’ll notice that the first 401 is also there but the subsequent 401 requests are not present.

WebRequest.PreAuthenticate

And this is precisely what the WebRequest.PreAuthenticate property does: It’s a caching mechanism that caches the connection credentials for a given domain in the active process and resends it on subsequent requests. It does not send credentials on the first request but it will cache credentials on subsequent requests after authentication has succeeded:

string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus";
HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest;
req.PreAuthenticate = true;
req.Credentials = new NetworkCredential("rick", "secret", "rasnote");
req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;
req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
WebResponse resp = req.GetResponse();
resp.Close();

req = HttpWebRequest.Create(url) as HttpWebRequest;
req.PreAuthenticate = true;
req.Credentials = new NetworkCredential("rstrahl", "secret", "rasnote");
req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;
req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
resp = req.GetResponse();

which results in the desired sequence:

[Figure: Fiddler trace with PreAuthenticate enabled – only the first request triggers a 401 challenge]

where only the first request doesn’t send credentials.

This is quite useful as it saves quite a few round trips to the server – basically one extra 401 round trip for every authenticated request you make. In most scenarios I think you’d want to send credentials this way, but one downside is that there’s no way to log out the client. Since the client always sends the credentials once authenticated, only an explicit operation ON THE SERVER can undo them by forcing another login explicitly (i.e., re-challenging with a forced 401 response).

Forcing Basic Authentication Credentials on the first Request

On a few occasions I’ve needed to send credentials on the first request – mainly to some oddball third-party Web Services (why you’d want to use Basic Auth on a Web Service is beyond me – don’t ask, but it’s not uncommon in my experience). Certain services using Basic Authentication (especially some Apache-based Web Services) REQUIRE that the authentication is sent right on the first request. No challenge first. Ugly, but there it is.

Now, the following works only with Basic Authentication because it’s straightforward to create the Basic Authorization ‘token’ in code – it’s just the user name and password base64-encoded, with no encryption. As you might guess this is completely insecure and should only be used over HTTPS/SSL connections (I’m not using SSL in this example so I can capture the Fiddler trace, and my local machine doesn’t have a cert installed – but for production apps ALWAYS use SSL with Basic Auth).

The idea is that you simply add the required Authorization header to the request on your own along with the authorization string that encodes the username and password:

string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus";
HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest;

string user = "rick";
string pwd = "secret";
string domain = "www.west-wind.com";

string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd));
req.PreAuthenticate = true;
req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;
req.Headers.Add("Authorization", auth);
req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close();

This works and causes the request to immediately send auth information to the server. However, this only works with Basic Auth because you can actually create the authentication credentials easily on the client because it’s essentially clear text. The same doesn’t work for Windows or Digest authentication since you can’t easily create the authentication token on the client and send it to the server.

Another issue with this approach is that PreAuthenticate has no effect when you manually force the authentication. As far as WebRequest is concerned it never sent any authentication information, so it isn’t caching the credentials. If you run 3 requests in a row like this:

        string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus";
        HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest;

        string user = "ricks";
        string pwd = "secret";
        string domain = "www.west-wind.com";

        string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd));
        req.PreAuthenticate = true;
        req.Headers.Add("Authorization", auth);
        req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
        WebResponse resp = req.GetResponse();
        resp.Close();


        req = HttpWebRequest.Create(url) as HttpWebRequest;
        req.PreAuthenticate = true;
        req.Credentials = new NetworkCredential(user, pwd, domain);
        req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
        resp = req.GetResponse();
        resp.Close();

        req = HttpWebRequest.Create(url) as HttpWebRequest;
        req.PreAuthenticate = true;
        req.Credentials = new NetworkCredential(user, pwd, domain);
        req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
        resp = req.GetResponse();

you’ll find the trace looking like this:

[Figure: Fiddler trace – the manually authenticated first request succeeds, the second request is challenged with a 401, subsequent requests pre-authenticate]

where the first request (the one we explicitly add the header to) authenticates, the second is challenged, and any subsequent ones then use the PreAuthenticate credential caching. In effect you end up with one extra 401 request in this scenario, which is still better than a 401 challenge on every request.

Getting Access to WebRequest in Classic .NET Web Service Clients

If you’re running a classic .NET Web Service client (non-WCF), one issue with the above is: how do you get access to the WebRequest to actually add the custom headers for the authentication described above? One easy way is to implement a partial class that lets you add headers, with something like this:

public partial class TaxService
{
    protected NameValueCollection Headers = new NameValueCollection();

    public void AddHttpHeader(string key, string value)
    {
        this.Headers.Add(key, value);
    }

    public void ClearHttpHeaders()
    {
        this.Headers.Clear();
    }

    protected override WebRequest GetWebRequest(Uri uri)
    {
        HttpWebRequest request = (HttpWebRequest) base.GetWebRequest(uri);
        request.Headers.Add(this.Headers);
        return request;
    }
}

where TaxService is the name of the .NET generated proxy class. In code you can then call AddHttpHeader() anywhere to add additional headers, which are sent as part of the GetWebRequest() override. Nice and simple once you know where to hook in.
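Usage then looks something like this – for example, forcing Basic credentials onto the first request of every service call (user name and password are placeholders):

TaxService service = new TaxService();

// manually created Basic Auth token as described earlier
string auth = "Basic " + Convert.ToBase64String(
                  System.Text.Encoding.Default.GetBytes("rick:secret"));
service.AddHttpHeader("Authorization", auth);

// all subsequent service calls now send the header on the first request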

For WCF there’s a bit more work involved by creating a message extension as described here: http://weblogs.asp.net/avnerk/archive/2006/04/26/Adding-custom-headers-to-every-WCF-call-_2D00_-a-solution.aspx.

FWIW, I think that HTTP header manipulation should be readily available on any HTTP-based Web Service client DIRECTLY, without having to subclass or implement a special interface hook. But alas, a little extra work is required in .NET to make this happen.

Not a Common Problem, but when it happens…

This has been one of those issues that is really rare, but it’s bitten me on several occasions when dealing with oddball Web Services – a couple of times in my own work interacting with various Web Services and a few times on customer projects that required interaction with credentials-first services. Since the server determines the protocol, we have no choice but to follow it. Lovely following standards that implementers decide to ignore, isn’t it? :-}

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in .NET  CSharp  Web Services  

No Preview Images in File Open Dialogs on Windows 7

I’ve been updating some file uploader code in my photo album application today, and while working with the uploader I noticed that the Silverlight File Open dialog that handles the file selection never showed an image preview for image files. It sure would be nice to preview the images I’m about to upload before selecting them from a list. Here’s what my list looked like:

[Figure: File Open dialog showing only generic icons, no image previews]

This is the Medium Icon view, but regardless of the views available including Content view only icons are showing up.

Silverlight uses the standard Windows File Open dialog, so all the same settings that apply to Explorer when displaying content apply here as well. It turns out the Folder Options customization settings are the problem here – specifically the Always show icons, never thumbnails option:

[Figure: Explorer Folder Options dialog with the 'Always show icons, never thumbnails' option checked]

I had this option checked initially, because it’s one of the defenses against runaway random Explorer views that never stay set at my preferences. Alas, while this setting is meant for Explorer views, it apparently also affects all dialog-based views in the same way. Unchecking the option brings back full thumbnailing for all content and icon views. Here’s the same Medium Icon view after turning the option off:

[Figure: The same File Open dialog showing image thumbnails after unchecking the option]

which obviously works a whole lot better for selection of images.

The bummer of this is that it’s not controllable at the dialog level – at least not in Silverlight. Dialogs obviously have different requirements than Explorer views, so applying the setting globally is a bit extreme, especially when there are no overrides on the dialog interface. Certainly for Silverlight, which is likely to deal with lots of media content, the ability to show previews is a key feature for many applications.

Hope this helps somebody out. Thanks to Tim Heuer who helped me track this down on Twitter.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in Silverlight  Windows  

AspNetCompatibility in WCF Services – easy to trip up

This isn’t the first time I’ve hit this particular wall: I’m creating a WCF REST service for AJAX callbacks and using the WebScriptServiceHostFactory host factory in the service:

<%@ ServiceHost Language="C#" 
        Service="WcfAjax.BasicWcfService"                 
        CodeBehind="BasicWcfService.cs"        
        Factory="System.ServiceModel.Activation.WebScriptServiceHostFactory" %>

to avoid explicit configuration. Because the custom factory sets up the ASP.NET AJAX-compatible service behavior, I can remove all of the configuration settings that typically get dumped into the web.config file. However, I do want ASP.NET compatibility, so I still leave in:

<system.serviceModel>        
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
</system.serviceModel>

in the web.config file. This option allows you access to the HttpContext.Current object to effectively give you access to most of the standard ASP.NET request and response features. This is not recommended as a primary practice but it can be useful in some scenarios and in backwards compatibility scenerios with ASP.NET AJAX Web Services.

Now, here’s where things get funky. Assuming you have the setting in web.config, if you now declare a service like this:

    [ServiceContract(Namespace = "DevConnections")]
#if DEBUG 
    [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
#endif
    public class BasicWcfService

(or by using an interface that defines the service contract)

you’ll find that the service will not work when an AJAX call is made against it: you get a 500 error and a System.ServiceModel.ServiceActivationException. Worse, even with IncludeExceptionDetailInFaults enabled, you get absolutely no indication from WCF of what the problem is.

So what’s the problem? The issue is that once you specify aspNetCompatibilityEnabled="true" in the configuration, you *have to* specify the AspNetCompatibilityRequirements attribute with one of the modes that enables – or at least allows – it. You need either Required or Allowed:

[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]

without it the service will simply fail without further warning.

It will also fail if you set the attribute value to NotAllowed. The following also causes the service to fail as above:

[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.NotAllowed)]

This is not totally unreasonable, but it’s a difficult issue to debug, especially since the configuration setting is global – if you have more than one service and one requires traditional ASP.NET access while another doesn’t, both must have the attribute specified. This is one reason to avoid this functionality unless absolutely necessary. WCF REST provides basic access to some of the HTTP features after all, although what’s there is severely limited.

I also wish that ServiceActivation errors would provide more error information. Getting an activation error without any info on what actually is wrong is pretty worthless, especially when it’s a technicality like a mismatched configuration/attribute setting like this.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET  WCF  AJAX  

DevConnections jQuery Session Slides and Samples posted

I’ve posted all of my slides and samples from the DevConnections VS 2010 launch event last week in Vegas. All three sessions are packaged in a single zip file containing all slide decks and samples in one place:

www.west-wind.com/files/conferences/jquery.zip

There were 3 separate sessions:

Using jQuery with ASP.NET

Starting with an overview of jQuery client features via many short and fun examples, you'll find out about core features like the power of selectors to select document elements, manipulate these elements with jQuery's wrapped set methods in a browser independent way, how to hook up and handle events easily and generally apply concepts of unobtrusive JavaScript principles to client scripting. The session also covers AJAX interaction between jQuery and the .NET server side code using several different approaches including sending HTML and JSON data and how to avoid user interface duplication by using client side templating. This session relies heavily on live examples and walk-throughs.

jQuery Extensibility and Integration with ASP.NET Server Controls

One of the great strengths of the jQuery Javascript framework is its simple, yet powerful extensibility model that has resulted in an explosion of plug-ins available for jQuery. You need it - chances are there's a plug-in for it! In this session we'll look at a few plug-ins to demonstrate the power of the jQuery plug-in model before diving in and creating our own custom jQuery plug-ins. We'll look at how to create a plug-in from scratch as well as discussing when it makes sense to do so. Once you have a plug-in it can also be useful to integrate it more seamlessly with ASP.NET by creating server controls that coordinate both server side and jQuery client side behavior. I'll demonstrate a host of custom components that utilize a combination of client side jQuery functionality and server side ASP.NET server controls that provide smooth integration in the user interface development process. This topic focuses on component development both for pure client side plug-ins and mixed mode controls.

jQuery Tips and Tricks

This session was kind of a last minute substitution for an ASP.NET AJAX talk. Nothing too radical here :-), but I focused on things that have been most productive for myself. Look at the slide deck for individual points and some of the specific samples.

It was interesting to see that, unlike at previous conferences, this time around all the sessions were fairly packed – interest in jQuery is definitely becoming more pronounced, especially with Microsoft’s recent announcement that it will focus on jQuery integration rather than continue down the ASP.NET AJAX path – a welcome change.

Most of the samples also use the West Wind Web & Ajax Toolkit and the support tools contained within it – a snapshot version of the toolkit is included in the samples download. Specifically, a number of the samples use functionality in the ww.jquery.js support file, which contains a fairly large set of plug-ins and helper functionality – most of these pieces, while shipped in the single file, are self-contained and can be lifted out of it (several people asked).

Hopefully you'll find something useful in these slides and samples.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET  jQuery  

What’s new in ASP.NET 4.0: Core Features

Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime.

In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio.

Core Engine Features

Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features.

Clean web.config Files Are Back!

If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0, and so almost all of the new features of .NET 3.5 were essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default.

A default empty ASP.NET 4.0 Web Forms project looks like this:

<?xml version="1.0"?>

<configuration>

<system.web>

<compilation debug="true"

targetFramework="4.0" />

</system.web>

</configuration>

Need I say more?

Configuration Transformation Files to Manage Configurations and Application Packaging

ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using an XML transformation syntax provided by Visual Studio and WebDeploy. The idea is that you can create a 'master' configuration file and then create customized versions of it by applying relatively simple search-and-replace, add, or remove logic to specific elements and attributes in the original file.

To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section:

<?xml version="1.0"?>

<configuration

xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">

<connectionStrings>

<add name="MyDB"

connectionString="Data Source=ReleaseSQLServer;Initial

Catalog=MyReleaseDB;Integrated Security=True"

xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>

</connectionStrings>

<system.web>

<compilation xdt:Transform="RemoveAttributes(debug)" />

<customErrors defaultRedirect="GenericError.htm"

mode="RemoteOnly" xdt:Transform="Replace">

<error statusCode="500" redirect="InternalError.htm"/>

</customErrors>

</system.web>

</configuration>

You can see the XML-Document-Transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact.

Visual Studio 2010 supports this functionality directly in the project system so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can then use the Publish <yourWebApplication> option on the Build menu which allows publishing to disk, via FTP or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or other machine.

You can find out more about WebDeploy and Packaging here: http://tinyurl.com/2anxcje.

Improved Routing

Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then they integrated routing with a basic implementation in the core ASP.NET engine via a separate ASP.NET routing assembly.

In ASP.NET 4.0, the process of using routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects to use routing. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route any ASP.NET Page request without first having to implement an IRouteHandler. It would have been nice if this had been extended to serve *any* handler implementation, but unfortunately for anything but Page-derived handlers you still have to implement a custom IRouteHandler.
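Route registration with MapPageRoute() typically goes into Application_Start and looks something like this (route name and page path are placeholders):

protected void Application_Start(object sender, EventArgs e)
{
    // map "users/ricks" style Urls to a physical Web Forms page
    RouteTable.Routes.MapPageRoute(
        "users",                // route name
        "users/{userId}",       // route Url template
        "~/UserDisplay.aspx");  // page that handles the route
}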

ASP.NET Pages now include a RouteData collection that will contain route information. Retrieving route data is now a lot easier by simply using this.RouteData.Values["routeKey"] where the routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]).
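In a page's code-behind that boils down to something like this:

protected void Page_Load(object sender, EventArgs e)
{
    // pick up the route value captured from the Url
    string userId = RouteData.Values["userId"] as string;
}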

The Page class also has a GetRouteUrl() method that you can use to create URLs with route data values rather than hardcoding the URL:

<%= this.GetRouteUrl("users",new { userId="ricks" }) %>

You can also use the new Expression syntax using <%$RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code:

<a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a>

Finally, the Response object also includes a new RedirectToRoute() method to build a route url for redirection without hardcoding the URL.

Response.RedirectToRoute("users", new { userId = "ricks" });

All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET.

To find out more about the routing improvements you can check out Dan Maharry's blog which has a couple of nice blog entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5.

Session State Improvements

Session state is an often used and abused feature in ASP.NET and version 4.0 introduces a few enhancements geared towards making session state more efficient and to minimize at least some of the ill effects of overuse. The first improvement affects out of process session state, which is typically used in web farm environments or for sites that store application sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using OutOfProc session state, ASP.NET serializes all the data in the session statebag into a blob that gets carried over the network and stored either in the State server or SQL Server via the Session provider.

Version 4.0 provides some improvement in this serialization of the session data by offering a compressionEnabled option on the web.config <sessionState> section, which forces the serialized session state to be compressed. Depending on the type of data being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data.

In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of the request processing queue. In prior versions, the only way to specify whether session state is available was by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can now turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing, as long as it occurs before the AcquireRequestState pipeline event.
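As a sketch, a module could switch session state off for requests that don't need it – the module name and path check here are made up for illustration:

public class SessionTogglingModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // BeginRequest fires well before AcquireRequestState
        app.BeginRequest += (sender, args) =>
        {
            if (app.Request.Path.EndsWith(".nosession.axd",
                                          StringComparison.OrdinalIgnoreCase))
                app.Context.SetSessionStateBehavior(
                    System.Web.SessionState.SessionStateBehavior.Disabled);
        };
    }

    public void Dispose() { }
}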

Output Cache Provider

Output caching in ASP.NET has been a very useful but potentially memory intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime related parameters. While this works well enough for many intended scenarios, it also can quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site.

ASP.NET 4.0 introduces a provider model for the OutputCache module so it becomes possible to plug-in custom storage strategies for cached pages. One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET in general to a generic Windows AppFabric framework in the future, so various different mechanisms like OutputCache, the non-Page specific ASP.NET cache and possibly even session state eventually can use the same caching engine for storage of persisted data both in memory and out of process scenarios.

For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom Cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l.
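
The basic shape of a provider comes down to four overrides. Here's a minimal, naive in-memory sketch (a real provider would honor utcExpiry and most likely use out-of-process storage) just to show the surface area:

using System;
using System.Collections.Concurrent;
using System.Web.Caching;

public class SimpleMemoryOutputCacheProvider : OutputCacheProvider
{
    // naive store for illustration - expiration handling is omitted
    private static readonly ConcurrentDictionary<string, object> _cache =
        new ConcurrentDictionary<string, object>();

    public override object Get(string key)
    {
        object entry;
        _cache.TryGetValue(key, out entry);
        return entry;
    }

    public override object Add(string key, object entry, DateTime utcExpiry)
    {
        // Add returns the existing entry if one is already cached for the key
        return _cache.GetOrAdd(key, entry);
    }

    public override void Set(string key, object entry, DateTime utcExpiry)
    {
        _cache[key] = entry;
    }

    public override void Remove(string key)
    {
        object entry;
        _cache.TryRemove(key, out entry);
    }
}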

Response.RedirectPermanent

ASP.NET 4.0 includes features to issue a permanent redirect via an HTTP 301 Moved Permanently response rather than the standard 302 Found response. In pre-4.0 versions you had to create your permanent redirect manually by setting the Status and StatusCode properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() which provides permanent redirection of route Urls.
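
For comparison, here's a sketch of the manual pre-4.0 approach next to the new call (the target URL is made up):

// pre-4.0: build the 301 by hand
Response.Status = "301 Moved Permanently";
Response.AddHeader("Location", "/NewPage.aspx");
Response.End();

// ASP.NET 4.0
Response.RedirectPermanent("/NewPage.aspx");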

Preloading of Applications

ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle, it can appear very slow to serve data to clients while the application is warming up and loading initial resources. So rather than serving these startup requests slowly, ASP.NET 4.0 lets you force the application to initialize itself first, before it even accepts requests for processing.

This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and works in combination with IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting the serviceAutoStartEnabled attribute on a particular site, along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup.

The configuration settings need to be made in applicationhost.config:

<sites>
  <site name="WebApplication2" id="1">
    <application path="/"
                 serviceAutoStartEnabled="true"
                 serviceAutoStartProvider="PreWarmup" />
  </site>
</sites>

<serviceAutoStartProviders>
  <add name="PreWarmup"
       type="PreWarmupProvider,MyAssembly" />
</serviceAutoStartProviders>

Hooking up a warm up provider is optional, so you can omit the provider definition and reference. If you do define it, here's what it looks like:

public class PreWarmupProvider :
    System.Web.Hosting.IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        // initialization for app
    }
}

While this code is running, ASP.NET/IIS will hold requests from hitting the pipeline, so the application will not start taking requests until this code completes. The idea is that you can perform any pre-loading of resources and cache values so that even the first request performs at an optimal level without lag.

Runtime Performance Improvements

According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use less resources. These features come without any change requirements in applications and are virtually transparent, except that you get the benefits by updating to ASP.NET 4.0.

Summary

The core feature set changes are minimal which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has been proven as a solid platform and I'm actually rather happy to see that most of the effort in this release went into stability, performance and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful.

A lot of people are still running pure .NET 2.0 applications these days and have stayed off of .NET 3.5 for some time now. I think that version 4.0 with its full .NET runtime rev and assembly and configuration consolidation will make an attractive platform for developers to update to.

If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET  

What’s New in ASP.NET 4.0 Part Two: WebForms and Visual Studio Enhancements


In the last installment I talked about the core changes in the ASP.NET runtime that I’ve been taking advantage of. In this column, I’ll cover the changes to the Web Forms engine and some of the cool improvements in Visual Studio that make Web and general development easier.

WebForms

The WebForms engine is the area that has received the most significant changes in ASP.NET 4.0. Probably the most widely anticipated features are related to managing page ClientIDs and ViewState on WebForm pages.

Take Control of Your ClientIDs

Unique ClientID generation in ASP.NET has been one of the most complained about “features” in ASP.NET. Although there’s a very good technical reason for these unique generated ids - they guarantee unique ids for each and every server control on a page - these unique and generated ids often get in the way of client-side JavaScript development and CSS styling as it’s often inconvenient and fragile to work with the long, generated ClientIDs.

In ASP.NET 4.0 you can now specify an explicit client id mode on each control or each naming container parent control to control how client ids are generated. By default, ASP.NET generates mangled client ids for any control contained in a naming container (like a Master Page, or a User Control for example). The key to ClientID management in ASP.NET 4.0 are the new ClientIDMode and ClientIDRowSuffix properties. ClientIDMode supports four different ClientID generation settings shown below. For the following examples, imagine that you have a Textbox control named txtName inside of a master page control container on a WebForms page.

<%@Page Language="C#" 
    MasterPageFile="~/Site.Master"
    CodeBehind="WebForm2.aspx.cs"
    Inherits="WebApplication1.WebForm2" 
%>
<asp:Content ID="content" 
ContentPlaceHolderID="content" 
             runat="server" 
             ClientIDMode="Static" >  
    <asp:TextBox runat="server" ID="txtName" />
</asp:Content>

The four available ClientIDMode values are:

AutoID

This is the existing behavior in ASP.NET 1.x-3.x where full naming container munging takes place.

<input name="ctl00$content$txtName" type="text"
       id="ctl00_content_txtName" />

This should be familiar to any ASP.NET developer and results in fairly unpredictable client ids that can easily change if the containership hierarchy changes. For example, removing the master page changes the name in this case, so if you were to move a block of script code that works against the control to a non-Master page, the script code immediately breaks.

Static

This option is the most deterministic setting that forces the control’s ClientID to use its ID value directly. No naming container naming at all is applied and you end up with clean client ids:

<input name="ctl00$content$txtName" 
       type="text" id="txtName" />

Note that the name property, which is used for postback variables to the server, is still munged, but the ClientID property is displayed simply as the ID value that you have assigned to the control.

This option is what most of us want to use, but you have to be clear on that because it can potentially cause conflicts with other controls on the page. If there are several instances of the same naming container (several instances of the same user control for example) there can easily be a client id naming conflict.

Note that if you assign Static to a data-bound control, like a list child control in templates, you do not get unique ids either, so for list controls where you rely on unique id for child controls, you’ll probably want to use Predictable rather than Static. I’ll write more on this a little later when I discuss ClientIDRowSuffix.

Predictable

The previous two values are pretty self-explanatory. Predictable, however, requires some explanation. To me, at least, it's not in the least bit predictable. MSDN defines this value as follows:

This algorithm is used for controls that are in data-bound controls. The ClientID value is generated by concatenating the ClientID value of the parent naming container with the ID value of the control. If the control is a data-bound control that generates multiple rows, the value of the data field specified in the ClientIDRowSuffix property is added at the end. For the GridView control, multiple data fields can be specified. If the ClientIDRowSuffix property is blank, a sequential number is added at the end instead of a data-field value. Each segment is separated by an underscore character (_).

The key that makes this value a bit confusing is that it relies on the parent NamingContainer’s ClientID to build its own ClientID value. This effectively means that the value is not predictable at all but rather very tightly coupled to the parent naming container’s ClientIDMode setting.

For my simple textbox example, if the ClientIDMode property of the parent naming container (Page in this case) is set to “Predictable” you’ll get this:

<input name="ctl00$content$txtName" type="text" 
       id="content_txtName" />

which gives an id that is based on walking up to the currently active naming container (the MasterPage content container) and starting the id formatting from there downward. Think of this as a semi-unique name that’s guaranteed unique only for the naming container.

If, on the other hand, the Page is set to “AutoID” you get the following with Predictable on txtName:

<input name="ctl00$content$txtName" type="text" 
       id="ctl00_content_txtName" />

The latter is effectively the same as if you specified AutoID because it inherits the AutoID naming from the Page and Content Master Page control of the page. But again - predictable behavior always depends on the parent naming container and how it generates its id, so the id may not always be exactly the same as the AutoID generated value because somewhere in the NamingContainer chain the ClientIDMode setting may be set to a different value. For example, if you had another naming container in the middle that was set to Static you’d end up effectively with an id that starts with that NamingContainer’s id rather than the whole ctl00_content munging.

The most common use for Predictable is likely to be for data-bound controls, which results in each data bound item getting a unique ClientID. Unfortunately, even here the behavior can be very unpredictable depending on which data-bound control you use - I found significant differences in how template controls in a GridView behave from those that are used in a ListView control. For example, GridView creates clean child ClientIDs, while ListView still has a naming container in the ClientID, presumably because of the template container on which you can’t set ClientIDMode.

Predictable is useful, but only if all naming containers down the chain use this setting. Otherwise you’re right back to the munged ids that are pretty unpredictable. Another property, ClientIDRowSuffix, can be used in combination with ClientIDMode of Predictable to force a suffix onto list client controls. For example:

<asp:GridView runat="server" ID="gvItems" 
            AutoGenerateColumns="false"
            ClientIDMode="Static" 
            ClientIDRowSuffix="Id">
    <Columns>
    <asp:TemplateField>
        <ItemTemplate>
            <asp:Label runat="server" id="txtName"
                       Text='<%# Eval("Name") %>'
                  ClientIDMode="Predictable"/>
        </ItemTemplate>
    </asp:TemplateField>
    <asp:TemplateField>
        <ItemTemplate>
        <asp:Label runat="server" id="txtId" 
                   Text='<%# Eval("Id") %>'
                   ClientIDMode="Predictable" />
        </ItemTemplate>
    </asp:TemplateField>
    </Columns>
</asp:GridView>

generates client Ids inside of a column in the master page described earlier:

<td>
    <span id="txtName_0">Rick</span>
</td>

where the value after the underscore is the ClientIDRowSuffix field - in this case “Id” of the item data bound to the control. Note that all of the child controls require ClientIDMode=”Predictable” in order for the ClientIDRowSuffix to be applied, and the parent GridView control needs to be set to Static either explicitly or via naming container inheritance to get these simple names. It’s a bummer that ClientIDRowSuffix doesn’t work with Static to produce this automatically. Another real problem is that other controls process the ClientIDMode differently. For example, a ListView control processes the Predictable ClientIDMode differently and produces the following with a Static ListView and Predictable child controls:

<span id="ctrl0_txtName_0">Rick</span>

I couldn’t even figure out a way using ClientIDMode to get a simple ID that also uses a suffix, short of falling back to manually generated ids using <%= %> expressions instead. Given the inconsistencies inside of list controls, using <%= %> ids for the ListView might not be a bad idea anyway.

Inherit

The final setting is Inherit, which is the default for all controls except Page. This means that controls by default inherit the parent naming container’s ClientIDMode setting.

For more detailed information on ClientID behavior and different scenarios you can check out a blog post of mine on this subject: http://www.west-wind.com/weblog/posts/54760.aspx.

ClientID Enhancements Summary

The ClientIDMode property is a welcome addition to ASP.NET 4.0. To me this is probably the most useful WebForms feature as it allows me to generate clean IDs simply by setting ClientIDMode="Static" on either the page or inside of web.config (in the Pages section), which applies the setting down to the entire page – my 95% scenario. For the few cases when it matters - for list controls and inside of multi-use user controls or custom server controls - I can use Predictable or even AutoID to force controls to unique names. For application-level page development, this is easy to accomplish and provides maximum usability for working with client script code against page controls.
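
For reference, that web.config setting is a one-liner – a minimal sketch (the clientIDMode attribute on the pages element applies to every page that doesn’t override it):

<system.web>
  <pages clientIDMode="Static" />
</system.web>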

ViewStateMode

Another frequent target of criticism for WebForms is ViewState. ViewState is used internally by ASP.NET to persist page-level changes to non-postback properties on controls as pages post back to the server. It’s a useful mechanism that works great for the overall mechanics of WebForms, but it can also cause all sorts of overhead for page operation as ViewState can very quickly get out of control and consume huge amounts of bandwidth in your page content. ViewState can also wreak havoc with client-side scripting applications that modify control properties that are tracked by ViewState, which can produce very unpredictable results on a Postback after client-side updates.

Over the years in my own development, I’ve often turned off ViewState on pages to reduce overhead. Yes, you lose some functionality, but you can easily implement most of the common functionality with non-ViewState workarounds. Relying less on heavy ViewState controls and sticking with simpler controls or raw HTML constructs avoids most ViewState problems in the first place.

In ASP.NET 3.x and prior, it wasn’t easy to control ViewState - you could turn it on or off and if you turned it off at the page or web.config level, you couldn’t turn it back on for specific controls. In short, it was an all or nothing approach. With ASP.NET 4.0, the new ViewStateMode property gives you more control. It allows you to disable ViewState globally either on the page or web.config level and then turn it back on for specific controls that might need it.

ViewStateMode only works when EnableViewState="true" on the page or web.config level (which is the default). You can then use ViewStateMode of Disabled, Enabled or Inherit to control the ViewState settings on the page. If you’re shooting for minimal ViewState usage, the ideal situation is to set ViewStateMode to disabled on the Page or web.config level and only turn it back on particular controls:

<%@Page Language="C#" 
    CodeBehind="WebForm2.aspx.cs"
    Inherits="Westwind.WebStore.WebForm2"   
    ClientIDMode="Static"           
    ViewStateMode="Disabled"
    EnableViewState="true" 
%>
<!-- this control has viewstate  -->
<asp:TextBox runat="server" ID="txtName" 
ViewStateMode="Enabled" />      

<!-- this control has no viewstate - it inherits 
from parent container -->
<asp:TextBox runat="server" ID="txtAddress" />

Note that the EnableViewState="true" at the Page level isn’t required since it’s the default, but it’s important that the value is true. ViewStateMode has no effect if EnableViewState="false" at the page level. The main benefit of ViewStateMode is that it allows you to more easily turn off ViewState for most of the page and enable only a few key controls that might need it.

For me personally, this is a perfect combination as most of my WebForm apps can get away without any ViewState at all. But some controls - especially third party controls - often don’t work well without ViewState enabled, and now it’s much easier to selectively enable controls rather than the old way, which required you to pretty much turn off ViewState for all controls that you didn’t want ViewState on.

Inline HTML Encoding

HTML encoding is an important feature to prevent cross-site scripting attacks in data entered by users on your site. In order to make it easier to create HTML encoded content, ASP.NET 4.0 introduces a new Expression syntax using <%: %> to encode string values. The encoding expression syntax looks like this:

<%: "<script type='text/javascript'>" +
    "alert('Really?');</script>" %>

which produces properly encoded HTML:

&lt;script type=&#39;text/javascript&#39;
&gt;alert(&#39;Really?&#39;);&lt;/script&gt;

Effectively this is a shortcut to:

<%= HttpUtility.HtmlEncode(
"<script type='text/javascript'>" +
"alert('Really?');</script>") %>

Of course the <%: %> syntax can also evaluate expressions just like <%= %> so the more common scenario applies this expression syntax against data your application is displaying. Here’s an example displaying some data model values:

<%: Model.Address.Street %>

This snippet shows displaying data from your application’s data store or more importantly, from data entered by users. Anything that makes it easier and less verbose to HtmlEncode text is a welcome addition to avoid potential cross-site scripting attacks.

Although I listed Inline HTML Encoding here under WebForms, anything that uses the WebForms rendering engine including ASP.NET MVC, benefits from this feature.

ScriptManager Enhancements

The ASP.NET ScriptManager control in the past has introduced some nice ways to take programmatic and markup control over script loading, but there were a number of shortcomings in this control. The ASP.NET 4.0 ScriptManager has a number of improvements that make it easier to control script loading and addresses a few of the shortcomings that have often kept me from using the control in favor of manual script loading.

The first is the AjaxFrameworkMode property which finally lets you suppress loading the ASP.NET AJAX runtime. Disabled doesn’t load any ASP.NET AJAX libraries, but there’s also an Explicit mode that lets you pick and choose the library pieces individually and reduce the footprint of ASP.NET AJAX script included if you are using the library.

There’s also a new EnableCdn property that redirects any script whose WebResource attribute has the new CdnPath property set to a CDN supplied URL. If the script has this attribute property set to a non-null/empty value and EnableCdn is enabled on the ScriptManager, that script will be served from the specified CdnPath.

[assembly: WebResource(
   "Westwind.Web.Resources.ww.jquery.js",
   "application/x-javascript",
   CdnPath = 
"http://mysite.com/scripts/ww.jquery.min.js")]

Cool, but a little too static for my taste since this value can’t be changed at runtime to point at a debug script as needed, for example.

Assembly names for loading scripts from resources can now be simple names rather than fully qualified assembly names, which make it less verbose to reference scripts from assemblies loaded from your bin folder or the assembly reference area in web.config:

<asp:ScriptManager runat="server" id="Id" 
        EnableCdn="true"
        AjaxFrameworkMode="disabled">
    <Scripts>
        <asp:ScriptReference 
        Name="Westwind.Web.Resources.ww.jquery.js"
        Assembly="Westwind.Web" />
    </Scripts>       
</asp:ScriptManager>

The ScriptManager in 4.0 also supports script combining via the CompositeScript tag, which allows you to very easily combine scripts into a single script resource served via ASP.NET. Even nicer: You can specify the URL that the combined script is served with. Check out the following script manager markup that combines several static file scripts and a script resource into a single ASP.NET served resource from a static URL (allscripts.js):

<asp:ScriptManager runat="server" id="Id" 
        EnableCdn="true"
        AjaxFrameworkMode="disabled">
    <CompositeScript 
        Path="~/scripts/allscripts.js">
        <Scripts>
            <asp:ScriptReference 
                  Path="~/scripts/jquery.js" />
            <asp:ScriptReference 
                  Path="~/scripts/ww.jquery.js" />
            <asp:ScriptReference 
          Name="Westwind.Web.Resources.editors.js"
                Assembly="Westwind.Web" />
        </Scripts>
    </CompositeScript>
</asp:ScriptManager>

When you render this into HTML, you’ll see a single script reference in the page:

<script src="scripts/allscripts.debug.js"
        type="text/javascript"></script>

All you need to do to make this work is ensure that allscripts.js and allscripts.debug.js exist in the scripts folder of your application - they can be empty but the file has to be there. This is pretty cool, but you want to be real careful that you use unique URLs for each combination of scripts you combine or else browser and server caching will easily screw you up royally.

The script manager also allows you to override native ASP.NET AJAX scripts now as any script references defined in the Scripts section of the ScriptManager trump internal references. So if you want custom behavior or you want to fix a possible bug in the core libraries that normally are loaded from resources, you can now do this simply by referencing the script resource name in the Name property and pointing at System.Web for the assembly. Not a common scenario, but when you need it, it can come in real handy.

Still, there are a number of shortcomings in this control. For one, the ScriptManager and ClientScript APIs still have no common entry point so control developers are still faced with having to check and support both APIs to load scripts so that controls can work on pages that do or don’t have a ScriptManager on the page. The CdnUrl is static and compiled in, which is very restrictive. And finally, there’s still no control over where scripts get loaded on the page - ScriptManager still injects scripts into the middle of the HTML markup rather than in the header or optionally the footer. This, in turn, means there is little control over script loading order, which can be problematic for control developers.

MetaDescription, MetaKeywords Page Properties

There are also a number of additional Page properties that correspond to some of the other features discussed in this column: ClientIDMode, ClientTarget and ViewStateMode. Another minor but useful feature is that you can now directly access the MetaDescription and MetaKeywords properties on the Page object to set the corresponding meta tags programmatically. Updating these values programmatically previously required either <%= %> expressions in the page markup or dynamic insertion of literal controls into the page. You can now just set these properties programmatically on the Page object in any Control derived class on the page or the Page itself:

Page.MetaKeywords = "ASP.NET,4.0,New Features";
Page.MetaDescription = "This article discusses the " +
                       "new features in ASP.NET 4.0";

Note that there’s no corresponding ASP.NET tag for the HTML Meta element, so the only way to specify these values in markup and access them is via the @Page tag:

<%@Page Language="C#" 
    CodeBehind="WebForm2.aspx.cs"
    Inherits="Westwind.WebStore.WebForm2" 
    ClientIDMode="Static"           
    MetaDescription="Article that discusses what's
                     new in ASP.NET 4.0"
    MetaKeywords="ASP.NET,4.0,New Features"
%>

Nothing earth shattering but quite convenient.

Visual Studio 2010 Enhancements for Web Development

For Web development there are also a host of editor enhancements in Visual Studio 2010. Some of these are not Web specific but they are useful for Web developers in general.

Text Editors

Throughout Visual Studio 2010, the text editors have all been updated to a new core engine based on WPF which provides some interesting new features for various code editors including the nice ability to zoom in and out with Ctrl-MouseWheel to quickly change the size of text. There are many more API options to control the editor and although Visual Studio 2010 doesn’t yet use many of these features, we can look forward to enhancements in add-ins and future editor updates from the various language teams that take advantage of the visual richness that WPF provides to editing.

On the negative side, I’ve noticed that occasionally the code editor and especially the HTML and JavaScript editors will lose the ability to use various navigation keys like arrows, back and delete keys, which requires closing and reopening the documents at times. This issue seems to be well documented so I suspect this will be addressed soon with a hotfix or within the first service pack.

Overall though, the code editors work very well, especially given that they were re-written completely using WPF, which was one of my big worries when I first heard about the complete redesign of the editors.

Multi-Targeting

Visual Studio now targets all versions of the .NET framework from 2.0 forward. You can use Visual Studio 2010 to work on your ASP.NET 2.0, 3.0 and 3.5 applications, which is a nice way to get your feet wet with the new development environment without having to make changes to existing applications. It’s nice to have one tool to work in for all the different versions.

Multi-Monitor Support

One cool feature of Visual Studio 2010 is the ability to drag windows out of the Visual Studio environment and out onto the desktop including onto another monitor easily. Since Web development often involves working with a host of designers at the same time - visual designer, HTML markup window, code behind and JavaScript editor - it’s really nice to be able to have a little more screen real estate to work on each of these editors. Microsoft made a welcome change in the environment.

IntelliSense Snippets for HTML and JavaScript Editors

The HTML and JavaScript editors now finally support IntelliSense snippets to create macro-based template expansions like those that have been in the core C# and Visual Basic code editors since Visual Studio 2005. Snippets allow you to create short XML-based template definitions that can act as static macros or real templates that can have replaceable values embedded into the expanded text. The XML syntax for these snippets is straightforward and it’s pretty easy to create custom snippets manually.

You can easily create snippets using XML and store them in your custom snippets folder (C:\Users\rstrahl\Documents\Visual Studio 2010\Code Snippets\Visual Web Developer\My HTML Snippets and My JScript Snippets), but it helps to use one of the third-party tools that exist to simplify the process for you. I use SnippetEditor, by Bill McCarthy, which makes short work of creating snippets interactively (http://snippeteditor.codeplex.com/). Note: You may have to manually add the Visual Studio 2010 User specific Snippet folders to this tool to see existing ones you’ve created.

Code snippets are some of the biggest time savers and HTML editing more than anything deals with lots of repetitive tasks that lend themselves to text expansion. Visual Studio 2010 includes a slew of built-in snippets (that you can also customize!) and you can create your own very easily. If you haven’t done so already, I encourage you to spend a little time examining your coding patterns and find the repetitive code that you write and convert it into snippets. I’ve been using CodeRush for this for years, but now you can do much of the basic expansion natively for HTML and JavaScript snippets.

jQuery Integration Is Now Native

jQuery is a popular JavaScript library and Microsoft has recently stated that it will become the primary client-side scripting technology to drive higher level script functionality in various ASP.NET Web projects that Microsoft provides. In Visual Studio 2010, the default full project template includes jQuery as part of a new project, including the support files that provide IntelliSense (-vsdoc files). IntelliSense support for jQuery is now also baked into Visual Studio 2010, so unlike Visual Studio 2008, which required a separate download, no further installs are required for a rich IntelliSense experience with jQuery.

Summary

ASP.NET 4.0 brings many useful improvements to the platform, but thankfully most of the changes are incremental changes that don’t compromise backwards compatibility and they allow developers to ease into the new features one feature at a time. None of the changes in ASP.NET 4.0 or Visual Studio 2010 are monumental or game changers. The bigger features are language and .NET Framework changes that are also optional. This ASP.NET and tools release feels more like fine tuning and getting some long-standing kinks worked out of the platform. It shows that the ASP.NET team is dedicated to paying attention to community feedback and responding with changes to the platform and development environment based on this feedback.

If you haven’t gotten your feet wet with ASP.NET 4.0 and Visual Studio 2010, there’s no reason not to give it a shot now - the ASP.NET 4.0 platform is solid and Visual Studio 2010 works very well for a brand new release. Check it out.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET  

Dynamic Type to do away with Reflection


The dynamic type in C# 4.0 is a welcome addition to the language. One thing I’ve been doing a lot with it is to remove explicit Reflection code that’s often necessary when you ‘dynamically’ need to walk an object hierarchy. In the past I’ve had a number of ReflectionUtils that used string based expressions to walk an object hierarchy. With the introduction of dynamic, much of the ReflectionUtils code can be removed for cleaner code that runs considerably faster to boot.

The old Way - Reflection

Here’s a really contrived example, but assume for a second, you’d want to dynamically retrieve Page.Request.Url.AbsolutePath based on a Page instance in an ASP.NET Web Page request. The strongly typed version looks like this:

string path = Page.Request.Url.AbsolutePath;

Now assume for a second that Page wasn’t available as a strongly typed instance and all you had was an object reference to start with and you couldn’t cast it (right I said this was contrived :-))

If you’re using raw Reflection code to retrieve this you’d end up writing 3 sets of Reflection calls using GetValue(). Here’s some internal code I use to retrieve Property values as part of ReflectionUtils:

/// <summary>
/// Retrieve a property value from an object dynamically. This is a simple version
/// that uses Reflection calls directly. It doesn't support indexers.
/// </summary>
/// <param name="instance">Object to make the call on</param>
/// <param name="property">Property to retrieve</param>
/// <returns>Object - cast to proper type</returns>
public static object GetProperty(object instance, string property)
{
    return instance.GetType().GetProperty(property, ReflectionUtils.MemberAccess).GetValue(instance, null);
}

If you want more control over properties and support both fields and properties as well as array indexers a little more work is required:

/// <summary>
/// Parses Properties and Fields including Array and Collection references.
/// Used internally for the 'Ex' Reflection methods.
/// </summary>
/// <param name="Parent"></param>
/// <param name="Property"></param>
/// <returns></returns>
private static object GetPropertyInternal(object Parent, string Property)
{
    if (Property == "this" || Property == "me")
        return Parent;

    object result = null;
    string pureProperty = Property;
    string indexes = null;
    bool isArrayOrCollection = false;

    // Deal with Array Property
    if (Property.IndexOf("[") > -1)
    {
        pureProperty = Property.Substring(0, Property.IndexOf("["));
        indexes = Property.Substring(Property.IndexOf("["));
        isArrayOrCollection = true;
    }

    // Get the member
    MemberInfo member = Parent.GetType().GetMember(pureProperty, ReflectionUtils.MemberAccess)[0];
    if (member.MemberType == MemberTypes.Property)
        result = ((PropertyInfo)member).GetValue(Parent, null);
    else
        result = ((FieldInfo)member).GetValue(Parent);

    if (isArrayOrCollection)
    {
        indexes = indexes.Replace("[", string.Empty).Replace("]", string.Empty);

        if (result is Array)
        {
            int Index = -1;
            int.TryParse(indexes, out Index);
            result = CallMethod(result, "GetValue", Index);
        }
        else if (result is ICollection)
        {
            if (indexes.StartsWith("\""))
            {
                // String Index
                indexes = indexes.Trim('\"');
                result = CallMethod(result, "get_Item", indexes);
            }
            else
            {
                // assume numeric index
                int index = -1;
                int.TryParse(indexes, out index);
                result = CallMethod(result, "get_Item", index);
            }
        }
    }
    return result;
}
/// <summary>
/// Returns a property or field value using a base object and sub members including . syntax.
/// For example, you can access: oCustomer.oData.Company with (this,"oCustomer.oData.Company")
/// This method also supports indexers in the Property value such as:
/// Customer.DataSet.Tables["Customers"].Rows[0]
/// </summary>
/// <param name="Parent">Parent object to 'start' parsing from. Typically this will be the Page.</param>
/// <param name="Property">The property to retrieve. Example: 'Customer.Entity.Company'</param>
/// <returns></returns>
public static object GetPropertyEx(object Parent, string Property)
{
    Type type = Parent.GetType();

    int at = Property.IndexOf(".");
    if (at < 0)
    {
        // Complex parse of the property    
        return GetPropertyInternal(Parent, Property);
    }

    // Walk the . syntax - split into current object (Main) and further parsed objects (Subs)
    string main = Property.Substring(0, at);
    string subs = Property.Substring(at + 1);

    // Retrieve the next . section of the property
    object sub = GetPropertyInternal(Parent, main);

    // Now go parse the left over sections
    return GetPropertyEx(sub, subs);
}

As you can see there’s a fair bit of code involved in retrieving a property or field value reliably, especially if you want to support array indexer syntax. This method is then used by a variety of routines to retrieve individual properties, including one called GetPropertyEx() which can walk the dot syntax hierarchy easily.

Anyway, with ReflectionUtils I can retrieve Page.Request.Url.AbsolutePath using code like this:

string url = ReflectionUtils.GetPropertyEx(Page, "Request.Url.AbsolutePath") as string; 

This works fine, but is bulky to write and of course requires that I use my custom routines. It’s also quite slow as the code in GetPropertyEx does all sorts of string parsing to figure out which members to walk in the hierarchy.

Enter dynamic – way easier!

.NET 4.0’s dynamic type makes the above really easy. The following code is all that it takes:

object objPage = Page;     // force to object for contrivance :)
dynamic page = objPage;    // convert to dynamic from untyped object
string scriptUrl = page.Request.Url.AbsolutePath;

The dynamic type assignment in the first two lines turns the strongly typed Page object into a dynamic. The first assignment is just part of the contrived example to force the strongly typed Page reference into an untyped value to demonstrate the dynamic member access. The next line then just creates the dynamic type from the Page reference which allows you to access any public properties and methods easily. It also lets you access any child properties as dynamic types so when you look at Intellisense you’ll see something like this when typing Request.:

[Figure: IntelliSense showing the members of Request typed as dynamic]

In other words any dynamic value access on an object returns another dynamic object which is what allows the walking of the hierarchy chain.

Note also that the result value doesn’t have to be explicitly cast as string in the code above – the compiler is perfectly happy without the cast in this case, inferring the target type based on the type being assigned to. The dynamic conversion automatically handles the cast when making the final assignment, which is nice, making for natural syntax that looks *exactly* like the fully typed syntax, but is completely dynamic.

Note that you can also use indexers in the same natural syntax so the following also works on the dynamic page instance:

string scriptUrl = page.Request.ServerVariables["SCRIPT_NAME"];

The dynamic type is going to make a lot of Reflection code go away as it’s simply so much nicer to be able to use natural syntax to write out code that previously required nasty Reflection syntax.

Another interesting thing about the dynamic type is that it actually works considerably faster than Reflection. Check out the following methods that check performance:

void Reflection()
{
    Stopwatch stop = new Stopwatch();
    stop.Start();

    for (int i = 0; i < reps; i++)
    {
        string url = Page.GetType()
                         .GetProperty("Title", ReflectionUtils.MemberAccess)
                         .GetValue(Page, null) as string;
    }

    stop.Stop();

    Response.Write("Reflection: " + stop.ElapsedMilliseconds.ToString());
}

void Dynamic()
{
    Stopwatch stop = new Stopwatch();
    stop.Start();

    dynamic page = Page;
    for (int i = 0; i < reps; i++)
    {
        string url = page.Title;
    }

    stop.Stop();

    Response.Write("Dynamic: " + stop.ElapsedMilliseconds.ToString());
}

The dynamic code runs in 4-5 milliseconds while the Reflection code runs around 200+ milliseconds! There’s a bit of overhead in the first dynamic object call but subsequent calls are blazing fast and performance is actually much better than manual Reflection.

Dynamic is definitely a huge win-win situation when you need dynamic access to objects at runtime.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in .NET  CSharp  

Non-Dom Element Event Binding with jQuery


Yesterday I had a short discussion with Dave Reed on Twitter regarding setting up fake ‘events’ on objects that are hookable. jQuery makes it real easy to bind events on DOM elements and, with a little bit of extra work (that I didn’t know about), you can also set up ‘event’ bindings on non-DOM objects.

Assume for a second that you have a simple JavaScript object like this:

var item = { sku: "wwhelp" , 
             foo: function() { alert('original foo function'); }              
};

and you want to be notified when the foo function is called.

You can use jQuery to bind the handler like this:

$(item).bind("foo", function () { alert('foo Hook called'); } );

Binding alone won’t actually cause the handler to be triggered so when you call:

item.foo();

you only get the ‘original’ message. In order to fire both the original handler and the bound event hook you have to use the .trigger() function:

$(item).trigger("foo");

Now if you do the following complete sequence:

var item = { sku: "wwhelp" , 
             foo: function() { alert('original foo function'); }              
};
$(item).bind("foo", function () { alert('foo hook called'); } );
$(item).trigger("foo");


You’ll see the ‘hook’ message first followed by the ‘original’ message fired in succession. In other words, using this mechanism you can hook standard object functions and chain events to them in a way similar to the way you can do with DOM elements. The main difference is that the ‘event’ has to be explicitly triggered in order for this to happen rather than just calling the method directly.

.trigger() relies on some internal logic that checks for event bindings on the object (attached via an expando property) which .trigger() searches for in its bound event list. Once the ‘event’ is found it’s called prior to execution of the original function.

This is pretty useful as it allows you to create standard JavaScript objects that can act as event handlers and are effectively hookable without having to explicitly override event definitions with JavaScript function handlers. You get all the benefits of jQuery’s event methods including the ability to hook up multiple events to the same handler function and the ability to uniquely identify each specific event instance with post fix string names (ie. .bind("MyEvent.MyName") and .unbind("MyEvent.MyName") to bind MyEvent).

Watch out for an .unbind() Bug

Note that there appears to be a bug with .unbind() in jQuery that doesn’t reliably unbind an event on a non-DOM object and results in an elem.removeEventListener is not a function error. The following code demonstrates:

var item = { sku: "wwhelp",
    foo: function () { alert('original foo function'); }
};
$(item).bind("foo.first", function () { alert('foo hook called'); });
$(item).bind("foo.second", function () { alert('foo hook2 called'); });

$(item).trigger("foo");


setTimeout(function () {
    $(item).unbind("foo");
    // $(item).unbind("foo.first");
    //  $(item).unbind("foo.second");

    $(item).trigger("foo");
}, 3000);

The setTimeout call delays the unbinding and is supposed to remove the event binding on the foo function. It fails both with the “foo” only value (whether the handlers were assigned as “foo” or as “foo.first”/“foo.second”) as well as when removing both of the postfixed event handlers explicitly. Oddly, the following, which removes only one of the two handlers, works:

setTimeout(function () {
    //$(item).unbind("foo");
    $(item).unbind("foo.first");
    //  $(item).unbind("foo.second");

    $(item).trigger("foo");
}, 3000);

This actually works, which is weird as the code in unbind tries to unbind using a DOM method that doesn’t exist. <shrug>

A partial workaround for unbinding all ‘foo’ events is the following:

setTimeout(function () {
    $.event.special.foo = { teardown: function () { alert('teardown'); return true; } };
    $(item).unbind("foo");

    $(item).trigger("foo");
}, 3000);

which is a bit cryptic to say the least but it seems to work more reliably.

I can’t take credit for any of this – thanks to Dave Reed and Damien Edwards who pointed out some of these behaviors. I didn’t find any good descriptions of the process so thought it’d be good to write it down here. Hope some of you find this helpful.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in jQuery  


Debugging .NET 2.0 Assembly from unmanaged Code in VS2010?


I’ve run into a serious snag trying to debug a .NET 2.0 assembly that is called from unmanaged code in Visual Studio 2010. I maintain a host of components that use COM interop and custom .NET runtime hosting, and ever since installing Visual Studio 2010 I’ve been utterly blocked by VS 2010’s apparent inability to debug .NET 2.0 assemblies when launching through unmanaged code.

Here’s what I’m actually doing (simplified scenario to demonstrate):

  • I have a .NET 2.0 assembly that is compiled for COM Interop
  • Compile project with .NET 2.0 target and register for COM Interop
  • Set a breakpoint in the .NET component in one of the class methods
  • Set debugging to point at unmanaged application (Visual FoxPro dev environment in this case)
  • Instantiate the .NET component via COM interop and call method with breakpoint

The result is that the COM call works fine but the debugger never triggers on the breakpoint.

If I now take that same assembly and target .NET 4.0 without any other changes everything works as expected – the breakpoint set in the assembly project triggers just fine. So it appears the debugging failure is specific to .NET 2.0 debugging.

The easy answer to this problem seems to be “Just switch to .NET 4.0” but unfortunately the application and the way the runtime is actually hosted has a few complications. Specifically, the runtime hosting uses the .NET 2.0 hosting APIs, and apparently the only reliable way to host the .NET 4.0 runtime is to use the new hosting APIs that are provided only with .NET 4.0 (which all by itself is lame, lame, lame as the promise of backwards compatibility is once again broken by .NET). So for the moment I need to continue using the .NET 2.0 hosting APIs due to the application requirements and runtime availability.

I’ve been searching high and low and experimenting back and forth with the hosting options, posted a few questions on the MSDN forums but haven’t gotten any hints on what might be causing the apparent failure of Visual Studio 2010 to debug my .NET 2.0 assembly properly when called from un-managed code. Incidentally debugging .NET 2.0 targeted assemblies works fine when running with a managed startup application – it seems the issue is specific to the unmanaged code starting up.

Curious if anybody has any ideas on what could be causing the lack of debugging in this scenario?

Incidentally, the VS2010 install also killed debugging for me in VS 2008. Running the same project and trying to launch the debugger from the .NET 2.0 assembly project results in Invalid Debugger Target. Really? Oh yeah, the promise of side by side installations – for the .NET runtime AND Visual Studio – really is working out great. Not!

© Rick Strahl, West Wind Technologies, 2005-2010

Microsoft Introduces WebMatrix


originally published in CoDe Magazine Editorial

Microsoft recently released the first CTP of a new development environment called WebMatrix, which along with some of its supporting technologies are squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers.

Let’s face it: ASP.NET development isn’t exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it’s not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently.

WebMatrix appears to be Microsoft’s attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server.

WebMatrix is made up of several components and technologies:

IIS Developer Express

IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that’s been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server such as the inability to serve custom ISAPI extensions (i.e., things like PHP or ASP classic for example), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set providing much better compatibility between development and live deployment scenarios.

SQL Server Compact 4.0

Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new – it’s been around for a few years, but it’s been severely hobbled in the past by terrible tool support and the inability to support more than a single connection, in Microsoft’s attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don’t have to install a special system configuration to run SQL Compact as it is a drop-in database engine: Copy the small assembly into your BIN folder (or reference it from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests.
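
To give you an idea of the drop-in nature, here's a minimal sketch of querying a SQL Compact database file with plain ADO.NET code (the database file and table names are made up for illustration):

using System;
using System.Data.SqlServerCe;   // from the SQL Server Compact 4.0 assembly in bin

class SqlCompactDemo
{
    static void Main()
    {
        // the connection string points at a local file-based database
        var connStr = @"Data Source=|DataDirectory|\FirstApp.sdf";

        using (var conn = new SqlCeConnection(connStr))
        using (var cmd = new SqlCeCommand("select FirstName, LastName from Customers", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0} {1}", reader["FirstName"], reader["LastName"]);
            }
        }
    }
}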

Additionally WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server.

This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill. Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

ASP.NET Razor View Engine

What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There’s no support for a “page model” or any of the other Web Forms features of the full-page framework, but just a lightweight scripting engine that works with plain markup plus embedded expressions and code.

The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions.

Here’s a very simple example of what Razor markup looks like along with some comment annotations:

<!DOCTYPE html>
<html>
    <head>
        <title></title>
    </head>
    <body>
    <h1>Razor Test</h1>
    
    <!-- simple expressions -->
    @DateTime.Now

    <hr />

    <!-- method expressions -->
    @DateTime.Now.ToString("T")
    
    <!-- code blocks -->
    @{
        List<string> names = new List<string>();
        names.Add("Rick");
        names.Add("Markus");
        names.Add("Claudio");
        names.Add("Kevin");
    }
    
    <!-- structured block statements -->
    <ul>
    @foreach(string name in names){
            <li>@name</li>
    }
    </ul>
      
   <!-- Conditional code -->    
   @if(true) {                
       <!-- Literal Text embedding in code -->
       <text>
        true
       </text>;
   }
   else
   {
       <!-- Literal Text embedding in code -->
      <text>
      false
      </text>;
   }

   </body>
</html>

Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a “page” becomes a code file with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn’t look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine minus the overhead of the large Page object model. However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC’s view implementation as well as many of the helper classes found in MVC.

It’s not hard to guess the motivation for this sort of view engine: For beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content. The syntax is easier to read and grok and much shorter to type than ASP.NET alligator tags (<% %>) and also easier to understand aesthetically what’s happening in the markup code.

Razor also is a better fit for Microsoft’s vision of ASP.NET MVC: It’s a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn’t carry all the features and object model of Web Forms with it and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well – it makes it much easier to create script or meta driven output generators for many types of applications from code/screen generators, to simple form letters to data merging applications with user customizability. For me personally this is a very useful side effect and who knows, maybe Microsoft will actually standardize their scripting engines (die T4 die!) on this engine.

Razor also better fits the “view-based” approach where the view is supposed to be mostly a visual representation that doesn’t hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn’t be surprised if Razor will become the new standard view engine for MVC in the future – and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0.

Razor can also be used in existing Web Forms and MVC applications, although that’s not working currently unless you manually configure the script mappings and add the appropriate assemblies. It’s possible to do it, but it’s probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

WebMatrix Development Environment

To tie these three technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or complete standard applications.

The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There’s no debugging support for code at this time. Note that Razor pages don’t require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that’s needed to see changes while testing an application locally. It’s essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled during run time into dynamically created assemblies in the ASP.NET temp folder.

WebMatrix also has PHP Editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with Wizards taking you through installation of tools, dependencies, and configuration of the database as needed. WebMatrix leverages the Web Platform installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes and fill in a few simple configuration options and you end up with a running application that’s ready to be customized. Nice!

You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool also can handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

@{  
    var db = Database.OpenFile("FirstApp.sdf");
    string sql = "select * from customers";
}


<ul>
@foreach(var row in db.Query(sql)){
        <li>@row.FirstName @row.LastName</li>
}
</ul>

Likewise, Execute and ExecuteNonQuery allow execution of more complex queries. Note that these queries use SQL strings rather than LINQ or Entity Framework’s strongly typed LINQ queries. While this may seem like a step back, it’s also in line with the expectations of non-.NET script developers, who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why something like this wasn’t included in .NET from the beginning, leaving developers to build custom implementations of these basic building blocks.
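For example, here’s a minimal sketch of a parameterized command using the same helper – I’m assuming the positional @0/@1 parameter syntax of the WebMatrix.Data helpers here, which may differ in this beta:

@{
    var db = Database.OpenFile("FirstApp.sdf");

    // positional @0/@1 parameters keep user input out of the SQL string
    db.Execute("insert into customers (FirstName, LastName) values (@0, @1)",
               "Rick", "Strahl");
}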

The implementation looks a lot like a DataTable-style data access mechanism but, to be fair, this is a common approach in scripting languages. This type of syntax – simple, static data object methods that perform simple data tasks with one line of code – is common in scripting languages and is a good match for folks coming from PHP/Python etc. It seems like Microsoft has taken good advantage of .NET 4.0’s dynamic typing to provide this sort of interface for row iteration, where each row exposes a property for each field.

FWIW, all the examples demonstrate using local SQL Compact files – I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn’t accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code, or even LINQ or Entity Framework models created outside of WebMatrix in separate assemblies, as required.
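For example, here’s a minimal sketch of plain ADO.NET inside a Razor page against SQL Server – the connection string and table are placeholders:

@{
    var customers = new System.Collections.Generic.List<string>();

    // standard ADO.NET - works in a Razor page like anywhere else in .NET
    using (var conn = new System.Data.SqlClient.SqlConnection(
               "server=.;database=MyApp;integrated security=true"))   // placeholder
    using (var cmd = new System.Data.SqlClient.SqlCommand(
               "select FirstName, LastName from Customers", conn))
    {
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                customers.Add(reader.GetString(0) + " " + reader.GetString(1));
        }
    }
}
<ul>
@foreach (var name in customers) {
    <li>@name</li>
}
</ul>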

The beauty of Razor and the compilation model is that, behind it all, it’s still .NET. Although the syntax may look foreign, it’s still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder.

In the default configuration, Microsoft provides a host of helper functions in the Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), including a set of HTML Helpers. If you’ve used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy – there’s a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

Who needs WebMatrix? Uhm…

Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity in providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started with. There’s also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well.

The question is whether this is too little, too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for simpler platforms like PHP or Python, which cater to the down and dirty developer. Microsoft will be hard pressed to win those folks – and other hardcore PHP developers – back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it’s still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect they’re not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built.

Razor and its helpers try to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers you are still going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer, the learning curve isn’t made any easier by WebMatrix – it’s just been shifted a tad bit further along in your development endeavor, to the point where you run out of canned components supplied either by Microsoft or the community.

The database helpers are interesting, and I’ve actually heard a number of developers who’ve been resisting .NET for a really long time perk up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it’s practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in ‘getting stuff done’ easily than necessarily following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big life-changing issues of development rather than the bread and butter scenarios that many developers care about to get their work accomplished. And that in the end may be WebMatrix’s main raison d'être: to bring some focus back at Microsoft on the fact that simpler and more high-level solutions are needed to appeal to non-high end developers, alongside the tools for high end developers who want to follow the latest and greatest trends.

The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it really can be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components) which would be a great addition to Visual Studio as well, the rest of the development environment feels like crippleware, with required functionality missing – especially debugging and IntelliSense, but also general editor support. It’s not clear whether this is because the product is still in an early alpha release or whether it’s simply designed to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it.

If anything, it’s WebMatrix’s technology pieces (which are really independent of the WebMatrix product) that are interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of the concerns that people have had and complained about with previous SQL Compact implementations.

By far the most interesting and useful technology, though, is the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone, pluggable scripting/view engine. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn’t made inconsistent by the baggage of non-implemented Web Forms features that don’t work in MVC. But I expect that Razor will eventually end up in many other applications as a scripting and code generation engine.

Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there’s some synergy flowing both ways between Razor and MVC.

For an existing ASP.NET developer who’s already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn’t offer much that you’d want. The tools provided are minimal and provide nothing that you can’t get in Visual Studio today, except the minimal Razor syntax highlighting, so there’s little need to take a step back. With Visual Studio integration coming later there’s little reason to look at WebMatrix for tooling.

It’s good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we’ve been piling on more and more new features without taking a step back to see how complicated the development/configuration/deployment process has become. Sometimes it’s good to take a step – or several steps – back and take another look and realize just how far we’ve come. WebMatrix is one of those reminders, and one that will likely result in some positive changes on the platform as a whole.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET   IIS7  

RequestValidation Changes in ASP.NET 4.0


There’s been a change in the way the ValidateRequest attribute on WebForms pages works in ASP.NET 4.0. I noticed this today while updating a post on my WebLog – all of my posts contain raw HTML, so pretty much all of them trigger request validation. I recently upgraded this app from ASP.NET 2.0 to 4.0 and it’s now failing to update posts. At first this was difficult to track down because of custom error handling in my app – the custom error handler traps the exception and logs it with only basic error information, so the full detail of the error was initially hidden.

After some more experimentation in development mode, the error that occurs is the typical ASP.NET request validation error (‘A potentially dangerous Request.Form value was detected…’), which looks like this in ASP.NET 4.0:

[Screenshot: the ASP.NET 4.0 request validation error page]

At first I was really perplexed, because I didn’t read the entire error message and because my page already has:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="NewEntry.aspx.cs" Inherits="Westwind.WebLog.NewEntry" 
         MasterPageFile="~/App_Templates/Standard/AdminMaster.master"  
         ValidateRequest="false"         
         EnableEventValidation="false"
         EnableViewState="false" 
%>

WTF? ValidateRequest would seem like it should be enough, but alas in ASP.NET 4.0 apparently that setting alone is no longer enough. Reading the fine print in the error explains that you need to explicitly set the requestValidationMode for the application back to V2.0 in web.config:

<httpRuntime executionTimeout="300" requestValidationMode="2.0" />

Kudos for the ASP.NET team for putting up a nice error message that tells me how to fix this problem, but excuse me why the heck would you change this behavior to require an explicit override to an optional and by default disabled page level switch? You’ve just made a relatively simple fix to a solution a nasty morass of hard to discover configuration settings??? The original way this worked was perfectly discoverable via attributes in the page. Now you can set this setting in the page and get completely unexpected behavior and you are required to set what effectively amounts to a backwards compatibility flag in the configuration file.

It turns out the real reason for the .config flag is that the request validation behavior has moved from the Web Forms page pipeline down into the entire ASP.NET/IIS request pipeline and is now applied against all requests. Here’s what the breaking changes page from Microsoft says about it:

The request validation feature in ASP.NET provides a certain level of default protection against cross-site scripting (XSS) attacks. In previous versions of ASP.NET, request validation was enabled by default. However, it applied only to ASP.NET pages (.aspx files and their class files) and only when those pages were executing.

In ASP.NET 4, by default, request validation is enabled for all requests, because it is enabled before the BeginRequest phase of an HTTP request. As a result, request validation applies to requests for all ASP.NET resources, not just .aspx page requests. This includes requests such as Web service calls and custom HTTP handlers. Request validation is also active when custom HTTP modules are reading the contents of an HTTP request.

As a result, request validation errors might now occur for requests that previously did not trigger errors. To revert to the behavior of the ASP.NET 2.0 request validation feature, add the following setting in the Web.config file:

<httpRuntime requestValidationMode="2.0" />

However, we recommend that you analyze any request validation errors to determine whether existing handlers, modules, or other custom code accesses potentially unsafe HTTP inputs that could be XSS attack vectors.

OK, so ValidateRequest on the page still works as it always has, but it’s actually the ASP.NET runtime pipeline, not Web Forms, that’s throwing the above exception, since request validation is applied to every request that hits the pipeline. Creating the runtime override removes the HttpRuntime checking and restores the Web Forms-only behavior. That fixes my immediate problem, but it still leaves me wondering, especially given the vague wording of the above explanation.

One thing that’s missing in the description above is one important detail: the request validation is applied only to application/x-www-form-urlencoded POST content, not to all inbound POST data.

When I first read this, it freaked me out, because it sounds like literally ANY request hitting the pipeline is affected. To make sure this is not really so, I created a quick handler:

public class Handler1 : IHttpHandler
{

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello World <hr>" + context.Request.Form.ToString());
    }

    public bool IsReusable
    {
        get
        {
            return false;
        }
    }
}

and called it with Fiddler by posting some XML to the handler using a default form-urlencoded POST content type:

[Screenshot: Fiddler POSTing XML to the handler with a form-urlencoded content type]

and sure enough – hitting the handler also causes the request validation error and a 500 server response.

Changing the content type to text/xml effectively fixes the problem by bypassing the request validation filter, so Web Services/AJAX handlers and custom modules/handlers that implement custom protocols aren’t affected as long as they work with specialized input content types. It also looks like multipart encoding does not trigger the runtime’s request validation either, so this request also works fine:

POST http://rasnote/weblog/handler1.ashx HTTP/1.1
Content-Type: multipart/form-data; boundary=------7cf2a327f01ae
User-Agent: West Wind Internet Protocols 5.53
Host: rasnote
Content-Length: 40
Pragma: no-cache

<xml>asdasd</xml>--------7cf2a327f01ae

*That* probably should trigger request validation – since it is a potential HTML form submission – but it doesn’t.
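For comparison, here is the same XML payload sent with a non-form content type – this request goes through without triggering validation:

POST http://rasnote/weblog/handler1.ashx HTTP/1.1
Content-Type: text/xml
User-Agent: West Wind Internet Protocols 5.53
Host: rasnote
Content-Length: 17

<xml>asdasd</xml>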

New Runtime Feature, Global Scope Only?

OK, so request validation is now a runtime feature, but sadly it’s a feature that’s scoped to the ASP.NET runtime – effectively scoped to the entire running application/AppDomain. You can still manually force validation using Request.ValidateInput(), which gives you the option to do this in code, but that realistically only works with requestValidationMode set to 2.0 as well, since the 4.0 mode auto-fires before your code ever gets a chance to intercept the call. Given all that, the new setting in ASP.NET 4.0 seems to limit options and makes things more difficult and less flexible. Of course Microsoft gets to say ASP.NET is more secure by default because of it, but what good is that if you have to turn the flag off the very first time you need to allow one single request that bypasses request validation??? This is really shortsighted design… <sigh>
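For what it’s worth, with requestValidationMode set back to 2.0 you can still opt into validation selectively in code. Here’s a minimal sketch of what that looks like in a handler – the form variable name is made up for illustration:

public class SafeHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Flags the Form/QueryString/Cookies collections for validation;
        // the next access to them throws HttpRequestValidationException
        // if potentially dangerous input is found.
        context.Request.ValidateInput();

        string comment = context.Request.Form["comment"];  // validated here
        context.Response.ContentType = "text/plain";
        context.Response.Write("Received " + comment.Length + " characters");
    }

    public bool IsReusable
    {
        get { return false; }
    }
}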

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET  

Posting Form Data to an ASP.NET ASMX AJAX Web Service with jQuery


The other day I got a question about how to call an ASP.NET ASMX Web Service with the POST data from a Web Form (or any HTML form for that matter). The idea is that you should be able to call an endpoint URL, send it regular urlencoded POST data, and then use Request.Form[] to retrieve the posted data as needed. My first reaction was that you can’t do it, because ASP.NET ASMX AJAX services (as well as Page Methods and WCF REST AJAX services) require that the content POSTed to the server is JSON, sent with an application/json or application/x-javascript content type. IOW, you can’t directly call an ASP.NET AJAX service with regular urlencoded data. Note that there are other ways to accomplish this: you can use ASP.NET MVC and a custom route, an HTTP handler or separate ASPX page, or even a WCF REST service that’s configured to use non-JSON inputs.

However if you want to use an ASP.NET AJAX service (or Page Methods) with a little bit of setup work it’s actually quite easy to capture all the form variables on the client and ship them up to the server. The basic steps needed to make this happen are:

  1. Capture form variables into an array on the client with jQuery’s .serializeArray() function
  2. Use $.ajax() or my ServiceProxy class to make an AJAX call to the server to send this array
  3. On the server create a custom type that matches the .serializeArray() name/value structure
  4. Create extension methods on NameValue[] to easily extract form variables
  5. Create a [WebMethod] that accepts this name/value type as an array (NameValue[])

This seems like a lot of work, but realize that steps 3 and 4 are a one-time setup that can be reused across your entire site or multiple applications.

Let’s look at a short example. Here’s the base form with the fields we want to ship to the server:

[Screenshot: the sample registration input form]

The HTML for this form looks something like this:

<div id="divMessage" class="errordisplay" style="display: none">
</div>

<div>
    <div class="label">Name:</div>
    <div><asp:TextBox runat="server" ID="txtName"  /></div> 
</div>
<div>
    <div class="label">Company:</div>
    <div><asp:TextBox runat="server" ID="txtCompany"/></div>
</div>
<div>
    <div class="label" ></div>                
    <div>                
        <asp:DropDownList runat="server" ID="lstAttending">
            <asp:ListItem Text="Attending"  Value="Attending"/>
            <asp:ListItem Text="Not Attending" Value="NotAttending" />
            <asp:ListItem Text="Maybe Attending" Value="MaybeAttending" />
            <asp:ListItem Text="Not Sure Yet" Value="NotSureYet" />
        </asp:DropDownList>
    </div>
</div>
<div>
    <div class="label">Special Needs:<br />
            <small>(check all that apply)</small></div>
    <div>
        <asp:ListBox runat="server" ID="lstSpecialNeeds" SelectionMode="Multiple">
            <asp:ListItem Text="Vegitarian" Value="Vegitarian" />
            <asp:ListItem Text="Vegan" Value="Vegan" />
            <asp:ListItem Text="Kosher" Value="Kosher" />
            <asp:ListItem Text="Special Access" Value="SpecialAccess" />
            <asp:ListItem Text="No Binder" Value="NoBinder" />
        </asp:ListBox>
    </div>
</div>
<div>
    <div class="label"></div>
    <div>        
        <asp:CheckBox ID="chkAdditionalGuests" Text="Additional Guests" runat="server" />
    </div>
</div>

<hr />

<input type="button" id="btnSubmit" value="Send Registration" />
    

The form includes a few different kinds of form fields including a multi-selection listbox to demonstrate retrieving multiple values.

Setting up the Server Side [WebMethod]

The [WebMethod] on the server that we’re going to call is very simple: it just captures the content of these values and echoes them back as a formatted HTML string. Obviously this is overly simplistic, but it serves to demonstrate the simple point of capturing the POST data on the server in an AJAX callback.

public class PageMethodsService : System.Web.Services.WebService
{
    [WebMethod]
    public string SendRegistration(NameValue[] formVars)
    {
        StringBuilder sb = new StringBuilder();

        sb.AppendFormat("Thank you {0}, <br/><br/>",
                        HttpUtility.HtmlEncode(formVars.Form("txtName")));

        sb.AppendLine("You've entered the following: <hr/>");

        foreach (NameValue nv in formVars)
        {
            // strip out ASP.NET form vars like _ViewState/_EventValidation
            if (!nv.name.StartsWith("__"))
            {
                if (nv.name.StartsWith("txt") || nv.name.StartsWith("lst") || nv.name.StartsWith("chk"))
                    sb.Append(nv.name.Substring(3));
                else
                    sb.Append(nv.name);
                sb.AppendLine(": " + HttpUtility.HtmlEncode(nv.value) + "<br/>");
            }
        }
        sb.AppendLine("<hr/>");

        string[] needs = formVars.FormMultiple("lstSpecialNeeds");
        if (needs == null)
            sb.AppendLine("No Special Needs");
        else
        {
            sb.AppendLine("Special Needs: <br/>");
            foreach (string need in needs)
            {
                sb.AppendLine("&nbsp;&nbsp;" + need + "<br/>");
            }
        }

        return sb.ToString();
    }
}

The key feature of this method is that it receives a custom type called NameValue[] which is an array of NameValue objects that map the structure that the jQuery .serializeArray() function generates. There are two custom types involved in this: The actual NameValue type and a NameValueExtensions class that defines a couple of extension methods for the NameValue[] array type to allow for single (.Form()) and multiple (.FormMultiple()) value retrieval by name.

The NameValue class is as simple as this and simply maps the structure of the array elements of .serializeArray():

public class NameValue
{
    public string name { get; set; }
    public string value { get; set; }
}


The extension method class defines the .Form() and .FormMultiple() methods to allow easy retrieval of form variables from the returned array:

/// <summary>
/// Extension methods for NameValue[] arrays that make it
/// easy to retrieve form variables captured with jQuery's
/// $.serializeArray() function and sent via JSON requests
/// </summary>
public static class NameValueExtensionMethods
{
    /// <summary>
    /// Retrieves a single form variable from the list of
    /// form variables stored
    /// </summary>
    /// <param name="formVars"></param>
    /// <param name="name">formvar to retrieve</param>
    /// <returns>value or string.Empty if not found</returns>
    public static string Form(this  NameValue[] formVars, string name)
    {
        var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower()).FirstOrDefault();
        if (matches != null)
            return matches.value;
        return string.Empty;
    }

    /// <summary>
    /// Retrieves multiple selection form variables from the list of 
    /// form variables stored.
    /// </summary>
    /// <param name="formVars"></param>
    /// <param name="name">The name of the form var to retrieve</param>
    /// <returns>values as string[] or null if no match is found</returns>
    public static string[] FormMultiple(this  NameValue[] formVars, string name)
    {
        var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower()).Select(nv => nv.value).ToArray();
        if (matches.Length == 0)
            return null;
        return matches;
    }
}

Using these extension methods it’s easy to retrieve individual values from the array:

string name = formVars.Form("txtName");

or multiple values:

string[] needs = formVars.FormMultiple("lstSpecialNeeds");
if (needs != null)
{
    // do something with matches
}

Using these functions in the SendRegistration method, it’s easy to retrieve a few form variables directly (txtName and the multiple selections of lstSpecialNeeds) or to iterate over the whole list of values. Of course this is an overly simple example – in a typical app you’d probably want to validate the input data, save it to the database, and then return some sort of confirmation or possibly an updated data list back to the client.

Since this is a full AJAX service callback realize that you don’t have to return simple string values – you can return any of the supported result types (which are most serializable types) including complex hierarchical objects and arrays that make sense to your client code.
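For instance, a hypothetical variation might return a structured result instead of a pre-formatted string and let the client do the rendering – the type and method names here are made up for illustration:

public class RegistrationResult
{
    public bool Confirmed { get; set; }
    public string Message { get; set; }
    public string[] SpecialNeeds { get; set; }
}

[WebMethod]
public RegistrationResult SendRegistrationTyped(NameValue[] formVars)
{
    // the AJAX runtime serializes this object to JSON automatically
    return new RegistrationResult
    {
        Confirmed = true,
        Message = "Thank you " + formVars.Form("txtName"),
        SpecialNeeds = formVars.FormMultiple("lstSpecialNeeds")
    };
}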

POSTing Form Variables from the Client to the AJAX Service

Calling the AJAX service method from the client is straightforward and requires only a little native jQuery plus JSON serialization functionality. To start, add jQuery and the json2.js library to your page:

    <script src="Scripts/jquery.min.js" type="text/javascript"></script>
    <script src="Scripts/json2.js" type="text/javascript"></script>

json2.js can be found here (be sure to remove the first line from the file):

http://www.json.org/json2.js

It’s required to handle JSON serialization for those browsers that don’t support it natively.

With those script references in the document let’s hookup the button click handler and call the service:

$(document).ready(function () {
    $("#btnSubmit").click(sendRegistration);
});

function sendRegistration() {
    var arForm = $("#form1").serializeArray();

    // Raw jQuery AJAX call
    $.ajax({ url: "PageMethodsService.asmx/SendRegistration",
        type: "POST",
        contentType: "application/json",
        data: JSON.stringify({ formVars: arForm }),
        dataType: "json",
        success: function (result) {
            var jEl = $("#divMessage");
            jEl.html(result.d).fadeIn(1000);
            setTimeout(function () { jEl.fadeOut(1000) }, 5000);
        },
        error: function (xhr, status) {
            alert("An error occurred: " + status);
        }
    });
}


The key feature in this code is the $("#form1").serializeArray() call, which serializes all the form fields of form1 into an array. Each form var is represented as an object with name and value properties. This array is then serialized into JSON with:

JSON.stringify({ formVars: arForm })

The format for the parameter list in AJAX service calls is an object with one property for each parameter of the method. In this case it’s a single parameter called formVars, and we’re assigning the array of form variables to it.
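The actual JSON body POSTed to the server then looks roughly like this (values made up; the WebForms __VIEWSTATE style fields also show up in the array, which is why the server code filters out names starting with “__”):

{
  "formVars": [
    { "name": "txtName",         "value": "Rick" },
    { "name": "txtCompany",      "value": "West Wind" },
    { "name": "lstAttending",    "value": "Attending" },
    { "name": "lstSpecialNeeds", "value": "Vegan" },
    { "name": "lstSpecialNeeds", "value": "Kosher" }
  ]
}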

The URL to call on the server is the name of the Service (or ASPX Page for Page Methods) plus the name of the method to call.

On return the success callback receives the result from the AJAX callback which in this case is the formatted string which is simply assigned to an element in the form and displayed. Remember the result type is whatever the method returns – it doesn’t have to be a string. Note that ASP.NET AJAX and WCF REST return JSON data as a wrapped object so the result has a ‘d’ property that holds the actual response:

jEl.html(result.d).fadeIn(1000);

Slightly simpler: Using ServiceProxy.js

If you want things slightly cleaner you can use the ServiceProxy.js class I’ve mentioned here before. The ServiceProxy class handles a few things for calling ASP.NET and WCF services more cleanly:

  • Automatic JSON encoding
  • Automatic fix up of ‘d’ wrapper property
  • Automatic Date conversion on the client
  • Simplified error handling
  • Reusable and abstracted

To add the service proxy add:

<script src="Scripts/ServiceProxy.js" type="text/javascript"></script>

and then change the code to this slightly simpler version:

<script type="text/javascript">
proxy = new ServiceProxy("PageMethodsService.asmx/");

$(document).ready(function () {
    $("#btnSubmit").click(sendRegistration);
});

function sendRegistration() {
    var arForm = $("#form1").serializeArray();

    proxy.invoke("SendRegistration", { formVars: arForm },
                    function (result) {
                        var jEl = $("#divMessage");
                        jEl.html(result).fadeIn(1000);
                        setTimeout(function () { jEl.fadeOut(1000) }, 5000);
                    },
                    function (error) { alert(error.message); } );
}

The code is not very different but it makes the call as simple as specifying the method to call, the parameters to pass and the actions to take on success and error. No more remembering which content type and data types to use and manually serializing to JSON. This code also removes the “d” property processing in the response and provides more consistent error handling in that the call always returns an error object regardless of a server error or a communication error unlike the native $.ajax() call.

Either approach works and both are pretty easy. The ServiceProxy really pays off if you make lots of service calls, and especially if you need to deal with date values returned from the server on the client.

Summary

Making Web Service calls and getting POST data to the server is not always the best option – ASP.NET and WCF AJAX services are meant to work with data in objects. However, in some situations it’s simply easier to POST all the captured form data to the server instead of mapping all properties from the input fields to some sort of message object first. For this approach the above POST mechanism is useful as it puts the parsing of the data on the server and leaves the client code lean and mean. It’s even easy to build a custom model binder on the server that can map the array values to properties on an object generically with some relatively simple Reflection code and without having to manually map form vars to properties and do string conversions.
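As a rough sketch of that idea – a hypothetical helper, not part of the code above – generic mapping could look something like this:

using System;
using System.Reflection;

public static class NameValueBinder
{
    /// <summary>
    /// Copies matching form variables onto an object's properties.
    /// Strips a UI prefix like "txt" so txtName maps to a Name property.
    /// Very naive conversion - a real binder needs culture and error handling.
    /// </summary>
    public static void Bind(object target, NameValue[] formVars, string prefix)
    {
        Type type = target.GetType();
        foreach (NameValue nv in formVars)
        {
            string propName = nv.name.StartsWith(prefix)
                                  ? nv.name.Substring(prefix.Length)
                                  : nv.name;

            PropertyInfo prop = type.GetProperty(propName,
                BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase);
            if (prop == null || !prop.CanWrite)
                continue;

            object value = Convert.ChangeType(nv.value, prop.PropertyType);
            prop.SetValue(target, value, null);
        }
    }
}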

Keep in mind though that other approaches also abound. ASP.NET MVC makes it pretty easy to create custom routes to data, and the built-in model binder makes it very easy to deal with inbound form POST data in its original urlencoded format. The West Wind Web Toolkit also includes functionality for AJAX callbacks using plain POST values – all that’s needed is a Method query string or form value to specify the method to be called on the server. After that the content type is completely optional and up to the consumer. It’d be nice if the ASP.NET AJAX and WCF AJAX services weren’t so tightly bound to the content type, so that you could more easily create open access service endpoints that can take advantage of the urlencoded data that is everywhere in existing pages. It would make it much easier to create basic REST endpoints without complicated service configuration. Ah, one can dream!

In the meantime I hope this article has given you some ideas on how you can transfer POST data from the client to the server using JSON – it might be useful in other scenarios beyond ASP.NET AJAX services as well.

Additional Resources

ServiceProxy.js
A small JavaScript library that wraps $.ajax() to call ASP.NET AJAX and WCF AJAX Services. Includes date parsing extensions to the JSON object, a global dataFilter for processing dates on all jQuery JSON requests, provides cleanup for the .NET wrapped message format and handles errors in a consistent fashion.

Making jQuery Calls to WCF/ASMX with a ServiceProxy Client
More information on calling ASMX and WCF AJAX services with jQuery and some more background on ServiceProxy.js. Note the implementation has slightly changed since the article was written.

ww.jquery.js
The West Wind Web Toolkit also includes ServiceProxy.js in the West Wind jQuery extension library. This version is slightly different and includes embedded JSON encoding/decoding based on json2.js.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in jQuery  ASP.NET  AJAX  

The dynamic Type in C# Simplifies COM Calls from Visual FoxPro


I’ve written quite a bit about Visual FoxPro interoperating with .NET in the past both for ASP.NET interacting with Visual FoxPro COM objects as well as Visual FoxPro calling into .NET code via COM Interop. COM Interop with Visual FoxPro has a number of problems but one of them at least got a lot easier with the introduction of dynamic type support in .NET. One of the biggest problems with COM interop has been that it’s been really difficult to pass dynamic objects from FoxPro to .NET and get them properly typed. The only way that any strong typing can occur in .NET for FoxPro components is via COM type library exports of Visual FoxPro components. Due to limitations in Visual FoxPro’s type library support as well as the dynamic nature of the Visual FoxPro language where few things are or can be described in the form of a COM type library, a lot of useful interaction between FoxPro and .NET required the use of messy Reflection code in .NET.

Reflection is .NET’s base interface to runtime type discovery and dynamic execution of code without requiring strong typing. In FoxPro terms it’s similar to EVALUATE() functionality, albeit with a much more complex API and corresponding syntax. The Reflection APIs are fairly powerful, but they are rather awkward to use and require a lot of code. Even with the creation of wrapper utility classes for common EVAL()-style Reflection functionality, dynamically accessing COM objects passed to .NET is often pretty tedious and ugly.

Let’s look at a simple example. In the following code I use some FoxPro code to dynamically create an object in code and then pass this object to .NET. An alternative to this might also be to create a new object on the fly by using SCATTER NAME on a database record. How the object is created is inconsequential, other than the fact that it’s not defined as a COM object – it’s a pure FoxPro object that is passed to .NET. Here’s the code:

*** Create .NET COM Instance
loNet = CREATEOBJECT('DotNetCom.DotNetComPublisher')

*** Create a Customer Object Instance (factory method)
loCustomer = GetCustomer()
loCustomer.Name = "Rick Strahl"
loCustomer.Company = "West Wind Technologies"
loCustomer.CreditLimit = 9999999999.99
loCustomer.Address.StreetAddress = "32 Kaiea Place"
loCustomer.Address.Phone = "808 579-8342"
loCustomer.Address.Email = "rickstrahl@hotmail.com"

*** Pass Fox Object and echo back values
? loNet.PassRecordObject(loCustomer)

RETURN

FUNCTION GetCustomer
LOCAL loCustomer, loAddress

loCustomer = CREATEOBJECT("EMPTY")
ADDPROPERTY(loCustomer,"Name","")
ADDPROPERTY(loCustomer,"Company","")
ADDPROPERTY(loCustomer,"CreditLimit",0.00)
ADDPROPERTY(loCustomer,"Entered",DATETIME())

loAddress = CREATEOBJECT("EMPTY")
ADDPROPERTY(loAddress,"StreetAddress","")
ADDPROPERTY(loAddress,"Phone","")
ADDPROPERTY(loAddress,"Email","")
ADDPROPERTY(loCustomer,"Address",loAddress)

RETURN loCustomer
ENDFUNC

Now, prior to .NET 4.0 you’d have to access this object passed to .NET via Reflection, and the method code to do this would look something like this in the .NET component:

public string PassRecordObject(object FoxObject) 
{
    // *** using raw Reflection
    string Company = (string) FoxObject.GetType().InvokeMember(
        "Company",   
        BindingFlags.GetProperty, null, 
        FoxObject, null);

    // using the easier ComUtils wrappers
    string Name = (string) ComUtils.GetProperty(FoxObject,"Name");

    // Getting Address object – then getting child properties
    object Address = ComUtils.GetProperty(FoxObject,"Address");
    string Street = (string) ComUtils.GetProperty(Address,"StreetAddress");

    // using ComUtils 'Ex' functions you can use . syntax 
    string StreetAddress = (string) ComUtils.GetPropertyEx(FoxObject,"Address.StreetAddress");

    return Name + Environment.NewLine + 
           Company + Environment.NewLine + 
           StreetAddress + Environment.NewLine + " FOX"; 
}

Note that the FoxObject is passed in as type object, which has no specific type. Since the object doesn’t exist in .NET as a type signature, it is passed without any type information as a plain, non-descript object. To retrieve a property, the Reflection APIs like Type.InvokeMember or Type.GetProperty().GetValue() etc. need to be used. I made this code a little simpler by using the Reflection wrappers I mentioned earlier, but even with those ComUtils calls the code is pretty ugly, requiring the object to be passed in on each call and each result to be cast.

Using .NET 4.0 Dynamic Typing makes this Code a lot cleaner

Enter .NET 4.0 and the dynamic type. Changing the input parameter of the .NET method from type object to dynamic makes the code that accesses the FoxPro component inside of .NET much more natural:

public string PassRecordObjectDynamic(dynamic FoxObject)
{
    // direct property access - no Reflection code needed
    string Company = FoxObject.Company; 

    // no ComUtils wrapper required either
    string Name = FoxObject.Name;

    // child object properties work with plain . syntax
    string Address = FoxObject.Address.StreetAddress;

    return Name + Environment.NewLine +
        Company + Environment.NewLine +
        Address + Environment.NewLine + " FOX";
}

As you can see, the parameter is of type dynamic, which, as the name implies, performs the Reflection lookups and evaluation on the fly, so all the Reflection code from the last example goes away. The code can use regular object ‘.’ syntax to reference each of the members of the object – you can access properties and call methods this way using natural object language. Also note that all the type casts that were required in the Reflection code go away: as long as the left side of the expression is strongly typed, the conversion from the dynamic result happens automatically at runtime, so no explicit casts are required.
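A quick illustration of how the target type drives the conversion (property values are hypothetical):

string company = FoxObject.Company;    // converted to string at runtime
DateTime entered = FoxObject.Entered;  // converted to DateTime at runtime
var raw = FoxObject.Company;           // 'raw' stays dynamic - no target type to convert to
int limit = FoxObject.Name;            // fails at runtime - the types don't match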

Note that although you get to use plain object syntax in the code above, you don’t get IntelliSense in Visual Studio, because the type is dynamic and thus has no hard type definition in .NET.

The above example calls a .NET component from VFP, but it also works the other way around. Another frequent scenario is .NET code calling into a FoxPro COM object that returns a dynamic result.

Assume you have a FoxPro COM object that returns a FoxPro cursor record as an object:

DEFINE CLASS FoxData AS SESSION OlePublic

cAppStartPath = ""

FUNCTION INIT
    THIS.cAppStartPath = ADDBS( JustPath(Application.ServerName) )
    SET PATH TO ( THIS.cAppStartPath )
ENDFUNC


FUNCTION GetRecord(lnPk)
    LOCAL loCustomer

    SELECT * FROM tt_Cust WHERE pk = lnPk ;
             INTO CURSOR TCustomer

    IF _TALLY < 1
       RETURN NULL
    ENDIF

    SCATTER NAME loCustomer MEMO
    RETURN loCustomer
ENDFUNC

ENDDEFINE

If you call this from a .NET application you can now retrieve this data via COM Interop and cast the result as dynamic to simplify the data access of the dynamic FoxPro type that was created on the fly:

int pk = 0;
int.TryParse(Request.QueryString["id"],out pk);

// Create Fox COM Object with Com Callable Wrapper
FoxData foxData = new FoxData();

dynamic foxRecord = foxData.GetRecord(pk);

string company = foxRecord.Company;
DateTime entered = foxRecord.Entered;

This code looks as simple and natural as it should be – heck, you could write code like this in days long gone by in scripting languages like classic ASP. Compared to the Reflection code that was previously necessary to run similar code, this is much easier to write, understand, and maintain.

For COM interop with Visual FoxPro, dynamic type support in .NET 4.0 is a huge improvement and certainly makes it much easier to deal with FoxPro code that calls into .NET – regardless of whether you’re using COM to call Visual FoxPro objects from .NET (ASP.NET calling a COM component and getting a dynamic result returned) or whether FoxPro code is calling into a .NET COM component from a FoxPro desktop application. At one point or another FoxPro likely ends up passing complex dynamic data to .NET, and for this dynamic typing makes coding much cleaner and more readable, without having to create custom Reflection wrappers. As a bonus, the dynamic runtime that underlies the dynamic type is fairly efficient in terms of making Reflection calls, especially if members are repeatedly accessed.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in COM  FoxPro  .NET  CSharp  
