
WebAPI: Getting Headers, QueryString and Cookie Values


Say what you will about how nice WebAPI is, but a few things in the internal APIs are not exactly clean to use. If you decide you want to access a few of the simple Request collections like Headers, QueryStrings or Cookies, you'll find some pretty inane and inconsistent APIs for retrieving values from them. It's nowhere near as easy as ASP.NET's simple Request.Headers[]/QueryString[]/Cookies[] collections. Instead you have to wade through various different implementations of nested IEnumerable collections - presumably to support multiple values, which is the .0005% use case. Each of these collections needs to be accessed differently and not in the way you'd expect from any other Web platform tool.

The syntax to use them is definitely on the verbose side, and it always throws me for a few minutes trying to figure out how best to dig into these collections to retrieve a single value. I hate utility code that stops me in my tracks like that, especially for something that should be so trivial.

I finally got tired of trying to remember exactly how to retrieve values from these collections, so I broke down and added a few extension methods that make the job a little simpler with a few one-liners.

using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;

namespace System.Web.Http
{
    /// <summary>
    /// Extends the HttpRequestMessage collection
    /// </summary>
    public static class HttpRequestMessageExtensions
    {
        /// <summary>
        /// Returns a dictionary of QueryStrings that's easier to work with
        /// than the GetQueryNameValuePairs KeyValuePairs collection.
        ///
        /// If you need to pull a few single values use GetQueryString instead.
        /// </summary>
        /// <param name="request"></param>
        /// <returns></returns>
        public static Dictionary<string, string> GetQueryStrings(this HttpRequestMessage request)
        {
            return request.GetQueryNameValuePairs()
                          .ToDictionary(kv => kv.Key, kv => kv.Value, StringComparer.OrdinalIgnoreCase);
        }

        /// <summary>
        /// Returns an individual querystring value
        /// </summary>
        /// <param name="request"></param>
        /// <param name="key"></param>
        /// <returns></returns>
        public static string GetQueryString(this HttpRequestMessage request, string key)
        {
            // IEnumerable<KeyValuePair<string,string>> - right!
            var queryStrings = request.GetQueryNameValuePairs();
            if (queryStrings == null)
                return null;

            var match = queryStrings.FirstOrDefault(kv => string.Compare(kv.Key, key, true) == 0);
            if (string.IsNullOrEmpty(match.Value))
                return null;

            return match.Value;
        }

        /// <summary>
        /// Returns an individual HTTP Header value
        /// </summary>
        /// <param name="request"></param>
        /// <param name="key"></param>
        /// <returns></returns>
        public static string GetHeader(this HttpRequestMessage request, string key)
        {
            IEnumerable<string> keys = null;
            if (!request.Headers.TryGetValues(key, out keys))
                return null;

            return keys.First();
        }

        /// <summary>
        /// Retrieves an individual cookie from the cookies collection
        /// </summary>
        /// <param name="request"></param>
        /// <param name="cookieName"></param>
        /// <returns></returns>
        public static string GetCookie(this HttpRequestMessage request, string cookieName)
        {
            CookieHeaderValue cookie = request.Headers.GetCookies(cookieName).FirstOrDefault();
            if (cookie != null)
                return cookie[cookieName].Value;

            return null;
        }
    }
}

All methods return null if the key is not found, and only a single value is returned (the 99.99995% case).
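Here's a quick usage sketch inside a hypothetical controller action (the controller, key and cookie names are just illustrative):

using System.Net;
using System.Net.Http;
using System.Web.Http;

public class SampleController : ApiController
{
    public HttpResponseMessage Get()
    {
        // all three calls return null if the key isn't present
        string format = Request.GetQueryString("format");
        string userAgent = Request.GetHeader("User-Agent");
        string sessionId = Request.GetCookie("sessionId");

        return Request.CreateResponse(HttpStatusCode.OK,
            new { format, userAgent, sessionId });
    }
}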

Now, for efficiency it might be better to read, say, the query string collection once and pull several values out of it rather than re-reading the collection each time. But still, this is something that WebAPI should handle internally. At the very least the internal representations of these collections should be accessible in a similar fashion instead of returning crazy shit like IEnumerable<KeyValuePair<string,string>>.

Anyway, I hope this saves some of you some brain cycles - I know it will for me.

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in Web Api  

A WebAPI Basic Authentication Authorization Filter


ASP.NET Web API allows for a number of different ways to implement security. The 'accepted' way to handle authentication is to either use IIS's built in security (i.e. rely on HttpContext and IIS authentication through Windows Security), or to roll your own inside of Web API using Web API's message semantics. If you roll your own, the recommended approach is to handle authentication in a MessageHandler and then add authorization with a Filter. AFAIK, Web API natively doesn't ship with any authentication handlers at all, so you pretty much have to roll your own if you want to host outside of IIS.

Anyway, in one of my apps we needed custom user authentication based on user credentials and the client explicitly requested Basic authentication due to the client side requirements. Basic Auth is not secure and requires that SSL is used to keep the encoded (not encrypted) credentials somewhat safe from simple attacks. In this case the app runs on an internal network so the risk factor is low.

Filter Only?

When I looked at the various options for implementing custom login security outside of ASP.NET, the first thing I found was Authorization filters. Authorization filters are a really easy way to examine the request, determine whether a user has access, and then either continue on or exit out with an UnauthorizedAccess exception.

Filters aren't meant to be full-on HTTP request managers that return results, but Basic Authentication is such a simple protocol that it requires just a few lines of code to implement, so I went ahead and implemented the entire protocol in the filter. Since this application has one specific way of authorizing, there's only one type of auth happening and little need for a more complex implementation.

Here's the somewhat generic Authorization filter version I ended up with:

/// <summary>
/// Generic Basic Authentication filter that checks for basic authentication
/// headers and challenges for authentication if no authentication is provided.
/// Sets the Thread Principal with a GenericPrincipal.
///
/// You can override the OnAuthorizeUser method for custom auth logic that
/// might be application specific.
/// </summary>
/// <remarks>Always remember that Basic Authentication passes username and passwords
/// from client to server in plain text, so make sure SSL is used with basic auth
/// to encode the Authorization header on all requests (not just the login).
/// </remarks>
public class BasicAuthenticationFilter : AuthorizationFilterAttribute
{
    bool Active = true;

    public BasicAuthenticationFilter()
    { }

    /// <summary>
    /// Overridden constructor to allow explicit disabling of this
    /// filter's behavior. Pass false to disable (same as no filter
    /// but declarative)
    /// </summary>
    /// <param name="active"></param>
    public BasicAuthenticationFilter(bool active)
    {
        Active = active;
    }

    /// <summary>
    /// Override to Web API filter method to handle Basic Auth check
    /// </summary>
    /// <param name="actionContext"></param>
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        if (Active)
        {
            var credentials = ParseAuthorizationHeader(actionContext);
            if (credentials == null)
            {
                Challenge(actionContext);
                return;
            }

            if (!OnAuthorizeUser(credentials.Username, credentials.Password, actionContext))
            {
                Challenge(actionContext);
                return;
            }
        }

        base.OnAuthorization(actionContext);
    }

    /// <summary>
    /// Base implementation for user authentication - you probably will
    /// want to override this method for application specific logic.
    ///
    /// The base implementation merely checks for username and password
    /// present and sets the Thread principal.
    ///
    /// Override this method if you want to customize Authentication
    /// and store user data as needed in a Thread Principal or other
    /// Request specific storage.
    /// </summary>
    /// <param name="username"></param>
    /// <param name="password"></param>
    /// <param name="actionContext"></param>
    /// <returns></returns>
    protected virtual bool OnAuthorizeUser(string username, string password, HttpActionContext actionContext)
    {
        if (string.IsNullOrEmpty(username) || string.IsNullOrEmpty(password))
            return false;

        var principal = new GenericPrincipal(new GenericIdentity(username, "Basic"), null);
        Thread.CurrentPrincipal = principal;

        // inside of ASP.NET this is required!
        //if (HttpContext.Current != null)
        //    HttpContext.Current.User = principal;

        return true;
    }

    /// <summary>
    /// Parses the Authorization header and creates user credentials
    /// </summary>
    /// <param name="actionContext"></param>
    protected virtual BasicAuthCredentials ParseAuthorizationHeader(HttpActionContext actionContext)
    {
        string authHeader = null;
        var auth = actionContext.Request.Headers.Authorization;
        if (auth != null && auth.Scheme == "Basic")
            authHeader = auth.Parameter;

        if (string.IsNullOrEmpty(authHeader))
            return null;

        authHeader = Encoding.Default.GetString(Convert.FromBase64String(authHeader));

        var tokens = authHeader.Split(':');
        if (tokens.Length < 2)
            return null;

        return new BasicAuthCredentials()
        {
            Username = tokens[0],
            Password = tokens[1]
        };
    }

    /// <summary>
    /// Send the Authentication Challenge request
    /// </summary>
    /// <param name="actionContext"></param>
    void Challenge(HttpActionContext actionContext)
    {
        var host = actionContext.Request.RequestUri.DnsSafeHost;
        actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized);
        actionContext.Response.Headers.Add("WWW-Authenticate", string.Format("Basic realm=\"{0}\"", host));
    }
}


public class BasicAuthCredentials
{
    public string Username { get; set; }
    public string Password { get; set; }
}

This is a fairly generic implementation that simply checks for a login and sets the Thread principal, which can then be checked elsewhere.

Typically, however, for an application you'll want to validate against a custom store like a business object. You can either implement that logic right in the filter's OnAuthorizeUser method above, or subclass and create a specialized implementation like this:

/// <summary>
/// MyBasicAuthentication Filter used to validate access to the
/// API. All API access requires business username/password.
/// </summary>
public class MyBasicAuthenticationFilter : BasicAuthenticationFilter
{
    public MyBasicAuthenticationFilter()
    { }

    public MyBasicAuthenticationFilter(bool nonValidated)
        : base(nonValidated)
    { }

    /// <summary>
    /// Overridden to implement bus user validation
    /// </summary>
    /// <param name="username"></param>
    /// <param name="password"></param>
    /// <param name="actionContext"></param>
    /// <returns></returns>
    protected override bool OnAuthorizeUser(string username, string password, HttpActionContext actionContext)
    {
        var userBus = new BusUser();
        var user = userBus.AuthenticateAndLoad(username, password);
        if (user == null)
            return false;

        // assign to principal
        Thread.CurrentPrincipal = new GenericPrincipal(new GenericIdentity(username, "Basic"), null);

        return true;
    }
}

To use the filter now you can simply add the attribute to a controller you want to apply it to:

[MyBasicAuthenticationFilter]
public class QueueController : ApiController

or you can globally apply it in the Web API configuration:

GlobalConfiguration.Configuration.Filters.Add(new MyBasicAuthenticationFilter());

You can also apply the filter attribute to an individual method to enable or disable the authentication functionality for just that method.

Generally I prefer the first approach, since in most of my apps there is at least one section where the authentication security doesn't apply. For this filter it probably doesn't matter, but if you're using something like token based security you might have a Login API that needs to be accessible without authentication.
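For example, here's a hypothetical sketch of applying the filter per action so that one endpoint stays publicly accessible (the action names and responses are purely illustrative):

using System.Net;
using System.Net.Http;
using System.Web.Http;

public class QueueController : ApiController
{
    // Public status endpoint - no authentication applied
    public HttpResponseMessage GetStatus()
    {
        return Request.CreateResponse(HttpStatusCode.OK, "online");
    }

    // Basic Auth is required only for this action
    [MyBasicAuthenticationFilter]
    public HttpResponseMessage PostStop()
    {
        return Request.CreateResponse(HttpStatusCode.OK, "stopping");
    }
}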

This works pretty well, and it's fully self contained. What's also nice about this simple implementation is that you have some control over where it is applied. It can be assigned to global filters to fire against every request or against individual controllers and even individual action methods. With a MessageHandler this is considerably more involved as you have to coordinate between a MessageHandler and a Filter to decide where to apply the message handler.

Do we need a Message Handler?

Most other examples I looked at involve message handlers, which are a bit more involved to set up and interact with. MessageHandlers in WebAPI are essentially pre- and post-request filters that allow you to manipulate the request on the way in and the response on the way out. To effectively build an authentication message handler is a bit more work than the code I have above. MessageHandlers are also fully async, so you have to deal with tasks (or async/await at least) in your code, which adds some complexity.

An authentication message handler typically only has to check for authentication information in the HTTP headers and, if it's not there, fire the challenge request. Authorization is left to other mechanisms - like a filter. The handler sets up a principal that can be checked later, and it also has to check the response output to determine whether to challenge the client. So with a Message Handler you'd have a two-fold implementation: the message handler plus an AuthorizationFilter to validate the user.

For reference, you can check out a MessageHandler Basic Auth implementation from Piotr Walat, and a per request handler from Pablo Cibraro. As you can see in Pablo's example, one problem with message handlers is that there's no real easy way to filter which requests a Message Handler applies to. They are defined globally and fire globally, so your code has to figure out how to filter. If you want to do it on a per route or per controller basis, that's not trivial to set up.

I do think that if you are building a generic authentication mechanism that is universally usable, then a MessageHandler makes sense. You can combine multiple message handlers and authentication schemes for example.

But for simple use cases where you have a very application specific logon scheme, you're not going to care about other security implementations, so the filter makes sense because it keeps all the code for managing authentication and authorization logically together. The other advantage, as mentioned earlier, is that you can specify exactly what the filter applies to.

This has been a nice, simple and self-contained solution that's easy to reuse, and I've used it on a few projects now. I hope some of you find it useful as well.


© Rick Strahl, West Wind Technologies, 2005-2013
Posted in Web Api  Security  

A WebAPI Basic Authentication MessageHandler


In my last post I showed a very simple Basic Authentication Filter implementation and several people commented that the 'right' way to do this is by using a MessageHandler instead. In the post I discussed why I opted for a filter rather than the MessageHandler: A filter is much simpler to implement and keeps all the relevant code pieces in one place instead of scattering them throughout the Web API pipeline. This might not be the right choice for all authentication, but if you're doing custom authentication/authorization in your app you're not going to mix and match and plug a multitude of auth mechanisms together. For simple auth scenarios a filter is just fine, especially since even when you implement a MessageHandler you need to implement an AuthorizationFilter anyway.

Just as an exercise, I spent a little time today putting together a message handler based Basic Authentication implementation to contrast the two. There are a few more moving pieces to this implementation:

  • A MessageHandler to handle the Basic Auth processing
  • A custom Identity to pass the username and password around
  • An Authorization filter to validate the user

MessageHandler for Authentication

MessageHandlers in Web API are chainable components that hook into the request/response pipeline. You can plug many message handlers together to provide many module like features. MessageHandlers can handle processing on the inbound request cycle and the output response cycle, via a simple Task<T> abstraction that provides the asynchronous pipeline processing.

To implement the BasicAuthenticationHandler you can create a class derived from DelegatingHandler and override the SendAsync method:

public class BasicAuthenticationHandler : DelegatingHandler
{
    private const string WWWAuthenticateHeader = "WWW-Authenticate";

    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var credentials = ParseAuthorizationHeader(request);
        if (credentials != null)
        {
            var identity = new BasicAuthenticationIdentity(credentials.Name, credentials.Password);
            var principal = new GenericPrincipal(identity, null);

            Thread.CurrentPrincipal = principal;
            //if (HttpContext.Current != null)
            //    HttpContext.Current.User = principal;
        }

        return base.SendAsync(request, cancellationToken)
            .ContinueWith(task =>
            {
                var response = task.Result;
                if (credentials == null && response.StatusCode == HttpStatusCode.Unauthorized)
                    Challenge(request, response);

                return response;
            });
    }

    /// <summary>
    /// Parses the Authorization header and creates user credentials
    /// </summary>
    /// <param name="request"></param>
    protected virtual BasicAuthenticationIdentity ParseAuthorizationHeader(HttpRequestMessage request)
    {
        string authHeader = null;
        var auth = request.Headers.Authorization;
        if (auth != null && auth.Scheme == "Basic")
            authHeader = auth.Parameter;

        if (string.IsNullOrEmpty(authHeader))
            return null;

        authHeader = Encoding.Default.GetString(Convert.FromBase64String(authHeader));

        var tokens = authHeader.Split(':');
        if (tokens.Length < 2)
            return null;

        return new BasicAuthenticationIdentity(tokens[0], tokens[1]);
    }

    /// <summary>
    /// Send the Authentication Challenge request
    /// </summary>
    /// <param name="request"></param>
    /// <param name="response"></param>
    void Challenge(HttpRequestMessage request, HttpResponseMessage response)
    {
        var host = request.RequestUri.DnsSafeHost;
        response.Headers.Add(WWWAuthenticateHeader, string.Format("Basic realm=\"{0}\"", host));
    }
}

If you looked at my last post this should look fairly familiar - the basic auth logic is very similar to the filter. I reused the Challenge and ParseAuthorizationHeader methods, changing just the inputs to take the request and response messages respectively.

The message handler works in two distinct steps. The initial code fires on the inbound request; it tries to parse the authentication header into a BasicAuthenticationIdentity and assigns that identity to the thread principal.

The second step - the part in the ContinueWith() Task block - handles the processing on the outbound response. Things have to be broken up like this in a MessageHandler because the Response doesn't exist on the inbound request yet. The code here is responsible for issuing the challenge if the response status is unauthorized.

So the logic goes like this:

  • Request is already authenticated - the request goes through
  • Request is not authenticated and returns a 401 (from an AuthFilter or an explicit 401 ResponseMessage from code) - the challenge is sent
  • Request is not authenticated and returns something other than a 401 - the request goes through

To make all this work there are a couple more things that need to be implemented.

BasicAuthenticationIdentity

Basic Authentication works via a username and password that is passed as a base64 encoded, clear text string. In order to authorize the user in a custom authorization scenario, that username and password have to be passed up the pipeline into the AuthorizationFilter that actually handles the authorization of the user.

To do this I opted to create a BasicAuthenticationIdentity class. Using this identity the handler can set the username and password on the Identity and pass it to AuthorizeFilter. Here's the implementation:

/// <summary>
/// Custom Identity that adds a password captured by basic authentication
/// to allow for an auth filter to do custom authorization
/// </summary>
public class BasicAuthenticationIdentity : GenericIdentity
{
    public BasicAuthenticationIdentity(string name, string password) : base(name, "Basic")
    {
        this.Password = password;
    }

    /// <summary>
    /// Basic Auth Password for custom authentication
    /// </summary>
    public string Password { get; set; }
}

AuthorizeFilter

Next we need a filter to handle the authorization of the user. This logic most likely will be application specific. Because all we'll need to do here is validate the user's credentials and return yay or nay, an AuthorizeFilter is the easiest:

public class MyAuthorizationFilter : AuthorizeAttribute
{
    protected override bool IsAuthorized(HttpActionContext actionContext)
    {
        var identity = Thread.CurrentPrincipal.Identity;
        if (identity == null && HttpContext.Current != null)
            identity = HttpContext.Current.User.Identity;

        if (identity != null && identity.IsAuthenticated)
        {
            var basicAuth = identity as BasicAuthenticationIdentity;

            // do your business validation as needed
            var user = new BusUser();
            if (user.Authenticate(basicAuth.Name, basicAuth.Password))
                return true;
        }

        return false;
    }
}

In the filter you can simply override the IsAuthorized() method and return true or false. If you return false WebAPI automatically fires a 401 status code, which triggers the Challenge() in the BasicAuthenticationHandler that's monitoring for 401's.

The IsAuthorized method implementation typically has business specific code in it that handles the authorization of the user. Basically you can capture the Thread Principal and the BasicAuthenticationIdentity and retrieve the username and password. You can then go to town on the username and password. In my example here a business object is fired up to authenticate the user.

By the way, notice that in my last post I used an AuthorizationFilter - and here I'm using an AuthorizeFilter. AuthorizeFilter works great if all you need to do is validate a user and return true or false. If there's more logic involved than that, like creating a new response, then an AuthorizationFilter is the better choice.

Configuration

Once the handler and filter exist they have to be hooked up. MessageHandlers have to be added in the configuration:

GlobalConfiguration.Configuration.MessageHandlers.Add(new BasicAuthenticationHandler());

The AuthorizationFilter can either be applied via global configuration or on the controller:

GlobalConfiguration.Configuration.Filters.Add(new MyAuthorizationFilter());       

or you can apply it on the controller:

[MyAuthorizationFilter]
public class QueueController : ApiController

Filter or MessageHandler - you decide

Comparing the two modes of operation - Authentication MessageHandler or AuthorizationFilter - there's not a tremendous difference in implementation. To me the filter is more compact and it's easier to follow what's going on, simply because everything is in one place. For most typical custom login scenarios that are tied to business logic, that'll be totally sufficient. The advantage of a message handler is that it's globally applied and is part of the WebAPI pipeline, so several components could take advantage of Basic Authentication with different Authorization. But then again you can do that with a filter as well, especially since a MessageHandler still requires a filter for its authorization. <shrug>

Either way you can take your pick from these two implementations.
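For completeness, here's a hedged sketch of what calling a Basic Auth protected endpoint looks like from a .NET client using HttpClient (the URL and credentials are illustrative):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public class BasicAuthClientSample
{
    public static async Task<string> GetProtectedResourceAsync(string url, string username, string password)
    {
        using (var client = new HttpClient())
        {
            // Basic Auth sends "username:password" base64 encoded - encoded, not
            // encrypted - so it should only ever travel over SSL
            var credentials = Convert.ToBase64String(
                Encoding.UTF8.GetBytes(username + ":" + password));
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", credentials);

            var response = await client.GetAsync(url);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}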


© Rick Strahl, West Wind Technologies, 2005-2013
Posted in Web Api  

Publish Individual Files to your Server in Visual Studio 2012.2


In Visual Studio 2012 Update 2 there's a sweet little new gem that I championed for some time and that has finally made it into the VS box: you can now publish individual files to the server without having to publish the entire site. Simply select one or more files in the Solution Explorer and use the context menu's Publish Selected Files or Publish File xxxx.xxx.

[Figure: PublishIndividualFile]

In the past, publishing your site was an all or nothing affair: you basically had to rebuild and re-publish the entire site. Publishing works great when you're making major updates that affect binaries and configuration settings. In that case you do want a full publish to push up your binary file changes as well as web.config transformations etc. - that's a great feature and the cornerstone of publishing, as it should be.

But on more than a few occasions I've:

  • Forgotten to include some content file like an image in a full publish
  • Had to make a really minor change to a content file or image and need to push it up
  • Made some quick iterative changes repeatedly to a file to tweak the UI or an image on the server

Now, with Update 2, you have another option besides publishing the entire site - you can publish an individual file.

I know this is a minor thing, but I can't tell you how often I use this for quick image or CSS updates. Sometimes I actually prefer making changes to these sorts of things on a live site rather than firing up the local copy first especially if the live site is running with a full set of data. It's often convenient to just push individual files. This is especially true for my personal content sites, more so than typical business applications.

 

Web Deploy Getting Easier

As a side note, I've been a big fan of Web Deployment in Visual Studio - it's such a tremendous time saver over manually uploading files to the server and trying to figure out what needs updating. Prior to the Web Deploy features in Visual Studio I actually used a custom solution I cobbled together using FTP that provided much of the same functionality, including the ability to push individual files, which I found very useful.

It's also great in a team environment, since publish settings are typically shared in source control. This ensures that everybody is pushing up code consistently to the staging or live server using the same settings that are configured only once. It's great when a new developer comes on board especially - they don't have to configure anything they are just ready to go.

When Web Publishing was introduced the initial versions were horrible to the point of unusability. In VS 2010 it improved somewhat, but the server side installation of Web Deploy was still a major pain in the ass: getting Web Deploy configured properly on the server required 3 different installs and several manual configuration steps.

With the latest Web Deploy 3.0 release though, Microsoft finally seems to have gotten Web Deploy right, to where it's a single simple installation on the server that just works once installed. There no longer are any finicky configuration settings - it just works off the single install. The Visual Studio 2012 Web Publish client has also made the Publish Settings dialog a bit more user friendly and more flexible in what you can use to connect to the server. VS now understands plain site urls and virtuals as opposed to the base site url and Site ID/Name that was required previously and was always confusing.

[Figure: WebPublishSettingsDialog]

The end effect is I no longer dread setting up Web Deploy for the first time on a server, nor do I have to go look up the configuration for another site to figure out what to put in the boxes :-).

It's kind of sad that it took so long for Web Deploy to get it all right, but now the whole thing is ridiculously smooth. There are still a few issues with web.config transforms that are difficult to deal with from time to time, but that's not really Web Deploy's fault - it's the perennial problem of how to partition developer specific settings in configuration files.

In any case, I hope some of you find the new single file or selected file publishing feature as useful as I have. It's just one more little tweak that makes life easier and shaves a few minutes off the development process. Score!

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  Visual Studio  

A first look at SignalR


SignalR is the latest in a long string of new technologies pouring out from the ASP.NET team; Microsoft rolled out version 1.0 of SignalR when Visual Studio Update 2 was announced. In a nutshell, SignalR is a technology for .NET that allows you to build real time, connected Web applications - connected in the sense that your Web applications can send, receive and broadcast data in real time. The canonical example of a 'connected' application is a chat application where a client can broadcast messages to all other connected clients. While that's pretty cool in and of itself, it only begins to scratch the surface of what's possible with SignalR, as you can communicate in a wide variety of ways between client and server and between all clients to push data around.

Go ahead - Push Me Around!

The idea behind SignalR and other tools like it (like the socket.io or now.js JavaScript libraries) is that you can push data from client to server, from server to client and even from client to client, all in real time without having to poll or check for new data at specified intervals. Callback driven interfaces on both client and server receive pushed messages immediately. The key word here is push - servers and clients can push data at any time and the other end of the connection sees the updated data immediately. It's pretty cool to watch an application, where one browser client updates a value in a text field and all other instances that are connected see that same change at the same time. Or having a server push a notification message down, and having all browsers immediately update to see the new data.

This sort of thing used to be the domain of connected TCP/IP services and peer to peer servers, but SignalR makes all of this available using standard Web protocols - HTTP over port 80 or any other HTTP port - with .NET services on the server side. Behind the scenes SignalR combines the use of various connection protocols, from WebSockets to Long Polling to plain AJAX callbacks when all else fails, to bridge the compatibility gap between what modern browsers support and what you can do with legacy browsers. If you want to see all of these techniques highlighted along with an older preview of SignalR, there's a great conference session by Steve Sanderson on Channel 9 that discusses the various async messaging approaches.

The key feature here is that SignalR's server maintains a persistent connection to the client - abstracted over WebSockets/LongPolling/Ajax depending on what the browser supports. With AJAX we could always push data to the server, but the server could never push data to the client; with SignalR you can push data both from client to server and from server to client. In order to broadcast messages from one client to all other clients you can call back to the server, which can then broadcast a message to all or selected connected clients.

SignalR Server and Client Components

SignalR abstracts these various protocols and allows a seamless experience regardless of what the client supports. Additionally SignalR provides a very easy to use .NET server side framework for creating the backend services that either push data directly or broadcast data in bulk to many clients. SignalR includes the concept of Hubs, which use simple methods in a .NET class as endpoints, as well as a lower level Connection interface that allows for streaming and low level access to the data that is sent and received. SignalR then also provides rich client libraries for JavaScript, full .NET, Win8, Silverlight and a host of other clients to easily connect to either Hubs or Connections on the server, using a dynamic mapping model that is very flexible and easy to use. It's surprisingly easy to create SignalR services and consume them, both in Web and non-Web applications.
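To give an idea of how little ceremony a Hub involves, here's a minimal, hypothetical sketch (the ChatHub name, Send method and addMessage client callback are illustrative, not from the app described below):

using Microsoft.AspNet.SignalR;

public class ChatHub : Hub
{
    public void Send(string name, string message)
    {
        // Dynamically invokes the addMessage callback on every connected client
        Clients.All.addMessage(name, message);
    }
}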

One of the cool things about SignalR is that you can also easily self host a SignalR server. A typical SignalR server application hosts in ASP.NET and is totally transparent and easy to run as part of the ASP.NET stack. However, you can also host SignalR in a self-hosted application, using an OWIN server host that can bootstrap SignalR and make it available in Console applications, Services or even full blown desktop applications. I recently built a monitoring service application that is running as a Windows Service and was able to use SignalR to efficiently push notification messages to a Web Browser based front end interface, pushing over 50 messages a second to several connected clients in real time. The possibilities this opens up really can't be overstated.

Signal What? A small real World Use Case

When I first heard about SignalR about a year and a half ago, the first thing that came to my mind was: "Yeah that's nice, but it's not a common scenario to have truly connected Web applications." The first things that spring to mind are chat applications, messaging popups or continuous tickers of updating data pushed down to clients. That demos nicely and is impressive to see, but it's not exactly a very common use case.

However, recently I had a chance to put SignalR to use in a real application with a scenario that's a little bit different. Specifically, we needed a way to connect a standalone Windows Service application to Web browser clients to provide real time updates. This project involved a queue service application running as a Windows Service with a Web front end that can monitor and manipulate the queue's operation from any browser.

The specific UI use case was to replace an old and ugly Windows Forms user interface that had to run on the physical server to monitor the real time queue activity and manipulate the queue application settings. With SignalR we were able to move this app to a real time, Web browser based interface.

There are several important pieces that SignalR provided and made possible for this project:

  • The ability to have a Windows Service push real time messages to a Web browser
  • The ability for many Web browser clients to be connected
  • The ability for many users to modify settings on the Web browser user interface and reflect those changes immediately for other browser users

While the application created for this is not very complex, it did highlight the various scenarios of sending messages that you can use with SignalR:

  • Sending messages from the server to all clients (message list display)
  • Sending two-way messages from one client to the server (updates - like AJAX calls but using SignalR)
  • Sending messages from the server to a specific caller (individual status updates)
  • Sending messages from a single browser instance to all browsers (Updating the global queue settings or stopping the service)

The interface for this application is basically a list type view of real-time active queue requests as they occur, some status information about the pending items in the queue and the current connection status to the server, as well as a small set of input controls that manage the queue's operational status - the number of threads running, the wait time and the ability to start and stop the queue.

Here's what the UI for this interface looks like:

[Figure: QueueViewer]

As requests hit the queue they show up in this monitor and the main form's list. The queue service calls SignalR when it starts processing a queue request and again when a queue request either completes successfully or fails. The list status bar then displays the number of pending messages that are waiting in the queue. The textboxes above let the administrators who have access to the service manage the queue by tweaking a few settings or by stopping the service altogether. When the update or start/stop button is clicked, a SignalR request is fired to save and/or change the service status, and a notification is then fired to all clients to update their status.

The web site client uses knockout.js for databinding a couple of fairly simple models - the list of visible list items and the queue's status - which are updated by the SignalR callback methods that receive message data from the server. When the model data is updated, knockout bindings kick in and refresh the UI immediately, resulting in a mostly codeless update process. The only explicit code to display server content is for the status messages, which are not bound but explicitly displayed via a showStatus call.

For what it does, there's a surprisingly small amount of code involved and the logic to make this work is pretty simple.

Granted this isn't a very complex UI, but still it was pretty amazing for me to see hundreds of requests rolling through in a few seconds and updating 10 browser windows simultaneously - until you see this happen with your own application it's hard to appreciate how much satisfaction you get from that very concept working so efficiently! It brings me back to the very early days of the Web when it was exciting to see any dynamic content on a live Web page :-)

I can think of a bunch of use cases where this technology makes a lot of sense:

  • Any sort of two-way messaging applications (chat, messaging)
  • Real time data feeds or ticker displays
  • Real time data monitors for logs, lists, users etc. (Admin interfaces)
  • Long running async requests with real-time status updates
  • A replacement for some Queue type operations with direct real-time connections
  • Screen sharing applications with shared editing data by multiple users
  • Interactive multi-player games with real time screen updates
  • and, and, and…

Ease of Use

Another thing that impressed me about SignalR when I started working with it is that this has to be some of the easiest to use Microsoft technology to come along in a long time, while providing some really powerful features. SignalR is based on dynamic language features which do away with a lot of ceremony in defining interfaces and mapping client to server. In many cases it's as easy as creating a class on the server and having the client reference the server side hub and method and just call it. End of story. The same is true for broadcasting messages from the server to the client - there's no contract, no special interface; all you do is call a method that may or may not exist on the client, and if you implement it on the client it will get called. End of story - again.

What was really surprising about this project was that going from zero knowledge of SignalR to a fully functional, initial implementation that hit all the initial usage points I mentioned above, took all of 2 work days to accomplish. This included learning about SignalR and experimentation with a few different approaches, dealing with knockout.js, plus building robust connection management code that can deal with disconnects and reconnects (which truthfully was the most complex and time-consuming piece and the only part that required a bit of research and some help on the SignalR Jabbr channel).

This solution uses a Windows Service project on the server with SignalR's OWIN hosting, which is trivial to set up. In fact it takes all of 10 lines of code to hoist up the server, something along the lines of the sketch below.
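Here's a minimal self-host sketch - the port is arbitrary, and the exact startup calls differ between SignalR versions (SignalR 1.x used MapHubs(); this sketch reflects the later MapSignalR()/Microsoft.Owin.Hosting packages):

using System;
using Microsoft.Owin.Hosting;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Map SignalR hubs to the default /signalr route
        app.MapSignalR();
    }
}

public class Program
{
    public static void Main()
    {
        // Spin up the OWIN host and keep the process alive
        using (WebApp.Start<Startup>("http://localhost:8080/"))
        {
            Console.WriteLine("SignalR server running on http://localhost:8080/");
            Console.ReadLine();
        }
    }
}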

Low Ceremony

It's extremely easy and low ceremony to broadcast a SignalR message even in a self hosted environment. The following C# server side code calls a JavaScript callback on all clients that are listening  (this method displays a queue list item):

// Write out message to SignalR clients
HubContext.Clients.All.writeMessage(id,
    "Info",
    DateTime.Now.ToString("HH:mm:ss"),
    message,
    string.Empty);

where the writeMessage method call is translated to the client side JavaScript handler. On the client there's a callback handler registered on a writeMessage operation which is then fired on the client. The script code then proceeds to bind the values to a view model item using knockout.js and updates the display dynamically.

On the client I can simply register a method that handles a callback which effectively 'publishes' that method on the client:

hub.client.writeMessage = self.writeMessage;

And I can then implement the writeMessage callback method that handles this logic (or use an anonymous method):

writeMessage: function (message, status, time, id, elapsed, waiting) {
    // … update collection item viewModel and let knockout.js bind
}

The server can now call the writeMessage function on the client using the C# code shown above. It's all dynamic.

Calls in the other direction - from client to server are equally simple. The client calls a server method like this (where self is my top level object container and hub is an instance of the SignalR hub stored on it):

self.hub.server.getServiceStatus()
               .fail(page.statusMessage);

This calls a GetServiceStatus() method on the server's hub.

Once a hub has been created and stored you can simply call the server object which maps any methods you call straight to the .NET server methods implemented on the hub. Here's the server code:

public void GetServiceStatus()
{
    var instance = Globals.Controller;
    if (instance == null)
        Clients.Caller.getServiceStatusCallback(null);
    else
        Clients.Caller.getServiceStatusCallback(new QueueControllerStatus()
        {
            queueName = instance.QueueName,
            waitInterval = instance.WaitInterval,
            threadCount = instance.ThreadCount,
            paused = instance.Paused
        });
}

This server code receives the JavaScript client's request and pushes the status back down to the calling client via Clients.Caller. Effectively a single JavaScript client requests the status, and the server responds by pushing down status information, which is then picked up on the client and bound via knockout to the textboxes and the start/stop button.

The server code can use the Clients.All, Clients.Caller and Clients.AllExcept collections to reference common groups, or you can add users to specific Groups that you can then broadcast to. Lots of flexibility in a really easy to use model.
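Group usage looks something like this hypothetical sketch (the hub, group name and writeMessage callback are illustrative):

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class QueueMonitorHub : Hub
{
    public Task JoinAdmins()
    {
        // Add the calling connection to a named group
        return Groups.Add(Context.ConnectionId, "admins");
    }

    public void NotifyAdmins(string message)
    {
        // Broadcast only to connections that have joined the "admins" group
        Clients.Group("admins").writeMessage(message);
    }
}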

Performance and Resources

Since this was my first time using SignalR I had no idea what to expect in regards to the performance of SignalR's messaging. I was actually surprised that I couldn't overload the UI operation by stuffing even 5000 messages into the queue. SignalR happily and rapidly kept up with the service, sending out messages and updating the list UI faster than you could begin to read it. 1000 queue requests (x2 for begin/end messages) went through in less than 4 seconds! I also tried this with 10 clients connected simultaneously and performance didn't change noticeably with the increased number of connected clients.

It's important to understand that this application will be used by a small number of administrative users, so it's not going to be hit by thousands of users simultaneously. At most we figure there may be 10 people connected at a time. However, connections are something to consider with SignalR. As cool as this technology is, it's a connected technology, meaning each client connected through SignalR is using a persistent and dedicated connection on the Web Server. 10 users or 100 are probably not much of a problem, but thousands of users may bump up against the Web server connection and Windows thread and resource limits eventually. SignalR also includes some scalability features, but these get complex quickly, and if that becomes an issue I personally think one should reconsider whether SignalR or a real-time connection based interface is the right choice…

While SignalR's docs claim that it's capable of thousands of simultaneously connected clients (given the connection pool is high enough), there is a finite limit to the number of connections you can simultaneously run with IIS or self-hosting, along with the CPU and memory overhead associated with each connection.

Caveat emptor - make sure you understand the implications of using SignalR in terms of connection, memory, cpu and bandwidth usage.

Documentation and Support

As I mentioned, the basics and overall behavior of SignalR are pretty easy to grasp and put into practice, and the online documentation does a pretty good job of getting you started and explaining the general model of how the messaging flow and implementation code works. There are nice and simple examples and it works great.

However, some of the more specific documentation is a bit sketchy and often limited or missing altogether. The hardest part of this small component we built was dealing with the connection and disconnection notifications required to determine whether the Windows Service is online and the SignalR server running, and with scenarios where connections are dropped and need to reconnect. SignalR actually includes some very sophisticated logic to notify you of connect and disconnect events on the client, but there are quite a few overlapping events that fire on the client and I for one got lost in which one actually needed to be handled to reliably re-connect. There were a handful of other issues at this same lower level that were difficult to resolve through the documentation or even searching.

I also found searching on SignalR content a bit frustrating because a lot of the hits I'd get for topics in blog posts or StackOverflow answers ended up being from preview versions with information that was no longer valid. Hopefully this stuff will work itself out in time.

In the meantime however, I hopped over to the SignalR Jabbr Channel and posted a few of my conundrums over there. There's lots of help offered on this channel from peers and from the authors of SignalR (mostly David Fowler), who frequently jump in and answer questions. David helped me with two sticky issues and in a few minutes pointed me in the right direction. This type of interactive support is just awesome and it shows the enthusiasm of some of the people involved with SignalR. I do wonder though how well this type of support will scale in the future. We'll see - in the meantime it's great to see this kind of direct interaction, which hopefully helps the SignalR guys iron out some rough spots that come up more frequently.

SignalR - Hell Yeah!

As you can probably tell I'm pretty jazzed about what SignalR offers to .NET developers. Web based real-time communication technology like SignalR offers many opportunities to rethink what we can actually build with Web applications today, and to build more interactive and collaborative applications. It provides a different way to access information in Web applications in a more direct, real time manner, and it does this in a way that is relatively simple to accomplish. SignalR is amongst the cleanest and easiest to implement solutions I've seen coming from Microsoft in a very long time.

I'm especially excited about the ability to interface SignalR's server side code with self-hosted applications that gives us the ability to more easily connect back end services to browser front ends. For administrative applications or dashboards this is incredibly powerful stuff and it's easy to integrate into existing applications. I'll post more on this topic in the future.

But even in plain old Web applications the opportunities to provide real time data or to have users share information across multiple live browser instances is pretty cool. The abstraction provided by SignalR's client and server make it so easy to take advantage of this functionality in just about any application. There are so many opportunities here - from the obvious real time broadcast services to more subdued server callbacks that can replace traditional AJAX interfaces to providing real time access to changing data in Line of Business or even public facing applications. Then there is the use case for running long running async operations on the server and providing real-time feedback to the client page. And there's the whole opportunity with interactive games or productivity applications where multiple users can interact with the same shared information to provide a live and updating interface. Doing simple interactive games like Battleship etc. becomes almost trivial with this sort of technology, and even more interactive, graphics intensive games become a possibility with this toolset.

Lots of opportunities to dream up, so dream on…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  JavaScript  SignalR  

Smoothing out <div> scrolling in Mobile WebKit Browsers


One thing that is annoyingly bad in WebKit mobile browsers - and especially on iOS mobile browsers - is the default scroll behavior for HTML elements that have text overflow. Unlike desktop browsers which easily scroll content, many mobile browsers by default have a terrible experience scrolling <div> tags that have overflow-y: scroll or auto when there's more than a few elements of overflow. Instead of smooth, elastic scrolling the default scrolling provides only a slow and often stuttering behavior. If the list gets large or the layout for each item is complex the list may get so slow it appears to lock up altogether or jump unpredictably.

Luckily there's an easy workaround for this behavior via a WebKit specific override style that enables a better touch based browsing experience:

<div style="-webkit-overflow-scrolling: touch;">

Applying this style makes a difference of night and day, providing the elastic, momentum based scrolling behavior that you would expect - the harder you press and the faster you swipe, the further the scrolling goes. Scrolling is smooth and there's no stuttering even for fairly large lists.

This issue primarily concerns iOS devices like the iPhone, iPod Touch and iPad. From early comments it appears that many Android devices have this scroll behavior turned on by default, so it's possible this tweak only applies to iOS devices. I'd be curious to hear which devices outside of iOS have scroll issues that are fixed by the -webkit-overflow-scrolling tuning when you try the sample form and compare the two lists it displays.

To demonstrate here's a small sample form that shows a slow list with default scrolling and one that's optimized below it:

http://www.west-wind.com/WestwindWebToolkit/samples/Ajax/html5andCss3/WebkitOverflowScroll.aspx

The page has two lists with the same data. In this example, I generate some repetitive list data in JavaScript and then render the data into a list using handlebars.js (you can look at the self contained source in the browser). Here's what the desktop and mobile views of the sample look like:

[Figure: BrowserView]

To see the problem though you'll have to access the page on a mobile device. You can scan the QR code on the sample page, or the one below here (I use QRReader for iOS, or on Android) to bring up the page on your phone without typing out the long URL:

On the mobile page you'll have to use the Slow Box/Fast Box links to switch between the slow and fast scrolling boxes.


The list on the top uses default scrolling: it scrolls slowly, has none of the swipe scrolling features, and moves exactly as far as you move your finger. While that works, it's not the scroll behavior you want on a mobile device - we've become spoiled by smooth, momentum based scrolling that responds to the vigor of your swipe.

The second list adds the -webkit-overflow-scrolling: touch; behavior. It contains the same data as the first, but it scrolls smoothly, using the strength of the swipe to determine how far to scroll. Putting your finger down stops the scrolling. IOW, it behaves the way you'd expect it to on a mobile device.

Why isn't this the default Behavior on Mobile WebKit?

As it turns out, -webkit-overflow-scrolling is not without its own set of problems.

The feature uses native scrolling behavior that, like CSS transitions, can tax the mobile GPU quite heavily, resulting in heavy battery usage. So by default this behavior is not enabled. If your lists are barely overflowing, or use overflow only for the odd case where some minor amount of text might overflow, the default behavior is just fine. There's no reason to have super fast scrolling in those scenarios. Native overflow scrolling only makes sense when you explicitly build content that is meant to scroll, like the list in the example.

There are also a handful of known issues related to contained content that uses non-static positioning. If you have position: relative; or absolute; content inside of the scrolling region, in some instances the positioned content doesn't scroll with the rest of the document, resulting in a badly misrendered list.

For me personally this hasn't been a problem, as I tend to create fairly simple lists for scrolling. In typical vertical scrolling scenarios like lists, the position issue usually doesn't come up. However, if you're building a touch based carousel or horizontal slider type interface, positioning can be more of an issue, as you're often dealing with more complex content in these slide/tile style interfaces.

Tricky, Tricky

I didn't know about this little trick until last week, and I ran into it by accident. I was lamenting about some unrelated scroll issues on Twitter when Bob Yexley (@ryexley) jumped in with the -webkit-overflow-scrolling style attribute.

The difference this tweak has made on several of my Web apps, however, was tremendous. In particular, one app - a local classifieds listing app that uses incremental loading for large numbers of listings - was very painful to use once more than 50 or so items were loaded incrementally while scrolling. In fact, due to the bad scroll performance we limited the list to 50 entries on mobile devices. Now, with the improved scroll performance, we've removed the restriction, and we can actually manage the list and the current view size by removing items that are no longer shown, all without losing scroll performance - which is awesome. Several other apps could use this as a drop in replacement and also see drastically improved scrolling for some of their lists.

It's frustrating, however, that there are browser specific tweaks like this. I consider myself reasonably aware of HTML and CSS changes as they come online, but this one escaped me for nearly two years and it was pure accident that I found it. Keeping up with browser dependent tweaks like this is not trivial.

Nevertheless I'm excited to see this addressed - it's made a couple of internal apps I'm using daily way more usable. Chalk that one up as a winner…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in HTTP  SignalR  

Setting up and using Bing Translate API Service for Machine Translation


Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required signup process, some of the links in the docs are just plain wrong, and some of the account pages you need to access the actual account information once signed up are not linked anywhere from the administration UI. To make things even harder, the APIs changed a while back with a completely new authentication scheme that's described in a documentation topic that isn't directly linked, which also made for a very frustrating search experience. Let that be a lesson to you in your own application design - when you change APIs make damn sure you have clean documentation on the changes. Something Microsoft should take to heart.

It's a bummer that this is the case too, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign up process is a pain in the ass doubtlessly leaving many people giving up in frustration.

In this post I'll try to hit all the points needed to set up to use the Bing Translate API in one place since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site…

Signing Up

The first step required is to create a Windows Azure MarketPlace account.

Go to:

If you don't have an account you will be taken to a registration page which you have to fill out. Follow the links and complete the registration.

Once you're signed in you can start adding services.

  • Click on the Data Link on the main page
  • Select Microsoft Translator from the list

This adds the Microsoft Bing Translator to your services.

Pricing

The page shows the pricing matrix, including the free tier which provides 2 megabytes worth of translations a month at no charge. Prices go up steeply from there. Pricing is determined by the actual bytes of the translated results. A single translation maxes out at 1000 characters, so at minimum this means you get around 2000 translations a month for free. However, most translations are probably much shorter, so you can expect a larger number of translations to go through. For testing or low volume translations this should be just fine.

Once signed up there are no further instructions and you're left in limbo on the MS site.

Register your Application

Once you've created the Data association with Translator the next step is registering your application. To do this you need to access your developer account.

  • Go to https://datamarket.azure.com/developer/applications/register
  • Provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!)
  • Provide your name
  • The client secret is auto-created and becomes your 'password'
  • For the redirect url provide any https url: https://microsoft.com works
  • Give this application a description of your choice so you can identify it in the list of apps

Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API.

Oddly the applications page is hidden from the Azure Portal UI. I couldn't find a direct link from anywhere on the site back to this page where I can examine my developer application keys. To find them you can go to:

https://datamarket.azure.com/developer/applications

You can come back here to look at your registered applications and pick up the ClientID and ClientSecret.

Fun eh? But we're now ready to actually call the API and do some translating.

Using the Bing Translate API

The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll need to actually use two services: You need to call an authentication API service first, before you can call the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency.

Authentication

The first step is authentication. The service uses oAuth authentication with a bearer token that has to be passed to the translator API. The authentication call retrieves the oAuth token that you can then use with the translate API call. The bearer token has a short 10 minute lifetime, so while you can cache it for successive calls, it can't be cached for long periods. This means for Web backend requests you typically have to authenticate each time, unless you build a slightly more elaborate caching scheme that takes the timeout into account (perhaps using the ASP.NET Cache object). For low volume operations you can probably get away with simply calling the auth API for every translation you do.
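To give an idea of what such a caching scheme might look like, here's a minimal sketch using the ASP.NET Cache. This is not part of the library code shown below - the cache key, the 9 minute expiration window and the wrapper method name are assumptions for illustration; GetBingAuthToken is the method shown in the next listing.

using System;
using System.Web;
using System.Web.Caching;

public string GetCachedBingAuthToken(string clientId, string clientSecret)
{
    // reuse a previously retrieved token if we still have one
    string token = HttpRuntime.Cache["BingAuthToken"] as string;
    if (token == null)
    {
        token = GetBingAuthToken(clientId, clientSecret);
        if (token != null)
        {
            // expire the cached token just before its 10 minute lifetime runs out
            HttpRuntime.Cache.Insert("BingAuthToken", token, null,
                                     DateTime.UtcNow.AddMinutes(9),
                                     Cache.NoSlidingExpiration);
        }
    }
    return token;
}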

To call the Authentication API use code like this:

/// <summary>
/// Retrieves an oAuth authentication token to be used on the translate
/// API request. The result string needs to be passed as a bearer token
/// to the translate API.
///
/// You can find client ID and Secret (or register a new one) at:
/// https://datamarket.azure.com/developer/applications/
/// </summary>
/// <param name="clientId">The client ID of your application</param>
/// <param name="clientSecret">The client secret or password</param>
/// <returns></returns>
public string GetBingAuthToken(string clientId = null, string clientSecret = null)
{
    string authBaseUrl = "https://datamarket.accesscontrol.windows.net/v2/OAuth2-13";

    if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret))
    {
        ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided;
        return null;
    }

    var postData = string.Format("grant_type=client_credentials&client_id={0}" +
                                 "&client_secret={1}" +
                                 "&scope=http://api.microsofttranslator.com",
                                 HttpUtility.UrlEncode(clientId),
                                 HttpUtility.UrlEncode(clientSecret));

    // POST Auth data to the oauth API
    string res, token;

    try
    {
        var web = new WebClient();
        web.Encoding = Encoding.UTF8;
        res = web.UploadString(authBaseUrl, postData);
    }
    catch (Exception ex)
    {
        ErrorMessage = ex.GetBaseException().Message;
        return null;
    }

    var ser = new JavaScriptSerializer();
    var auth = ser.Deserialize<BingAuth>(res);
    if (auth == null)
        return null;

    token = auth.access_token;

    return token;
}

private class BingAuth
{
    public string token_type { get; set; }
    public string access_token { get; set; }
}

This code takes the client id and secret and posts them to the oAuth endpoint, which returns a JSON string. Here I use the JavaScriptSerializer to deserialize the JSON into a custom object created just for deserialization. If you're already using JSON.NET in your app you can use dynamic deserialization instead, in which case you don't need the extra type. The library that houses this component doesn't reference JSON.NET, so I just rely on the built-in serializer.
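For reference, here's a hedged sketch of what the dynamic JSON.NET alternative might look like - it assumes Newtonsoft.Json is referenced in your project and that res holds the raw JSON response from the code above:

using Newtonsoft.Json.Linq;

// parse the response without a dedicated deserialization type
dynamic auth = JObject.Parse(res);
string token = (string)auth.access_token;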

The auth method returns a long base64 encoded string which can be used as a bearer token in the translate API call.

Translation

Once you have the authentication token you can pass it to the translate API. The auth token is passed in an Authorization header, with the value prefixed with 'Bearer '.

Here's what the simple Translate API call looks like:

/// <summary>
/// Uses the Bing API service to perform translation.
/// Bing can translate up to 1000 characters.
///
/// Requires that you provide a ClientId and ClientSecret
/// or set the configuration values for these two.
///
/// More info on setup:
/// http://www.west-wind.com/weblog/
/// </summary>
/// <param name="text">Text to translate</param>
/// <param name="fromCulture">Two letter culture name</param>
/// <param name="toCulture">Two letter culture name</param>
/// <param name="accessToken">
/// Pass an access token retrieved with GetBingAuthToken.
/// If not passed the default keys from .config file are used if any
/// </param>
/// <returns></returns>
public string TranslateBing(string text, string fromCulture, string toCulture,
                            string accessToken = null)
{
    string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate";

    if (accessToken == null)
    {
        accessToken = GetBingAuthToken();
        if (accessToken == null)
            return null;
    }

    string res;

    try
    {
        var web = new WebClient();
        web.Headers.Add("Authorization", "Bearer " + accessToken);

        string ct = "text/plain";
        string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}",
                                        HttpUtility.UrlEncode(text),
                                        fromCulture, toCulture,
                                        HttpUtility.UrlEncode(ct));

        web.Encoding = Encoding.UTF8;
        res = web.DownloadString(serviceUrl + postData);
    }
    catch (Exception e)
    {
        ErrorMessage = e.GetBaseException().Message;
        return null;
    }

    // result is a single XML Element fragment
    var doc = new XmlDocument();
    doc.LoadXml(res);

    return doc.DocumentElement.InnerText;
}

The first part of this code deals with ensuring the auth token exists. You can either pass the token into the method manually or let the method retrieve it automatically. In my case I'm using this inside of a Web application, and in that situation I simply re-authenticate on every request as there's no convenient way to manage the lifetime of the auth token.

The auth token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes and a result format are passed on the query string of this HTTP GET request against the Translate API.

The translate API returns an XML string which contains a single element with the translated string.

Using the Wrapper Methods

It should be pretty obvious how to use these two methods but here are a couple of test methods that demonstrate the two usage scenarios:

[TestMethod]
public void TranslateBingWithAuthTest()
{
    var translate = new TranslationServices();

    string clientId = DbResourceConfiguration.Current.BingClientId;
    string clientSecret = DbResourceConfiguration.Current.BingClientSecret;

    string auth = translate.GetBingAuthToken(clientId, clientSecret);
    Assert.IsNotNull(auth);

    string text = translate.TranslateBing("Hello World we're back home!", "en", "de", auth);
    Assert.IsNotNull(text, translate.ErrorMessage);

    Console.WriteLine(text);
}


[TestMethod]
public void TranslateBingIntegratedTest()
{
    var translate = new TranslationServices();

    string text = translate.TranslateBing("Hello World we're back home!", "en", "de");
    Assert.IsNotNull(text, translate.ErrorMessage);

    Console.WriteLine(text);
}

Other API Methods

The Translate API has a number of methods available; the one shown here, which translates a single string, is the simplest but probably also the most commonly used.

You can find additional methods for this API here:

http://msdn.microsoft.com/en-us/library/ff512419.aspx

Soap and AJAX APIs are also available and documented on MSDN:

http://msdn.microsoft.com/en-us/library/dd576287.aspx

These links will be your starting points for calling other methods in this API.

Dual Interface

I've talked about my database driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both Google and Bing APIs:

WebTranslation

As you can see in this example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose one of the values and directly embed it into the translated text field.

Lost in Translation

There you have it. As I mentioned, once you have all the bureaucratic crap out of the way, calling the APIs is fairly straightforward and reasonably fast, even if you end up calling the Auth API for every translation.

Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy - at least until the next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating…

Related Posts

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in Localization  ASP.NET  .NET  

Replacing jQuery.live() with jQuery.on()


jQuery 1.9 and 1.10 have introduced a host of changes, but for the most part these changes are transparent to existing application usage of jQuery. After spending some time last week with a few of my projects and going through them with a specific eye for jQuery failures, I found that for the most part there wasn't a big issue. The vast majority of code continues to run just fine with 1.9 or 1.10 (the 1.x branch is supposed to stay in sync with the parallel 2.x branch, which removes support for legacy Internet Explorer pre-9.0 versions).

However, one particular change that has caused me quite a bit of update trouble is the removal of the jQuery.live() function. This is my own fault I suppose - .live() has been deprecated for a while, but with 1.9 and later it was finally removed altogether from jQuery. In the past I had quite a bit of jQuery code that used .live(), and it's one of the things that's been holding back my upgrade process, although I'm slowly cleaning up my code and switching to the .on() function as the replacement.

jQuery.live()

jQuery.live() was introduced to simplify handling events on matched elements that exist currently or that are added in the future. The way this generally works is that .live() actually captures events on a parent element, and when the event fires it checks to see whether the originating element matches the selector. This is easy if some elements exist and the parent is easy to figure out, but not so clean and potentially error prone if no elements exist to start with, as jQuery then has to fall back to a high level element.

I presume this uncertainty, and the overhead of trying to find a suitable parent element to bind the events to, was the reason it was removed.

An Example

For example, assume a list of items like the following HTML, and further assume that items can be appended to this list at a later point. In this app there's a smallish initial list that loads first, and as the user scrolls towards the end of it more items are loaded dynamically and added to the list.

<div id="PostItemContainer" class="scrollbox">
    <div class="postitem" data-id="4z6qhomm">
        <div class="post-icon"></div>
        <div class="postitemheader"><a href="show/4z6qhomm" target="Content">1999 Buick Century For Sale!</a></div>
        <div class="postitemprice rightalign">$ 3,500 O.B.O.</div>
        <div class="smalltext leftalign">Jun. 07 @ 1:06am</div>
        <div class="post-byline">- Vehicles - Automobiles</div>
    </div>
    <div class="postitem" data-id="2jtvuu17">
        <div class="postitemheader"><a href="show/2jtvuu17" target="Content">Toyota VAN 1987</a></div>
        <div class="postitemprice rightalign">$950</div>
        <div class="smalltext leftalign">Jun. 07 @ 12:29am</div>
        <div class="post-byline">- Vehicles - Automobiles</div>
    </div>
    …
</div>

With the jQuery.live() function you could easily select elements and hook up a click handler like this:

$(".postitem").live("click", function() {...});

Simple and perfectly readable. The behavior of the .live handler generally was the same as the corresponding simple event handlers like .click(), except that you have to explicitly name the event instead of using one of the methods.

Re-writing with jQuery.on()

With .live() removed in 1.9 and later we have to re-write .live() code above with an alternative.

The jQuery documentation points you at the .on() or .delegate() functions to update your code. jQuery.on() is a more generic event handler function, and it's what jQuery uses internally to map the high level event functions like .click(),.change() etc. that jQuery exposes.

Using jQuery.on() however is not a one to one replacement of the .live() function. While .on() can handle events directly and use the same syntax as .live() did, you'll find if you simply switch out .live() with .on() that events on not-yet existing elements will not fire. IOW, the key feature of .live() is not working.

You can use .on() to get the desired effect however, but you have to change the syntax to explicitly handle the event you're interested in on the container and then provide a qualifier selector expression as to which elements you are actually interested in for handling the event for.

Sounds more complicated than it is. For the list above using jQuery.on() looks like this:

$("#PostItemContainer").on("click", ".postitem", function() {...});

You basically specify a container that can handle the .click event and then provide a filter selector to find the child elements that triggered the actual event. So here #PostItemContainer contains many .postitems, whose click events I want to handle. With this code I get the same behavior as with .live(), and now as new .postitem elements are added the click events are always handled. Sweet.

Here's the full event signature for the .on() function:

.on( events [, selector ] [, data ], handler(eventObject) )

Note that the selector is optional - if you omit it you essentially create a simple event handler that handles the event directly on the selected object. The filter/child selector is required if you want life-like - uh, .live()-like behavior to happen.

While that might look a bit more verbose than what .live() did, .on() provides the same functionality by being more explicit about what your parent container for trapping events is. Presumably it's also faster, as it removes the need for jQuery to search for a suitable parent element.

One downside of .on() vs. .live() in the not-yet-rendered scenario is that in order for it to work you have to know what the parent element is going to be. If you're building generic library code you may not always know what the appropriate parent element is. In that case it's up to you to do essentially what jQuery did inside of the .live() function, which is sniffing for an appropriate parent element to hook the event handler up to, as the sketch below shows. Overall though I've found that even with quite a bit of generic library code that I use and maintain there was only one place where this issue came up. So, probably not a large concern.
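As a minimal sketch of that fallback - using the .postitem selector from the example above - you can delegate at the document level, which is roughly what .live() did internally:

// no known parent container: delegate at the document level
$(document).on("click", ".postitem", function () {
    // handles clicks for current and future .postitem elements
});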

.on() is good Practice even for ordinary static Element Lists

As a side note, it's a good practice to use jQuery.on() or jQuery.delegate() for events in most cases anyway, using this 'container event trapping' syntax. That's because rather than requiring lots of event handlers on each of the child elements (.postitem in the sample above), there's just one event handler on the container, and only when clicked does jQuery drill down to find the matching filter element and match it to the originating element. In the early days of jQuery I used to manually build handlers that did this, drilling from the event object into the original target to determine if it was a matching element. With later versions of jQuery the various event functions essentially provide this functionality out of the box with functions like .on() and .delegate().

All of this is nothing new, but I thought I'd write this up because I have on a few occasions forgotten what exactly was needed to replace the many .live() function calls that litter my code - especially older code. This will be a nice reminder next time I have a memory blank on this topic. And maybe along the way I've helped one or two of you as well to clean up your .live() code…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in jQuery  

Fixing a SkyDrive Sync Disaster


For a few months I've been using SkyDrive to handle some basic synching tasks for a number of folders of mine. Specifically I've been dumping a few of my development folders into SkyDrive so I have a live running backup. It had been working just fine until about a week ago when something went awry. Badly!

The idea is that SkyDrive should sync files, but somewhere in its sync relationship it appears that SkyDrive got confused and assumed it needed to sync older files from the SkyDrive server back to my local machine. So rather than syncing my newer files to the server, SkyDrive was pushing older files back to me. Because SkyDrive is so slow at actually updating data it's not unusual for it to be far behind in syncing, and apparently some files were out of date by several months.

Of course this is insidious because I didn't notice it for quite some time. I'd been happily working away on my files when a few days ago I noted a bunch of files with -RasXps (my machine name) popping up in various folders. At first I thought my Git repository was giving me a fit, but eventually realized that SkyDrive was actually pushing old files into my monitored folders.

To be fair SkyDrive did make backups of the existing files, but by the time I caught it there were literally a few thousand files scattered on my machine that were now updated with old files from online. Here's what some of this looks like:

SkyDriveFail

If you look at the directory list you see a bunch of files with a -RasXps postfix appended to them. Those are the backups of the files that SkyDrive replaced on my machine. As you can see the backed up files are actually newer than the ones SkyDrive pulled down from the online store - in other words, unless I had modified a file after it was overwritten, the incoming files were all older than the existing local files.

Not exactly how I imagined my synching would work.

At first I started cleaning up this mess manually. In most cases the obvious solution was to simply delete the original file and replace it with the -RasXps file, but not in all cases. Some scrutiny was required, and besides being a pain in the ass to rename files, quite frequently I had to dig out Beyond Compare to check files where it wasn't quite clear what was wrong.

I quickly realized that doing this by hand would be too hard for the large number of files that got hosed.

Hacking together a small .NET Utility

So, I figured the easiest way to tackle this is to write a small utility app that shows me all the mangled files that have backups, allows me to compare them and then quickly select and update them, removing the -RasXps file after choosing one of the two files.

What I ended up with was a quick and dirty WinForms app that allows me to pick a root folder, and then shows all the -MachineName files:

FixSkyDriveForm

I start by picking a base folder and a template to search for - typically the -MachineName. Clicking Go brings up a list of all files in that folder and its subdirectories.  The list also displays the dates for the saved (-MachineName) file and the current file on disk, along with highlighting for the newer of the two.
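To give an idea of what that search step boils down to, here's a minimal sketch - not the actual utility source; the folder path, suffix and names are assumptions - that pairs each backup file with its original and flags which of the two is newer:

using System;
using System.IO;
using System.Linq;

string baseFolder = @"C:\projects";   // the base folder picked in the form
string suffix = "-RasXps";            // the -MachineName template to search for

var conflicts = Directory
    .EnumerateFiles(baseFolder, "*" + suffix + "*", SearchOption.AllDirectories)
    .Select(backup => new
    {
        Backup = backup,
        Original = backup.Replace(suffix, string.Empty)
    })
    .Where(pair => File.Exists(pair.Original))     // only keep real conflict pairs
    .Select(pair => new
    {
        pair.Backup,
        pair.Original,
        BackupIsNewer = File.GetLastWriteTime(pair.Backup) >
                        File.GetLastWriteTime(pair.Original)
    })
    .ToList();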

I can right click on any file and get a context menu pop up to open the folder in Explorer, or open Beyond Compare and view the two files to compare differences which I found very helpful for a number of files where I had modified the files after SkyDrive had updated to an old one. Typically these would be the green files (of which there were thankfully few).

To 'fix' files I can select any number of files in the list, then use one of the three buttons on the right to apply an operation. I can use the Saved file - that is, the backup file that SkyDrive created with the -MachineName extension (-RasXps above). Or I can use the current file, which is the file with the right name on disk right now, and delete the -MachineName file. Or on some occasions I can just opt to delete both of them. For some files like binaries it's often easier to just delete both and rebuild than to choose.

For the most part the process involves accepting the pink files, and checking the few green files to see if any modifications were made after the file was incorrectly updated by SkyDrive. Luckily for me those were few in number.

Anyways, I thought I share this utility in case anybody else runs into this issue. I've included the VS2012 solution and all the source code so you can see how it works and you can tweak it as needed. The .NET 4.5 binaries are also included if you can't compile.

Be warned though!  This rough code is provided as is and makes no guarantees or claims about file safety. All three of the action buttons on the form will delete data. It's a very rough utility and there are no safeguards that ask nicely before deleting files. I highly recommend you make a backup before you have at it.

This tool is very narrow in focus, but it might also work with sync issues from other vendors. I seem to remember that I had similar issues with SugarSync at some point, and it too created the -MachineName style files on sync conflicts.

Hope this helps somebody out so you can avoid wasting the better part of a full work day on this…

Resources

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in Windows  .NET  

Using HTML 5 SessionState to save rendered Page Content


HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionState and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers and even all the way back to IE8, so you can use them safely in your Web applications.

SessionState and LocalStorage are easy

The APIs that make up sessionState and localStorage are very simple. Both objects feature the same API interface, which is a simple, string based key value store with getItem, setItem, removeItem, clear and key methods. The objects are also pseudo array objects, so they can be iterated like an array, have a length property, and expose array indexers to set and get values with.

Basic usage  for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):

// set
var lastAccess = new Date().getTime();
if (sessionStorage)
    sessionStorage.setItem("myapp_time", lastAccess.toString());

// retrieve in another page or on a refresh
var time = null;
if (sessionStorage)
    time = sessionStorage.getItem("myapp_time");

if (time)
    time = new Date(time * 1);
else
    time = new Date();

sessionState stores data that is browser session specific and has a lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the data is stored permanently in the browser's storage area until deleted via code or by clearing out browser cookies (not the cache). Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5mb worth of character data is quite a bit and goes a long way.
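Because of those limits it can be worth guarding writes: setItem throws when the per-domain quota is exceeded. Here's a minimal sketch of such a guard - the helper name is just an example:

function trySetItem(storage, key, value) {
    try {
        storage.setItem(key, value);
        return true;
    } catch (e) {
        // quota exceeded (the error name varies by browser) - the value was not stored
        return false;
    }
}

// usage
if (!trySetItem(sessionStorage, "myapp_time", new Date().getTime().toString()))
    console.log("sessionStorage is full - value not cached");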

The easiest way to get a feel for how sessionState and localStorage work is to look at a simple example.

You can go check out the following example online in Plunker:
http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview

which looks like this:

Session Sample

Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code; it's similar to JsFiddle, but a bit cleaner to work in, IMHO (thanks to John Papa for turning me on to it).

The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.

The code to deal with the SessionStorage (and LocalStorage not shown here) in the example is isolated into a couple of wrapper methods to simplify the code:

function getSessionCount() {
    var count = 0;
    if (sessionStorage) {
        count = sessionStorage.getItem("ss_count");
        count = !count ? 0 : count * 1;
    }
    $("#txtSession").val(count);
    return count;
}

function setSessionCount(count) {
    if (sessionStorage)
        sessionStorage.setItem("ss_count", count.toString());
}

These two functions essentially load and store a session counter value. The two key methods used here are:

  • sessionStorage.getItem(key);
  • sessionStorage.setItem(key,stringVal);

Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily store JSON data as a string, so it's quite possible to serialize complex objects and store them in a single sessionStorage value:

var user = { name: "Rick", id: "ricks", level: 8 };
sessionStorage.setItem("app_user", JSON.stringify(user));

to retrieve it:

var user = sessionStorage.getItem("app_user");
if (user)
    user = JSON.parse(user);

Simple!

If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resource tab:

SessionDebugger 

You can also use this tool to refresh or remove entries from storage.

What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running.

But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can either store some state information and data on the client (ie. store a JSON object and carry it forward between server rendered HTML requests), or you can use it for good old content caching where some rendered HTML is saved and then restored later.

Let's look at the latter with a real life example.

Why do I need Client-side Page Caching for Server Rendered HTML?

I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to then drill down into more specific data either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content.

So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and…

…you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting your browsing from scratch at the top. Not cool!

Sound familiar? This is a pretty common scenario with server rendered HTML content: it's so common to display lists to drill into, only to lose state in the process of returning to the original list. Look at just about any traditional forums application, or even StackOverFlow, to see what I mean. Scroll down a bit to look at a post or entry, drill in, then use the bread crumbs or tab to go back…

In some cases returning to the top of a list is not a big deal. On StackOverFlow that sort of works because content turns around so quickly you probably want to look at the top posts anyway. Not always though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially anytime you're actively browsing the items in the list, state becomes important, and if it's not handled the user experience can be really disruptive.

Content Caching

If you're building client centric SPA style applications this is a fairly easy to solve problem - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again.

But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it???

This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this.

A real World Use Case

Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it: frames based - and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with Frames, I have to admit that the UI actually works well with the frames layout as long as you're running on a large desktop screen. You can check out the frames based desktop site here:

http://classifieds.gorge.net/

However when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was mobile displays, which work horribly with frames. So there's a somewhat mobile friendly version of the interface, which ditches the frames and uses some responsive design tweaks for mobile operation:

QrCodeList


http://classifeds.gorge.net/mobile 
(or browse the base url with your browser width under 800px)

 


Here's what the mobile, non-frames view looks like:

  ClassifiedsListing[4]ClassifiedsView

As you can see this means that the list of classifieds posts now is a list and there's a separate page for drilling down into the item. And of course… originally we ran into that usability issue I mentioned earlier where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place.

Because so many items are added on a daily basis the full list is never loaded all at once; instead there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items were no longer there because the list re-rendered as if it was the first page hit. The additional listings, any filters and the selection of an item were all lost. Major Suckage!

Using Client SessionStorage to cache Server Rendered Content

To work around this problem I decided to cache the rendered page content from the list in SessionStorage. Anytime the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in Session Storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead the client side code retrieves the data from the sessionState cache and simply inserts it into the page.

It sounds pretty simple, and the overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation.

Let's start with the server side because that'll give a quick idea of the doc structure. As I mentioned, the server renders data from an ASP.NET MVC view. The URL used when returning to the list page from the display page (or a host of other pages) looks like this:

https://classifieds.gorge.net/list?back=True

The query string value is a flag, that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:

@model MessageListViewModel
@{
    ViewBag.Title = "Classified Listing";
    bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]);
}
<form method="post" action="@Url.Action("list")">
    <div id="SizingContainer">
        @if (!isBack)
        {
            @Html.Partial("List_CommandBar_Partial", Model)
            <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;">
                @Html.Partial("List_Items_Partial", Model)
                @if (Model.RequireLoadEntry)
                {
                    <div class="postitem loadpostitems" style="padding: 15px;">
                        <div id="LoadProgress" class="smallprogressright"></div>
                        <div class="control-progress">Load additional listings...</div>
                    </div>
                }
            </div>
        }
    </div>
</form>

As you can see the query string flag triggers a conditional block that, if set, is simply not rendered. The content inside of #SizingContainer basically holds the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case that makes good sense - in other situations, the fact that the menu or filter options might be dynamically updated might make you cache only the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result, as both the list and any filter conditions in the form inputs are restored.

Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionState into a separate function because they are almost always used in several places.

page.saveData = function(id) {
    if (!sessionStorage)
        return;

    var data = {
        id: id,
        scroll: $("#PostItemContainer").scrollTop(),
        html: $("#SizingContainer").html()
    };
    sessionStorage.setItem("list_html", JSON.stringify(data));
};

page.restoreData = function() {
    if (!sessionStorage)
        return;

    var data = sessionStorage.getItem("list_html");
    if (!data)
        return null;

    return JSON.parse(data);
};

The data that is saved is an object which contains an id - the element selected when the user clicks - and a scroll position. These two values are used to reset the selection and scroll position when the data is restored from the cache. Finally the html from the #SizingContainer element is stored, which makes up the bulk of the document's HTML.

In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization that's Ok. Since I'm using SessionStorage the storage space has no immediate limits.

Next is the core logic to handle saving and restoring the page state. At first though this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:

$( function() {
    var isBack = getUrlEncodedKey("back", location.href);
    if (isBack) {
        // remove the back key from URL
        setUrlEncodedKey("back", "", location.href);

        var data = page.restoreData();  // restore from sessionState
        if (!data) {
            // no data - force redisplay of the server side default list
            window.location = "list";
            return;
        }

        $("#SizingContainer").html(data.html);

        var el = $(".postitem[data-id=" + data.id + "]");
        $(".postitem").removeClass("highlight");
        el.addClass("highlight");

        $("#PostItemContainer").scrollTop(data.scroll);

        setTimeout(function() {
            el.removeClass("highlight");
        }, 2500);
    }
    else if (window.noFrames)
        page.saveData(null);  // save when page loads

    $("#SizingContainer").on("click", ".postitem", function() {
        var id = $(this).attr("data-id");
        if (!id)
            return true;

        if (window.noFrames)
            page.saveData(id);

        var contentFrame = window.parent.frames["Content"];
        if (contentFrame)
            contentFrame.location.href = "show/" + id;
        else
            window.location.href = "show/" + id;

        return false;
    });
    …

The code starts out by checking for the back query string flag which triggers restoring from the client cache. If it's present, the cached data structure is read from sessionStorage. It's important here to check whether data was actually returned. If the user had back=true on the querystring but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued, the page would render no data - so make sure to always check the cache retrieval result. Always!

If there is data it's loaded, and the data.html content is restored by simply injecting the HTML back into the document's #SizingContainer element:

$("#SizingContainer").html(data.html);

It's that simple and it's quite quick even with a fully loaded list of additional items and on a phone.

The actual HTML data is stored to the cache on every page load initially and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates with additional information for the selected element.

For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry as well as storing that Id value in the saved cached data. The id is used to reset the selection by searching for the data-id value in the restored elements.

The overall process of this save/restore process is pretty straight forward and it doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or anybody who uses the non-frames view).

Some things to watch out for

As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware what type of content you are caching. The code above is all that's specific to cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious.

Here are a few things to pay attention to:

  • Event Handling Logic
  • Timing of manipulating DOM events
  • Inline Script Code
  • Bookmarking to the Cache Url when no cache exists

 

JavaScript Event Hookups

The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:

$("#btnSearch").click( function() {…});

that works fine when the page loads with server rendered HTML, but that code breaks when you now load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet.

Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead:

$("#SelectionContainer").on("click","#btnSearch", function() {…});

which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document bindings still work. For any cached content use deferred events.

Timing of manipulating DOM Elements

Along the same lines make sure that your DOM manipulation code follows the code that loads the cached content into the page so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line.

Inline Script Code

Here's another small problem I ran into: I use a DateTime Picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it using JavaScript logic. Because of MVC's limited 'object model', the only way to embed that widget content into the page is through inline script.

This code broke when I inserted the cached HTML into the page because the script code was not available when the component actually got injected into the page. As with the last bullet - it's a matter of timing. There's no good workaround for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant primarily to serve mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it).

Bookmarking Cached Urls

When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server. In the Classifieds app I didn't render the server side content, so if the user comes to the page with back=True and there is no cached content I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL.

The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists and if not have a safe URL to go back to - in this case to the plain uncached list URL which amounts to effectively redirecting.

This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything.

Don't use <body> to replace Content

Since we're practically replacing all the HTML in the page it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads. Specifically script tags, top level forms and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content and wrap just around the actual content you want to replace. In the app above the #SizingContainer is that container.

Other Approaches

The approach I've used for this application is kind of specific to the existing server rendered application we're running and so it's just one approach you can take with caching. However for server rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic.

Here are a few other ways that come to mind:

  • Using Partial HTML Rendering via AJAX
    Instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML and embedding it into the page from a Partial View. This effectively makes the initial rendering and the cached rendering logic identical and removes the server having to decide whether this request needs to be rendered or not (ie. not checking for a back=true switch). All the logic related to caching is made on the client in this case.
  • Using JSON Data and Client Rendering
    The hardcore client option is to do the whole UI SPA style and pull data from the server and then use client rendering or databinding to pull the data down and render using templates or client side databinding with knockout/angular et al. As with the Partial Rendering approach the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch other than the initial check for the cache request. Of course if the app is a  full on SPA app, then caching may not be required even - the list could just stay in memory and be hidden and reactivated.

I'm sure there are a number of other ways this can be handled as well especially using  AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since there's no content loaded initially. So there are always tradeoffs and it's important to look at all angles before deciding on any sort of caching solution in general.

State of the Session

SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features of content and data. In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved.

Always keep in mind that both SessionState and LocalStorage have size limitations that are per domain, so keep item storage optimized by removing storage items you no longer need to avoid overflowing the available storage space.

If you haven't played with sessionStorage or localStorage I encourage you to give it a try. It's highly useful when it comes to caching information and managing client state even in primarily server driven applications. Check it out…

Resources

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in JavaScript  HTML5  ASP.NET  MVC  

HTML5 and CSS3 Editing in Windows Live Writer


Windows Live Writer is a wonderful tool for editing blog posts and getting them posted to your blog. What makes it nice is that it has a small set of useful features, plus a simple plug-in model that has spawned many useful add-ins. Small tool with a reasonably decent plug-in model to extend equals a great solution to a simple problem. If you're running Windows, have a blog and aren’t using Live Writer you’re probably doing it wrong…

One of Live Writer’s nice features is that it can download your blog’s CSS for preview and edit displays. It lets you edit your content inside of the context of that CSS using the WYSIWYG editor, so your content actually looks very close to what you’ll see on your blog while you’re editing your post. Unfortunately Live Writer renders the HTML content in the Web Browser Control’s  default IE 7 rendering mode.

Yeah, you read that right: IE 7 is the default for the Web Browser control, and most applications that use it are stuck in this mode unless the application explicitly overrides the default. The Web Browser control does not use the version of Internet Explorer installed on the system (IE 10 on my Win8 machine) but uses IE 7 mode for ‘compatibility’ with old applications.

If you are importing your blog’s CSS that may suck if you’re using rich HTML 5 and CSS 3 formatting.

Hack the Registry to get Live Writer to render using IE 9 or 10

In order to get Live Writer (or any other application that uses the Web Browser Control, for that matter) to render with a more recent engine, you can apply a registry hack that overrides the Web Browser Control engine used for a specific application. I wrote about this in detail in a previous blog post a couple of years back.

Here’s how you can set up Windows Live Writer to render your CSS 3 by making a change in your registry:

livewriterHtml5

The above is for setup on a 64 bit machine, where I configure Live Writer - which is a 32 bit application - to use IE 10 rendering.

The keys set are as follows:

32bit Configuration on 64 bit machine:

HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION

Key: WindowsLiveWriter.exe
Value: 9000 or 10000  (IE 9 or 10 respectively) (DWORD value)

On a 32 bit only machine:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION

Key: WindowsLiveWriter.exe
Value: 9000 or 10000  (IE 9 or 10 respectively) (DWORD value)

Use decimal values of 9000, 10000 or 11000 to specify specific versions of Internet Explorer.

This is a minor tweak, but it’s nice to actually see my blog posts now with the proper CSS formatting intact.

Html5Rendering

Notice the rounded borders and shadow on the code blocks as well as the overflow-x and scrollbars that show up. In this particular case I can see what the code blocks actually look like in a specific resolution – much better than in the old plain view which just chopped things off at the end of the window frame. There are a few other elements that now show properly in the editor as well including block quotes and note boxes that I occasionally use.

It’s minor stuff, but it makes the editing experience better yet and closer to the final things so there are less republish operations than I previously had. Sweet!

Note that this approach of putting an IE version override into the registry works with most applications that use the Web Browser control. If you are using the Web Browser control in your own applications, it’s a good idea to switch the browser to a more recent version so you can take advantage of HTML 5 and CSS 3 in your browser displayed content - either by setting this flag in the registry during setup, or as part of the application’s startup routine if no dedicated setup tool is used (a small sketch follows below). At the very least you might set it to 9000 (IE 9), which supports most of the basic CSS3 features and is a decent baseline that works for most Windows 7 and 8 machines. On pre-IE9 machines the browser will fall back to IE7 rendering and look bad, but at least more recent browsers will see an improved experience.
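Here’s a minimal sketch of what such startup code might look like in your own application. It writes to HKEY_CURRENT_USER - which the FEATURE_BROWSER_EMULATION key also honors and which doesn’t require admin rights - and the method name is just an example:

using System.Diagnostics;
using Microsoft.Win32;

public static void SetBrowserEmulation(int ieVersion = 9000)
{
    // the value name must match the executable name of the hosting application
    string exeName = Process.GetCurrentProcess().ProcessName + ".exe";

    Registry.SetValue(
        @"HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main\" +
        @"FeatureControl\FEATURE_BROWSER_EMULATION",
        exeName, ieVersion, RegistryValueKind.DWord);
}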

I’m surprised that there aren’t more vendors and third party apps using this feature. You can see in my first screen shot that there are only very few entries in the registry key group on my machine – any other apps that use the Web Browser control are stuck with IE7 rendering. Go figure. Certainly Windows Live Writer should be writing this key into the registry automatically as part of installation to support this functionality out of the box, but alas, since it does not, this registry hack lets you get your way anyway…

Resources

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in Live Writer  Windows  

Rendering ASP.NET MVC Razor Views outside of MVC revisited


Last year I posted a detailed article on how to render Razor Views to string both inside of ASP.NET MVC and outside of it. In that article I showed several different approaches to capture the rendering output. The first and easiest is to use an existing MVC Controller Context to render a view by simply passing the controller context which is fairly trivial and I demonstrated a simple ViewRenderer class that simplified the process down to a couple lines of code.

However, if no Controller Context is available the process is not quite as straight forward and I referenced an old, much more complex example that uses my RazorHosting library, which is a custom self-contained implementation of the Razor Engine. While it works, it’s an awkward solution when running inside of ASP.NET, even if quite workable and useful outside of it.

Well, it turns out that I missed something in the original article, namely that it is possible to create a ControllerContext, if you have a controller instance, even if MVC didn’t create that instance.

Creating a Controller Instance outside of MVC

The trick to make this work is to create an MVC Controller instance – any Controller instance – and then configure a ControllerContext through that instance. As long as an HttpContext.Current is available it’s possible to create a fully functional controller context as Razor can get all the necessary context information from the HttpContextWrapper().

The key to make this work is the following method:

/// <summary>
/// Creates an instance of an MVC controller from scratch
/// when no existing ControllerContext is present
/// </summary>
/// <typeparam name="T">Type of the controller to create</typeparam>
/// <returns></returns>
public static T CreateController<T>(RouteData routeData = null)
            where T : Controller, new()
{
    T controller = new T();

    // Create an MVC Controller Context
    var wrapper = new HttpContextWrapper(System.Web.HttpContext.Current);

    if (routeData == null)
        routeData = new RouteData();

    if (!routeData.Values.ContainsKey("controller") &&
        !routeData.Values.ContainsKey("Controller"))
        routeData.Values.Add("controller", controller.GetType().Name
                                                     .ToLower()
                                                     .Replace("controller", ""));

    controller.ControllerContext = new ControllerContext(wrapper, routeData, controller);

    return controller;
}

This method creates an instance of a Controller class from an existing HttpContext which means this code should work from anywhere within ASP.NET to create a controller instance that’s ready to be rendered. This means you can use this from within an Application_Error handler as I needed to or even from within a WebAPI controller as long as it’s running inside of ASP.NET (ie. not self-hosted). Nice.

So using the ViewRenderer class from the previous article I can now very easily render an MVC view outside of the context of MVC. Here's what I ended up with in my application's custom error HttpModule:

protected override void OnDisplayError(WebErrorHandler errorHandler, ErrorViewModel model)
{
    var Response = HttpContext.Current.Response;
    Response.ContentType = "text/html";
    Response.StatusCode = errorHandler.OriginalHttpStatusCode;

    var context = ViewRenderer.CreateController<ErrorController>().ControllerContext;
    var renderer = new ViewRenderer(context);
    string html = renderer.RenderView("~/Views/Shared/GenericError.cshtml", model);

    Response.Write(html);
}

That’s pretty sweet, because it’s now possible to use ViewRenderer just about anywhere in any ASP.NET application, not only inside of controller code.

This also allows the constructor for the ViewRenderer from the last article to work without a controller context parameter, using a generic controller as the basis for the controller context when none is passed:

public ViewRenderer(ControllerContext controllerContext = null)
{
    // Create a known controller from HttpContext if no context is passed
    if (controllerContext == null)
    {
        if (HttpContext.Current != null)
            controllerContext = CreateController<ErrorController>().ControllerContext;
        else
            throw new InvalidOperationException(
                "ViewRenderer must run in the context of an ASP.NET " +
                "Application and requires HttpContext.Current to be present.");
    }
    Context = controllerContext;
}

In this case I use the ErrorController class which is a generic controller instance that exists in the same assembly as my ViewRenderer class and that works just fine since ‘generically’ rendered views tend to not rely on anything from the controller other than the model which is explicitly passed.

While these days most of my apps use MVC, I still have a number of generic pieces in most of these applications where Razor comes in handy. This includes modules like the above, which often need to display error output when something goes wrong. In other cases I need to generate string template output for emailing or for logging data to disk. Being able to simply render an arbitrary View to a string and pass in a model makes this super nice and easy - at least within the context of an ASP.NET application!
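For example, here's a hedged sketch of rendering an email body from a Razor template anywhere in an ASP.NET application, using the parameterless constructor shown above - the template path, the order model and the SendEmail() helper are hypothetical stand-ins:

// no ControllerContext required - the default constructor creates one internally
var renderer = new ViewRenderer();
string body = renderer.RenderView("~/Views/Templates/OrderConfirmation.cshtml", order);

// SendEmail() is a placeholder for whatever mail helper your app uses
SendEmail(customer.Email, "Your order confirmation", body);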

You can check out the updated ViewRenderer class below to render your ‘generic views’ from anywhere within your ASP.NET applications. Hope some of you find this useful.

Resources

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  MVC  

IIS Default Documents vs. ASP.NET MVC Routes


Here's a question that I've gotten quite a few times over the years, and one whose answer takes me a minute to remember myself every time I try to use a static default document in an ASP.NET MVC application - as I often do for demos.

Suppose you have a static index.htm page in your project, IIS is configured to include index.htm as a default document (as it is by default), and you want that page to come up when the browser navigates to the default URL of your site or virtual directory. Now, when you create a new empty or basic MVC project, leave everything set at the default settings and go to:

http://localhost:30735/

you'll unpleasantly find:

(Screenshot: "The resource cannot be found" 404 error page)

So why is IIS not finding your default resource? The file exists and using:

http://localhost:30735/index.htm

works, so what's the deal?

ASP.NET MVC takes over URL management and by default the routing is such that all extensionless URLs are controlled by the extensionless Url handler defined in web.config:

<handlers>
  <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
  <add name="ExtensionlessUrlHandler-Integrated-4.0"
       path="*." verb="*"
       type="System.Web.Handlers.TransferRequestHandler"
       preCondition="integratedMode,runtimeVersionv4.0" />
</handlers>

This handler routes all extensionless URLs into ASP.NET's routing mechanism which MVC then picks up to define its internal route handling. Since

http://localhost:30735/

is an extensionless URL it's treated like any other MVC routed URL and tries to map to a configured routing endpoint/controller action. ASP.NET MVC tries to map the URL to a controller and action, and if the default routing is in place it'll try to find the HomeController and the Index action on it. If that exists it'll display, otherwise the above 404 and corresponding error page shows up.

To display a static default page for the root folder there's luckily an easy way to accomplish the task by using routes.IgnoreRoute(""):

public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
    routes.IgnoreRoute("");

    routes.MapRoute(
        name: "Default",
        url: "{controller}/{action}/{id}",
        defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
    );
}

The routes.IgnoreRoute("") call ensures that the root URL is not handled by MVC routing, so IIS's default document handling kicks in and finds your index.htm file - MVC ignores the route and lets IIS do its thing.

And voila - your index.htm page is now served.

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in MVC  

The Search Engine Developer


Scott Hanselman yesterday posted a piece on how easy it is to become what I call a Search Engine Developer. His post really hit home with me as I've done a lot of research for various development challenges in my last few work weeks where Google was a mainstay. I don't know about you, but I find myself using an ever increasing amount of intellectual content by searching for solutions online, which is a change from how I used to work not all that long ago. That doesn't just mean finding code and cutting and copying, but also searching for ideas and problems solved and applying that knowledge to my particular programming problems. Nevertheless, when I compare how I used to work in the days prior to the Internet and the mountains of shared programming knowledge today, it sometimes feels like I've become a slacker at best and a plagiarizer at worst as I don't have to stretch as far to solve common and not-so common problems today by using a search engine and the many developer resources available on the Internet.

To put this into context, here's a good one to ask yourself: If you had a job interview tomorrow and had to answer a bunch of heavy duty technical questions without Internet access, could you do it? To be honest, I'm not so sure I could - I've come to rely on being able to look up information online whether it's in search engines, code repositories or even my own blog. The concepts and ideas are all in my head, but many of the details and implementation issues often are not. I commit very little detail information these days into permanent memory it seems. My actual memory retention rate is probably very low outside of the commonly used stuff I use day in day out.

I suspect I'm not the only one…

How did we get here?

But it's not always been this way though at least not for me.

I've been doing software development for nearly 30 years now (scary thought that!) and I can remember the times back then when I didn't have any online resources. The only way I learned stuff was by going to school, reading a book or magazine article (lots of that for me) or going to a user group meeting to discuss programming in person with a few other like-minded souls. In those days there was a lot of 'discovery' on my own, even of small and common things, simply because there was no easy way to look things up. The 'offline', off-memory storage simply wasn't there - you either had it in your head or in a book or magazine buried in a pile at the back of a closet somewhere. :-) Sounds like fun dunn'it?

In a way looking back at it, it was fun. You really *had to* learn stuff and figure it out on your own very often - there was no easy cheating. No going online to snake a RegEx expression for validating a phone number, or finding an easy to use SMTP library in FoxPro, or easily do something as simple as base64 encoding in C. You had to sit down and figure it out on your own.

This went on for a long time too - it wasn't really until the end of the 90's that more development content than documentation started coming online. Most of it came from big vendors with product documentation, published articles from classic print magazines as well as the messy content from various developer forums that was difficult to wade through.

Then in the early to mid 2000's things changed as blogging started getting popular, and that's when online programming content really started taking off. It was free-form user participation and the ability to pick your own topics and write in detail about often complex technical content that really got massive amounts of quality content online and started driving the Search Engine Developer paradigm.

Then in the late 2000's we started seeing more collaborative sites like StackOverflow for question and answer style conversations, and the proliferation of source code sharing sites like GitHub, CodePlex, BitBucket etc. that really brought code sharing to the general masses.

Today it's easy to find solutions to a fair percentage of programming problems that help both for troubleshooting and the learning process. If you look back and think about how far we've come especially in the last 10 years, it's pretty amazing how much of an impact the amount of online programming information and the ability to search it has brought us.

Hail the Search Engine Developer

One of Scott's points in his piece is that it's easy to get lazy and lose some of your edge by being purely a search engine developer. There's definitely something to be said for creating something from scratch, learning from the experience, pushing your knowledge limits, the thrill of discovery and for working through an idea from concept to completion on one's own.

But, this sort of coding seems to get less and less common, as there are fewer solutions that have to be worked out from scratch like that. I know it's true for me - I used to build lots of components and utilities from scratch. I still do to some extent, but not nearly as much as in the past. Today a lot of those kinds of things are much more easily picked up through some utility code in a shared library or a code snippet found online, modified and directly integrated into code.

Now I'm not suggesting that pure cut and paste or library integration is always a great idea. It's always a good idea to understand the code you're integrating to some extent.

But to be honest - especially with libraries that's not always the case. When I look for a QR reader library to integrate into an application, I'm not going to ask too many questions on how it works for example. OTOH, if I find a short code snippet for integration I usually spend a bit of time experimenting around with that code and usually end up modifying, customizing and abstracting it before integrating to mitigate the 'external code' aspect.

The obvious advantage is that in that process you understand the concepts involved and the code you're working with isn't just a black box. I think that's useful, given that so much of the code we use already comes from vendors and tools/libraries that we have very little control over, even if source code is provided. While the QR library I used recently might be open source, I have no innate desire to dig into that source since I have no particular interest or even the background to deal with that sort of interface.

We may joke about the Search Engine Developer and Cut and Paste Development, but the truth is that we are much, much better off NOT having to reinvent the wheel for all these little programming problems. While it may not be very complex to build some string translation routines that extract text easily with a few parameters, even simple code like that - code that is reliable and works in many different scenarios - takes some time to create from scratch. Is it better to create it from scratch, or to use somebody's (hopefully) tested and already written solution? I think we all know what the answer to that is, ideals be damned. It's perfectly acceptable to not reinvent that sort of wheel. Getting some of the trivial things taken care of by searching for and *adapting* code found online lets us focus on the things that really matter in an application rather than the mundane plumbing code that is necessary, but not necessarily a key piece of the application.

With all this shared code available we have more time to build new solutions, come up with new ideas and expand the wealth of knowledge that is already out there. As they say, we can stand on the shoulders of giants to extend the reach of our skills even further.

It's fun to wax nostalgic about 'back in the day' stories, but I don't miss those days one bit. I don't think anybody does - having more information, at least for our jobs as software developers is definitely beneficial and it allows us to focus on doing the work in our problem domain and leaving the little things to solutions that have been solved a million times over.

As one commenter on Scott's post - Nathan - pointed out:

"Never commit to memory what can be easily looked up in books"
   - Albert Einstein

In today's terms, Einstein's books are the Internet. It's much easier to 'store' and access information there. Whether you write it down for later retrieval - in a blog post perhaps - or whether you search, there's no shame in using the Internet as a retrieval source and an extension of our somewhat limited memory store.

'Already been done' Syndrome

These days I find myself doing much less development truly from scratch. I'd like to, but the reality is that it's getting much harder to justify and build something truly unique. The Internet is a vast place, and there's so much stuff out there already that's already been done. If you think of some sort of problem you need to solve, chances are pretty good that somebody's already thought of it and has tackled that problem and solved it.

It's also very easy to fall into the 'that's already been done' trap. In some cases that's a great thing - if it's something trivial or not relevant to your main problem domain, then 'already been done' is a godsend.

In some cases however, it can also lead to giving up on good ideas. If you have some smart new idea and you check around on the Internet to see if anybody else has done it already, you may find a previous implementation and simply decide it's not worth building your great new idea out. The problem with this thinking is that some good ideas don't get the benefit of alternate and possibly much better implementations. An opportunity lost.

I know there've been a few things I've wanted to do in the past that I didn't, simply because there were already similar solutions out there. Even if those other solutions weren't perfect and I felt that maybe I could do better, it's hard to get motivated when existing solutions are already out there and you'd be playing catch-up, probably ending up having to compete with an incumbent. I suspect a lot of new development never happens because of this.

But sometimes it definitely makes sense to reinvent a wheel and do a better job. We've all seen really bad libraries or applications out there that could benefit from somebody with a different point of view and maybe more dedication taking a stab at it.

How are your Search Engine Skills?

If you are a good developer today, you have to be good at using search engines to make yourself productive in your work. Being good at searching and finding answers to development questions is one of many critical skills these days. Today's software development involves so many technologies, tools and environments that there are very few - if any - people around who can keep it all in their heads. Information overload is a problem, and you probably have a core set of knowledge that you use regularly and that is always 'just there', but for the rest of it you might as well use The Google to help jog your memory.

I'm always amazed however that a lot of developers are not very good at coaxing useful information out of search engines. I often feel like sending people off to Let me Google that for you. In this day and age, especially with the wealth of cumulative Q&A knowledge available on StackOverflow, there's no reason not to be searching efficiently and finding answers to a lot of developer questions. Yet I still find developers who are not actually managing to get useful information out of search queries.

There are several things that are critical here. You need to be able to:

  • Find the right keywords to search on - get familiar with advanced search options
  • Narrow down search results
  • Differentiate the good from the bad results
  • Use what you find responsibly - learn from the code

One important point about any code you find is that cutting and pasting code blindly without actually understanding it is a recipe for disaster. Always play with code you find and learn from it, then integrate. I find it's often a good idea to review the code then implement it by typing it in (preferably without peeking at the original) instead of cutting and pasting. This helps understanding and also retention of the code that was just snatched and integrated into an application. For more complex pieces like full libraries that's not always an option or even desirable, but especially with shorter solutions like stuff you find on StackOverflow this is good advice.

The big problem with sites like StackOverflow and the tons of open source code available to plug in is that it's not easy to resist the simple solution of just cutting and pasting, or plunking in a library and forgetting about it. And unfortunately that temptation often results in untested or misunderstood code getting integrated into solutions. Bottom line: make an effort to understand what you're integrating.

Best of both Worlds

We live in an information rich world - it's not 1995 any more and the Internet and the ability to search its vast resources are here to stay. We can choose to stay sharp and build our skillset the old fashioned way and as Scott so frequently suggests we should Sharpen the Saw to keep learning and improving on what we already know. Search engines are another powerful tool in our arsenal and we can and should let them help  us do our job and make that job easier. But at the same time we shouldn't let them lull us into a false sense of security - into a sense of thinking that all we need is information at our fingertips. Mad Skillz is still a desirable feature for the modern developer and keeping up with those skills is an important part of being a developer.

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in Musings  

Self-Hosting SignalR in a Windows Service


A couple of months ago I wrote about a self-hosted SignalR application that I've been working on as part of a larger project. This particular application runs as a Windows Service and hosts a SignalR Hub that interacts with a message queue and pushes messages into an application dashboard for live information about the queue's status. Basically, the queue service application asynchronously notifies the SignalR hub when messages are processed, and the hub pushes these messages out for display in a Web interface to any connected SignalR clients. It's a wonderfully interactive tool that replaces an old WinForms based front end with a Web interface that is now accessible from anywhere. But it's a slightly different SignalR implementation in that the main broadcast agent isn't a browser client, but rather the Windows Service backend: the backend broadcasts messages from a self-hosted SignalR server.

As I mentioned in that last post, being able to host SignalR inside of external non-Web applications opens up all sorts of opportunities and the process of self-hosting - while not as easy as hosting in IIS - is pretty straight forward. In this post I'll describe the steps to set up SignalR for self-hosting and using it inside of a Windows Service.

Self-Hosting SignalR

The process of self-hosting SignalR is surprisingly simple. SignalR relies on the new OWIN architecture, which bypasses ASP.NET for a more lightweight hosting environment. There are no dependencies on System.Web, so the hosting process tends to be pretty lean.

Creating an OWIN Startup class

The first step is to create a startup class that is called when OWIN initializes. The purpose of this class is to allow you to configure the OWIN runtime, hook in middleware components (think of it like HttpModules) etc. If you're a consumer of a high level tool like SignalR, the OWIN configuration class simply serves as the entry point to hook up the SignalR configuration. In my case I'm using hubs and so all I have here is a HubConfiguration:

public class SignalRStartup
{
    public static IAppBuilder App = null;

    public void Configuration(IAppBuilder app)
    {
        var hubConfiguration = new HubConfiguration
        {
            EnableCrossDomain = true,
            EnableDetailedErrors = true
        };

        app.MapHubs(hubConfiguration);
    }
}

SignalR provides the HubConfiguration class and an IAppBuilder extension method called MapHubs that's used for the hub routing. MapHubs uses Reflection to find all the Hub classes in your app and auto-registers them for you. If you're using Connections instead, the MapConnection<T> method is used to register each connection class individually.

If you're using SignalR 2.0 (currently in RC) then the configuration looks a little different:

public void Configuration(IAppBuilder app)
{
    app.Map("/signalr", map =>
    {
        map.UseCors(CorsOptions.AllowAll);

        var hubConfiguration = new HubConfiguration
        {
            EnableDetailedErrors = true,
            EnableJSONP = true
        };

        map.RunSignalR(hubConfiguration);
    });
}
Note the explicit CORS configuration, which enables cross domain calls for XHR requests, has been migrated into the OWIN middleware rather than being directly integrated into SignalR. You'll need the Microsoft.Owin.Cors NuGet package for this functionality to become available. This is pretty much required on every self-hosted server accessed from the browser, since self-hosting always implies a different domain or at least a different port, which most browsers (except IE) also interpret as cross-domain.

And that's really all you need to do to configure SignalR.

Starting up the OWIN Runtime

Next we need to kickstart OWIN to use the Startup class created above. This is done by calling the WebApp.Start<T> factory method passing the startup class as a generic parameter:

SignalR = WebApp.Start<SignalRStartup>("http://*:8080/");

Start<T> takes the startup class as a generic parameter along with the hosting URI. Here I'm using the root site as the base on port 8080. If you're hosting under SSL, you'd use https://*:8080/.

The method returns an instance of the Web app that you can hold onto. The result is a plain IDisposable interface and when it goes out of scope so does the SignalR service. In order to keep the app alive, it's important to capture the instance and park it somewhere for the lifetime of your application. 

In my service application I create the SignalR instance on the service's Start() method and attach it to a SignalR property I created on the service. The service sticks around for the lifetime of the application and so this works great.

Running in a Windows Service

Creating a Windows Service in .NET is pretty easy - you simply create a class that inherits from System.ServiceProcess.ServiceBase and then override the OnStart() and OnStop() and Dispose() methods at a minimum.

Here's an example of my implementation of ServiceBase including the SignalR loading and unloading:

public class MPQueueService : ServiceBase
{
    MPWorkflowQueueController Controller { get; set; }
    IDisposable SignalR { get; set; }

    public void Start()
    {
        Controller = new MPWorkflowQueueController(App.AdminConfiguration.ConnectionString);

        var config = QueueMessageManagerConfiguration.Current;
        Controller.QueueName = config.QueueName;
        Controller.WaitInterval = config.WaitInterval;
        Controller.ThreadCount = config.ControllerThreads;

        SignalR = WebApp.Start<SignalRStartup>(App.AdminConfiguration.MonitorHostUrl);

        // Spin up the queue
        Controller.StartProcessingAsync();

        LogManager.Current.LogInfo(String.Format("QueueManager Controller Started with {0} threads.",
                                                 Controller.ThreadCount));

        // Allow access to a global instance of this controller and service
        // so we can access it from the stateless SignalR hub
        Globals.Controller = Controller;
        Globals.WindowsService = this;
    }

    public new void Stop()
    {
        LogManager.Current.LogInfo("QueueManager Controller Stopped.");

        Controller.StopProcessing();
        Controller.Dispose();
        SignalR.Dispose();

        Thread.Sleep(1500);
    }

    /// <summary>
    /// Set things in motion so your service can do its work.
    /// </summary>
    protected override void OnStart(string[] args)
    {
        Start();
    }

    /// <summary>
    /// Stop this service.
    /// </summary>
    protected override void OnStop()
    {
        Stop();
    }

    protected override void Dispose(bool disposing)
    {
        base.Dispose(disposing);

        if (SignalR != null)
        {
            SignalR.Dispose();
            SignalR = null;
        }
    }
}

There's not a lot to the service implementation. The Start() method starts up the Queue Manager that does the real work of the application, as well as SignalR which is used in the processing of Queue Requests and sends messages out through the SignalR hub as requests are processed.

Notice the use of Globals.Controller and Globals.WindowsService in the Start() method. SignalR Hubs are completely stateless and they have no context to the application they are running inside of, so in order to pass the necessary state logic and perform tasks like getting information out of the queue or managing the actual service interface, any of these objects that the Hub wants access to have to be available somewhere globally.

public static class Globals
{
    public static MPWorkflowQueueController Controller;
    public static MPQueueService WindowsService;
}

By using a global class with static properties to hold these values they become accessible to the SignalR Hub which can then act on them. So inside of a hub class I can do things like Globals.Controller.Pause() to pause the queue manager's queue processing. Anything with persistent state you need to access from within a Hub has to be exposed in a similar fashion.
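To illustrate, here's a minimal sketch of a hub method acting on those globals - the PauseQueue name is made up for this example, but it relies only on the Paused flag and the statusMessage client callback used elsewhere in this post:

public class QueueMonitorServiceHub : Hub
{
    // Pauses queue processing on the Windows Service host.
    // Relies on Globals.Controller being assigned in the service's Start() method.
    public void PauseQueue()
    {
        Globals.Controller.Paused = true;
        Clients.All.statusMessage("Queue processing has been paused.");
    }
}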

Bootstrapping the Windows Service

Finally you also need to bootstrap the Windows Service so it can start and respond to Windows Service Manager requests in your main program startup (program.cs).

[STAThread]
static void Main(string[] args)
{
    string arg0 = string.Empty;
    if (args.Length > 0)
        arg0 = (args[0] ?? string.Empty).ToLower();

    if (arg0 == "-service")
    {
        RunService();
        return;
    }
    if (arg0 == "-fakeservice")
    {
        FakeRunService();
        return;
    }
}

static void RunService()
{
    var ServicesToRun = new ServiceBase[] { new MPQueueService() };
    LogManager.Current.LogInfo("Queue Service started as a Windows Service.");
    ServiceBase.Run(ServicesToRun);
}

static void FakeRunService()
{
    var service = new MPQueueService();
    service.Start();
    LogManager.Current.LogInfo("Queue Service started as FakeService for debugging.");

    // never ends but waits
    Console.ReadLine();
}

Once installed, the Windows Service Manager launches the service EXE with a -service command line switch to start the service. At that point ServiceBase.Run is called on our custom service instance and the service is running. While it's running, the Windows Service Manager can then call into OnStart(), OnStop() etc. as these commands are applied against the Service Manager. After an OnStop() operation the service is shut down, which shuts down the EXE.

Note that I also add support for a -fakeservice command line switch. I use this switch for debugging, so that I can run the application for testing under debug mode using the same Service interface. FakeService simply instantiates the service class and explicitly calls the Start() method which simulates the OnStart() operation from the Windows Service Manager. In short this allows me to debug my service by simply starting a regular debug process in Visual Studio, rather than using Attach Process and attaching to a live Windows Service. Much easier and highly recommended while you're developing the service.

Windows Service Registration

Another thing I like to do with my services is provide the ability to have them register themselves. My startup program also responds to -InstallService and -UninstallService flags which allow self-registration of the service. .NET doesn't include a native interface for doing this, however with a few calls to the Service Manager APIs it's short work to accomplish. I'm not going to post the code here, but I have a self-contained C# code file that provides this functionality:

With this class in place, you can now easily do something like this in the startup program when checking for command line arguments:

else if (arg0 == "-installservice" || arg0 == "-i")
{
    WindowsServiceManager SM = new WindowsServiceManager();
    if (!SM.InstallService(Environment.CurrentDirectory + "\\MPQueueService.exe -service",
                           "MPQueueService", "MP Queue Manager Service"))
        MessageBox.Show("Service install failed.");
    return;
}
else if (arg0 == "-uninstallservice" || arg0 == "-u")
{
    WindowsServiceManager SM = new WindowsServiceManager();
    if (!SM.UnInstallService("MPQueueService"))
        MessageBox.Show("Service failed to uninstall.");
    return;
}

So now we have the service in place - let's look a little closer at the SignalR specific details.

Hub Implementation

The key piece of the SignalR specific implementation of course is the SignalR hub. The SignalR Hub is just a plain hub with any of the SignalR logic you need to perform. If you recall typical hub methods are called from the client and then typically use the Clients.All.clientMethodToCall to broadcast messages to all (or a limited set) of connected clients.

The following is a very truncated example of the QueueManager hub class that includes a few instance broadcast methods for JavaScript clients, as well as several static methods to be used by the hosting EXE to push messages to the client from the server:

public class QueueMonitorServiceHub : Hub
{
    /// <summary>
    /// Writes a message to the client that displays on the status bar
    /// </summary>
    public void StatusMessage(string message, bool allClients = false)
    {
        if (allClients)
            Clients.All.statusMessage(message);
        else
            Clients.Caller.statusMessage(message);
    }

    /// <summary>
    /// Starts the service
    /// </summary>
    public void StartService()
    {
        // unpause the QueueController to start processing again
        Globals.Controller.Paused = false;

        Clients.All.startServiceCallback(true);
        Clients.All.writeMessage("Queue starting with " +
                                 Globals.Controller.ThreadCount.ToString() +
                                 " threads.",
                                 "Info", DateTime.Now.ToString("HH:mm:ss"));
    }

    public void StopService()
    {
        // Pause - we can't stop the service because that'll exit the server
        Globals.Controller.Paused = true;

        Clients.All.stopServiceCallback(true);
        Clients.All.writeMessage("Queue has been stopped.", "Info",
                                 DateTime.Now.ToString("HH:mm:ss"));
    }

    /// <summary>
    /// Context instance to access client connections to broadcast to
    /// </summary>
    public static IHubContext HubContext
    {
        get
        {
            if (_context == null)
                _context = GlobalHost.ConnectionManager.GetHubContext<QueueMonitorServiceHub>();
            return _context;
        }
    }
    static IHubContext _context = null;

    /// <summary>
    /// Writes out a message to all connected SignalR clients
    /// </summary>
    /// <param name="message"></param>
    public static void WriteMessage(string message, string id = null,
                                    string icon = "Info", DateTime? time = null)
    {
        if (id == null)
            id = string.Empty;

        // if no id is passed write the message in the ID area
        // and show no message
        if (string.IsNullOrEmpty(id))
        {
            id = message;
            message = string.Empty;
        }

        if (time == null)
            time = DateTime.UtcNow;

        // Write out message to SignalR clients
        HubContext.Clients.All.writeMessage(message, icon,
                                            time.Value.ToString("HH:mm:ss"),
                                            id, string.Empty);
    }

    /// <summary>
    /// Writes out a message to all SignalR clients
    /// </summary>
    /// <param name="queueItem"></param>
    /// <param name="elapsed"></param>
    /// <param name="waiting"></param>
    public static void WriteMessage(QueueMessageItem queueItem,
                                    int elapsed = 0, int waiting = -1,
                                    DateTime? time = null)
    {
        string elapsedString = string.Empty;
        if (elapsed > 0)
            elapsedString = (Convert.ToDecimal(elapsed) / 1000).ToString("N2");

        var msg = HtmlUtils.DisplayMemo(queueItem.Message);

        if (time == null)
            time = DateTime.UtcNow;

        // Write out message to SignalR clients
        HubContext.Clients.All.writeMessage(msg,
                                            queueItem.Status,
                                            time.Value.ToString("HH:mm:ss"),
                                            queueItem.Id,
                                            elapsedString,
                                            waiting);
    }
}

This hub includes a handful of instance hub methods that are called from the client to update other clients. For example the StatusMessage method is used by browser clients to broadcast a status bar update in the UI of the browser app. The StartService and StopService operations start and stop the queue processing and also update the UI. This is the common stuff you'd expect to see in a SignalR hub.

Calling the Hub from within the Windows Service

However, the static methods in the Hub class are a little less common. These methods are called from the Windows Service application to push messages from the server to the client. So rather than having the browser initiate the SignalR broadcasts, we're using the server side EXE and SignalR host to push messages from the server to the client. The methods are static because there is no 'active' instance of the Hub and so every method call basically has to establish the context for the Hub broadcast request.

The key that makes this work is this snippet:

GlobalHost.ConnectionManager
          .GetHubContext<QueueMonitorServiceHub>()
          .Clients.All.writeMessage(msg, queueItem.Status,
                                    time.Value.ToString("HH:mm:ss"),
                                    queueItem.Id, elapsedString, waiting);

which gives you access to the Hub from within a server based application.

The GetHubContext<T>() method is a factory that creates a fully initialized Hub that you can pump messages into from the server. Here I simply call out to a writeMessage() function in the browser application, which is propagated to all active clients.

In the browser in JavaScript I then have a mapping for this writeMessage endpoint on the hub instance:

hub.client.writeMessage = self.writeMessage;

where self.writeMessage is a function on the page that implements the display logic:

// hub callbacks
writeMessage: function (message, status, time, id, elapsed, waiting) {

}

If you recall SignalR requires that you map a server-side method to a handler function (the first snippet) on the client, but beyond that there are no additional requirements. SignalR's client library simply calls the mapped method and passes any parameters you fired on the server to the JavaScript function.

For context, the result of all of this looks like the figure below, where the writeMessage function is responsible for writing out the individual request lines in the list display. The writeMessage code basically uses a handlebars.js template to merge the received data into HTML to be rendered in the page.

 

It's very cool to see this in action especially with multiple browser windows open. Even at very rapid queue processing of 20+ requests a second (for testing) you can see multiple browser windows update nearly synchronously. Very cool.

Summary

Using SignalR as a mechanism for pushing server side processing messages to the client is a powerful feature that opens up many opportunities for dashboard and notification style applications that used to run in server isolated silos previously. By being able to host SignalR in a Windows Service or any EXE based application really, you can now offload many UI tasks that previously required custom desktop applications and protocols, and push the output directly to browser based applications in real time. It's a wonderful way to rethink browser based UIs fed from server side data. Give it some thought and see what opportunities you can find to open up your server interfaces.

Resources

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in SignalR  

JavaScript Arrays, References and Databinding in Angular


I ran into a little snag in a small demo application I'm building last night. I've been using Angular and as part of the app I'm adding an object from the $scope to an array that also lives on the $scope instance. Seems straight forward enough, but as it turns out there was a snag with the last addition of the object to the array basically resetting all items in the array to the same value as the last one added.

For some context I have an Angular $scope that looks like this:

$scope.locationData = {        
    Name: "",
    Address: "",
    Usage: "",
    Description: "",
    Longitude: 0,
    Latitude: 0
};    
$scope.locationHistory = [];

The locationData object is bound via Angular bindings to a few textbox controls that are updated from the UI. Then the user clicks a Save Location button and it should add the new locationData item into the locationHistory:

this.saveLocationData = function() {
    $scope.locationData.Entered = new Date();
    $scope.locationData.Updated = new Date();

    $scope.locationHistory.splice(0, 0, $scope.locationData);

    localStorage.setItem("locationScope", JSON.stringify($scope.locationHistory));

    self.showPage();
};

The logic works fine on the first pass. A new locationData item is added to the locationHistory array.

However, on the second (and any subsequent) pass, the locationHistory array changes completely - every item in the array is updated with the last item added. In other words, on the second pass the first and second items are exactly the same.

Even weirder I can do something like this in the Console window:

bc.$scope.locationData.Name = "New Name"

and it will change the name property on all of the locationData objects in the array.

Hold on a Sec!

When I first ran this, it took me quite a while to track this down. I was looking for a logic error, thinking I was adding the wrong item to the array. So off I went debugging with a bunch of console.log() statements echoing out the items that were added - and oddly enough it all looked right. I was adding the right items and console.log() seemed to show the right items being passed and added. Not until I looked more closely at the array, did I realize that all items in the array were always the same as the last item added.

My first reaction - of course - was: WTF is Angular doing here? Mainly because only some items in the array were actually changing - only the ones added in the current session. Since I'm storing the list data to localStorage, some items were 'persistent' and some weren't. It took a minute to realize this however; especially if I only added 2 items it would just seem that I added the same item twice (or that the last item was overwritten with the current one). It wasn't until I added a whole bunch of entries that I realized that this must be something else: a reference problem.

After some more experimenting it turns out this isn't an Angular problem at all, but really a JavaScript reference issue. What's happening here is that there's only a single instance of a locationData object. And even though the values of that object change, that single object reference is all that Angular uses for data binding. When I change a textbox value, Angular updates that object's matching bound value in that single instance.

When I add that object to the array, I am in effect always adding the same object - by reference. A pointer is passed and added. So all the objects are pointers to the same instance! Change one, change all!

Using data binding sort of obfuscates that simple fact and it's easy to miss. Similar issues will creep up with other binding frameworks like Knockout.js and Ember.js etc. It's a simple problem to solve, once you know what the problem is.

Watch those References with Arrays

To demonstrate the issue more simply, here's a small example completely outside of Angular (jsFiddle here):

var arr = [];
var obj = { name: "rick" };

arr.push(obj);

obj.name = "markus";
arr.push(obj);

for (var i = 0; i < 2; i++) {
    // both entries print 'markus'
    console.log(arr[i].name);
}

When you run this you'll find that both objects print "markus". Both objects point to the same instance.

In this code the problem is much more obvious though, because you can actually see that the same object is being referenced. But in effect Angular's data binding is doing exactly the same thing: it's updating a single instance with values that are changed in the UI. There's only one locationData object instance.

I threw this out on Twitter and got a bunch of responses back.

De-referencing a JavaScript Object

The easiest solution for me was to use an Angular helper function: angular.copy() which is a deep object copy function (ie. it copies all properties down the hierarchy). It basically copies an object and creates a new instance, which effectively de-references the original object. Now when I add the copied locationData I have a new instance that's being written out.

Here's what the splice operation that adds the locationData to the array looks like:

$scope.locationHistory.splice(0, 0, angular.copy($scope.locationData));

This is nice and simple and because of the deep copy should work reliably with most objects and arrays.

If you're not using Angular, you can also use jQuery's $.extend() method to do the same thing. You can do both shallow and deep copies with:

// Shallow copy
var n1 = $.extend({}, old);

// Deep copy
var n2 = $.extend(true, {}, old);

$.extend() adds the properties of the second object to the first, and if you pass in an empty object it effectively creates a copy.

Resetting the Reference

Another approach to the de-referencing issue is to create a new instance of the object after it's been added to the array. Rather than keeping that single object instance around and live, replacing the original reference with a new one effectively de-references the object as well. So I can simply set the object to null or {} to de-reference it.

When using Angular this isn't a good idea, especially if items are bound. So in this case I'd have to recreate an empty object and rebind it. To do this a factory method for the base object is in order:

 

$scope.createLocationData = function() {
    return {
        Name: "",
        Address: "",
        Usage: "",
        Description: "",
        Longitude: 0,
        Latitude: 0
    };
};

$scope.locationData = $scope.createLocationData();
$scope.locationHistory = [];

The same function can then be used when saving to recreate the object:

$scope.locationHistory.splice(0, 0, $scope.locationData);
$scope.locationData = $scope.createLocationData();
//$scope.$apply();

This is an easy solution to this problem and it doesn't require any special libraries. Simply recreating the object is efficient and it also provides a nice way to reset the object after it's been saved to show an empty form if no location data is actually loaded.

JavaScript 101

Yes, I realize that object references are pretty basic JavaScript 101 stuff, but it's an issue that's becoming more common to trip over in light of the various databinding frameworks. The issue totally makes sense once you see it, but when I looked at it originally I was scratching my head for a while, trying to understand why it seemed like my data was being changed by the mere act of databinding. It turns out it's nothing more than simple JavaScript logistics with a few simple workarounds. I hope some of you find this useful when running into a similar issue.

Thanks to Ben Maddox and Nick Berardi for their help on Twitter.

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in JavaScript  Angular  

Hosting SignalR under SSL/https


As I've described in several previous posts, self hosting SignalR is very straight forward to set up. It's easy enough to do, which is great if you need to hook up SignalR as an event source to a standard Windows based application such as a Service, or even a WPF or WinForms desktop application that needs to send push notifications to many users.

One aspect of self-hosting that's not quite so transparent or well documented, though, is running a self hosted SignalR service under SSL. The Windows certificate store and the creation, configuration and installation of certificates are still a pain, as there's no UI in Windows for linking endpoints to certificates and the process is not very well documented end to end. It's easy enough once you know which command line tool to call, but this process certainly could be smoother and better documented. Hence I'm rehashing the topic here to provide a little more detail and hopefully a more coherent description of setting up a certificate for self-hosting an OWIN service in general and SignalR specifically.

Self-Hosting and OWIN

When you're self hosting SignalR you're essentially using the hosting services provided by OWIN/Katana. OWIN is a low level spec for implementing custom hosting providers that can be used interchangeably. The idea is to decouple the hosting process from a specific implementation and make it pluggable, so you can choose your hosting implementation.

Katana is Microsoft's implementation of OWIN, which provides a couple of specific implementations. For self-hosting there's the HttpListener based host, which is completely decoupled from IIS and its infrastructure. For hosting inside of ASP.NET there is also an ASP.NET based implementation that is used for SignalR apps running inside of ASP.NET. Both implementations provide the base hosting support for SignalR, so for the most part the same code base can be used for running SignalR under ASP.NET or under your own self-hosted EXEs like services, console or desktop apps.

Binding certificates to SSL Ports for Self-Hosting

Self hosting under HttpListener is wonderful and completely self-contained, but one of the downsides of not being part of IIS is that it also doesn't know about certificates that are installed for IIS, which means that certificates you want to use have to be explicitly bound to a port. Note that you can use IIS certificates and if you need to acquire a full certificate for use with a self-hosted application, going through the IIS certificate process is the easiest way to get the certificate loaded. If you need a certificate for local testing too IIS's self-signed certificate creation tool makes that very easy as well (I'll describe that below).

For now let's assume you already have a certificate installed in the Windows certificate store. In order to bind the certificate to a self-hosted endpoint, you have to use the NETSH command line utility to register it on the machine (all on one line):

netsh http add sslcert ipport=0.0.0.0:8082
    appid={12345678-db90-4b66-8b01-88f7af2e36bf}
    certhash=d37b844594e5c23702ef4e6bd17719a079b9bdf

For every endpoint mapping you need to supply 3 values:

  • The ipport which identifies the ip and port
    Specified as ipport=0.0.0.0:8082 where the zeros mean all ip addresses on port 8082. Otherwise you can also specify a specific Ip Address.
  • The certhash which is the Certificate's Thumbprint
    The certhash is the id that maps the certificate to the IP endpoint above.  You can find this hash by looking at the certificate in the Windows Certificate store. More on this in a minute.
  • An AppID which is fixed for HttpListener Hosting
    This value is static so always use appid={12345678-db90-4b66-8b01-88f7af2e36bf}

Once the above command has been run you should check if it worked by looking at the binding. Use this:

netsh http show sslcert ipport=0.0.0.0:8082

which gives you a display like this:

(Screenshot: output of netsh http show sslcert for the bound endpoint)

Finding the CertHash

I mentioned the certhash above: To find the certhash, you need to find the certificate's ThumbPrint which can be found in a couple of ways using:

  • The IIS Certificate Manager
  • The Windows Certificate Storage Manager

Using IIS to get Certificate Info

If IIS is installed the former is the easiest. Here you can easily see all installed certificates and this UI is also the easiest way to create local self-signed certificates.

To look up an existing certificate, simply bring up the IIS Management Console, go to the Machine node, then Server Certificates:

(Screenshot: the IIS Server Certificates list)

You can see the certificate hash in the rightmost column. You can also double click to open the certificate and look at the Details tab. Look for the thumbprint, which contains the hash.

(Screenshot: the certificate details dialog showing the Thumbprint field)

Unfortunately neither of these places makes it easy to copy the hash, so you either have to copy it manually or remove the spaces from the thumbprint data in the dialog.
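If you'd rather not retype the hash, a small console snippet can dump the thumbprints for you. This is just a sketch that assumes the certificate lives in the Local Machine\Personal store:

using System;
using System.Security.Cryptography.X509Certificates;

class ListThumbprints
{
    static void Main()
    {
        // list certificates in LocalMachine\My (Personal) along with their thumbprints
        var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);

        foreach (var cert in store.Certificates)
            Console.WriteLine("{0}  {1}", cert.Thumbprint, cert.Subject);

        store.Close();
    }
}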

Using IIS to create a self-signed Certificate

If you don't have a full server certificate yet, but you'd like to test with SSL operations locally you can also use the IIS Admin interface to very easily create a self-signed certificate. The IIS Management console provides one of the easiest ways to create a local self-signed certificate.

Here's how to do it:

  • Go to the machine root of the IIS Service Manager
  • Go to the Server Certificates Item in the IIS section
  • On the left click Create Self-Signed Certificate
  • Give it a name, and select the Personal store
  • Click OK

(Screenshot: the Create Self-Signed Certificate dialog in the IIS Management Console)

That's all there is to create the self-signed local certificate.

Copy the self-signed Certificate to the Trusted Root Certification Store

Once you have a self-signed certificate, you need one more step to make the certificate trusted, so Http clients will accept it on your machine without certificate errors. The process involves copying the certificate from the personal store to the trusted machine store.

To do this:

  • From the StartMenu use Manage Computer Certificates
  • Go into Personal | Certificates and find your certificate
  • Drag and Copy (Ctrl-Drag) the certificate to Trusted Root Certificates | Certificates

(Screenshot: copying the certificate into the Trusted Root Certification Authorities store)

You should now have a certificate that browsers will trust. This works fine for IE, Chrome and Safari, but FireFox will need some special steps (thanks to Eric Lawrence) and Opera also requires specific registration of certificates.

Using a full IIS Certificate

Self-signed certificates are great for testing under SSL to make sure your application works, but they're not so nice for production apps, as the certificate would have to be installed on every machine that you expect to trust it - which is a hassle.

Once you go to production, especially public production, you'll need an 'official' certificate signed by one of the global certificate authorities for $$$.

The easiest way to do this is to purchase a full IIS certificate and install it in IIS. The IIS certificate can also be used for self-hosted applications using the HttpListener so it will work just fine with a self-hosted SignalR or any HttpListener application.

So once the time comes to go live, register a new certificate through IIS, then use netsh http add sslcert to register that certificate as shown above. A public SSL certificate is in most cases already trusted, so no further certificate store moving is required - all you need is the netsh registration to tie it to a particular port and app ID.

Running SignalR with SSL

With the certificate installed, switching SignalR to start with SSL is as easy as changing the startup URL.

Self Hosted Server Configuration

In the self hosted server, you now specify the new SSL URL in your startup factory invocation:

var signalR = WebApp.Start<SignalRStartup>("https://*:8082/");

This binds SignalR to all IP addresses on port 8082. You can also specify a specific IP address, but using * is more portable, especially if you set the value as part of a shared configuration file.
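For example, here's a minimal sketch that pulls the host URL from app.config, so the same build can run plain http during development and https in production - the MonitorHostUrl key name is just an example:

// in app.config:
// <appSettings>
//   <add key="MonitorHostUrl" value="https://*:8082/" />
// </appSettings>

// requires a reference to System.Configuration and 'using System.Configuration;'
// falls back to plain http on the same port if the setting is missing
string hubUrl = ConfigurationManager.AppSettings["MonitorHostUrl"] ?? "http://*:8082/";
var signalR = WebApp.Start<SignalRStartup>(hubUrl);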

If you recall from my last self-hosting post, OWIN uses a startup class (SignalRStartup in this case) to handle OWIN and SignalR HubConfiguration, but the only thing that needs to change is the startup URL and your self-hosted server is ready to go.

SignalR Web App Page Url Configuration

On the Web page that consumes the SignalR hubs or connections, change the script URL that loads up the SignalR client proxy like this:

<script src="https://RasXps:8082/signalr/hubs"></script>

where RasXps is my exact local machine name that has the certificate registered to it. As with all certificates, make sure that the domain name matches the certificate's name exactly. For local machines that means: don't use localhost if the certificate is assigned to your local machine's NetBIOS name, as it is by default. Don't use your IP address either - use whatever name the certificate is assigned to.

You'll also need to assign the hub URL to your SSL URL as part of the SignalR startup routine that calls $.connection.hub.start():

$.connection.hub.url = self.hubUrl;  // ie. "https://rasxps:8082/signalR"

For more context here's a typical hub startup/error handler setup routine that I use to get the hub going:

startHub: function () {
    $.connection.hub.url = self.hubUrl;  // ie. "https://rasxps:8082/signalR"

    // capture the hub for easier access
    var hub = $.connection.queueMonitorServiceHub;

    // This means the <script> proxy failed - have to reload
    if (hub == null) {
        self.viewModel.connectionStatus("Offline");
        toastr.error("Couldn't connect to server. Please refresh the page.");
        return;
    }

    // Connection Events
    hub.connection.error(function (error) {
        if (error)
            toastr.error("An error occurred: " + error.message);
        self.hub = null;
    });
    hub.connection.disconnected(function (error) {
        self.viewModel.connectionStatus("Connection lost");
        toastr.error("Connection lost. " + error);

        // IMPORTANT: continuously try re-starting connection
        setTimeout(function () {
            $.connection.hub.start();
        }, 2000);
    });

    // map client callbacks
    hub.client.writeMessage = self.writeMessage;
    hub.client.writeQueueMessage = self.writeQueueMessage;
    hub.client.statusMessage = self.statusMessage;
    …

    // start the hub and handle after start actions
    $.connection.hub
        .start()
        .done(function () {
            hub.connection.stateChanged(function (change) {
                if (change.newState === $.signalR.connectionState.reconnecting)
                    self.viewModel.connectionStatus("Connection lost");
                else if (change.newState === $.signalR.connectionState.connected) {
                    self.viewModel.connectionStatus("Online");

                    // IMPORTANT: On reconnection you have to reset the hub
                    self.hub = $.connection.queueMonitorServiceHub;
                }
                else if (change.newState === $.signalR.connectionState.disconnected)
                    self.viewModel.connectionStatus("Disconnected");
            })
            .error(function (error) {
                if (!error)
                    error = "Disconnected";
                toastr.error(error.message);
            })
            .disconnected(function (msg) {
                toastr.warning("Disconnected: " + msg);
            });

            self.viewModel.connectionStatus("Online");

            // get initial status from the server (RPC style method)
            self.getServiceStatus();
            self.getInitialMessages();
        });
},

From a code perspective other than the two small URL code changes there isn't anything that changes for SSL operation, which is nice.

And… you're done!

SSL Configuration

SSL usage is becoming ever more important as more and more applications require transport security. Even if your self-hosted SignalR application doesn't explicitly require SSL, you'll have to run SignalR under SSL if the SignalR client is hosted inside of a Web page that is itself served over SSL - otherwise you'll get browser error messages, or outright failures in browsers that reject mixed content on SSL pages.

SSL configuration is always a drag, as it's not intuitive and requires a bit of research. It'd be nice if the HttpListener certificate configuration would be as easy as IIS configuration is today or better yet, if self-hosted apps could just use already installed IIS certificates. Unfortunately it's not quite that easy and you do need to run a command line utility with some magic ID associated with it.

Installing a certificate isn't rocket science, but it's not exactly well documented. While looking for information I found a few scattered articles that discuss the process, but some were dated and others didn't specifically cover SignalR or even self-hosted Web sites. So I hope this post makes it a little easier to find this information in the proper context.

This article focuses on SignalR self-hosting with SSL, but the same concepts can be applied to any self-hosted application using HttpListener.

Resources

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in OWIN  SignalR  

Disable User Account Control On Windows 8

User Account Control can be a real pain and in Windows 8 there's no easy way to turn it off. However, using Group Policy you can still completely disable it if you decide to do so. Here's how.

Use IIS Application Initialization for keeping ASP.NET Apps alive

Ever want to run a service-like, always-on application inside of ASP.NET instead of creating a Windows Service or running a Console application? Need to make sure that your ASP.NET application is always running and comes up immediately after an Application Pool restart even if nobody hits your site? The IIS Application Initialization Module provides this functionality in IIS 7 and later, making it much easier to create always-on ASP.NET applications that can act like a service.

Prefilling an SMS on Mobile Devices with the sms: Uri Scheme

Popping up the native SMS app from a mobile HTML Web page is a nice feature that allows you to pre-fill info into a text for sending by a user of your mobile site. The syntax is a bit tricky due to some device inconsistencies, but here's how to do it.