Rick Strahl's Web Log

Process.Start() and ShellExecute() fails with URLs on Windows 8


Since I installed Windows 8 I've noticed that a number of my applications appear to have problems opening URLs. That is, when I click on a link inside of a Windows application, either nothing happens or an error occurs. It's happening both to my own applications and to a host of Windows applications I'm running. At first I thought this was an issue with my default browser (Chrome), but after switching the default browser to a few others - including Firefox and Opera - and experimenting a bit, I noticed that the errors occur - oddly enough - only when I run an application as an Administrator. Every browser I tried showed exactly the same behavior.

The scenario for this is a bit bizarre:

  • Running on Windows 8
  • Call Process.Start() (or ShellExecute() in Win32 API) with a URL or an HTML file
  • Run 'As Administrator' (works fine under a non-elevated user account!) or with UAC off
  • A browser other than Internet Explorer is set as your Default Web Browser

Talk about a weird scenario: something that doesn't work when you run as an Administrator, which is supposed to have rights to everything on the system! Yet running under an Admin account - whether elevated with a User Account Control prompt or running as a full Administrator - is exactly what fails.

It appears that this problem does not occur for everyone, but when I looked for a solution I found quite a few posts describing the same issue with no clear resolution. I have three Windows 8 machines running here in the office and all three of them showed this behavior.

Lest you think this is just a programmer's problem - this can affect any software running on your system that needs to run under administrative rights.

Try it out

Now, in order for this next example to fail, any browser but Internet Explorer has to be your default browser and even then it may not fail depending on how you installed your browser.

To see if this is a problem create a small Console application and call Process.Start() with a URL in it:

using System;
using System.Diagnostics;
using System.IO;

namespace Win8ShellBugConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Launching Url...");
            Process.Start("http://microsoft.com");

            Console.Write("Press any key to continue...");
            Console.ReadKey();

            Console.WriteLine("\r\n\r\nLaunching image...");
            Process.Start(Path.GetFullPath(@"..\..\sailbig.jpg"));

            Console.Write("Press any key to continue...");
            Console.ReadKey();
        }
    }
}

Compile this code. Then execute the code from Explorer (not from Visual Studio because that may change the permissions).

If you simply run the EXE and you're not running as an administrator, you'll see the Web page pop up in the browser as well as the image loading.

Now run the same thing with Run As Administrator:

[Figure: Run As Administrator from Explorer]

Now when you run it you get a nice error when Process.Start() is fired:

[Figure: Process.Start() crash dialog when running as Administrator]

The same happens if you are running with User Account Control off altogether - i.e. you are running under a full admin account.

Now if you comment out the URL in the code above and just fire the image display - that works just fine in any user mode. So does opening any other local file type, or even starting another EXE locally (i.e. Process.Start(@"c:\windows\notepad.exe")). All of that works, EXCEPT for URLs.

The code above uses Process.Start() in .NET, but the same happens in Win32 applications that use the ShellExecute API. In some of my older FoxPro apps ShellExecute returns an error code of 31 - which means 'No Shell Association found'.
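For reference, here's roughly what that Win32 call looks like from .NET via P/Invoke - a minimal sketch for reproducing the failure; the wrapper class and method names are just for illustration:

using System;
using System.Runtime.InteropServices;

static class ShellLauncher
{
    // Classic ShellExecute declaration - returns a value greater than 32 on success
    [DllImport("shell32.dll", CharSet = CharSet.Auto)]
    static extern IntPtr ShellExecute(IntPtr hwnd, string lpOperation,
                                      string lpFile, string lpParameters,
                                      string lpDirectory, int nShowCmd);

    const int SW_SHOWNORMAL = 1;
    const int SE_ERR_NOASSOC = 31;   // "No Shell Association found"

    public static void OpenUrl(string url)
    {
        int result = ShellExecute(IntPtr.Zero, "open", url, null, null, SW_SHOWNORMAL).ToInt32();
        if (result == SE_ERR_NOASSOC)
            Console.WriteLine("No shell association found for: " + url);
        else if (result <= 32)
            Console.WriteLine("ShellExecute failed with code " + result);
    }
}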

What's the Deal?

It turns out the problem has to do with the way browsers register themselves on Windows. Internet Explorer - being a built-in application in Windows 8 - apparently does this correctly, but other browsers possibly don't, or at least didn't at the time I installed them. Even though Chrome, which continually updates itself, has a recent version that apparently fixes this registration issue, I was unable to simply set IE as my default browser and then use Chrome's 'Set as Default Browser' option. It still didn't work.

Neither did using the Set Program Associations dialog, which lets you assign which file extensions are mapped to a given application.

[Figure: Set Program Associations dialog]

Each application provides a set of extension/moniker mappings that it supports and this dialog lets you associate them on a system-wide basis. This also did not work for Chrome or any of the other browsers at first. However, after repeated retries I eventually did manage to get Firefox to work, but not any of the others.

What Works? Reinstall the Browser

In the end I decided on the hard-core pull-the-plug solution: totally uninstall and re-install Chrome. And lo and behold, after the reinstall everything was working fine. Now even removing the association for Chrome, switching to IE as the default browser and then back to Chrome works. Oddly, the version of Chrome I was running before the uninstall is the same version I'm running now after the reinstall - yet now it works.

Of course I had to find out the hard way, before Richard commented with a note regarding what the issue is with Chrome at least:

http://code.google.com/p/chromium/issues/detail?id=156400

As expected the issue is a registration issue - with keys not being registered at the machine level. Reading this I'm still not sure why this should be a problem - an elevated account still runs under the same user account (i.e. I'm still rickstrahl even if I Run As Administrator), so why shouldn't an app be able to read my Current User registry hive? It also doesn't quite explain why registering the extensions from Chrome running as Administrator (via Set as Default Browser) didn't work either. But in the end it works…

Epilog - Chrome

It's now a few days later and I'm still seeing problems, although now the issues clearly have to do with Chrome rather than Windows.

So, Chrome actually has two different installers - one for a regular user install and one for an administrative install. The Admin install installs Chrome for all users and requires admin privileges, where the regular installer does not. This seems to make sense now: when using the regular installer none of the HKLM keys can be set, so most likely this is the real cause of the problems I saw previously.

However, once I ended up on the admin install (how that happened I don't know, because I didn't explicitly choose it), I ran into all sorts of other problems. Chrome seemed to continually lose its settings, forcing me to log in with my Google account frequently, losing some of my installed plugins, and occasionally throwing up error dialogs on ShellExecute links complaining that the browser about to be launched was the admin version and couldn't be launched from the current user environment. In other words, the Admin version had lots of problems for me.

After some more head-scratching I uninstalled completely again, then reinstalled using the non-Admin installer. Now I finally seem to have a stable Chrome install that keeps my settings and configuration and properly launches via ShellExecute.

On another machine I followed the same basic steps - uninstall Chrome, then reinstall it as a regular user - and on that machine everything now also seems to work. I'm still not sure how this works, since the regular user install does not prompt with the administrative UAC dialog, yet somehow manages to write the HKLM keys that ShellExecute needs to launch the browser under elevated rights. It's all very puzzling, but at least resolved.

No doubt this is all an artifact of Chrome and other browsers wanting to play nice with Windows 8 and RT, and judging from the failures I saw from all the browsers, this is proving harder than expected to get right. I suspect we're going to see a lot more of this kind of craziness with configurations and associations in applications in the future, and I'm not looking forward to having to fight that sort of thing in our own apps. Thanks, Windows 8 - for making a process that was already a PITA even more convoluted.

Not so fast

Did I mention that I freaking hate UAC, precisely because of this kind of bullshit? You can never tell exactly what account your app is running under, and apparently apps also have a hard time putting data into the right place so that it works for both scenarios. And as my recent post on using Windows Live accounts shows, it's yet another level of abstraction on top of the underlying system identity that can cause all sorts of small side-effect headaches like this.

Hopefully, most of you are skirting this issue altogether - having installed more recent versions of your favorite browsers. If not, hopefully this post will take you straight to reinstallation to fix this annoying issue.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Windows  .NET  

Building a better .NET Application Configuration Class - revisited


Many years ago, when I was just starting out with .NET (in the 1.0 days - now nearly 10 years ago), I wrote an article about an Application Configuration component I created to improve application configuration using strongly typed classes. Application configuration is something I've been adamant about since - well, forever, even pre-dating .NET. An easy-to-access configuration store, along with an easy mechanism to access and maintain configuration settings through code, is crucial to building applications that are adaptable. I've found that the default configuration features built into .NET are pretty good, but they still take a bit of effort to maintain and manage.

So I built my own long ago, with a focus on a code-first approach. The configuration library I created is small, low-impact and simple to use - you create a class, add properties, instantiate it and fire away at configuration values. The library has been updated and heavily refactored a few times over the years to adapt to changes in .NET and common usage patterns. In this post I'll describe the latest version of this library, as I think many of you might find it useful.

The very latest version of this library is now available on its own as a smaller self contained component. You can find the actual files and basic usage info here:

But before I jump in and describe the library, let's spend a few minutes reviewing what configuration options are available in the box in .NET and why I think it made sense to maintain a custom configuration solution over the years.

Configuration? How boring, but…

I consider configuration information a vital component of any application and I use it extensively for allowing customization of the application both at runtime and through external configuration settings typically using .config files and occasionally in shared environments with settings stored in a database. The trick to effective configuration in any application is to make creating new configuration values and using them in your application drop dead easy. To me the easiest way to do this is by simply creating a class that holds configuration values, along with a mechanism for easily serializing that configuration data. You shouldn't have to think about configuration - it should just work like just about any other class in your projects :-)

In my applications, I try to make as many options as possible user configurable and configure everything from user application settings, to administrative configuration details, to some top level business logic options all the way to developer options that allow me to do things like switch in and out of detailed debug modes, turn on logging or tracing and so on. Configuration information can be varied so it should also be easy to have multiple configuration stores and switch between them easily.

This is not exactly a sexy feature, but it's one that is quite vital to the usability and especially the configurability of an application. If I have to think too much about setting or using configuration data, have to remember every setting in my head (i.e. "magic strings"), or have to write a bunch of code to retrieve values, I'll end up not using them as much as I should, and consequently end up with an application that isn't as configurable as it could be.

What does .NET provide for Configuration Management?

So what do you use for configuration? If you're like most developers you probably rely on the AppSettings class, which provides single-level configuration values from the appSettings key. You know, the kind that's stored in your web.config or application.config file:

<configuration>
  <appSettings>
    <add key="ApplicationTitle" value="Configuration Sample (default)" />
    <add key="ApplicationSubTitle" value="Making ASP.NET Easier to use" />
    <add key="DebugMode" value="Default" />
    <add key="MaxPageItems" value="0" />
  </appSettings>
</configuration>

All the configuration setting values are stored in string format in the appSettings section of an application's configuration file.

To read settings, the System.Configuration assembly in .NET provides fairly easy access to these configuration values via code from within applications:

string connectionString = ConfigurationManager.AppSettings["ConnectionString"];

Easy enough, right? But it gets a little more complex if you need to grab a value that's not a string. For numeric or enum values you need to first ensure the value exists (is non-null) and then convert the string explicitly to whatever type, since configuration values are always strings. Here's what this looks like:

int maxPageItems = 0;
string tInt = ConfigurationManager.AppSettings["MaxPageItems"];
if (tInt != null)
    int.TryParse(tInt, out maxPageItems);

DebugModes mode = DebugModes.Default;
string tenum = ConfigurationManager.AppSettings["DebugMode"];
if (tenum != null)
    Enum.TryParse(tenum, out mode);

which is a bit verbose if you have to go through it every time you want to use a configuration value. You also have to remember the key names and go through the ConfigurationManager.AppSettings class. Minor, but 'noisy' in application-level code.
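To cut down on that noise you could wrap the conversion in a small helper. The following is just a hypothetical sketch to illustrate the boilerplate involved - it's not part of .NET or of the library described below:

using System;
using System.Configuration;

public static class AppSettingsHelper
{
    // Reads an appSettings value and converts it to T, falling back to a default value
    public static T Get<T>(string key, T defaultValue)
    {
        string raw = ConfigurationManager.AppSettings[key];
        if (raw == null)
            return defaultValue;

        try
        {
            if (typeof(T).IsEnum)
                return (T)Enum.Parse(typeof(T), raw, ignoreCase: true);

            return (T)Convert.ChangeType(raw, typeof(T));
        }
        catch
        {
            return defaultValue;   // invalid value in the config file - keep the default
        }
    }
}

// Usage:
// int maxPageItems = AppSettingsHelper.Get("MaxPageItems", 0);
// DebugModes mode = AppSettingsHelper.Get("DebugMode", DebugModes.Default);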

AppSettings values are also limited to the single appSettings section inside of an application's or Web config file. Luckily you can also create custom configuration sections that use the same key/value format in a config file, as long as those custom sections get declared:

<configuration>
  <configSections>
    <section name="CustomConfiguration" requirePermission="false"
             type="System.Configuration.NameValueSectionHandler,System,Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
  </configSections>
  <CustomConfiguration>
    <add key="ApplicationName" value="Configuration Tests" />
    <add key="DebugMode" value="Default" />
    <add key="MaxDisplayListItems" value="15" />
    <add key="SendAdminEmailConfirmations" value="False" />
    <add key="MailServer" value="3v7daoNQzllLoX0yJE2weBlljCp0MgyY8/DVkRijRTI=" />
    <add key="MailServerPassword" value="ud+2+RJyqPifhK4FXm3leg==" />
  </CustomConfiguration>
</configuration>

You can then access a custom section with:

var settings = ConfigurationManager.GetSection("CustomConfiguration") as NameValueCollection;
Console.WriteLine(settings["ApplicationName"]);

and essentially get the same behavior as you get with the AppSettings keys. The collection you get back is a read-only NameValueCollection that's easy to run through and read from.

.NET's configuration provider also supports strongly typed configuration sections via code, which involves creating classes based on the ConfigurationSection class. This gives you a slightly different configuration format that's a little less verbose than the add/key/value structure of NameValue-style configuration:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="MyCustomConfiguration" requirePermission="false"
             type="Westwind.Utilities.Configuration.MyCustomConfigurationSection,Westwind.Utilities.Configuration.Tests" />
  </configSections>
  <MyCustomConfiguration ApplicationName="Configuration Tests"
                         MaxDisplayItems="25"
                         DebugMode="ApplicationErrorMessage" />
</configuration>

This is a little more involved in that you need to create a class and declare each property, along with some inherited logic to retrieve the configuration value.

class MyCustomConfigurationSection : ConfigurationSection
{
    [ConfigurationProperty("ApplicationName")]
    public string ApplicationName
    {
        get { return (string) this["ApplicationName"]; }
        set { this["ApplicationName"] = value; }
    }

    [ConfigurationProperty("MaxDisplayItems", DefaultValue = 15)]
    public int MaxDisplayItems
    {
        get { return (int) this["MaxDisplayItems"]; }
        set { this["MaxDisplayItems"] = value; }
    }

    [ConfigurationProperty("DebugMode")]
    public DebugModes DebugMode
    {
        get { return (DebugModes) this["DebugMode"]; }
        set { this["DebugMode"] = value; }
    }
}

but the advantage is that you can reference the class as a strongly typed class in your application. With a bit of work you can even get IntelliSense to work on your configuration settings inside of the configuration file - you can find out more in this detailed article from Rob Seeder. Strongly typed configuration classes are useful for static components that have lots of configuration settings, but for typical dynamic configuration settings that change frequently, the key/value section approach is more flexible and easier to work with.
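Reading the strongly typed section back at runtime then looks something like this (assuming the section class and section name shown above; requires the System.Configuration assembly):

var section = ConfigurationManager.GetSection("MyCustomConfiguration")
                  as MyCustomConfigurationSection;
if (section != null)
{
    Console.WriteLine(section.ApplicationName);
    Console.WriteLine(section.MaxDisplayItems);   // falls back to the DefaultValue of 15 if missing
}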

For certain kinds of desktop applications, Visual Studio can also create a strongly typed Settings class. If you create a WinForms or WPF project for example it adds a Settings.settings file, which lets you visually assign properties in a designer. When saved the designer creates a class that accesses the AppSettings values indirectly.

[Figure: Visual Studio Settings designer]

This is pretty nice in that it keeps all configuration information inside of a class that is managed for you as you add values. You also get default values and you can easily use the class in code:

var settings = new Settings();
var mode = settings.DebugMode;
MessageBox.Show(mode.ToString());

The class is strongly typed and internally simply references a custom configuration section of values that are read from the config file. This is a nice feature, but it's limited to desktop Windows applications - Console, WinForms, WPF and Windows 8 applications. It's also limited to a single configuration class.

Writing Values to the Config File

.NET also has support for writing configuration values back into configuration files via the ConfigurationManager. You can load up a configuration, grab a section and make changes to it, then write the entire configuration back out to disk - assuming you have permissions to do so.

var config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
var section = config.GetSection("appSettings") as AppSettingsSection;
section.Settings.Add("NewKey", "Value");
config.Save();

This also works for Web Applications where you can use:

var config = WebConfigurationManager.OpenWebConfiguration("~");

to read the top level configuration.
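Putting that together for a Web application, writing a value back might look like the sketch below. It assumes the application pool account has write access to web.config - and keep in mind that saving web.config causes ASP.NET to restart the application (requires references to System.Configuration and System.Web):

var config = WebConfigurationManager.OpenWebConfiguration("~");
var appSettings = config.GetSection("appSettings") as AppSettingsSection;
if (appSettings != null)
{
    appSettings.Settings.Remove("LastUpdate");   // avoid duplicate keys on repeated writes
    appSettings.Settings.Add("LastUpdate", DateTime.UtcNow.ToString("u"));
    config.Save();
}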

Permissions are crucial here, and often you will not be able to write configuration back using this approach. For Web applications, Full Trust and read/write access to the web.config file are required. For desktop applications, file write rights in the application folder are required, which is often not the case - with User Account Control on you typically don't have rights to write to the folder where the .config file lives.

Clearly there are a lot of choices available in .NET to handle configuration storage and retrieval. It's great that the ConfigurationManager is available to provide base features to create simple configuration storage quickly.

Creating a better Application Configuration Class

Personally though, I prefer a more structured approach for configuration management in my applications. Like everything else in my applications I expect my configuration settings to be based on one or more classes that I can simply add properties to and persist that data easily in my application.

There are some native choices available for that - after all .NET includes easy to use tools for serializing to XML and JSON. It's pretty trivial to create some code to arbitrarily take a class and serialize it. However, wouldn't it be nice if the format was easily switchable and if you didn't have to worry about writing out the data yourself?

When I created the ApplicationConfiguration component years ago that was my goal. The current incarnation of the Westwind ApplicationConfiguration library provides the following features:

  • Strongly typed Configuration Classes
  • Simply create a class and add Properties
  • Automatic type conversion for configuration values
  • Default values so you never have to worry about read failures
  • Automatic synching of class and configuration store if values are missing
  • Easily usable from any kind of .NET application or component
  • Support for multiple configuration objects
  • Multiple configuration formats
    • Standard .NET config files
      • Custom Sections
      • External Config files
      • AppSettings
    • Standalone XML files (XML Serialization)
    • Strings
    • Sql Server Tables
    • Customizable with easy to create ConfigurationProviders

How to use the AppConfiguration Class

The core of the Westwind Application Configuration library is a configuration class that you implement simply by inheriting from the Westwind.Utilities.Configuration.AppConfiguration class. This base class provides the core features for reading and writing configuration values, which are properties of the class that you create. You simply create properties, instantiate the class and call Initialize() to assign the provider and load the initial configuration data.

Creating a configuration class is as easy as creating a class and adding properties:

class MyConfiguration : Westwind.Utilities.Configuration.AppConfiguration
{
    public string ApplicationName { get; set; }
    public DebugModes DebugMode { get; set; }
    public int MaxDisplayListItems { get; set; }
    public bool SendAdminEmailConfirmations { get; set; }
    public string MailServer { get; set; }
    public string MailServerPassword { get; set; }

    public MyConfiguration()
    {
        ApplicationName = "Configuration Tests";
        DebugMode = DebugModes.Default;
        MaxDisplayListItems = 15;
        SendAdminEmailConfirmations = false;
        MailServer = "mail.MyWickedServer.com:334";
        MailServerPassword = "seekrity";
    }
}

To use the configuration class you simply instantiate it and call Initialize() with no parameters to get the default provider behavior, and then fire away at the configuration values via the class properties:

var config = new MyConfiguration();
config.Initialize();

// Read values
string appName = config.ApplicationName;
DebugModes mode = config.DebugMode;
int maxItems = config.MaxDisplayListItems;

Note that the Initialize() method should always be called on a new instance; it internally assigns the provider and reads the initial configuration data from a store such as the configuration file/section.

Once the class is instantiated and initialized you can go ahead and read values from the class. The values are loaded only once during Initialize() (or Read() if you decide to re-read settings manually) and are cached in the properties after the initial load. The values of the properties reflect the values of the configuration store - here from the application's config or web.config file, in a MyConfiguration section.

If the configuration file, section or values don't exist and the file is writable, the relevant .config file is created. The content of the file looks like this:

<configuration>
  <configSections>
    <section name="MyConfiguration" requirePermission="false"
             type="System.Configuration.NameValueSectionHandler,System,Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
  </configSections>
  <MyConfiguration>
    <add key="ApplicationName" value="Configuration Tests" />
    <add key="DebugMode" value="Default" />
    <add key="MaxDisplayListItems" value="15" />
    <add key="SendAdminEmailConfirmations" value="False" />
    <add key="MailServer" value="mail.MyWickedServer.com:334" />
    <add key="MailServerPassword" value="seekrity" />
  </MyConfiguration>
</configuration>

Note that a custom section is created in the config file with standard key values. The Initialize() method also takes an optional sectionName parameter that lets you explicitly override the section name. You can also use appSettings as the section name, in which case the standard appSettings section is used without any custom configuration section declaration.
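For example, to store the values as plain appSettings keys instead of a custom section (an illustrative call using the same MyConfiguration class):

var config = new MyConfiguration();
config.Initialize(sectionName: "appSettings");   // values end up as regular appSettings keys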

In the code above the configuration section is written automatically as part of the Initialize() code - but you can also explicitly write out configuration information using the Write() method:

var config = new MyConfiguration();
config.Initialize();

config.DebugMode = DebugModes.ApplicationErrorMessage;
config.MaxDisplayListItems = 20;

config.Write();

The key with calling the Write() method is that you have to have permissions to write to the configuration store. For example, typical Web applications don't have rights to write to web.config unless you give explicit write permissions on the file to the Web user account. Likewise, typical Windows applications installed in the Program Files folder can't write to files in the installation folder due to User Account Control permissions, unless you explicitly add rights for the user to write there. Location matters, so it's important to understand your environment before writing configuration values or expecting them to be auto-created during initialization.

Using the Configuration Class as a Singleton

Because configuration data tends to be fairly static in most applications, it's not a good idea to instantiate the configuration class every time you need to access the configuration data. It's fairly expensive to read a file from disk or access a database and then deserialize the values from the configuration store into an object. It's much better to set up the configuration class once at application startup, or to use a static property to keep an active instance of the configuration class around.

Personally I prefer the latter, using a 'global' application object that I tend to have in every application and attaching the configuration object to that class as a static property. The advantage of a static property on a 'global' object is that it's portable: I can stick it into my business layer and use it in a Web app, a service or a desktop application without any changes (at least when using config files). In Web applications, static properties are also available to all threads, so many simultaneous Web requests can share configuration information from the single instance.

Creating a static Singleton is easy with code like this:

public class App
{
    public static MyConfiguration Configuration { get; set; }

    static App()
    {
        Configuration = new MyConfiguration();
        Configuration.Initialize();
    }
}

Now anytime you need access to the configuration class you can simply use:

DebugModes mode = App.Configuration.DebugMode;
int maxItems = App.Configuration.MaxDisplayListItems;

You never need to worry about instantiating the configuration class in your application code - it's just always there, and cached to boot.

Using and customizing Configuration Providers

So far I've used only the default provider - the ConfigurationFileConfigurationProvider<T> class with default options - which uses the standard .NET application configuration file and a section with the same name as the class within it. This means yourexe.exe.config, or web.config for Web applications.

The default behavior using the ConfigurationFileConfigurationProvider is the most likely use case for the configuration class, but you can certainly customize the provider, or even the behavior of a provider, by passing a custom provider to the Initialize() method. Initialize() takes parameters for a provider instance, a section name and arbitrary configData.

For example to use a custom section in the default configuration file you can specify the sectionName parameter in Initialize():

var config = new MyConfiguration();
config.Initialize(sectionName: "MyAdminConfiguration");

Of course you can also pass in a completely configured ConfigurationProvider instance which allows you to set all the options available on a provider:

var config = new AutoConfigFileConfiguration();

// Create a customized provider to set provider options
var provider = new ConfigurationFileConfigurationProvider<AutoConfigFileConfiguration>()
{
    ConfigurationSection = "MyCustomConfiguration",
    EncryptionKey = "seekrit123",
    PropertiesToEncrypt = "MailServer,MailServerPassword"
};

config.Initialize(provider);

// Config file and custom section should exist
string text = File.ReadAllText(TestHelpers.GetTestConfigFilePath());
Assert.IsFalse(string.IsNullOrEmpty(text));
Assert.IsTrue(text.Contains("<MyCustomConfiguration>"));

You can also opt to use a completely different provider than the ConfigurationFileConfigurationProvider used in the examples so far. It's easy to create a provider instance and assign it during initialization, but realistically you'll want to embed that default logic directly into the configuration class itself, so the instantiation logic is encapsulated in one place.

The following is an example of a configuration class that defaults to a database provider:

public class DatabaseConfiguration : Westwind.Utilities.Configuration.AppConfiguration
{
    public string ApplicationName { get; set; }
    public DebugModes DebugMode { get; set; }
    public int MaxDisplayListItems { get; set; }
    public bool SendAdminEmailConfirmations { get; set; }
    public string Password { get; set; }
    public string AppConnectionString { get; set; }

    // Must implement public default constructor
    public DatabaseConfiguration()
    {
        ApplicationName = "Configuration Tests";
        DebugMode = DebugModes.Default;
        MaxDisplayListItems = 15;
        SendAdminEmailConfirmations = false;
        Password = "seekrit";
        AppConnectionString = "server=.;database=hosers;uid=bozo;pwd=seekrit;";
    }

    /// <summary>
    /// Override this method to create the custom default provider - in this case a database
    /// provider with a few options.
    /// </summary>
    protected override IConfigurationProvider OnCreateDefaultProvider(string sectionName, object configData)
    {
        string connectionString = "LocalDatabaseConnection";
        string tableName = "ConfigurationData";

        var provider = new SqlServerConfigurationProvider<DatabaseConfiguration>()
        {
            Key = 0,
            ConnectionString = connectionString,
            Tablename = tableName,
            ProviderName = "System.Data.SqlServerCe.4.0",
            EncryptionKey = "ultra-seekrit",  // use a generated value here
            PropertiesToEncrypt = "Password,AppConnectionString"
        };

        return provider;
    }
}

This class implements the OnCreateDefaultProvider() method which is overridden to provide… a customized provider instance. The method receives a section name (which may or may not be used) and an optional configData parameter. configData can contain any arbitrary data that you can pass to the Initialize() method. For example, you might pass in a connection string value, or an anonymous object that contains both the connection string and table name that are hardcoded into the method above.

By implementing the above method the default behavior now loads the database provider, but you can still override the provider by explicitly passing one into the Initialize() method.
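As a sketch of how configData might flow through (the parameter name is assumed from the Initialize() overload described earlier), the caller could supply the connection info instead of hardcoding it in the class:

// Pass arbitrary data into Initialize() ...
var config = new DatabaseConfiguration();
config.Initialize(configData: new
{
    ConnectionString = "LocalDatabaseConnection",
    Tablename = "ConfigurationData"
});

// ... and unwrap it inside OnCreateDefaultProvider(), for example:
// dynamic data = configData;
// provider.ConnectionString = data.ConnectionString;
// provider.Tablename = data.Tablename;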

Providers really are the key to the functionality provided by the Application Configuration library - they're the work horses that do the work of retrieving and storing configuration data. The Westwind Application Configuration library consists of the AppConfiguration base class, plus a host of configuration providers that provide the actual logic of reading and writing of configuration values.

Here's the overall class layout:

[Figure: AppConfiguration class and configuration provider layout]

Your class inherits from AppConfiguration which in turn contains a configuration provider instance. The instance is created during the Initialize() call - either using the default Configuration File provider or the custom provider passed in, or the provider you explicitly implement in OnCreateDefaultProvider(). The providers then implement the Read() and Write() methods responsible for retrieving the configuration data.

Configuration File Provider

The configuration file provider uses the .NET ConfigurationManager API to read values, and direct XML DOM manipulation to add values to the config file. I opted for using the DOM rather than the ConfigurationManager to write values out, as there are fewer permission issues: the Configuration API requires Full Trust to write because it has access to machine-level configuration, while XML DOM and file IO allow writing config files as long as the file permissions are valid, and can work even in Medium Trust. Configuration values are read one at a time and populated on the object in place, which means that with this provider you can call Initialize() as part of the constructor and automate instantiation without requiring a separate call to Initialize() from application code.
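In practice that might look like this - a minimal sketch of a self-initializing configuration class (the class name is just for illustration):

public class AutoLoadedConfiguration : Westwind.Utilities.Configuration.AppConfiguration
{
    public string ApplicationName { get; set; }
    public int MaxDisplayListItems { get; set; }

    public AutoLoadedConfiguration()
    {
        // set defaults first, then load from the config file in one step
        ApplicationName = "Configuration Tests";
        MaxDisplayListItems = 15;
        Initialize();
    }
}

// Application code can then just new up the class and read values:
// var config = new AutoLoadedConfiguration();
// int maxItems = config.MaxDisplayListItems;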

The configuration file provider allows for writing configuration files in separate locations, in customized sections as well as using the standard appSettings section. The default is to use the normal application configuration file in a section with the same name as the class.

Key Properties:

  • ConfigurationFile
  • ConfigurationSection

XmlFileConfigurationProvider

At first glance the XML file configuration provider sounds a lot like the configuration file provider: both write output into files and use XML. But the XML provider is separate from .NET's configuration file system, which means configuration values written out to the file don't automatically notify the calling app of changes. The XML file provider also relies on standard .NET XML serialization to produce the file output, which means you can save much more complex data in a configuration class than with configuration file sections, which require a flat object structure.

XML files allow for more complex structure as you can go directly from object to serialized XML output. So if your configuration data includes nested data or needs to track collections of values, the XML Configuration provider can be a much better choice than .config files. This provider uses XML serialization to write XML directly to and from files.
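Wiring the XML provider up explicitly might look something like this - a sketch that assumes the provider follows the same generic pattern as the other providers and uses the XmlConfigurationFile property listed below:

var config = new MyConfiguration();

var provider = new XmlFileConfigurationProvider<MyConfiguration>()
{
    // store the file somewhere the user can always write to
    XmlConfigurationFile = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
        @"MyApp\MyConfiguration.xml")
};

config.Initialize(provider);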

Key Properties:

  • XmlConfigurationFile

SqlServer Configuration Provider

The SQL Server configuration provider stores configuration information in a SQL Server table with two fields: a numeric Id and a single text field that contains an XML-serialized string. Multiple sets of configuration values can be stored in the configuration table as separate rows keyed by Id. The provider works with SQL Server and SQL Server Compact, and may also work with other providers (not tested). To use this provider you provide a connection string to the database, plus an optional table name and an optional Id value for the configuration. The table name defaults to ConfigurationSettings, and that table has an integer Id and a ConfigData text field.

This provider also relies on XML serialization. It attempts to read the data from the database and, if it doesn't exist, creates the table and inserts the value. You can specify an Id number that identifies the configuration instance, so you can create multiple configuration classes and map them to separate records in the configuration table.
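For example, to map a second configuration class to its own record in the same table, you might pass the provider explicitly - a sketch that reuses the property names from the DatabaseConfiguration example above; AdminConfiguration is a hypothetical second AppConfiguration subclass:

var provider = new SqlServerConfigurationProvider<AdminConfiguration>()
{
    ConnectionString = "LocalDatabaseConnection",
    Tablename = "ConfigurationData",
    Key = 1                 // second record in the same configuration table
};

var adminConfig = new AdminConfiguration();   // hypothetical second configuration class
adminConfig.Initialize(provider);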

Key Properties:

  • ConnectionString (Connection string or Connection Strings Entry)
  • Tablename
  • ProviderName (ADO.NET Provider name)
  • Key (integer Id for configuration record)

String Configuration Provider

String serialization is mostly useful to capture the configuration data and push it into some alternate and unsupported storage mechanism. Some applications might store configuration data entirely in memory, or maybe the configuration data is user specific and can live in the ASP.NET Session store for example.

This provider is largely unnecessary, as string serialization is built directly into the core AppConfiguration class itself. You can assign XML data to the config object with config.Read(string xml) to read configuration values from an XML string in XmlSerialization format, and use the WriteAsString() method to produce a serialized XML string.
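For example, to round-trip a configuration object through a string (say, to stash it in Session or a cache) - a sketch using the two methods just mentioned:

var config = new MyConfiguration();
config.Initialize();

// serialize the current values to an XML string
string xml = config.WriteAsString();

// ...later: restore the values from the stored string
var restored = new MyConfiguration();
restored.Read(xml);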

Key Property:

  • InitialStringData (the initial string to deserialize from)

Configuration Property Encryption

The base configuration provider also allows for encryption of individual configuration properties. Rather than encrypting an entire configuration section, it's possible to encrypt only certain sensitive values like passwords and connection strings. This makes it easy to change most keys in a configuration file as needed, while leaving a few sensitive values to be encrypted either on a development machine or via a Write() operation of the configuration.

To create encrypted keys, specify the PropertiesToEncrypt property with a comma-delimited list of properties that are to be encrypted. You also need to provide a string encryption key, which is used to handle the two-way encryption:

var provider = new ConfigurationFileConfigurationProvider<MyConfiguration>()
{
    ConfigurationSection = sectionName,
    EncryptionKey = "ultra-seekrit",  // recommend to use a generated value here
    PropertiesToEncrypt = "Password,AppConnectionString"
};

This provider produces a section in the configuration file that looks like this:

<MyConfiguration>
  <add key="ApplicationName" value="Changed" />
  <add key="DebugMode" value="DeveloperErrorMessage" />
  <add key="MaxDisplayListItems" value="12" />
  <add key="SendAdminEmailConfirmations" value="True" />
  <add key="Password" value="ADoCNO6L1HIm8V7TyI4deg==" />
  <add key="AppConnectionString" value="z6+T5mzXbtJBEgWqpQNYbBss0csbtw2b/qdge7PUixE=" />
</MyConfiguration>

When the AppConfiguration class reads the values from the configuration file (or other configuration store), the values are automatically decrypted, so the configuration properties are always unencrypted when accessed. The Write() operation writes the encrypted values out to the configuration file. As you can see, encryption only works if you can somehow write to the file - otherwise the encrypted values never make it into the configuration. This means you need permissions to write to the file, either at development time to create the original values, or on the live site.

Writing Values to the Configuration Store - Permissions

Application-level configuration is pretty nice, and because the configuration class is just a plain class, it's easy to create an updatable configuration management interface in your applications. You can basically display and capture configuration values directly from the UI via data binding or model binding and then simply call config.Write() to write the configuration data out to the configuration store. For example, in an MVC application I can have a Configuration controller action and view that displays and captures the configuration data directly off the Configuration object. You update the values in the UI and then call the Write() method to write the configuration data out to the store.

The key is that you have to have permissions for this to work. If you store configuration settings in web.config, you need to give the Web account rights to write the file. For Web applications it might actually be better to use an external configuration file for the configuration settings to avoid having to explicitly give write access to web.config. Similar considerations apply in some desktop scenarios: rather than writing configuration information into a file in the installation/execution folder of an application, read and write the configuration data in a file located in My Documents or AppData, where the logged-on user has full access.
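A minimal sketch of such an MVC configuration controller - the controller and view names are illustrative, and it assumes the App.Configuration singleton from earlier:

public class ConfigurationController : Controller
{
    public ActionResult Index()
    {
        // display the current configuration values
        return View(App.Configuration);
    }

    [HttpPost]
    public ActionResult Index(FormCollection form)
    {
        // bind the posted values directly onto the live configuration instance, then persist
        TryUpdateModel(App.Configuration);
        App.Configuration.Write();

        return View(App.Configuration);
    }
}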

Summary

Configuration is an important part of just about any application, and this component has been very useful to me over the years, making it an absolute no-brainer to just drop a configuration class into just about any application I build. As I go along during development, just about any parameterizable setting gets added to one or more configuration classes. In most of my applications I have an application-level configuration class that holds app-specific settings like customizable messages, sizes, measurements, default values etc., as well as an admin-specific configuration that holds things like mail server and sender information, logging options, debugging and profiling options and so on. In Web applications in particular it's super nice to make these kinds of changes in web.config files and have the change take effect immediately. It's a very satisfying experience.

Recently I took the time to clean up this component a bit and extract it from the West Wind Web Toolkit, where it's been living for some time in obscurity. It's still in the toolkit and its forthcoming new version, but I figured pulling it out as a standalone component and sharing it on GitHub might give a little more attention to this useful component. I hope some of you find it useful.

Resources

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in .NET  ASP.NET  

Basic Spatial Data with SQL Server and Entity Framework 5.0

Spatial data has been available for a while in SQL Server, but if you wanted to use it with Entity Framework you had to jump through some hoops. In this post I show how basic SQL spatial data works and then how you can utilize the new features in EF 5.0 to directly access spatial data using your Code First models.

.NET HTML Sanitation for rich HTML Input

If you need to sanitize raw HTML for display in Web applications, the job at hand is scary for .NET backends. Unfortunately it seems there aren't a lot of tools available to help with this formidable task, and the tools that exist tend to be inflexible to the point of often being unusable. In this post I show a base implementation of an HTML Sanitizer that can be customized for your own needs.

Where does ASP.NET Web API Fit?

With the pending release of ASP.NET Web API we're finally getting a good HTTP service solution 'in the box' in ASP.NET. Web API provides many needed and cool features, but it's not always clear whether you should use Web API or some other technology like MVC to handle HTTP service requests. In this post I discuss what Web API is and a few scenarios where it fits and potentially doesn't fit.

Mapping UrlEncoded POST Values in ASP.NET Web API

Surprisingly, Web API does not support mapping POST values to multiple simple parameters on a Web API endpoint. While you can map POST values with model binding or the FormDataCollection, native parameter mapping of multiple urlencoded values is a missing feature in Web API. Here's what you can and can't do with POST values in Web API.

An Introduction to ASP.NET Web API

This article is a hands-on tour of ASP.NET Web API. It covers a fair variety of functionality and goes beyond the most basic introductions by digging into some of the mundane details you're likely to run into when first starting out with Web API.

.NET 3.5 Installation Problems in Windows 8

I ran into a major headache getting .NET 3.5 properly installed on my Windows 8 machine - although it showed as installed, SP1 was missing and wouldn't install properly. Here's what happened, how to check for the version actually installed, and how to work around it.

Using JSON.NET for dynamic JSON parsing

Parsing JSON dynamically rather than statically serializing into objects is becoming much more common with today's applications consuming many services of varying complexity. Sometimes you don't need to map an entire API, but only need to parse a few items out of a larger JSON response. Using JSON.NET and JObject, JArray and JValue makes it very easy to dynamically parse and read JSON data at runtime and manipulate it in a variety of different ways. Here's how.

ASP.NET Frameworks and Raw Throughput Performance

I got curious the other day: how do the various ASP.NET frameworks compare in raw throughput performance? With so many development choices on the ASP.NET stack available today, it's interesting to take an informal look at how raw throughput performance compares.

Passing multiple simple POST Values to ASP.NET Web API

One feature conspicuously missing from ASP.NET Web API is the ability to map multiple urlencoded POST values to Web API method parameters. In this post I show a custom HttpParameterBinding that provides this highly useful functionality for your Web APIs.

Creating STA COM compatible ASP.NET Applications

When it comes to deploying STA COM components in ASP.NET, only WebForms has native support for STA components. Other technologies like MVC, ASMX Web Services and WCF run only in MTA mode. If you need to run your STA COM components in ASP.NET, here is what you need to know, along with a few tools that help you create STA-compatible handlers.

Visual Studio Web Publish Lockup? Check for invisible Window


Today, while sitting through the MVP 2013 sessions and watching Scott Gu's Azure demos I decided to finally try quickly setting up a free Azure Web site and publish it. The process is super easy. Create the site online using the Azure portal and then use the management portal to download the publish settings. Then on the local machine create a new site, set up a simple sample page, and then use Web Publish to push it up to Azure.

Or so it should be. But I ran into an ugly snag: When I click "Publish" on my Web project - unceremoniously nothing happens. I see the publish window flash, but after that Visual Studio is locked up solid. Not the experience I hoped for.

Turns out the problem is that while I've been travelling I'm running on a single monitor. The last time I published, though, I published my project with multiple monitors, and the publish dialog showed on the second monitor. Web Publish apparently remembers its screen position, so this modal dialog opens off-screen, invisibly. You get the idea: a modal child dialog in Visual Studio, and you basically have an IDE that looks like it's locked up. Menus and window controls don't work - the IDE is just dead.

Thanks to a rescue-line tweet from Sayed Hashimi, who pointed me in the right direction, I was able to recover my hidden window by:

  • Pressing Alt-Spacebar to bring up the child window menu
  • Selecting the Size option
  • Dragging the window corner out onto the first monitor and resizing the window properly

Voila - my Web Publish window is back.

Note that the window menu also has a Move option - for a desktop window you can usually just use the arrow keys or the mouse to force the window onto the first monitor, but that didn't seem to work for me. Only Size worked, but it's worth trying Move first anyway - it seems like it should work.

Anyway, that's a nasty little bugger when it bites you, because it looks like a Visual Studio lockup. According to Sayed this is supposed to be fixed in Visual Studio Update 2 (which was released just last week), but I'm running that and I still ran into it.

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in Visual Studio   ASP.NET  

Sql Connection Strings in .Config Files vs. Source Control


One basic question that I see time and time again with source control is how to manage connection strings stored in .config files within source control. Such a small thing, but I see it as a stumbling block for many projects starting up under source control, especially for new developers. The problem is that on most developer machines there are differences in how connection strings - and more specifically the server name - are referenced to access a database.

.config file differences in general are problematic under source control. In Web projects in Visual Studio there are .config transformations that can be applied that can mitigate some of this pain, but this only solves the problem for Web projects. If you have test projects or re-use components in other types of applications like services or desktop apps, .config transformations on their own can't solve that problem easily. There are workarounds described in this StackOverflow Question, but they're not built-in and require some extra effort as well.

Config File Connection Strings under Source Control

While most settings in .config files are pretty universal across configurations and can stay stable, the most common problem is connection string differences among users. For example, some people use a full version of SQL Server on the local machine (.), others use SQL Express (.\SQLEXPRESS), the new localDb SQL Server support (localdb), or simply run SQL Server on another machine altogether with a different server name.

If you have any divergence at all and you check in your changes to source control, and then another person with a different server requirement pulls those changes down, they'll likely get a merge conflict - and accepting the change results in the compiled application not running due to an invalid connection string.

Now you can fix the connection string in your project, but of course when you push your changes back to the source server you then hose other people who are using a different string, and the process repeats. Not ideal, to say the least.

There are a number of different ways to address this, but I've been using one of the following two approaches to deal with this particular problem.

Server Aliases

The most common issue I've seen with connection strings is merely the server name rather than anything else in the connection string. When the connection string is the same except for the server name for everyone involved, Server Aliases are an easy way to use a single connection string in all configurations.

If only the server name is different, the easiest way to set up a common base for all users is to set up a Server Alias for each user. SQL Server "Server Aliases" can be configured in the SQL Server Configuration Manager and, as the name suggests, let you specify a server's connection properties via a simplified alias name. Typically this feature is meant to simplify complex connection settings like TCP/IP ports and long SQL connection names, but it also works great to hide the server name from your .config files, so that all users can use the same server name regardless of the variant of SQL Server they are using.

To use it start the Sql Server Configuration Manager from the Start Menu, select the SQL Client configuration and then Aliases:

[Figure: SQL Server Configuration Manager - Alias setup]

In the dialog that pops up you can configure the address of the server (. or, here, a local domain server named DbBeast) and the protocol, plus a configuration value such as the TCP/IP port (if custom) or a Named Pipe name.

Here I set up an alias called CodePasteAlias that points at my local SQL Server, which is . (or could also be my local machine name). If I were running SQL Express I'd use .\SQLEXPRESS for the server name instead.

Click OK to save and your configuration is set. Note that there are separate 32-bit and 64-bit configurations - you might have to set up the 32-bit one, the 64-bit one, or both, depending on how your application runs.

Once you've configured the Server Alias you can now use it in a connection string in lieu of a server name.

<connectionStrings>
  <add name="CodePasteContext"
       connectionString="server=CodePasteAlias;database=CodePasteData;integrated security=true"
       providerName="System.Data.SqlClient" />
</connectionStrings>

This approach is very easy to deal with, as it's a one-time configuration option and it works across projects without any changes required to the config files. But - and it's a big but - it only works if the server name is the only thing that is different. It doesn't work if you have completely different connection strings or differing login names.

If you need more control over all of the connection string, this approach won't work.

In practice I've found that Server Aliasing is sufficient most of the time. If necessary, creating a custom login as part of the database can ensure that everybody uses the same authentication as well (if integrated security doesn't work universally), which is often the only other difference besides the server name.

External Configuration Sections

If you need more control over the differences in the connection string (or other parts of configuration files for that matter) you can externalize parts of the .config and keep the external pieces out of source control.

.NET Configuration files support the ability to externalize the content of a configuration section quite easily using a configSource attribute. This makes it possible to externalize the entire connectionStrings section for example and then keep the externalized file out of source control.

For example you can do something like this:

<connectionStrings configSource="web_ConnectionStrings.config"></connectionStrings>

You can then create a web_ConnectionStrings.config file and store the following in there:

<connectionStrings>
  <add name="CodePaste"
       connectionString="server=.;database=CodePasteData;integrated security=true"
       providerName="System.Data.SqlClient" />
</connectionStrings>

You have now externalized the database configuration settings into an external file which can be kept out of source control, so that each user has their own version of the connection strings. But this has the unwelcome side effect that a fresh clone of the repository has no connection strings in place at all - they'll have to be added manually or copied in from some other location.

One Abstraction Further

One issue with this approach is that it doesn't solve the problem of multiple configurations when it comes time to deploy the app.

One hacky way of doing this is to store external configuration files for each build in a separate folder and then copy them in with a build task. To do this:

  • Create a folder somewhere in your solution path root (I use /config)
  • Copy one file for each configuration (ConnectionStrings.config.debug, *.release, *.deploy)
  • Change each file to match your environment.
  • Add post build event to copy the appropriate file from the folder into your project

The post-build task would look like this:

copy "$(SolutionDir)config\connectionstrings.config.$(ConfigurationName)" "$(ProjectDir)web_ConnectionStrings.config"

Using this approach you still have to instruct people to create the files inside of the solution's /config folder outside of source control, but the advantages are that a) the project has all the files it needs to run (even if the connection strings are invalid you get an error message to that effect) and b) you can have multiple connection string settings files for the different configurations. So when you deploy you can be sure you're deploying with the correct connection string for the live site (or whatever configuration you're building).

In this scenario you would set the .debug and .release configurations to your dev setup, and the .deploy (and whatever other versions, like .staging) to the appropriate live settings.

Hacky Stuff

This is all very ugly, and just for the sake of source control. For small or personal projects it's easy to bypass this altogether by simply using the same connection strings for development and the server. But when working in even small teams it's likely you'll run into divergence.

If only server names vary, Server Aliases can simulate a single connection string easily. When that's the case this is definitely the easiest approach, as nothing has to change in the project - you just need the manual connection setup once. Personally, Server Aliasing works most of the time for me, and it's easy enough to set up for everyone involved as long as the process is documented.

Back in the days of ODBC connections there was a repository of DSNs that could be declared at the system level, which in light of this particular issue seems like a good way to handle it - set up a connection globally once and then reference it by name. Unfortunately, AFAIK the .NET universal provider doesn't recognize DSN connections.

© Rick Strahl, West Wind Technologies, 2005-2013

Using plUpload to upload Files with ASP.NET


Uploading files to a Web server is a common task in Web applications these days. All sorts of Web applications need to accept media content, images, or zip attachments with various pieces of data that the application needs. While HTML supports basic file uploading via the <input type="file"> control, let's face it: uploading anything with just the HTML file input control sucks. There are lots of problems with this approach, from the inability to select multiple files (in older browsers - HTML5 helps in some modern ones), to the lack of any sort of progress information while the upload is running, to the inability to resume an earlier, interrupted upload.

In this article I demonstrate using plUpload to upload files to an ASP.NET server, using a single Image Uploader page as an example and provide a base HttpHandler (Westwind.plUploadHandler) implementation that greatly simplifies the process of handling uploaded files from the plUpload client.

Everything you need is available online:

The sad State of HTML/HTTP Uploads

It's sad to think that this important aspect of HTML and HTTP hasn't been addressed in all the years the Web has been proliferating. Instead developers still have to fumble with hacked-together solutions to get files transferred to servers. So much so that many people simply give up and fall back to FTP… yuk!

For this reason, HTML upload controls of various types have been around for a long time, both commercial and open source. These controls wrap the somewhat complex process of pushing data to the server in small chunks that can report progress while files are being sent. Until recently, getting access to local files wasn't possible through the HTML DOM APIs, so the only way to do chunked uploads was to use plug-ins like Flash or Silverlight to do the job. With the newer HTML5 APIs you can now gain access to user-opened files using the FileReader API and you can select multiple files to open. However, support for the HTML5 FileReader API has been spotty and inconsistent among browsers until very recently, and even now there are enough implementation differences that it's not a great idea to rely on it alone as an all-inclusive solution - you still need a fallback to other upload avenues like Flash or Silverlight when HTML5 isn't available.

plUpload to the Rescue

One control that I've used in a number of projects is the plUpload component. plUpload provides a simple component along with a couple of user interface controls that can present a nice looking upload UI that you can just drop into an application in most cases. plUpload works by supporting chunked uploads to the server via various plug-in technologies, depending on what the browser supports, which means it works even with ancient IE browsers. plUpload supports various technologies to perform these uploads and you can choose which ones you want to support as part of your app. Maybe you really want to support only HTML5, or maybe HTML5 and Flash to get most of the reach you need. Or add the Silverlight and Flash components to provide a better experience for those that have those plug-ins installed. You just list the technologies you want and plUpload will try them in order until it finds one that works in the browser it's running in.

plUpload has jQuery and jQuery UI components that provide a user interface. I've only used the jQuery jquery.plUploadQueue component because it provides the richer interface. Here's what the uploader looks like as part of the image uploader example I'll show in this post:

plUploadComponent

The plUpload Queue component is the square box on the bottom of the screen shot. You basically click the Add Files button to add one or more files using the standard OS file open dialog, which lets you see images (or other content) as icons - great for picking out images:

FileOpenDialog

If using the HTML5 (or Google Gears) mode, you can also drag and drop images into the control from any Shell interface like Windows Explorer.

Implementing a plUpload Image Uploader

Let's look at the example and how to build it using plUpload on the client and an ASP.NET HTTP handler on the server - using a custom HttpHandler that greatly simplifies handling the incoming file or file chunks. Using this custom plUploadHandler implementation you can listen for a number of 'events', but for most applications the completion event is the only one you need to implement.

Let's look at the example and the application code involved to make it work. Let's start on the client with the plUpload component.

Client Side plUpload Setup

To install plUpload you can download the zip distribution and then copy the relevant files into your project. I plan to use HTML5, Flash, and Silverlight so I pull the relevant components for those and drop it all into a /scripts/plUpload folder:

ProjectSetup 

I like keeping all the plUpload related files in a single folder so it's easy to swap in a new version at a later point.

Then once these scripts and styles are added to the project you can then reference them in your HTML page that will display the upload component:

<link href="scripts/plupload/jquery.plupload.queue/css/jquery.plupload.queue.css" rel="stylesheet" type="text/css" />
<script src="scripts/jquery.min.js"></script>
<script src="scripts/plupload/plupload.full.js"></script>
<script src="scripts/plupload/jquery.plupload.queue/jquery.plupload.queue.js"></script>

<!-- just for this demo: draggable, closable, modalDialog -->
<script src="Scripts/ww.jquery.min.js"></script>

<!-- page specific JavaScript that puts up plUpload component -->
<script src="UploadImages.js"></script>

You need to reference the stylesheet for the UI and the scripts for plUpload and the jQuery plUpload.queue component, which are separate. You also need jQuery since the queue component relies on it. The other two scripts are application specific - UploadImages.js contains the page JavaScript and ww.jquery.js includes some UI helpers and jQuery plug-ins from my UI library. The latter is not required - it's just used for the sample.

To embed the plUpload component into the page create an empty <div> element like this:

<div id="Uploader">        </div> 

If you want to see the full HTML you can check out the UploadImages.htm page on GitHub along with the rest of this example.

Setting up the basic plUpload Client Code

Next, we'll need some script to actually display and make the component work. The page script code lives in UploadImages.js.

The base code to create a plUpload control and hook it up looks like this:

$(document).ready(function () {
    $("#Uploader").pluploadQueue({
        runtimes: 'html5,silverlight,flash,html4',
        url: 'ImageUploadHandler.ashx',
        max_file_size: '1mb',
        chunk_size: '65kb',
        unique_names: false,
        // Resize images on client side if we can
        resize: { width: 800, height: 600, quality: 90 },
        // Specify what files to browse for
        filters: [{ title: "Image files", extensions: "jpg,jpeg,gif,png" }],
        flash_swf_url: 'scripts/plupload/plupload.flash.swf',
        silverlight_xap_url: 'scripts/plupload/plupload.silverlight.xap',
        multiple_queues: true
    });

    // get uploader instance
    var uploader = $("#Uploader").pluploadQueue();
});

Most of these settings should be fairly self explanatory. You can find a full list of options on the plUpload site.

The most important settings are the url, which points at the server-side HttpHandler URL that will handle the uploaded HTTP chunks, and the runtimes that are supported. I like to use HTML5 first, then fall back to Silverlight, then Flash and finally plain HTML4 uploads if all else fails. In order for Flash and Silverlight to work I have to specify the flash_swf_url and silverlight_xap_url and point them at the provided Flash and Silverlight components in the plUpload folder.

I can also specify the max file size, which is checked on the client and prevents uploads of very large files. plUpload - like most upload components - sends data one small chunk at a time, which you can set with the chunk_size option. The server then picks up these chunks and assembles them, one HTTP request at a time, by appending them to a file or other data store.

You can specify whether plUpload sends the original filename or a randomly generated name. Depending on your application a random name might be useful to prevent guessing what the uploaded filename is. In this example I want to display the uploaded images immediately, so I don't want unique names.

plUpload also can automatically resize images on the client side when using the Flash and Silverlight components which can reduce bandwidth significantly.

You can also specify a filename filter list when picking files. Here I basically filter the list to a few image formats I'm willing to accept from the client. Note that although I'm filtering extensions here on the client, it's important that you also check the file type on the server, as it's possible to directly upload to the server bypassing the plUpload UI. A malicious hacker might try to upload an executable file or script file and then call it via a Web browser. ALWAYS check file names on the server and make sure you don't write out files in formats that can be executed in any way. You'll see in the server side code that I explicitly check for the same image formats specified here on the client and if it's a different extension, I disallow the file from being written to disk.

Detecting when a File has been uploaded

If you want to have something happen when a file is uploaded you can implement the FileUploaded event on the uploader instance created in the code above. In my example, the server returns the URL where the uploaded and resized image is available on the server. In the code below response.response contains the URL to the image, which is then used to construct an <img> element that is appended to the ImageContainer <div> tag in the document.

Here's what that code looks like:

// bind uploaded event and display the image
// response.response returns the last response from server
// which is the URL to the image that was sent by OnUploadCompleted
uploader.bind("FileUploaded", function (upload, file, response) {
    // remove the file from the list
    upload.removeFile(file);

    // response.response returns server output from OnUploadCompleted
    // our code returns the url to the image so we can display it
    var imageUrl = response.response;

    $("<img>").attr({ src: imageUrl })
                .click(function () {
                    $("#ImageView").attr("src", imageUrl);
                    setTimeout(function () {
                        $("#ImagePreview").modalDialog()
                                        .closable()
                                        .draggable();
                        $("#_ModalOverlay").click(function () {
                            $("#ImagePreview").modalDialog("hide");
                        });
                    }, 200);
                })
                .appendTo($("#ImageContainer"));
});

The image that is added to the page can also be clicked and a bigger overlay is then displayed of the image. This code is obviously application specific. What you do when an upload completes is entirely up to you. If you upload a Zip file, maybe you want to update your UI with the filename of the Zip file or show an attachment icon etc. With images you typically want to immediately display them as soon as they are uploaded which is nice.

Note that I explicitly remove the uploaded file from the list first - by default plUpload leaves uploaded files in the list with a success icon next to them. I find that this adds too much clutter and is confusing to users, especially if you allow multiple uploads, so I prefer removing files from the list so that when uploads are complete the list is empty again and ready to accept more files.

Error Handling

Error handling is one of the weak points of plUpload as it doesn't report much information when an error occurs. Some errors - like a file that's too big - pop up alert() windows, which is kind of annoying. Client and IO errors while uploading result in fairly generic errors like 'Http Error' even if the server side returns a full error message, but at least you get some feedback so you can tell what happened.

To handle these errors implement the Error event like this:

// Error handler displays client side errors and transfer errors
// when you click on the error icons
uploader.bind("Error", function (upload, error) {
    showStatus(error.message,3000,true);
});

Limiting the Number of Files added to plUpload

There's no built-in way to limit the number of files that can be uploaded. However there's a FilesAdded event you can hook and you can look at how many files are in the control before the new files are displayed. To check for the number of files to be uploaded at once you can use code like the following:

// only allow 5 files to be uploaded at once
uploader.bind("FilesAdded", function (up, filesToBeAdded) {
    if (up.files.length > 5) {
        up.files.splice(4, up.files.length - 5);
        showStatus("Only 5 files max are allowed per upload. Extra files removed.", 3000, true);
        return false;
    }
    return true;
});

Overall the client-side process of uploading is pretty simple to implement. There are a few additional events you can capture and use to determine how to handle the UI, but there aren't a ton of options for managing the UI itself - for example reloading it or disabling further uploads after the first batch. To do this you have to manually hide or remove DOM elements, as in the sketch below.
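As a rough illustration - this is my own sketch rather than part of the plUpload samples, and the selector obviously depends on your markup - you could hook the UploadComplete event and just hide the widget once everything has finished:

// hide the upload UI once all files in the queue have finished uploading
uploader.bind("UploadComplete", function (up, files) {
    $("#Uploader").hide();
});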

ASP.NET Server Side Implementation

The client side of plUpload is reasonably well documented, but when it comes to the server-side code, you basically have to look at the sample PHP code and reverse engineer it. I've provided a base handler implementation that parses the plUpload client format and accepts the chunks in a generic handler interface. The Westwind.plUploadHandler component contains:

  • plUploadHandlerBase
  • plUploadHandlerBaseAsync
  • plUploadFileHandler

The first two handle the base parsing logic for plUpload's request data and then expose a few 'event' hook methods that you can override to receive data as it comes in. The plUploadFileHandler is a more specific implementation that accepts incoming data as files and writes them out to a path specified in the handler's properties.

ClassHierarchy

Before I look at how these handlers work, let's look at the implementation for the Image Uploader sample app. The process on the server side is simple:

  • Add a reference to the Westwind.plUploadHandler assembly (or project)
  • Create a new HttpHandler that derives from any of the above handlers - here plUploadFileHandler
  • Override at least the OnUploadCompleted() method to do something on the server when a file has completed uploading

In this example I use an ASHX based HttpHandler called ImageUploadHandler.ashx and inherit it from plUploadFileHandler, since we are effectively uploading image files to the server. The code for this application level handler looks like this:

public class ImageUploadHandler : plUploadFileHandler
{
    const string ImageStoragePath = "~/UploadedImages";
    public static int ImageHeight = 480;

    public ImageUploadHandler()
    {
        // Normally you'd set these values from config values
        FileUploadPhysicalPath = "~/tempuploads";
        MaxUploadSize = 2000000;
    }

    protected override void OnUploadCompleted(string fileName)
    {
        var Server = Context.Server;

        // Physical Path is auto-transformed
        var path = FileUploadPhysicalPath;
        var fullUploadedFileName = Path.Combine(path, fileName);

        var ext = Path.GetExtension(fileName).ToLower();
        if (ext != ".jpg" && ext != ".jpeg" && ext != ".png" && ext != ".gif")
        {
            WriteErrorResponse("Invalid file format uploaded.");
            return;
        }

        // Typically you'd want to ensure that the filename is unique
        // Some ID from the database to correlate - here I use a static img_ prefix
        string generatedFilename = "img_" + fileName;

        try
        {
            // resize the image and write out in final image folder
            ResizeImage(fullUploadedFileName, Server.MapPath("~/uploadedImages/" + generatedFilename), ImageHeight);

            // delete the temp file
            File.Delete(fullUploadedFileName);
        }
        catch (Exception ex)
        {
            WriteErrorResponse("Unable to write out uploaded file: " + ex.Message);
            return;
        }

        string finalImageUrl = Request.ApplicationPath + "/uploadedImages/" + generatedFilename;

        // return just a string that contains the url path to the file
        WriteUploadCompletedMessage(finalImageUrl);
    }
}

Notice that this code doesn't have to deal with plUpload's internal upload protocol format at all - all of that is abstracted in the base handlers. Instead you can concentrate on what you want to do with the incoming data - in this case a single completed, uploaded file. Here I only override the OnUploadCompleted() method, which receives a single parameter: the filename that plUpload provided. This filename is either the original filename the user selected or a unique name if unique_names was set to true on the client. In this case I left the setting false so that I do receive the original filename.

The code in this completion handler basically resizes the uploaded file and then copies it to another folder as part of the resizing operation. Simple.

In general you don't want to expose your upload folder directly to the Web, to avoid people guessing the names of uploaded files and accessing them before the file has completely uploaded. Here files are uploaded to ~/tempuploads and then copied to the ~/UploadedImages folder from which the images can be displayed.

Note also that I check for valid extensions as mentioned earlier - you should always check to ensure that files are of the proper type that you want to support to avoid uploading of scriptable files. If you don't let people access files after upload (as I'm doing here for image viewing) you should ensure that your upload folders either are out of the Web site/virtual space, or that permissions are such that unauthenticated users can't access those files.
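One way to lock down the temp folder - a minimal sketch, assuming IIS 7+ and the ~/tempuploads folder name used above - is to hide that segment via request filtering in web.config so it can never be served over HTTP:

<!-- block direct HTTP access to the temporary upload folder -->
<system.webServer>
  <security>
    <requestFiltering>
      <hiddenSegments>
        <add segment="tempuploads" />
      </hiddenSegments>
    </requestFiltering>
  </security>
</system.webServer>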

If any kind of error occurs I can use WriteErrorResponse to write out an error message that is sent to the client. plUpload can display these errors.

When all is done, I can write out an optional string of data that is sent back to plUpload's FileUploaded event that I handled in the script code earlier. I use WriteUploadCompletedMessage() to write out this message. Whatever string you write here goes directly back to the FileUploaded event handler on the client and becomes available as response.response. Here I send back the URL of the resized image, so that the client can display it as soon as the individual file has been uploaded.

I can implement additional events on my handler implementation. For example, my code for this handler also includes the OnUploadStarted() method which basically deletes files that are older than 15 minutes to avoid cluttering the upload folders:

protected override bool OnUploadStarted(int chunk, int chunks, string name)
{
    // time out files after 15 minutes - temporary upload files
    DeleteTimedoutFiles(Path.Combine(FileUploadPhysicalPath, "*.*"), 900);

    // clean out final image folder too
    DeleteTimedoutFiles(Path.Combine(Context.Server.MapPath(ImageStoragePath), "*.*"), 900);

    return base.OnUploadStarted(chunk, chunks, name);
}

// these aren't needed in this example and with files in general
// use these to stream data into some alternate data source
// when directly inheriting from the base handler

//protected override bool  OnUploadChunk(Stream chunkStream, int chunk, int chunks, string fileName)
//{
//     return base.OnUploadChunk(chunkStream, chunk, chunks, fileName);
//}

//protected override bool OnUploadChunkStarted(int chunk, int chunks, string fileName)
//{
//    return true;
//}
The other two event methods are not used here, but if you want to do more low level processing as data comes in you can capture OnUploadChunkStarted() and OnUploadChunk(). For a plUploadFileHandler subclass this doesn't make much sense, but if you are subclassing directly from plUploadHandlerBase this is certainly useful. For example, you might capture incoming output and stream it one chunk at a time into a database using OnUploadChunk().
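As a rough sketch of that idea - this is not part of the Westwind.plUploadHandler library; the table name, column names and connection string below are made up for illustration - a handler that streams each chunk into a SQL Server varbinary(max) column might look roughly like this:

// Sketch only: stores chunks in a hypothetical Uploads(FileName nvarchar, Data varbinary(max)) table.
// Requires references to System.Configuration and System.Data.
public class DbUploadHandler : plUploadBaseHandler
{
    // hypothetical connection string entry - adjust to your application
    static readonly string ConnStr =
        ConfigurationManager.ConnectionStrings["UploadDb"].ConnectionString;

    protected override bool OnUploadChunk(Stream chunkStream, int chunk, int chunks, string fileName)
    {
        // buffer the incoming chunk
        var ms = new MemoryStream();
        chunkStream.CopyTo(ms);
        byte[] data = ms.ToArray();

        // naive append: insert the row on the first chunk, concatenate on subsequent ones
        string sql = chunk == 0
            ? "insert into Uploads (FileName, Data) values (@name, @data)"
            : "update Uploads set Data = Data + @data where FileName = @name";

        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@name", fileName);
            cmd.Parameters.AddWithValue("@data", data);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
        return true;
    }

    protected override void OnUploadCompleted(string fileName)
    {
        // send something meaningful back to the client's FileUploaded handler
        WriteUploadCompletedMessage(fileName);
    }
}

In a real implementation you'd probably key the rows off a unique id rather than the filename, but it shows the shape of a non-file handler.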

How plUploadHandler works

The format that plUpload uses is pretty simple - it sends multi-part form data for chunks of data uploaded with each message containing a chunk of file data plus information about the upload - specifically it includes:

  • name - the name of the file uploaded (or a random name if you set unique_names to true)
  • chunks - the number of total chunks that are sent
  • chunk - the number of the chunk that is being sent
  • file - the actual file binary data

Here's what a typical raw chunk looks like:

POST http://localhost/plUploadDemo/ImageUploadHandler.ashx HTTP/1.1
Host: localhost
Connection: keep-alive
Content-Length: 41486
Origin: http://localhost
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.152 Safari/537.22
Content-Type: multipart/form-data; boundary=----pluploadboundaryp17l6c9gil157rq4pdhp7kc180a5
Accept: */*
Referer: http://localhost/plUploadDemo/UploadImages.htm
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8,de-DE;q=0.6,de;q=0.4
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

------pluploadboundaryp17l6c9gil157rq4pdhp7kc180a5
Content-Disposition: form-data; name="name"

DSC_0154.JPG
------pluploadboundaryp17l6c9gil157rq4pdhp7kc180a5
Content-Disposition: form-data; name="chunk"

0
------pluploadboundaryp17l6c9gil157rq4pdhp7kc180a5
Content-Disposition: form-data; name="chunks"

3
------pluploadboundaryp17l6c9gil157rq4pdhp7kc180a5
Content-Disposition: form-data; name="file"; filename="DSC_0154.JPG"
Content-Type: image/jpeg


… binary data first chunk

You can see that this request has 3 chunks and this is the first (0 = 1st in 0 based) chunk. The binary data contains the first chunk of data. The server receives the chunk and returns a JSON response which plUpload processes. The message returned looks like this:

{"jsonrpc" : "2.0", "result" : null, "id" : "id"}

The success response can include a message, but with the default handler the result value is simply null. Errors send an error object as part of the JSON along with an error message that in theory can be passed back to the plUpload UI. In practice that doesn't work - plUpload doesn't display the server's message and always reports a generic 'Http Error' instead.

The base Http Handler Code

To deal with the plUpload protocol implementation here is the HttpHandler that sets up the base processing of these plUpload chunk requests.

The main class is plUploadBaseHandler, which provides the core cracking of the incoming chunk messages and then fires 'event hook methods' (the Onxxxx methods) that can be overridden in subclasses to handle the file upload more easily without worrying about the semantics of the plUpload protocol.

Here's what base plUploadBaseHandler looks like:

/// <summary>
/// Base implementation of the plUpload HTTP Handler.
///
/// The base handler doesn't handle storage in any way,
/// it simply gets message event methods fired when
/// the download is started, when a chunk arrives and when
/// the download is completed.
///
/// This abstract class should be subclassed to do something
/// with the received chunks like stream them to disk
/// or a database.
/// </summary>
public abstract class plUploadBaseHandler : IHttpHandler
{
    protected HttpContext Context;
    protected HttpResponse Response;
    protected HttpRequest Request;

    public bool IsReusable
    {
        get { return false; }
    }

    /// <summary>
    /// Maximum upload size in bytes
    /// default: 0 = unlimited
    /// </summary>
    protected int MaxUploadSize = 0;

    public void ProcessRequest(HttpContext context)
    {
        Context = context;
        Request = context.Request;
        Response = context.Response;

        // Check to see whether there are uploaded files to process them
        if (Request.Files.Count > 0)
        {
            HttpPostedFile fileUpload = Request.Files[0];

            string fileName = Request["name"] ?? string.Empty;

            string tstr = Request["chunks"] ?? string.Empty;
            int chunks = -1;
            if (!int.TryParse(tstr, out chunks))
                chunks = -1;

            tstr = Request["chunk"] ?? string.Empty;
            int chunk = -1;
            if (!int.TryParse(tstr, out chunk))
                chunk = -1;

            // If there are no chunks assume the file is sent as one
            // single chunk (plain HTTP Upload)
            if (chunks == -1)
            {
                if (MaxUploadSize == 0 || Request.ContentLength <= MaxUploadSize)
                {
                    if (!OnUploadChunk(fileUpload.InputStream, 0, 1, fileName))
                        return;
                }
                else
                {
                    WriteErrorResponse(Resources.UploadedFileIsTooLarge, 413);
                    return;
                }

                OnUploadCompleted(fileName);

                return;
            }
            else
            {
                // this isn't exact! We can't see the full size of the upload
                // and don't know the size of the large chunk
                if (chunk == 0 && Request.ContentLength * (chunks - 1) > MaxUploadSize)
                    WriteErrorResponse("Uploaded file is too large.", 413);
            }

            if (!OnUploadChunkStarted(chunk, chunks, fileName))
                return;

            // chunk 0 is the first one
            if (chunk == 0)
            {
                if (!OnUploadStarted(chunk, chunks, fileName))
                    return;
            }

            if (!OnUploadChunk(fileUpload.InputStream, chunk, chunks, fileName))
                return;

            // last chunk
            if (chunk == chunks - 1)
            {
                // final response should just return
                // the output you generate
                OnUploadCompleted(fileName);
                return;
            }

            // if no response has been written yet write a success response
            WriteSucessResponse();
        }
    }

    /// <summary>
    /// Writes out an error response
    /// </summary>
    protected void WriteErrorResponse(string message, int statusCode = 100, bool endResponse = false)
    {
        Response.ContentType = "application/json";
        Response.StatusCode = 500;

        // Write out raw JSON string to avoid JSON requirement
        Response.Write("{\"jsonrpc\" : \"2.0\", \"error\" : {\"code\": " + statusCode.ToString() +
                       ", \"message\": " + JsonEncode(message) + "}, \"id\" : \"id\"}");

        if (endResponse)
            Response.End();
    }

    /// <summary>
    /// Sends a message to the client for each chunk
    /// </summary>
    /// <param name="message"></param>
    protected void WriteSucessResponse(string message = null)
    {
        Response.ContentType = "application/json";

        string json = null;
        if (!string.IsNullOrEmpty(message))
            json = JsonEncode(message);
        else
            json = "null";

        Response.Write("{\"jsonrpc\" : \"2.0\", \"result\" : " + json + ", \"id\" : \"id\"}");
    }

    /// <summary>
    /// Use this method to write the final output in the OnUploadCompleted method
    /// to pass back a result string to the client when a file has completed
    /// uploading
    /// </summary>
    protected void WriteUploadCompletedMessage(string data)
    {
        Response.Write(data);
    }

    /// <summary>
    /// Completion handler called when the download completes
    /// </summary>
    protected virtual void OnUploadCompleted(string fileName)
    { }

    /// <summary>
    /// Fired on every chunk that is sent
    /// </summary>
    protected virtual bool OnUploadChunkStarted(int chunk, int chunks, string fileName)
    {
        return true;
    }

    /// <summary>
    /// Fired on the first chunk sent to the server - allows checking for authentication
    /// file size limits etc.
    /// </summary>
    protected virtual bool OnUploadStarted(int chunk, int chunks, string fileName)
    {
        return true;
    }

    /// <summary>
    /// Fired as the upload happens
    /// </summary>
    /// <returns>return true on success false on failure</returns>
    protected virtual bool OnUploadChunk(Stream chunkStream, int chunk, int chunks, string fileName)
    {
        return true;
    }

    /// <summary>
    /// Encode JavaScript
    /// </summary>
    protected string JsonEncode(object value)
    {
        var ser = new JavaScriptSerializer();
        return ser.Serialize(value);
    }
}

The key is the ProcessRequest() method, which cracks the plUpload request and then fires off the various event hook methods, passing them the relevant data from the uploaded chunk. This handler will potentially be called multiple times for a single file (depending on the number of chunks). The rest of the class is simply the default 'event hook method' base implementations that don't do anything in this abstract class.

This class is abstract so it has to be inherited to do something useful. The base handler requires that you implement at least the OnUploadChunk() method to actually capture the uploaded data, and probably also the OnUploadCompleted() method to do something with the upload once it has completed.

One provided specialization is the plUploadFileHandler subclass which inherits the base handler and writes the chunked output to file, cumulatively appending to the same file:

public class plUploadFileHandler : plUploadBaseHandler
{
    /// <summary>
    /// Physical folder location where the file will be uploaded.
    ///
    /// Note that you can assign an IIS virtual path (~/path)
    /// to this property, which automatically translates to a
    /// physical path.
    /// </summary>
    public string FileUploadPhysicalPath
    {
        get
        {
            if (_FileUploadPhysicalPath.StartsWith("~"))
                _FileUploadPhysicalPath = Context.Server.MapPath(_FileUploadPhysicalPath);
            return _FileUploadPhysicalPath;
        }
        set
        {
            _FileUploadPhysicalPath = value;
        }
    }
    private string _FileUploadPhysicalPath;

    public plUploadFileHandler()
    {
        FileUploadPhysicalPath = "~/temp/";
    }

    /// <summary>
    /// Stream each chunk to a file and effectively append it.
    /// </summary>
    protected override bool OnUploadChunk(Stream chunkStream, int chunk, int chunks, string uploadedFilename)
    {
        var path = FileUploadPhysicalPath;

        // try to create the path
        if (!Directory.Exists(path))
        {
            try
            {
                Directory.CreateDirectory(path);
            }
            catch (Exception ex)
            {
                WriteErrorResponse(Resources.UploadDirectoryDoesnTExistAndCouldnTCreate);
                return false;
            }
        }

        string uploadFilePath = Path.Combine(path, uploadedFilename);

        if (chunk == 0)
        {
            if (File.Exists(uploadFilePath))
                File.Delete(uploadFilePath);
        }

        Stream stream = null;
        try
        {
            stream = new FileStream(uploadFilePath, (chunk == 0) ? FileMode.CreateNew : FileMode.Append);
            chunkStream.CopyTo(stream, 16384);
        }
        catch
        {
            WriteErrorResponse(Resources.UnableToWriteOutFile);
            return false;
        }
        finally
        {
            if (stream != null)
                stream.Dispose();
        }

        return true;
    }
}

This class adds a property for the upload path where files uploaded are stored and overrides the OnUploadChunk() method to write the received chunk to disk which effectively appends the chunk to the existing file.

If all you want to do is fire-and-forget file uploads into a specified folder you can use this handler directly. But typically - as in the ImageUploadHandler.ashx application implementation - you'll want to do something with the data once it arrives. In our example, I subclassed plUploadFileHandler and implemented the OnUploadCompleted method to handle the resizing and storing of the image in a separate folder. I suspect most applications will want to do that sort of thing.

The application level ImageUploadHandler then simply inherits from the plUploadFileHandler class and only implements the OnUploadCompleted() method which simply receives the finished file to do something with - resize and copy in this case.

In summary, implementing custom plUpload handlers is easy with this class. I've used this uploader on quite a few applications with reasonably large volume and it's been solid. At this point I have an easy framework for plugging uploads into any ASP.NET application. Hopefully some of you find this useful as well.

Resources

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  JavaScript  

Text Editor Associations/Extensions in Visual Studio getting lost


Here's something that's coming up frequently since I maintain a framework that requires custom script map extensions: So you create a custom script map for your application and then map that script map into the Visual Studio editor. And you go happily along editing your files using the custom extension mapping in Visual Studio. For my particular scenario I have an extension .wcsx mapped to the WebForms Editor.

So now I'm happily using my custom extension document. Everything is hunky dory. Then one day I shut down and open the project back up the next day only to find something like this when opening the .wcsx file:

NoAssociation

What I should be seeing is syntax coloring for the custom extension, but instead I just get a wall of black text with no highlighting at all. What the hey? It worked last night, right?

Unfortunately, this can happen from time to time with Visual Studio (problem has existed since Visual Studio 2008).

Fix it

When you lose the file association and your previously mapped extensions no longer show syntax highlighting, there's an easy fix - annoying as it may be.

Go to Tools | Options | Text Editor | File Extensions

Now when you open this dialog you're probably going to find your extension actually mapped. My dialog looks like this even while the text displays without highlighting as shown in the image above:

ExtensionEditro

It looks like the association is set, the .wcsx extension is there, but even so the syntax coloring does not work. Visual Studio internally tracks the associations but for some reason or other is not actually applying them when opening the file in the editor.

To fix this is a two step process - and yes it's not very intuitive:

  • Select the extension that doesn't work
  • Click on Remove
  • Type the extension into the Extension textbox
  • Click Apply

It's important that you remove the extension first before adding it back in! Just clicking apply on the extension has no effect.

Once that's done the extension once again works properly and my text shows up properly syntax colored:

SyntaxColoring 

It's a hassle when this happens, because usually it's not just one association that gets wiped out but all of them. I tend to have 5-10 of them active and it takes a few minutes to go through all of those and reset them.

What sucks is that I've not been able to trace this down reliably to repro. One of my products - West Wind Web Connection - relies on script map extensions and this is a common problem that pops up with customers who are utterly perplexed that their editor window all of a sudden doesn't show the syntax highlighting anymore. It's even more perplexing to them when they go to the file extensions dialog and find that the mapping apparently is there and doesn't work. It's not exactly intuitive to remove the mapping and then put it back in… heck it took me a while to figure this out myself.

So I hope somebody from Microsoft is looking at this and decides to fix it once and for all. This bug has been in Visual Studio since VS 2005. If anybody else has run into this and has some reliable repro steps, that would be useful too - please post them and I can forward them to the right folks at Microsoft. I've brought this up a few times, but I've always been asked to provide a reliable repro, which unfortunately I've not been able to do.

In any case, hopefully this post will help out somebody that's stuck with this.

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in Visual Studio  

Firing an Entity Framework Database Initializer from within DbContext


One thing about Entity Framework that bugs me is how database initializers work. Database initializers are used to initialize the database and determine how the Entity Framework model interacts with it. The default EF behavior compares the model to any meta data in the database and, if it doesn't match, throws a failed meta data exception. Initializers serve a valid purpose, but I tend to forget exactly what object I have to call to initialize them and what class I have to override to create them. Worst of all though, because database initializers are supposed to run only once per AppDomain, they have to be placed in a very specific location in a project to fire at application startup. In other words, it's fairly hard to internalize database initializers.

How Database Initializers work

Database initializers initialize the database and are intended to run exactly once per AppDomain. This means they need to run at application startup - either in a desktop app's startup code or in Application_Start in a Web application.

For example to use an initializer in ASP.NET, you use syntax like the following:

protected void Application_Start()
{
    Database.SetInitializer<ClassifiedsContext>(new DropCreateDatabaseIfModelChanges<ClassifiedsContext>());
}

It's unlikely that you'll use one of the default database initializers in a live application, since they are pretty destructive, but they can be useful for testing or during development when the database changes a lot. The more common scenario is probably the Migration initializer:

Database.SetInitializer<ClassifiedsContext>(new MigrateDatabaseToLatestVersion<ClassifiedsContext,MigrationConfiguration>());

Uff, that's a mouthful, eh? I tend to subclass this mouthful with my own class to make the initializer a little more approachable.

public class ClassifiedsModelMigrateInitializer :
    MigrateDatabaseToLatestVersion<ClassifiedsContext, ClassifiedsBusiness.Migrations.MigrationConfiguration>
{
}

Initializers can override the Initialize() and Seed() methods. The Initialize method is responsible for checking the context and performing any additional initialization you might have to do. The default implementation checks the meta data in the database (if it exists) against the model. If you don't want that to happen, you can simply implement an empty initializer like so:

public class QueueMessageManagerContextInitializer : IDatabaseInitializer<QueueMessageManagerContext>
{
    protected void Seed(QueueMessageManagerContext context)
    {
    }

    public void InitializeDatabase(QueueMessageManagerContext context)
    {
        // do nothing
        Seed(context);
    }
}

which can then be used like this:

Database.SetInitializer<QueueMessageManagerContext>(new QueueMessageManagerContextInitializer());

Turns out there's actually an even simpler way to have a non-actionable database Initializer - simply pass null:

Database.SetInitializer<QueueMessageManagerContext>(null);

My Problem

Earlier today I ran into just this problem. I have a single database with two DbContexts connected to it. The first is a very rich model that uses migrations and so relies on the database's meta data for validation and for triggering migrations as needed. The second is a small component with a tiny model of two tables that is intended to just work without checking the existing meta data in the database. The second context tests out fine when running against its own database with hand-created tables to map to, but when those same tables live in the database that is also accessed by the first context, it fails.

The above 'empty' initialization strategies work well to allow me to bypass the model meta data validation on startup.

But there's another problem here: The second context is part of a small reusable component that's meant to be generic. Requiring a custom database initializer is a pain for this because the initializer forces the consumer of my component to externally create the initializer and call it. IOW, the consumer has to remember to set up the initializer and place it in the right place in the startup sequence.

That sucks!

Internalizing the Database Initializer

Incidentally this is something that's bugged me quite a bit about EF in other places. I always forget exactly what I need to implement a database initializer in an app: how it relates to the project, what I need to instantiate, and where the best place to put it is. It's ugly and not very discoverable. Frankly the only way I remember is to open up another project and see how I did it previously :-)

It seems to me that the initializer invocation is not an application responsibility but the responsibility of the context and that's where I would expect that behavior to live.

So I got to thinking - wouldn't it be nice to make the initialization more generic so that it can be called from anywhere and still be guaranteed to fire just once?

Here's a DbContext utility method that does just that:

public static class DbContextUtils<TContext>
    where TContext : DbContext
{
    static object _InitializeLock = new object();
    static bool _InitializeLoaded = false;

    /// <summary>
    /// Method to allow running a DatabaseInitializer exactly once
    /// </summary>
    /// <param name="initializer">A Database Initializer to run</param>
    public static void SetInitializer(IDatabaseInitializer<TContext> initializer = null)
    {
        if (_InitializeLoaded)
            return;

        // watch race condition
        lock (_InitializeLock)
        {
            // are we sure?
            if (_InitializeLoaded)
                return;

            _InitializeLoaded = true;

            // force Initializer to load only once
            System.Data.Entity.Database.SetInitializer<TContext>(initializer);
        }
    }
}

Nothing fancy here - all this code really does is check whether it was called before, using a static flag to hold that state. If you recall, static properties/fields are global to the AppDomain, so this SetInitializer call fires exactly once per AppDomain. This code needs to be called before the first full context invocation.

If the goal is to internalize this code as part of the context, it's easy to stick it into the constructor of the DbContext subclass you create. Here's an example:

public class QueueMessageManagerContext : DbContext
{
    public QueueMessageManagerContext()
    {
        // don't validate the schema
        DbContextUtils<QueueMessageManagerContext>.SetInitializer(null);
    }

    public DbSet<QueueMessageItem> QueueMessageItems { get; set; }
}

Now you don't have to worry about when the initializer is called because the first access to your context automatically initializes the context using the specified initializer. This also keeps all behavior relative to the Context in one place, so personally I like this. You can also still use app startup code to call the method directly just like calling SetInitializer directly.
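To illustrate the effect - this snippet is mine, not from the component - the very first use of the context anywhere in the AppDomain runs the constructor, which registers the null initializer before EF ever looks at the database, so no model compatibility check occurs:

// first use of the context in the AppDomain
using (var context = new QueueMessageManagerContext())
{
    // the constructor has already registered the 'null' initializer,
    // so this query runs without a meta data/model check
    var item = context.QueueMessageItems.FirstOrDefault();
}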

This is a small thing of course, but it's important to me: in the app I'm currently working on with a client we have many small self-contained components that have micro EF models. I can now easily force all of those components to skip the meta data check when they start, and they can all share the same database easily.

Resources:

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in Entity Framework  

Experimenting with Online Backups


A couple of weeks ago I finally decided that it's time to get an online backup plan of some sort to do 'live' backups of my data and work environment. I figured this process would be fairly straightforward since online backup services have been around for a while now. To my chagrin though, I discovered that while there are plenty of services out there, finding one that provides the right features at the right price was anything but trivial. I spent a bunch of time playing around with various services. Here's what I played with:

This list is not exhaustive by any means - there are many more services out there but these are the ones I looked at based on recommendations and after poking around a bit before playing. In this post I'll describe what I found. I'd be curious to hear what some of you are using and how you fared.

What I've been doing

To give some context first: before going down this path I had been doing timed backups every night to an attached USB drive that I swap between two physical devices every few days. A third drive is backed up about once a month and stored with a friend off site (we basically swap drives). That's actually been working pretty well, except that on occasion the machine is off at night or the drive isn't connected and so a backup here and there fails.

Sure enough, on one of those occasions when the backup didn't happen one night a few months ago, my SSD drive froze up and I lost a bunch of data from the previous day's work (I got it back when, a couple of days later, the SSD miraculously decided to start up again :-). Since then I've been a lot more meticulous about either manually backing up or making sure the box wakes up to run the backup in the early AM hours.

What I have been doing - and still do for daily backups - is to simply do a batch file backup where I map all of my folders that I care about and push them off to a backup drive. The batch file I use, utilizes RoboCopy which ensures speedy and reliable copying of files using recursive directory copying/mirroring:

@echo off

set source=c:
set target=e:

if "%1"=="" goto continue
set source=%1

if "%2"=="" goto continue
set target=%2

:continue
echo Copying from drive %source% to %target%   %1 %2
echo.

iisreset /stop
net stop mssqlserver

robocopy %source%\projects2010 %target%\backups\dev\projects2010 /MIR /R:2 /W:5 /MT:8
robocopy %source%\wwapps %target%\backups\wwapps /MIR /R:2 /W:5 /MT:8
robocopy %source%\westwind %target%\backups\westwind /MIR /R:2 /W:5 /MT:8

net start mssqlserver
iisreset

robocopy %source%\data %target%\backups\data /MIR /R:2 /W:5 /MT:8
robocopy %source%\saved %target%\backups\saved /MIR /R:2 /W:5 /MT:8
robocopy %source%\articles %target%\backups\articles /MIR /R:2 /W:5 /MT:8
robocopy %source%\utl %target%\backups\utl /MIR /R:2 /W:5 /MT:8

robocopy %source%\Users\ricks\Documents %target%\backups\documents /MIR /R:2 /W:5 /MT:8
robocopy %source%\Users\ricks\Music %target%\backups\music /MIR /R:2 /W:5 /MT:8
robocopy %source%\Users\ricks\Pictures %target%\backups\photos /MIR /R:2 /W:5 /MT:8

robocopy %source%\Users\ricks\AppData\Roaming %target%\backups\Roaming /MIR /R:2 /W:5 /MT:8
robocopy %source%\Users\ricks\AppData\LocalLow %target%\backups\LocalLow /MIR /R:2 /W:5 /MT:8

This file basically backs up all my development folders first - shutting down IIS and SQL Server so files locked by those are released. Then a bunch of personal data stuff is backed up individually. There might still be other stuff that's open (like an Outlook file) but the key stuff like data and dev related stuff is backed up for sure.

The batch file is then tied to a Scheduled Task that runs at 5am every morning with auto wake-up of the computer enabled, and I occasionally run it manually as well. The nice thing is that I can also do this while travelling, when other means like online backups are not a good idea due to bandwidth constraints.
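For reference, a task like that can be registered from the command line roughly like this - the task name and batch file path are made up here, and the 'wake the computer to run this task' option still has to be ticked in the Task Scheduler UI (or task XML), since schtasks doesn't expose it directly:

schtasks /Create /TN "Nightly Backup" /TR "c:\utl\backup.bat" /SC DAILY /ST 05:00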

Clearly this is a very basic backup strategy, but it's actually served me really well over the years. I only back up things that matter so the backup size is somewhat reasonable and runs at about 75 gigs at the moment. I can do this easily when I travel as long as I carry a USB drive with me. RoboCopy only copies what's changed, so the actual copied data is not excessive and typical intra-day backups take something like 5 minutes or less to run. These days I often run another backup at mid-day when heading out to lunch. And since there's no compression or special storage involved I can access the backed up data directly anytime without using any software.

Why no automated backup?

You might ask, why no automated backup solution? To be honest I've looked at various automated backup software in the past, but I've never really found anything that was easy to set up and maintain and that worked reliably, especially when on the go. It seems with most backup software that I've tried it's always a royal pain in the butt to set up the initial folders and it's even more of a hassle to actually make changes to a backup configuration. I also was never fond of real time backup software, because I'm running a lot of stuff that's constantly open and constantly locked. Backing up files that are always open is not very reliable - if they get backed up at all; most backup software gets this wrong. Then there's typically the mess of incremental backups and trying to find individual files in them - it just never seems to work out well. In short, to me there were too many downsides and not enough upsides to make it worthwhile to switch from my simple batch file backups, which - while sort of crude - catch everything I care about efficiently.

Backup to the Cloud? Yes please!

After my wake-up call with the SSD drive, I finally got enough fire under my ass to take a look at online backup solutions and I started looking around at what's out there. There are a lot of choices available today, and while I'm finding there's lots to like about various solutions, I still haven't found one that works for everything I'd like to have. But I made a choice anyway.

There are really a few different approaches available for cloud storage of data:

  • Pure file storage services like DropBox and SkyDrive
    These services basically provide you a shared folder on your machine - anything dropped into that folder is synced between devices that are connected to this account and have syncing enabled. These services seem to be geared at selective backups.
  • Backup Services like Carbonite and CrashPlan
    Carbonite and CrashPlan bill themselves as full backup plans that allow you to do full machine backups (depending on the plan you chose). The idea with these services is that they provide a one stop solution to backing up everything and in the case of Carbonite to restore an entire machine.
  • Hybrid Services like SugarSync
    SugarSync on the surface looks a lot like DropBox and SkyDrive, but provides a bit more integration with the Windows Shell. Folders can be dropped onto SugarSync to sync and you can then see the sync status. You can also access backed up files in a separate drive, so it's real easy to find and search for files that are backed up. Overall I found that this combination along with the easy UI and fast syncing of changed files was the best solution for what I needed. It blurs the line between backup and syncing services.

DropBox and SkyDrive

The first set of tools I used - some time ago and now re-evaluated - are DropBox and SkyDrive. Both services are fairly similar although I would say that DropBox worked a lot more intuitively and was much less resource intensive than SkyDrive. My first thought went to these services since I'd used them previously, not quite sure if they'd make a good fit. In short, they didn't, but in a pinch they can work for basic backup purposes.

These services seem like they are primarily designed to share files - between devices and also externally for collaboration or simple file sharing with others. For this purpose they work well, but they're not particularly well suited for more complete backup scenarios. For one thing neither of these services explicitly supports linking folders external to their shared folders (although there's a workaround for that which I describe below).

Either of these services will work for file sharing but for me at least as a backup (and sharing) solution they weren't sufficient.

DropBox
I've been using DropBox off and on for a long time, but mostly for sharing files. DropBox was one of the first cloud file services out there and it works efficiently and has a simple and logical UI. It just works and gets out of the way. It's easy to drop files into the shared folder and the Web UI makes it very easy to share files with others. It all works as expected in a no frills but efficient sort of way.

The big downside to DropBox is that the amount of free storage is very limited at 2.5 gigs. While you can boost that minimum storage a bit with a few referrals, it's still not enough for even the most basic of backups. I'd say 5 gigs is the absolute minimum I could get away with for the most basic backups of data I work on daily. Pricing beyond the free tier is also on the pricey side, with the cheapest plan at $100/yr or $10/month for 100 gigs. An intermediate step for 50 gigs would be much more welcoming.

The desktop 'interface' for DropBox is just the dropbox folder. The Web Interface for DropBox is basic, but easy to navigate and get to the files you need quickly.

dropbox[6]

SkyDrive
Although I had been using DropBox for some time, the first thing I actually checked out was SkyDrive, mainly because I have an old free SkyDrive account that was capped at 25 gigs, which is a sizable amount of free storage that would suffice for my typical 'live' backup needs. New accounts opened now get 7 GB of free space.

SkyDrive pretty much provides the same feature set as DropBox does and adds a few pretty cool features on top of it. One particularly nice feature is the ability to share your actual live PC if it is online. You can set it up so that your machine's drive(s) are remotely accessible, with file access to all of your files from other computers or any device that is logged into your SkyDrive account.

SkyDrive also intrinsically knows about Office documents and lets you edit them online in the browser-based Office apps (which BTW look pretty sweet for online versions!). If you have Office installed on the machine you're using you can also open Office documents on your SkyDrive directly and then save them right back to your SkyDrive when you're done.

The extra features are pretty nice, but unfortunately I found SkyDrive to be pretty slow at uploading and syncing my data, while at the same time chewing through CPU and making my laptop sound like a jet taking off. Once I had a good chunk of data uploaded it seemed to take forever for updated files to actually sync up to the server. And because there's no easy way to see which files are actually synced I often found myself with data that's out of date. Even doing something simple like dropping a PDF file into my Public folder and then accessing it from my phone a few minutes later usually didn't work, because of the delay between drop and sync - which greatly reduces the utility of a file sharing service. I didn't have these kinds of issues with DropBox or any of the other services I tried, which seem to sync manually dropped files much more quickly, especially in the 'special' folders.

On two occasions I also found SkyDrive to temporarily lose files. I'd go to a folder and see two files, go back, then only see one. Go back again and see two. One of the files had been changed on the client but the file was never deleted on the local drive, yet SkyDrive was missing it for a while. Not very confidence inspiring.

I also find SkyDrive's Web interface and the mobile apps (on iOS anyway) annoying to use. SkyDrive by default uses a Windows Metro style tile UI that wastes screen real estate without providing any real value for all that space, especially when backing up non-media files. Fortunately you can also switch to a details view which uses space more efficiently and provides more information. However, SkyDrive still hides file extensions of some known file types. Here's the details view:

SkyDrive 

Of the two services, I found DropBox to be more responsive and in general easier to use and consume both on the desktop and on various devices. Both work well enough, but SkyDrive definitely feels rougher around the edges. I suspect in time Microsoft will get the roughness smoothed out and it will end up a good solution. For now, as a backup solution neither of these services really work well.

Using Windows Junction Points to sync folders

DropBox and SkyDrive expect you to drop files into their respective sharing folders (User\DropBox and User\SkyDrive respectively) in order to share files, so officially they don't support syncing arbitrary folders directly.

However, you can actually keep folders in sync by using Directory Symbolic Links (NTFS Junctions) which link a physical folder to a 'virtual' location. You can use the MKLINK Windows utility like this:

mklink /D c:\users\ricks\dropbox\utl  c:\utl

This causes the c:\utl folder to be synced into a DropBox utl folder. The same trick also works with SkyDrive. MKLINK comes with Vista and later (note that creating symbolic links requires an elevated command prompt) - if you have an XP box you can use Sysinternals' Junction instead. For SkyDrive there's also a SkyShellEx utility you can download that provides a shell extension shortcut on folders so you can immediately share a folder to SkyDrive.
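If you later change your mind, removing the link is enough - as far as I'm aware, rmdir on the link folder removes only the link itself, not the contents of the original folder:

rmdir c:\users\ricks\dropbox\utl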

Full backups Carbonite and CrashPlan

The next set of tools I looked at were meant as tools for full backups more so than file sharing although both of these services also provide the ability to access files from various devices and so effectively provide file sharing.

Carbonite

I only spent a brief time with Carbonite. Carbonite is clearly geared at casual computer users and wants to hold your hand through the backup process. It makes some reasonable suggestions for backing up user data, but it's not the greatest choice if you want to customize your backups. Carbonite only includes certain files by default, and you have to manually add 'special' files like EXEs or anything else Carbonite doesn't recognize as data, which means tweaking the default data set by hand. The UI also provides no easy way to see what's in the default set or to modify it, short of uninstalling and reinstalling. I was pretty turned off by this clearly end-user focused product, but also by the fact that many people are likely to get screwed by default settings that are supposed to back up everything but in reality are likely to miss a lot of vital stuff. Not cool.

On the plus side, Carbonite is reasonably priced, with unlimited backups of internal drives for $60 - though at that price you can't back up attached drives. The next tier allows any data, plus mirroring and simultaneous hard disk backups, which is nice but pricey at $99. Carbonite also marks the files it manages with icons that indicate backup status - a vital feature that's unfortunately missing from many other services like CrashPlan, and also SkyDrive and DropBox.

CrashPlan

CrashPlan is unabashedly an online backup tool and it does that job well, though it's not as polished as some of the other products. All of these products sync data of course, but CrashPlan's syncing worked quickly and smoothly without much CPU overhead in the background. You can also fine-tune how much bandwidth and CPU it uses, which is useful if you're on a slower connection or move around a lot. If you just need your files backed up and want to be able to pull them back down in an emergency, CrashPlan is a great solution.

CrashPlan also keeps track of recent versions of your files, and you can easily see versions and roll back to them. You can also simultaneously back up to a local folder or drive so you have a local copy of the same backup data.

CrashPlan also has attractive pricing, with unlimited data for $60/yr - the same price as Carbonite, but with the ability to back up all data from a single computer including attached drives. There's also a free plan, but it's limited to sharing backups with friends (you essentially back up each other's machines), which seems like little more than a gimmick.

The biggest problem with CrashPlan is that it uses an old-school hierarchical UI to manage files in the backup. It uses its own tree view of your computer's file system and you have to select the folders and files to be backed up there. The same is true for restoring files, and that's really the bigger issue: if you want to look at a backed-up file you have to download it into a separate folder and then open it from there. CrashPlan would benefit tremendously from the Windows Shell integration that the other products provide. Then again, its primary feature is backup rather than casual file access - you set up CrashPlan and forget about it until you have a problem or need files on one of your devices on the go.

Crashplan UI

Although you can easily access backed-up files from various devices and phone apps, the process of sharing files is not quite as smooth as with other tools. You can only add files through a backup, not from devices, so in effect you can't push data from a device back to the local machine.

CrashPlan also isn't quite as good about picking up newly added files right away. It basically schedules a backup after the last one completes, so the process is sequential - one backup after the other. While you can control when and how files are backed up, even with the 'always' option new files aren't immediately detected and uploaded, and there's no 'special folder' you can drop files onto that get prioritized. This means that as a file sharing service CrashPlan is not ideal, and you'll likely want to supplement it with something like DropBox or SkyDrive.

This is not to fault CrashPlan - as a pure backup solution it works great and as such it pretty much does exactly what it promises to do.

SugarSync

One tool I looked at right at the beginning of my trials was SugarSync, and I liked it right away, but I was initially put off by the higher pricing for less data ($75/yr for 60GB, $100 for 100GB). As I weighed the pros and cons of all the other choices, I found myself coming back to SugarSync, because to me at least it strikes the right balance between non-intrusive file backup and complete syncing across devices.

There's lots to like about SugarSync. The first thing I noticed is that it seems a bit faster than most of the other services at getting data pushed up to the server. Given that my upload bandwidth is not exactly stellar here on Maui, that surprised me - I figured most of these services would be close to maxing out my connection - but SugarSync managed to sync 6GB of my initial test data in a few hours, all without noticeably affecting my browsing bandwidth. SugarSync also seems to be fairly light on CPU usage. I let it upload my 6 gigs of test data twice and both times I was pleasantly surprised how quickly that went.

 

Also really nice is SugarSync's syncing, which just seems much quicker than in the other services. Files dropped into the SugarSync shared folder show up immediately, even while other uploading is still going on, and files changed during the initial upload show up quickly too. It appears SugarSync prioritizes changed files in its backups - an obvious-sounding idea that doesn't seem to happen in the other products.

SugarSync's UI is more polished than most and it provides excellent Shell integration - the shared folders show up as an S: drive on my system. Files in use are highlighted with icons that show their backup status, so it's easy to see what's changed and what's still pending.

SugarSync

The Web and mobile interfaces make it easy to browse through files and pick individual files including multiple versions.

SugarSyncWeb

As a file sharing service, SugarSync is similar to DropBox and SkyDrive, but it has better Shell integration than either of those products. I don't have to explicitly map folders using mklink - I can simply drop a folder onto SugarSync to have it show up in my shared folders. Once configured, I can specify how the folder is shared (one-way or two-way).

On the downside, it's not very easy to manage your mapped folders in SugarSync. In fact I ran into a major problem when I uninstalled SugarSync and reinstalled a week later: all of a sudden SugarSync was syncing from the cloud back to my local machine, renaming a bunch of changed files in the process. I also was unable to delete mapped folders while not connected, so it was difficult to get folders unlinked. The real trick is to mark a folder as not down-synced first and then delete it, but that's not very obvious. Still, that issue was largely my own fault - I just killed SugarSync and then reattached it and told it to sync with the server, not realizing that the old files were still there.

To me SugarSync so far has the best combination of syncing and file backup and the smoothest user experience. Overhead is low - especially once the initial data upload is done there's very little activity on the wire, and changes appear very promptly on the cloud drive.

I'm not too thrilled with their pricing, but for peace of mind I guess it's worth it. It just seems that the sweet spot for a full backup service should be right around $50/yr - anything much above that feels like too much to spend…

No perfect Solutions

After all this trial and error with providers I still come away with only partial solutions. Although SugarSync comes close, it too is pretty rough in places. I posted on Twitter throughout this trial and heard quite a few horror stories about most of these services - files that were missed or things that took forever to sync up. It's surprising that this space hasn't gotten a lot more solid yet.

Part of this is that cloud backups are still hitting the limits of even reasonably fast connections. I have a 1.2Mbit upstream connection with RoadRunner here in Hawaii, but I rarely see that - upload speeds often hover around 750Kbit or so, which is pretty slow if you need to push up even 50GB of data. It'll take forever to sync. Now try to get a 1 or 2GB database sent up every time it changes, and the backup will eat through a lot of bandwidth very quickly. I also noticed that at some point during my testing my bandwidth started being severely throttled - it's likely RoadRunner decided to cripple my connection after a day or two of heavy uploads. We're all at the mercy of the bandwidth gatekeepers.

As I mentioned, I've gone with SugarSync, but it was a close race between it and CrashPlan + DropBox. CrashPlan works great for backup, and in combination with a file syncing tool it would make a good mix: a full backup in CrashPlan and only a relatively small amount of shared files in DropBox or SkyDrive.

Even so, after all of this I'm a little wiser and my data is a little less susceptible to loss, since I now have my important files backed up in the cloud - or I will once they all arrive there :-)

I'd be curious to hear what some of you are using for online backup and syncing and how you've fared.

Resources

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in Office  

A small, intra-app Object to String Serializer

$
0
0

Here's a scenario that keeps popping up for me: in a number of situations I've needed to quickly and compactly serialize an object to a string. There are a million serializers out there, from XML to binary to JSON, and even the somewhat compact LosFormatter in ASP.NET. However, all of the built-in serialization mechanisms are pretty bloated and verbose, especially when sending output to strings.

An Example for Micro-Serialization

Some scenarios where this makes sense are compact configuration storage or token generation for application-level state. A common place where I need this is for cookies that pack a few values together, or for a Forms Authentication ticket value. For example, I have a UserState object that stores the UserId, user name, display name, email address and a security level in a small object that is carried along with the Forms Authentication cookie - this avoids having to hit the database (or Session, which I don't use) for user information on every hit.

The UserState object is very small, so it seems like overkill to throw a full JSON or XML serializer at it - and the XML and JSON payloads are comparatively large. For my UserState object I used to hand-code a 'custom' serialization mechanism, which is trivially simple and based on string.Split() and string.Join() with a pipe symbol.

To demonstrate that scenario - which I have implemented on numerous other occasions - here's the code:

/// <summary>
/// Exports a short string list of Id, Email, Name separated by |
/// </summary>
public override string ToString()
{
    return string.Join(STR_Seperator, new string[] {
                           UserId,
                           Name,
                           IsAdmin.ToString(),
                           Email });
}

/// <summary>
/// Imports Id, Email and Name from a | separated string
/// </summary>
public bool FromString(string itemString)
{
    if (string.IsNullOrEmpty(itemString))
        return false;

    var state = CreateFromString(itemString);
    if (state == null)
        return false;

    UserId = state.UserId;
    Email = state.Email;
    Name = state.Name;
    IsAdmin = state.IsAdmin;

    return true;
}

/// <summary>
/// Creates an instance of a UserState object from serialized data.
///
/// IsEmpty() will return true if data was not loaded. A
/// UserData object is always returned.
/// </summary>
public static UserState CreateFromString(string userData)
{
    if (string.IsNullOrEmpty(userData))
        return null;

    string[] strings = userData.Split(new string[1] { STR_Seperator }, StringSplitOptions.None);
    if (strings.Length < 4)
        return null;

    var userState = new UserState();
    userState.UserId = strings[0];
    userState.Name = strings[1];
    userState.IsAdmin = strings[2] == "True";
    userState.Email = strings[3];

    return userState;
}

Simple enough, but it ends up being rather inflexible code: the properties have to be stored and retrieved in exactly the right order, and if the class gains a property I have to manually update both ToString() and CreateFromString(). Nevertheless this is efficient since it's hand-coded, and I've written code like this on a number of occasions - often enough to warrant a more generic solution.

Making it Generic: StringSerializer

Besides violating the DRY principle, there's also the issue of this code not being very flexible. If I decide to add a property to the object, the serialization routines have to be updated. Or - perhaps more likely in an object like this - if it's subclassed, there's no easy way to add the additional properties to the serialization.

So today I spent a little time creating a simple generic component that provides this behavior in reusable form. Here's the implementation of a simple StringSerializer:

/// <summary>
/// A very simple flat object serializer that can be used
/// for intra application serialization. It creates a very compact
/// positional string of properties.
/// Only serializes top level properties, with no nesting support
/// and only simple properties or those with a type converter are
/// supported. Complex properties or non-two-way type converted
/// values are ignored.
///
/// Creates strings in the format of:
/// Rick|rstrahl@west-wind.com|1|True|3/29/2013 1:32:31 PM|1
/// </summary>
/// <remarks>
/// This component is meant for intra application serialization of
/// very compact objects. A common use case is for state serialization
/// for cookies or a Forms Authentication ticket to minimize the amount
/// of space used - the output produced here contains only the actual
/// data, no property info or validation like other serialization formats.
/// Use only on small objects when size and speed matter, otherwise use
/// a JSON/XML/Binary serializer or the ASP.NET LosFormatter object.
/// </remarks>
public static class StringSerializer
{
    private const string Seperator_Replace_String = "-@-";

    /// <summary>
    /// Serializes a flat object's properties into a string
    /// separated by a separator character/string. Only
    /// top level properties are serialized.
    /// </summary>
    /// <remarks>
    /// Only serializes top level properties, with no nesting support
    /// and only simple properties or those with a type converter are
    /// 'serialized'. All other property types use ToString().
    /// </remarks>
    /// <param name="objectToSerialize">The object to serialize</param>
    /// <param name="separator">Optional separator character or string. Default is |</param>
    public static string SerializeObject(object objectToSerialize, string separator = null)
    {
        if (separator == null)
            separator = "|";

        if (objectToSerialize == null)
            return "null";

        var properties = objectToSerialize.GetType()
                                          .GetProperties(BindingFlags.Instance |
                                                         BindingFlags.Public);
        var values = new List<string>();
        for (int i = 0; i < properties.Length; i++)
        {
            var pi = properties[i];

            // don't store read/write-only data
            if (!pi.CanRead && !pi.CanWrite)
                continue;

            object value = pi.GetValue(objectToSerialize, null);
            string stringValue = "null";
            if (value != null)
            {
                if (value is string)
                {
                    stringValue = (string)value;
                    if (stringValue.Contains(separator))
                        stringValue = stringValue.Replace(separator, Seperator_Replace_String);
                }
                else
                    stringValue = ReflectionUtils.TypedValueToString(value, unsupportedReturn: "null");
            }
            values.Add(stringValue);
        }

        if (values.Count < 1)
            return string.Empty;  // empty object (no properties)

        return string.Join(separator, values.ToArray());
    }

    /// <summary>
    /// Deserializes an object previously serialized by SerializeObject.
    /// </summary>
    public static object DeserializeObject(string serialized, Type type, string separator = null)
    {
        if (serialized == "null")
            return null;

        if (separator == null)
            separator = "|";

        object inst = ReflectionUtils.CreateInstanceFromType(type);
        var properties = inst.GetType().GetProperties(BindingFlags.Instance | BindingFlags.Public);

        string[] tokens = serialized.Split(new string[] { separator }, StringSplitOptions.None);

        for (int i = 0; i < properties.Length; i++)
        {
            string token = tokens[i];
            var prop = properties[i];

            // don't store read/write-only data
            if (!prop.CanRead && !prop.CanWrite)
                continue;

            token = token.Replace(Seperator_Replace_String, separator);

            object value = null;
            if (token != null)
            {
                try
                {
                    value = ReflectionUtils.StringToTypedValue(token, prop.PropertyType);
                }
                catch (InvalidCastException)
                {
                    // skip over unsupported types
                }
            }

            prop.SetValue(inst, value, null);
        }

        return inst;
    }

    /// <summary>
    /// Deserializes an object serialized with SerializeObject.
    /// </summary>
    public static T Deserialize<T>(string serialized, string separator = null)
        where T : class, new()
    {
        return DeserializeObject(serialized, typeof(T), separator) as T;
    }
}
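To give you an idea of how this gets used, here's a quick round-trip example (the class and values below are made up for illustration - any small, flat object works):

// a small, flat example object - made up for this sample
public class CustomerToken
{
    public string Name { get; set; }
    public string Email { get; set; }
    public bool IsPreferred { get; set; }
    public DateTime Entered { get; set; }
}

var token = new CustomerToken()
{
    Name = "Rick",
    Email = "rstrahl@west-wind.com",
    IsPreferred = true,
    Entered = DateTime.Now
};

// produces something like: Rick|rstrahl@west-wind.com|True|3/29/2013 1:32:31 PM
string serialized = StringSerializer.SerializeObject(token);

// and back - the type on this end has to match the serialized property layout
var copy = StringSerializer.Deserialize<CustomerToken>(serialized);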

Note that there are two dependencies on ReflectionUtils - TypedValueToString() and StringToTypedValue() - which handle the string type conversions. They're included in a support file linked at the bottom of this post.
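In case you're wondering what those two helpers do: the real implementations live in the support file and handle a number of additional edge cases, but conceptually it's a two-way TypeConverter based string conversion, roughly along these lines (a sketch for illustration, not the actual ReflectionUtils code):

using System;
using System.ComponentModel;
using System.Globalization;

// rough approximation of the two conversion helpers - for illustration only
public static class TypeConversionSketch
{
    public static string TypedValueToString(object value, string unsupportedReturn = null)
    {
        if (value == null)
            return "null";

        var converter = TypeDescriptor.GetConverter(value.GetType());

        // only use converters that can go both ways so the value survives a round trip
        if (converter.CanConvertTo(typeof(string)) && converter.CanConvertFrom(typeof(string)))
            return converter.ConvertToString(null, CultureInfo.InvariantCulture, value);

        return unsupportedReturn ?? value.ToString();
    }

    public static object StringToTypedValue(string sourceString, Type targetType)
    {
        if (sourceString == null || sourceString == "null")
            return null;

        var converter = TypeDescriptor.GetConverter(targetType);
        if (converter.CanConvertFrom(typeof(string)))
            return converter.ConvertFromString(null, CultureInfo.InvariantCulture, sourceString);

        // the serializer catches this and skips the property
        throw new InvalidCastException("Unsupported type: " + targetType.Name);
    }
}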

With this component in place I can now rewrite the UserState object's built-in serialization like this:

public override string ToString()
{
    return StringSerializer.SerializeObject(this);
}

public bool FromString(string itemString)
{
    if (string.IsNullOrEmpty(itemString))
        return false;

    var state = CreateFromString(itemString);
    if (state == null)
        return false;

    // copy the properties
    DataUtils.CopyObjectData(state, this);

    return true;
}

public static UserState CreateFromString(string userData)
{
    if (string.IsNullOrEmpty(userData))
        return null;

    return StringSerializer.Deserialize<UserState>(userData);
}

public static UserState CreateFromFormsAuthTicket()
{
    return CreateFromString(((FormsIdentity)HttpContext.Current.User.Identity).Ticket.UserData);
}

The object now uses the StringSerializer methods to serialize and deserialize, and it's more flexible in that it automatically deals with additional properties - even ones added in a subclass.

Using the UserState object is super simple now. For example, in my base MVC controller I can easily attach the UserState object to the controller and the ViewBag:

protected override void Initialize(RequestContext requestContext)
{
    base.Initialize(requestContext);

    // Grab the user's login information from FormsAuth
    if (this.User.Identity != null && this.User.Identity is FormsIdentity)
        this.UserState = UserState.CreateFromFormsAuthTicket();
    else
        this.UserState = new UserState();

    // have to explicitly add this so the Master can see the untyped value
    ViewBag.UserState = this.UserState;
    ViewBag.ErrorDisplay = this.ErrorDisplay;
}

This makes the UserState easily available anywhere in the app. The user data is written once when a user logs in or changes his profile info, but otherwise the UserState is only read on each hit and made available to the app for logged-in users. It's a great way to store basic information about a user without having to hit the database (as you have to with Membership in ASP.NET).
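For completeness, the write side - creating the FormsAuth ticket on login - looks something like this (a rough sketch; the helper name and expiration values here are arbitrary, the important bit is that UserState.ToString() goes into the ticket's UserData):

// sketch of the login side - requires System.Web and System.Web.Security
public void SetAuthCookie(UserState userState)
{
    var ticket = new FormsAuthenticationTicket(
            1,                          // version
            userState.Name,             // user name
            DateTime.Now,               // issue date
            DateTime.Now.AddDays(2),    // expiration (arbitrary)
            true,                       // persistent
            userState.ToString());      // serialized UserState as UserData

    string encryptedTicket = FormsAuthentication.Encrypt(ticket);
    var cookie = new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket);
    HttpContext.Current.Response.Cookies.Add(cookie);
}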

Limitations

As the comments describe, StringSerializer is not meant to be a full-featured serializer. If you need to serialize large or nested objects, or you need to share the data with other applications, this is probably not what you want - use the JSON.NET serializer (because it's fast, robust and reasonably compact) or the XmlSerializer instead. StringSerializer has no support for nested objects, and it only works with the built-in .NET system types and anything that supports two-way type converter string conversion.

StringSerializer makes a few assumptions: it requires that the fields are deserialized in exactly the same order they were serialized - IOW, deserialization expects the type signature on the other end to match the input. Since there are no property names or any other kind of meta-data in the output, the data is small, but it's also in a fixed format that must be the same on both ends. If you add properties the format breaks - hence the point about intra-app serialization: it's not meant as a portable format to share between machines or platforms, and it's not version aware.

When it makes Sense

This serializer is fast and produces very compact output for small, flat objects, and if all of those things matter it's a good fit. Serializing the 5 fields in the UserState object with JSON.NET would be overkill - it produces a larger string and takes more time to process.

The string data created is small - my UserState object serializes to 85 bytes, which is basically the string representation of each property value plus the separators. JSON.NET formatted the same object to 166 bytes, the XmlSerializer to 379, and the LosFormatter created a whopping 604 bytes (and required [Serializable]). StringSerializer also runs faster than any of the others I tried - 1,000 iterations on my machine take 5ms with the StringSerializer, 8ms with JSON.NET, 11ms with the XmlSerializer and 16ms with the LosFormatter. The perf differences are small enough not to matter, but the output size is significantly smaller for the StringSerializer, which is vital for cookie or FormsAuth usage.
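If you want to check the numbers for your own objects, a quick and dirty loop is all it takes (a sketch rather than the exact benchmark I used - adjust the property values for your own UserState):

// quick and dirty size and perf check for the StringSerializer
var state = new UserState
{
    UserId = "1",
    Name = "Rick",
    IsAdmin = true,
    Email = "rstrahl@west-wind.com"
};

string serialized = StringSerializer.SerializeObject(state);
Console.WriteLine("Size: {0} bytes", serialized.Length);

var sw = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < 1000; i++)
{
    string ser = StringSerializer.SerializeObject(state);
    var copy = StringSerializer.Deserialize<UserState>(ser);
}
sw.Stop();
Console.WriteLine("1000 round trips: {0}ms", sw.ElapsedMilliseconds);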

It's not an everyday kind of component - it fits a special use case - but it's one I've run into often enough to warrant creating a reusable class for. Anyway, I hope some of you find this useful.

Resources

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  .NET  

UNC Drive Mapping Failures: Network name cannot be found

$
0
0

Yesterday I was working with a customer on mapping a drive from my machine to a remote server over a VPN. Typically you can just map a drive or directly enter a UNC path to the server:

\\113.111.111.111\myshare

That normally works just fine, but yesterday it didn't. Worse, it failed immediately with:

Network path cannot be found

Now there can be a number of problems that cause this - a connection issue, adapter configuration and so on - but in this case the behavior was very different from what I'd seen before:

  • The attempt to connect failed immediately (no delay, so it apparently never even checked the network)
  • Even though I checked 'Use different credentials', the login dialog never popped up to prompt for credentials
  • Even trying to access drives on the local, internal network failed
  • Trying to access resources listed in the Network discovery dialog also failed immediately

This was very frustrating because network connections for all internet protocols and messaging were working just fine - it was only the Windows network mapping and discovery features that failed immediately.

I smelled a rat right away, given the immediate error message and the fact that the login dialog never came up.

The Problem: Network Provider Order

After a bit more research I ran into an obscure TechNet forum post that describes a similar case. The resolution there is that the Network Provider order causes this sort of problem - if the provider order doesn't have LanmanWorkstation among the first entries, mapping doesn't work.

Promptly I checked my machine's settings at:

HKLM\SYSTEM\CurrentControlSet\Control\NetworkProvider\Order

What I found looked like this:

ProviderOrderInRegEditor

Notice the two empty entries in the order!

I changed the value to a more sane looking setting:

LanmanWorkstation,RDPNP,webclient

and that worked! I'm not sure what WebClient is, but I didn't remove it - I suspect it's not required, so don't add it if it wasn't in your configuration in the first place.

The forum post mentions that other providers might sit in front of LanmanWorkstation, which is the provider responsible for managing Windows network connections. To fix this I removed the two empty entries and moved LanmanWorkstation to the front of the list, and lo and behold, I got my login dialog back and was able to connect to the remote machine.
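If you'd rather check the value from code than dig through RegEdit, a few lines of C# will do it (a diagnostic sketch of my own, not something from the forum post):

using System;
using Microsoft.Win32;

class CheckProviderOrder
{
    static void Main()
    {
        using (var key = Registry.LocalMachine.OpenSubKey(
                   @"SYSTEM\CurrentControlSet\Control\NetworkProvider\Order"))
        {
            string order = key == null ? null : key.GetValue("ProviderOrder") as string;
            Console.WriteLine("ProviderOrder: " + order);

            if (string.IsNullOrEmpty(order))
                return;

            // empty entries or a missing LanmanWorkstation entry are the red flags
            string[] providers = order.Split(',');
            bool hasEmptyEntries = Array.Exists(providers, p => p.Trim().Length == 0);
            bool hasLanman = Array.Exists(providers,
                p => p.Trim().Equals("LanmanWorkstation", StringComparison.OrdinalIgnoreCase));

            if (hasEmptyEntries || !hasLanman)
                Console.WriteLine("Provider order looks suspect - see the registry fix above.");
        }
    }
}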

Mysterious Settings

I hate shit like this, especially since there appears to be no way to set or even check the provider order manually anywhere in the UI. Clearly some application (which has probably since been uninstalled) modified this setting, but it's troublesome nevertheless. Further, the Windows 8 troubleshooting applet was a complete loss, going straight to 'Do you have permission?' - of course not, since I was never prompted :-)

Incidentally, I think this has been the reason I've been unable to map drives to some servers for a really long time. It hasn't been much of an issue since I so rarely need to do this, but yesterday we needed to quickly push some files up to a server that wasn't otherwise connected. I'm glad to have it resolved, though, because there's one other place where I do want to connect but gave up on a few months back.

Hopefully this post will be useful to somebody who runs into the same problem.

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in Windows  