Accepting Raw Request Body Content in ASP.NET Core API Controllers


A few years back I wrote a post about Accepting Raw Request Content with ASP.NET Web API. Unfortunately the process to get at raw request data is rather indirect, with no direct way to receive raw data into Controller action parameters, and that hasn't really changed in ASP.NET Core's MVC/API implementation. The way the Conneg algorithm works with regard to generic data formats is roughly the same as it was with Web API.

The good news is that it's quite a bit easier to create custom formatters in ASP.NET Core that let you customize how to handle 'unknown' content types in your controllers.

Let's take a look.

Creating a Simple Test Controller

To check this out I created a new stock ASP.NET Core Web API project and changed the default ValuesController to this sample controller to start with:

public class BodyTypesController : Controller { }

JSON String Input

Let's start not with a raw request, but with posting a string as JSON, since that is very common. You can accept a string parameter and post JSON data from the client pretty easily.

So given this endpoint:

[HttpPost]
[Route("api/BodyTypes/JsonStringBody")]
public string JsonStringBody([FromBody] string content)
{
    return content;
}

I can post the following:

Figure 1 - JSON string inputs thankfully are captured as strings in ASP.NET Core

This works to retrieve the JSON string as a plain string. Note that the string sent is not a raw string, but rather a JSON string as it includes the wrapping quotes:

"Windy Rivers are the Best!"

Don't Forget [FromBody]

Make sure you add [FromBody] to any parameter that should be read from the POST body and mapped. It's easy to forget and it's not really obvious that it should be there. I say this because I've forgotten it plenty of times and scratched my head wondering why request data doesn't make it to my method, or why requests fail outright with 404 responses.
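To illustrate the difference - the route names below are made up for this example, but the binding behavior is the standard MVC behavior:

// Without [FromBody] MVC binds simple types from route or query string values -
// the JSON body is never read and 'content' ends up null
[HttpPost]
[Route("api/BodyTypes/NoFromBody")]
public string NoFromBody(string content)
{
    return content;
}

// With [FromBody] the JSON request body is deserialized into the parameter
[HttpPost]
[Route("api/BodyTypes/WithFromBody")]
public string WithFromBody([FromBody] string content)
{
    return content;
}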

No JSON - No Workey

If you want to send a RAW string or binary data and you want to pick that up as part of your request things get more complicated. ASP.NET Core handles only what it knows, which by default is JSON and Form data. Raw data is not directly mappable to controller parameters by default.

So if you try to send this:

POST http://localhost:5000/api/BodyTypes/PlainStringBody HTTP/1.1
Accept-Encoding: gzip,deflate
User-Agent: West Wind HTTP .NET Client
Content-Type: text/plain
Host: localhost:5000
Content-Length: 26
Expect: 100-continue

Windy Rivers are the best!

to this controller action:

[HttpPost]
[Route("api/BodyTypes/PlainStringBody")]
public string PlainStringBody([FromBody] string content)
{
    return content;
}

The result is a 404 Not Found.

I'm essentially doing the same thing as in the first request, except that I'm sending a plain text content type instead of JSON. The endpoint exists, but MVC doesn't know what to do with the text/plain content or how to map it, and so it fails with a 404 Not Found.

It's not super obvious and I know this can trip up the unsuspecting Newbie who expects raw content to be mapped. However, this makes sense if you think about it: MVC has mappings for specific content types and if you pass data that doesn't fit those content types it can't convert the data, so it assumes there's no matching endpoint that can handle the request.

So how do we get at the raw data?

Reading Request.Body for Raw Data

Unfortunately ASP.NET Core doesn't let you capture 'raw' data in any meaningful way just by way of method parameters. One way or another you need to do some custom processing of the Request.Body to get the raw data out and then deserialize it.

You can, however, capture the raw Request.Body and read the raw buffer out of that, which is pretty straightforward.

The easiest and least intrusive, but not so obvious way to do this is to have a method that accepts POST or PUT data without parameters and then read the raw data from Request.Body:

Read a String Buffer
[HttpPost]
[Route("api/BodyTypes/ReadStringDataManual")]
public async Task<string> ReadStringDataManual()
{
    using (StreamReader reader = new StreamReader(Request.Body, Encoding.UTF8))
    {  
        return await reader.ReadToEndAsync();
    }
}

This works with the following HTTP and plain text content:

POST http://localhost:5000/api/BodyTypes/ReadStringDataManual HTTP/1.1
Accept-Encoding: gzip,deflate
Content-Type: text/plain
Host: localhost:5000
Content-Length: 37
Expect: 100-continue
Connection: Keep-Alive

Windy Rivers with Waves are the best!

To read binary data you can use the following:

Read a Byte Buffer
[Route("api/BodyTypes/ReadBinaryDataManual")]
public async Task<byte[]> RawBinaryDataManual()
{
    using (var ms = new MemoryStream(2048))
    {
        await Request.Body.CopyToAsync(ms);
        return  ms.ToArray();  // returns base64 encoded string JSON result
    }
}

which works with this HTTP:

POST http://localhost:5000/api/BodyTypes/ReadBinaryDataManual HTTP/1.1
Accept-Encoding: gzip,deflate
User-Agent: West Wind HTTP .NET Client
Content-Type: application/octet-stream
Host: localhost:5000
Content-Length: 40
Expect: 100-continue
Connection: Keep-Alive

Wind and Water make the world go 'round.

I'm sending a string here to make it readable, but the content could just as well be raw binary bytes - it doesn't matter what the content is in this case, it's simply treated as binary data.

Running this results in:

Figure 2 - Capturing raw binary request data.

The result in the code is captured as binary byte[] and returned as JSON, which is why you see the base64 encoded result string that masquerades as a binary result.
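On the client side you'd then decode that base64 string to get the original bytes back. Here's a rough sketch with HttpClient, again pointing at the sample endpoint above:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class BinaryPostExample
{
    static void Main() => RunAsync().GetAwaiter().GetResult();

    static async Task RunAsync()
    {
        var client = new HttpClient();

        // 'Binary' payload - using readable text here just like the example above
        byte[] data = Encoding.UTF8.GetBytes("Wind and Water make the world go 'round.");

        var content = new ByteArrayContent(data);
        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

        var response = await client.PostAsync(
            "http://localhost:5000/api/BodyTypes/ReadBinaryDataManual", content);

        // The byte[] result comes back JSON serialized as a quoted base64 string,
        // so strip the quotes and decode it to get the original bytes back
        var base64 = (await response.Content.ReadAsStringAsync()).Trim('"');
        byte[] echoed = Convert.FromBase64String(base64);

        Console.WriteLine(Encoding.UTF8.GetString(echoed));
    }
}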

Request Helpers

If you do this a lot, a couple of HttpRequest extension methods might be useful:

public static class HttpRequestExtensions
{

    /// <summary>
    /// Retrieve the raw body as a string from the Request.Body stream
    /// </summary>
    /// <param name="request">Request instance to apply to</param>
    /// <param name="encoding">Optional - Encoding, defaults to UTF8</param>
    /// <returns></returns>
    public static async Task<string> GetRawBodyStringAsync(this HttpRequest request, Encoding encoding = null)
    {
        if (encoding == null)
            encoding = Encoding.UTF8;

        using (StreamReader reader = new StreamReader(request.Body, encoding))
            return await reader.ReadToEndAsync();
    }

    /// <summary>
    /// Retrieves the raw body as a byte array from the Request.Body stream
    /// </summary>
    /// <param name="request"></param>
    /// <returns></returns>
    public static async Task<byte[]> GetRawBodyBytesAsync(this HttpRequest request)
    {
        using (var ms = new MemoryStream(2048))
        {
            await request.Body.CopyToAsync(ms);
            return ms.ToArray();
        }
    }
}

Listing 1 - HttpRequest Extensions to retrieve raw body string and byte data. Github

which allows you to simplify those two previous controller methods to:

[HttpPost]
[Route("api/BodyTypes/ReadStringDataManual")]
public async Task<string> ReadStringDataManual()
{
    return await Request.GetRawBodyStringAsync();
}

[HttpPost]
[Route("api/BodyTypes/ReadBinaryDataManual")]
public async Task<byte[]> RawBinaryDataManual()
{
    return await Request.GetRawBodyBytesAsync();
}

Automatically Converting Binary and Raw String Values

If you'd rather use a more deterministic approach and accept raw data through parameters, a little more work is required by building a custom InputFormatter.

Create an MVC InputFormatter

ASP.NET Core has a clean and more generic way to handle custom formatting of content using an InputFormatter. Input formatters hook into the request processing pipeline and let you look at specific types of content to determine if you want to handle it. You can then read the request body and perform your own deserialization on the inbound content.

There are a couple of requirements for an InputFormatter:

  • You need to use [FromBody] to get it fired
  • You have to be able to look at the request and determine if and how to handle the content

So in this case for 'raw content' I want to look at requests that have the following content types:

  • text/plain (string)
  • application/octet-stream (byte[])
  • No content type (string)

You can add others to this list or check other headers to determine if you want to handle the input but you need to be explicit what content types you want to handle.

To create a formatter you either implement IInputFormatter or inherit from InputFormatter. The latter is usually the better approach, and that's what I used to create RawRequestBodyFormatter:

/// <summary>
/// Formatter that allows content of type text/plain and application/octet-stream
/// or no content type to be parsed to raw data. Allows for a single input parameter
/// in the form of:
/// 
/// public string RawString([FromBody] string data)
/// public byte[] RawData([FromBody] byte[] data)
/// </summary>
public class RawRequestBodyFormatter : InputFormatter
{
    public RawRequestBodyFormatter()
    {
        SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/plain"));
        SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/octet-stream"));
    }


    /// <summary>
    /// Allow text/plain, application/octet-stream and no content type to
    /// be processed
    /// </summary>
    /// <param name="context"></param>
    /// <returns></returns>
    public override Boolean CanRead(InputFormatterContext context)
    {
        if (context == null) throw new ArgumentNullException(nameof(context));

        var contentType = context.HttpContext.Request.ContentType;
        if (string.IsNullOrEmpty(contentType) || contentType == "text/plain" ||
            contentType == "application/octet-stream")
            return true;

        return false;
    }

    /// <summary>
    /// Handle text/plain or no content type for string results
    /// Handle application/octet-stream for byte[] results
    /// </summary>
    /// <param name="context"></param>
    /// <returns></returns>
    public override async Task<InputFormatterResult> ReadRequestBodyAsync(InputFormatterContext context)
    {
        var request = context.HttpContext.Request;
        var contentType = context.HttpContext.Request.ContentType;


        if (string.IsNullOrEmpty(contentType) || contentType == "text/plain")
        {
            using (var reader = new StreamReader(request.Body))
            {
                var content = await reader.ReadToEndAsync();
                return await InputFormatterResult.SuccessAsync(content);
            }
        }
        if (contentType == "application/octet-stream")
        {
            using (var ms = new MemoryStream(2048))
            {
                await request.Body.CopyToAsync(ms);
                var content = ms.ToArray();
                return await InputFormatterResult.SuccessAsync(content);
            }
        }

        return await InputFormatterResult.FailureAsync();
    }
}

Listing 2 - InputFormatter to handle Raw Request inputs for selected content types. GitHub

The formatter uses CanRead() to check whether a request's content type is supported, and ReadRequestBodyAsync() to read and deserialize the content into the result type that is bound to the controller method's parameter.

The InputFormatter has to be registered with MVC in the ConfigureServices() startup code:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc(o => o.InputFormatters.Insert(0, new RawRequestBodyFormatter()));
}

Accepting Raw Input

With the formatter hooked up to the MVC formatter list you can now handle requests that POST or PUT to the server using text/plain, application/octet-stream or no content types.

Raw String

[HttpPost]
[Route("api/BodyTypes/RawStringFormatter")]        
public string RawStringFormatter([FromBody] string rawString)
{
    return rawString;
}

and you can post to it like this:

POST http://localhost:5000/api/BodyTypes/RawStringFormatter HTTP/1.1
Accept-Encoding: gzip,deflate

Raw Wind and Water make the world go 'round.

or

POST http://localhost:5000/api/BodyTypes/RawStringFormatter HTTP/1.1
Accept-Encoding: gzip,deflate
Content-type: text/plain

Raw Wind and Water make the world go plain.

The controller will now pick up the raw string text.

Note that you can call the same controller method with a content type of application/json and pass a JSON string and that will work as well. The RawRequestBodyFormatter simply adds support for the additional content types it supports.

Binary Data

Binary data works the same way but with a different signature and content type for the HTTP request.

[HttpPost]
[Route("api/BodyTypes/RawBytesFormatter")]
public byte[] RawBytesFormatter([FromBody] byte[] rawData)
{
    return rawData;
}  

and this HTTP request data with 'binary' content:

POST http://localhost:5000/api/BodyTypes/RawBytesFormatter HTTP/1.1
Accept-Encoding: gzip,deflate
Content-type: application/octet-stream

Raw Wind and Water make the world go 'round.

Again I'm sending a string to provide something readable here, but the string is treated as binary data by the method and returned as such as shown in Figure 2.

Source Code provided

If you want to play with this stuff and experiment, I've uploaded my sample project to Github:

The sample HTTP requests are set up in West Wind Web Surge and ready to test against, or you can just use the BodyTypes.websurge file and pick out the raw HTTP request traces.

Summary

Accepting raw data is not something you have to do all the time, but occasionally it is required for API based applications. ASP.NET MVC/Web API has never been very direct in getting at raw data, but once you understand how the pipeline manages request data and deals with content type mapping, it's easy to get at raw and binary data.

In this post I showed two approaches:

  • Manually grabbing the Request.Body and deserializing from there
  • Using a custom InputFormatter that looks at typical 'raw' Content data types

The former is easy to use but doesn't describe the API behavior via the method interface. The latter is a little more work and requires hooking up a custom formatter, but it allows keeping the API's contract visible as part of the controller methods which to me simply feels cleaner.

All of this is making me hungry for some raw Sushi...

this post created and published with Markdown Monster
© Rick Strahl, West Wind Technologies, 2005-2017
Posted in ASP.NET Core  

Conditional TargetFrameworks for Multi-Targeted .NET SDK Projects on Cross-Platform Builds


This is a short post that addresses an issue I ran into today when converting a project to .NET Core 2.0. I've been upgrading a host of my existing tools to .NET Standard/Core 2.0 and most of these projects have existing .NET 4.5 (or later) targets that I want to continue pulling forward. The new SDK project type makes it relatively easy to create libraries that do multi-targeting in your SDK style .csproj file:

<TargetFrameworks>netstandard2.0;net45</TargetFrameworks>

Assuming you can get your code to build on both platforms, this simple directive will build assemblies and NuGet packages (if you turn the option on) for both platforms which is very cool.

It'll work, but...

Cross Platform

There's a problem however when you do this multi-targeting. It works just fine on my local Windows machine where the specified target platform (.NET 4.5 SDK in this case) is installed.

However, if I now try to build on a Mac which doesn't have a net45 SDK I get:

/usr/local/share/dotnet/sdk/2.0.0/Microsoft.Common.CurrentVersion.targets(1122,5): error MSB3644:

The reference assemblies for framework ".NETFramework,Version=v4.5" were not found.

To resolve this, install the SDK or Targeting Pack for this framework version or retarget your application to a version of the framework for which you have the SDK or Targeting Pack installed. Note that assemblies will be resolved from the Global Assembly Cache (GAC) and will be used in place of reference assemblies. Therefore your assembly may not be correctly targeted for the framework you intend.

The problem is of course that when you're trying to build on a non-Windows platform, or even on Windows when the net45 build targeting is not installed, the build fails.

Why not just target .NET Standard?

Initially I thought I'd get away with just targeting .NET Standard and use that from full framework applications. That works as long as you can get all the features you need from NetStandard.

In my case however I'm porting very old code and there are a number of dependencies on things that are not in .NET Standard and either require a separate set of libraries or simply can't run. So targeting native net45 (or whatever) is a good way to provide existing functionality while moving forward and also supporting .NET Standard/Core with a slightly diminished or altered feature set.

But how do you deal with that on multiple platforms?

You can change this:

<TargetFrameworks>netstandard2.0;net45</TargetFrameworks>

manually to:

<TargetFramework>netstandard2.0</TargetFramework>

when building on a non-supported platform, but that's ugly as all heck...

Conditional TargetFrameworks

I asked if there's an easy way to deal with this and kindly got a response from @DamienEdwards:

After a bit of experimenting with the right MSBuild invocations - and a little help from @andrewlocknet and @jeremylikness - I ended up with some conditional MSBuild blocks that work to do the right thing on Windows and in this case the Mac to build the project:

Old code

<Project Sdk="Microsoft.NET.Sdk"><PropertyGroup><!-- *** THIS *** --><TargetFrameworks>netstandard2.0;net45</TargetFrameworks><Version>3.0.0-preview1-0</Version><Authors>Rick Strahl</Authors>
    ...</Project>

New Code

<Project Sdk="Microsoft.NET.Sdk"><!-- *** THIS *** --><PropertyGroup Condition=" '$(OS)' != 'Windows_NT' "><TargetFramework>netstandard2.0</TargetFramework></PropertyGroup><PropertyGroup Condition=" '$(OS)' == 'Windows_NT' "> <TargetFrameworks>netstandard2.0;net45</TargetFrameworks></PropertyGroup><!-- *** THIS *** --><PropertyGroup><!-- <TargetFrameworks>netstandard2.0;net45</TargetFrameworks> --><Version>3.0.0-preview1-0</Version><Authors>Rick Strahl</Authors>
    ...</Project>

Notice that I use <TargetFramework /> for the single NetStandard reference and <TargetFrameworks /> for the 2 target NetStandard and Net45 build. It's an easy thing to miss!

Said and Done!

So now when I build on Windows, I get this output:

Figure 1 - With conditional flags, both NetStandard and Net45 projects are built on Windows

On the Mac:

Figure 2 - On OSX only the NetStandard package is built

This works as expected and is a reasonable solution for any project that requires building to multiple platform targets and still needs to build on multiple platforms where the target is not available.

© Rick Strahl, West Wind Technologies, 2005-2017
Posted in .NET Core   ASP.NET Core  

WPF Slow Window Loading due to Invalid SpellChecking Dictionaries


File this one into the Mr. Murphy Loves Me category: I ran into a nasty issue yesterday with Markdown Monster, which is a WPF application, by innocently adding a SpellCheck.IsEnabled="True" attribute to one of my text boxes on the Weblog Publishing form:

<TextBox TextWrapping="Wrap" Height="100"
         Text="{Binding ActivePostMetadata.Abstract}" 
         IsEnabled="{Binding IsAbstractVisible}" 
         SpellCheck.IsEnabled="True"  />

That's simple enough, and it works:


Figure 1 - WPF Spellchecking is easy to add with a simple property on a TextBox control

Why so slow???

But... on my dev machine there was a major problem: With that code in place the simple form that contains the spell check now took 3+ seconds to load. Say what?

So I tried to track this down. I removed the spell check code and immediately the form popped up instantly. Add the spell check back in - 3+ seconds load time.

Next, I tried this on another computer - a low end convertible no less - and guess what: No slowdown even with the spell check code in place. WTF?

A quick look at the Visual Studio Profiler analysis on my machine pointed me at the InitializeComponent() block of code, after which the profiler disappears into native code, so the issue is something internal to WPF.

Dictionaries, Dictionaries

It turns out WPF spell checking uses Windows dictionaries and these dictionaries are globally referenced in the registry. When I took a look at the global dictionary registration key in the registry at:

HKEY_CURRENT_USER\Software\Microsoft\Spelling\Dictionaries  
Key: _Global_

I immediately found my problem:


Figure 2 - The registered global dictionaries included a bunch of temporary, non-existent files.

Ugh! There were about 15 temporary dictionaries referenced in this section, and sure enough these were the cause of my slowdown.

Once I removed all the temp dictionaries and left just the legit Office dictionary my form now just pops again immediately.

The slowdown was caused by the errant dictionaries. I'm not sure why this is so incredibly slow. Given that the files referenced don't exist, there can't be any parsing happening. I also double-checked to see that there weren't massive files in these folders which would make the lookup really slow, but that's also not the case. Just having bad entries slows down the spell checker something fierce.

Temporary Dictionaries?

I also have no idea where these errant spell check dictionaries came from. I've been keeping an eye on this registry key and haven't seen temporary dictionaries returning. Clearly some application was writing out these files, which subsequently got killed when my TMPFILES folder was cleaned up by a daily scheduled task.

But unfortunately I have no idea which one. I'll keep an eye on this key and see if it returns after more use.

Moral of the Story

At the end of the day, if you should run into a problem with slow spell checking code - check your registry and make sure the dictionaries in use are legit.
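If you'd rather script that sanity check than eyeball the registry, something along these lines should do it. This is a rough sketch and assumes _Global_ is a multi-string value holding the dictionary file paths, which is how the entries show up in Figure 2:

using System;
using System.IO;
using Microsoft.Win32;

class DictionaryCheck
{
    static void Main()
    {
        // Assumption: _Global_ is a REG_MULTI_SZ value listing dictionary file paths
        using (var key = Registry.CurrentUser.OpenSubKey(@"Software\Microsoft\Spelling\Dictionaries"))
        {
            if (key?.GetValue("_Global_") is string[] dictionaries)
            {
                foreach (var dict in dictionaries)
                {
                    var status = File.Exists(dict) ? "ok     " : "MISSING";
                    Console.WriteLine($"{status} {dict}");
                }
            }
        }
    }
}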

It's definitely not something you are very likely to run into, but it's one of those strange edge cases that, when they bite you, take a lot of time to track down. If you're unlucky enough to be bitten by this particular issue, hopefully you landed here and can fix your problem quickly - unlike me, who wasted a few hours tracking this down...

this post created and published with Markdown Monster
© Rick Strahl, West Wind Technologies, 2005-2017
Posted in WPF  

A few notes on creating Class Libraries for ASP.NET Core


I'm starting to collect some of my helper and utility classes into a reusable library (as I tend to do) for my ASP.NET Core projects. In the process I ran into a few snags and after a brief discussion with David Fowler on Twitter I realized I was making a few non-obvious mistakes right from the get go.

In this quick post I point out a couple of things I ran into that you might also want to watch out for when creating class libraries that depend on ASP.NET Core features.

Don't Reference the ASP.NET Core Meta Package

At first glance it seems the easiest way to ensure you get access to all of ASP.NET Core's features in your support library is to reference the Microsoft.AspNetCore.All meta package that brings in all of the ASP.NET Core dependencies, just like a default ASP.NET Core Web application does.

The All package is a good way to go for top level applications, especially those that target the pre-installed .NET Core runtimes. The ASP.NET publish process can deal with sorting out where assemblies come from, and in most cases referencing the meta package with its references to all packages just points at the preinstalled assemblies. .NET then sorts out at JIT time which assemblies are actually loaded.

So when I created my class library I figured, why not use the same package and add it to my classlib project - after all most use cases will already have the meta package in the top level project anyway.

Alas - David wagged a digital finger at me and reminded me that this is not a good idea:

In hindsight, this makes perfect sense. If you stick the class library's package into another project it inherits the dependencies - ie. the entire ASP.NET stack. In most cases this is probably not an issue, because the ALL meta package is probably already referenced in the top level Web project. Nothing gained, nothing lost, right?

But, in some cases the package might go into a purely local installation of an application that is using just the dependencies it needs rather than opting into the full ASP.NET Stack pointing at a pre-installed runtime. Now the consumer all of a sudden has to take a dependency on all those assemblies for whatever specialized functionality my lib provides.

Worse, if some other class library wants to reference your package, it too now has a dependency on the full ASP.NET stack. Not cool.

In short, if you're building an internal library that you know will always be consumed in an application that uses the full meta package, then it's probably OK to reference the meta package in your class library.

But for any library that is going to be used generically in any kind of ASP.NET Core project or possibly as a dependency to other libraries, it's much more prudent to reference just the minimal dependencies you actually need.

ASP.NET Core Class Libraries and .NET Standard

A related issue came up when creating the ASP.NET Core class library project. I typically create class libraries that target .NET Standard 2.0 because it potentially makes the library more portable. With ASP.NET Core that's probably not a critical requirement right now as it always targets .NET Core App (not .NET Standard), but who knows what the future holds.

But when I referenced the ASP.NET Core meta package the class library project automatically forced me to target .NET Core 2.0.

Figure 1 - Using the Microsoft.AspNetCore.All package forces you to use the NETCoreApp2.0 target

Not only that but the drop down list actually doesn't give the option to change the target in Visual Studio - it only gives me options for .NET Core. What's going on?

Well, the ASP.NET Core meta package is actually responsible for changing the target, but I can change it manually in the .csproj file:

<Project Sdk="Microsoft.NET.Sdk"><PropertyGroup><TargetFramework>netstandard2.0</TargetFramework></PropertyGroup><ItemGroup><PackageReference Include="Microsoft.AspNetCore.Mvc.Core" Version="2.0.0" /></ItemGroup></Project>

However when I did that I got this error:

Figure 2 - No NETSTANDARD for you!

Again the issue here is that Microsoft.AspNetCore.All explicitly targets .NET Core 2.0 so that it's NOT used with other platforms.

Here's some more from David:

Based on David's suggestion to not use the ALL meta package, I switched to specific libraries and voila I can now target .NET Standard 2.0 in my project.

The reason for this is that .NET Standard can be consumed by other platforms, so potentially you'd be able to reference my package from them. If the meta package were there I would end up including all those assemblies from the meta package in, say, a full .NET Framework project, which would really suck (I'm sure you've cursed some of the .NET Standard projects Microsoft has put out that puke a bunch of duplicated assemblies into your Bin folder - same thing).

Stick to Specific Dependencies in Class Libraries

The key take away from this is that the ASP.NET Core Meta package is just not a good choice for class libraries. So rather than referencing the entire framework, it's much better to reference just the individual components your application actually needs. And yeah - it's a lot more work finding the right packages to include and more importantly not choosing super high level ones that end up pulling in the world anyway.

Lower, Lower, Lower Level

Further, David suggested making sure you use the lowest level possible for NuGet packages. For example, in a recent post I talked about an InputFormatter class which I moved to my helper class library. This library needed to reference Microsoft.AspNetCore.Mvc.Core in order to get a reference to InputFormatter. However, that package is a pretty high level package that also pulls in a large chunk of the ASP.NET Core stack. David's suggestion was to go lower level and consider implementing IInputFormatter, which requires just Microsoft.AspNetCore.Mvc.Abstractions, which is very low level and doesn't have any dependencies.

To be fair though, that's a tough call - I quickly found that I ran into other dependencies that live higher up in the stack. At that point I was faced with the choice of implementing several additional interfaces from scratch, just to avoid additional references. Frankly - not worth it, and in the end I did need to use the Microsoft.AspNetCore.Mvc.Core package after all, but your requirements may vary. You have to pick your battles wisely.
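For reference, here's roughly what the lower level route looks like - a sketch of a formatter that implements IInputFormatter from Microsoft.AspNetCore.Mvc.Abstractions directly rather than inheriting from InputFormatter. It handles only the plain string case and skips the niceties the base class gives you:

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Formatters;

// Sketch only: implements the bare IInputFormatter interface so that just
// Microsoft.AspNetCore.Mvc.Abstractions is required
public class MinimalRawBodyFormatter : IInputFormatter
{
    public bool CanRead(InputFormatterContext context)
    {
        var contentType = context.HttpContext.Request.ContentType;
        return string.IsNullOrEmpty(contentType) || contentType == "text/plain";
    }

    public async Task<InputFormatterResult> ReadAsync(InputFormatterContext context)
    {
        using (var reader = new StreamReader(context.HttpContext.Request.Body))
        {
            var content = await reader.ReadToEndAsync();
            return await InputFormatterResult.SuccessAsync(content);
        }
    }
}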

Regardless, I think David's advice is sound. Start with just what you need and only add specific packages as needed. Don't go for the high level packages as your first resort - use it as a last resort.

What do you need?

This is not always so easy because discovering where things live is not automatic. ASP.NET Core and MVC are scattered over dozens of packages. But the new NuGet Tooling in SDK projects at least shows you all NuGet Package dependencies along the way which usually gives you a good idea what things you can explicitly import.

By looking at a higher level dependency you can often glean the lower level dependencies you actually need to get your work done.

Figure 3 - You can always look at a package's dependencies and figure out what dependencies you need. For all packages available look at the Microsoft.AspNetCore.All package in a Web project.

A couple of Package Reference Tips

Use the .NET API Browser to find Package References

Microsoft's been investing heavily in creating good documentation and it has been paying off in spades. If you haven't checked out common topics for .NET Core or ASP.NET Core in the Microsoft Docs because you're thinking that the docs are on 'MSDN' - you're in for a shock.

The docs are really good and open to community contribution. The docs are also much more consistent, with a single documentation system spanning most of the documentation of the Microsoft universe in the same format, with the same search tools and the same contribution and editing guidelines. Seriously - this is a huge accomplishment for Microsoft, given the special place in hell that MSDN occupied previously.

The doc system is all indexed and searchable, and one really nice feature of this new integration is that there's a really useful .NET API Browser now that lets you search for classes, namespaces, even member names. It uses a single textbox with auto-complete lookup that lets you quickly jump to and even discover APIs.

Figure 4 - The .NET API browser is invaluable in discovering what package/assembly components live in

It's fast and gets you where you need to go. Check it out at:

Doc Tip: Nuget Packages and Assembly Names Match

Another useful tip from David: The ASP.NET Core and .NET Core Packages have matching assembly and package names. While the docs only show assembly names, they should in most cases match the package names you can search for on NuGet and with the .NET API Browser.

Publishing, Runtimes and Distribution Size

At the end of the day, using .NET Core adds a few additional considerations to how you deal with dependencies. Unlike the full .NET Framework, which includes everything inside the installed framework assemblies, .NET Core applications can be installed side by side and may not have a pre-installed runtime available, which means every single dependency has to be copied to the server.

In .NET Core 1.x that was initially the only way to deploy an app, and it meant you often deployed 100 megs of stuff just to run a small app.

.NET Core 2.x (and later versions of 1.x) brought back pre-installed runtimes, and as much as people want to tout the 'self-contained application' syndrome, I think most applications going forward will use the pre-installed runtime approach. Unlike full framework, .NET Core supports side by side runtime installs which mitigates the worst of the full framework issues related to out of sync runtime versions.

Let's always remember that at runtime, .NET is smart enough to know which assemblies to load and which code to actually compile with the JIT. Non-referenced code is not loaded by the runtime. So with fixed runtimes, referencing some additional packages isn't going to make any discernible difference at runtime.

Still, in the spirit of clean code it's a good idea to be precise rather than general, so making a reasonable effort to keep dependencies down - especially in class libraries - is definitely a valid concern. But going overboard to remove a small dependency is probably not worth the effort.

Caveat emptor.

But the choice is there for you to decide.

this post created and published with Markdown Monster
© Rick Strahl, West Wind Technologies, 2005-2017
Posted in ASP.NET Core  

Opening a Web Browser with an HTTP Url from Visual Studio Code


Here's a quick tip for Visual Studio Code and how to open the current document in a Web Browser.

I've been using Visual Studio Code more and more in recent months and it just keeps getting better and better as a general code editor. I like the speed, and the environment 'just feels' very comfortable to work in. While I still use other editors for full on development most of the time for their IDE features, for quick edits or updates I tend to always use Visual Studio Code.

I especially like it for Web development of all sorts, although for heavy duty work I still prefer WebStorm for its true IDE features (heavy duty refactoring, auto-complete, CSS and HTML navigation features).

For heads down coding VS Code is very nice and just feels better than most other editors. But one thing I miss is a quick and easy way to launch a browser from the current HTML document I'm editing, either running locally from disk, or on my currently running development Web server.

But luckily it's quite easy to create a new custom Task in Visual Studio Code and add it to your project. If you use Visual Studio Code for Web editing and you quickly want to preview an HTML page in a browser, here's a simple way you can add a task to open a Web Browser.

Creating a new Task in tasks.json

To do this:

  • Bring up the Command Palette (Ctrl-Shift-P)
  • Type in Task or Configure Task

This brings up the Task editor for the current project, which edits a tasks.json file in the .vscode folder in the editor root where you opened the editor.

You can now add tasks. I'm going to add two tasks to open Chrome with the current open document as a fixed HTML URL with the project relative path:

{"version": "0.1.0","tasks": [
        {"taskName": "Open in Chrome",     "isBuildCommand": true,"command": "Chrome","windows": {"command": "C:/Program Files (x86)/Google/Chrome/Application/chrome.exe"
            },"args": ["http://localhost:5000/${relativeFile}"
            ]
        },
        {"taskName": "Open in Firefox",     "isBuildCommand": true,"command": "Firefox","windows": {"command": "C:/Program Files (x86)/Mozilla Firefox/firefox.exe"                
            },"args": ["http://localhost:5000/${relativeFile}"
            ]
        }
    ]
}

This hooks up the tasks as build tasks. Pressing Ctrl-Shift-B fires the first build task automatically - in this case Chrome.

Alternately:

  • Bring up the Command Palette (Ctrl-Shift-P)
  • Type Run Task
  • Pick from the list of tasks

Launching HTML from the File System

The code above uses a hardcoded project specific URL that hits a local Web server. You can also just preview the file from disk which is a little more generic.

{"taskName": "Open as HTML File",     "isShellCommand": true,"command": "Shell","windows": {"command": "explorer.exe"                
    },"args": ["${file}"
    ]
} 

This will use whatever browser is configured on Windows and launch it from the local file system.

Easy Extensibility

The more I look into Visual Studio Code, the more I find to like. The extensibility model is super easy, so it's easy to add things like code snippets or, as I've shown here, tasks that are tied to a hotkey.

There's a lot more you can do with tasks - so be sure to check out the documentation linked below.

Resources

this post created and published with Markdown Monster
© Rick Strahl, West Wind Technologies, 2005-2017
Posted in Visual Studio Code  

.NET Core 2.0 and ASP.NET Core 2.0 are Here


.NET Core and ASP.NET Core 2.0 - a work in progress

Many of us have been patiently waiting through the long and windy road that has been the inception of the .NET Core and ASP.NET Core platforms. After a very rocky 1.0 and set of 1.x releases, version 2.0 of the new .NET frameworks and tooling finally arrived a few weeks back. You know the saying: "Don't use any 1.x product from Microsoft", and this is probably more true than ever with .NET Core and ASP.NET Core. The initial releases, while technically functional and powerful, were largely under-featured and sported very inconsistent, buggy and ever changing tooling. Using the 1.x (and pre-release) versions involved quite a bit of pain and struggle just to keep up with all the developments along the way.

.NET Core 2.0 and ASP.NET Core 2.0 - The Promised Land?

Version 2.0 of .NET Standard, .NET Core and ASP.NET Core improve the situation considerably by significantly refactoring the core feature set of .NET Core, without compromising all of the major improvements that the new framework has brought in Version 1.

The brunt of the changes involve bringing back APIs that existed in the full framework to make .NET Core 2.0 and .NET Standard 2.0 more backwards compatible with full framework. It's now vastly easier to move existing full framework code to .NET Core/Standard 2.0. It's hard to overstate how important that step is, as 1.x in many ways simply felt hobbled by missing APIs and sub-features that made it difficult to move existing code and libraries to .NET Core/Standard. Bringing the API breadth back to close compatibility resets the expectation of what amounts to a functional set of .NET features that most of us have come to expect of .NET.

These subtle, but significant improvements make the overall experience of .NET Core and ASP.NET Core much more approachable especially when coming from previous versions of .NET. More importantly it should also make it much easier for third party providers to move their components to .NET Core so that the support eco-system for .NET Core applications doesn't feel like a backwater as it did during the 1.x days.

These changes are as welcome as they were necessary, and my experience with the 2.0 wave of tools has been very positive. I've been able to move two of my most popular libraries to .NET Core 2.0 with relatively little effort - something that would have been unthinkable with the 1.x versions. The overall feature breadth is pretty close to full framework, minus the obvious Windows specific feature set.

ASP.NET Core 2.0 also has many welcome improvements, including a simplified configuration setup that provides sensible defaults, so you don't have to write the same obvious startup code over and over. There are also many new small enhancements as well as the major new feature of RazorPages, which brings controller-less Razor pages to ASP.NET Core.

Overall 2.0 is a massive upgrade in functionality, that brings back features that realistically should have been there from the start.

But it's not all unicorns and rainbows - there are still many issues that need to be addressed moving forward. First and foremost is that the new SDK style project tooling leaves a lot to be desired with slow compilation, slow test tooling, and buggy tool support for multi-targeted projects in Visual Studio. Visual Studio in general seems to have taken a big step back in stability in recent updates when it comes to .NET Core projects.

The Good outweighs the Bad

Overall the improvements in this 2.0 release vastly outweigh the relatively few - if not insignificant - problems, that still need to be addressed. The outstanding issues are well known and on the board for fixes in the hopefully not so distant future. Most of these relate to tooling and tooling performance rather than the frameworks themselves. While inconvenient, these tooling shortcomings are not what I would consider show stoppers, but mostly nuisances that are likely to be addressed soon enough.

To be clear where I stand: The 2.0 release feels like a good jumping-in point to dig in and start building real applications with - a feeling that I never had with the 1.x releases. 2.0 strikes the right balance of new features, performance and platform options that I actually want to use, without giving up many of the conveniences that earlier versions of .NET offered. The 2.0 features no longer feel like a compromise between the old and new feature sets but a way forward to new features and functionality that is actually useful and easy to work with in ways that you would expect to on the .NET platform.

Let's take a look at some of the most important details of what's changed.

What is .NET Standard?

Not sure what .NET Standard is and how it relates to .NET Core and other .NET frameworks? Check out my previous blog post that explains what .NET Standard is, why it's a useful new concept and how you can use it:

2.0 Versions of .NET Core and .NET Standard bring back many .NET APIs

The first and probably most significant improvement in the 2.0 releases is that .NET Standard and .NET Core 2.0 bring back many of the APIs we've been using since the beginnings of .NET in the full framework, that were not supported initially by .NET Core 1.x.

When .NET Core 1.x came out, it was largely touted as a trimmed down, high performance version of the full .NET Framework. As part of that effort there was a lot of focus on trimming the fat and providing only core APIs as part of .NET Core and .NET Standard. The bulk of the .NET Base Class Library was also broken up into a huge number of small hyper-focused packages.

All this resulted in a much smaller framework, but unfortunately also brought a few problems:

  • Major incompatibilities with classic .NET framework code (hard to port code)
  • Huge clutter of NuGet Packages in projects
  • Many usability issues trying to perform common tasks
  • A lot of mental overhead trying to combine all the right pieces into a whole

With .NET Core 1.0 many common .NET Framework APIs were either not available or buried under different API interfaces that often were missing critical functionality. Not only was it hard to find stuff that used to live under previously well known APIs, but a lot of functionality that was taken for granted (Reflection, Data APIs, XML for example) was refactored down to near un-usability.

Bringing back many Full Framework Features

.NET Core 2.0 - and more importantly .NET Standard 2.0 - add back a ton of functionality that was previously cut from .NET Core/Standard, bringing back a large swath of functionality that existed in full framework .NET. In 1.x it was really difficult to port existing code. The feature footprint with .NET Core 2.0 is drastically improved (~150% of APIs added) and compatibility with existing full framework functionality is preserved for a much larger percentage of code.

In real terms this means that it's much easier now to port existing full framework code to .NET Standard or .NET Core and have it run with no or only minor changes.

Case in point: I took one of my 15 year old general purpose libraries - Westwind.Utilities, which contains a boat load of varied utility classes that touch a wide variety of .NET features - and I was able to re-target the library to .NET Standard as well as .NET 4.5. More than 95% of the library could migrate without changes, only a few small features needed some tweaks (encryption, database) and a few features had to be cut out (low level AppDomain management and Drawing features). Given that this library was such an unusual hodgepodge of features, more single-focused libraries will fare even better in terms of conversions. If you're not using a few of the APIs that have been cut or only minimally implemented, chances are porting to .NET Standard will require few or even no changes.

You can read more about what was involved in this process in my Multi-Targeting and Porting a .NET Library to .NET Core 2.0 post.

Runtimes are Back

One of the key bullet points Microsoft touted with .NET Core is that you can run side by side installations of .NET Core. You can build an application and ship all the runtime files and dependencies in a local folder - including all the .NET dependencies as part of your application. The benefit of this is that you can much more easily run applications that require different versions of .NET on the same machine. No more trying to sync up and potentially break applications due to global framework updates. Yay! Right?

.NET Core 1.x - Fragmentation & Deployment Size

Well - you win some, you lose some. With version 1.x of .NET Core and ASP.NET Core the side effect was that the .NET Core and ASP.NET frameworks were fragmented into a boatload of tiny, very focused NuGet packages that had to be referenced explicitly in every project. These focused packages are a boon to the framework developers as they allow for nice feature isolation and testing, and the ability to rev versions independently for each component.

But the result of all this micro-refactoring was that you had to add every tiny little micro-refactored NuGet Package/Assembly explicitly to each project. Finding the right packages to include was a big usability problem for application and library developers.

Additionally, when you published Web projects, all those framework files - plus all runtime dependencies - had to be copied to the server with 1.x, making for a huge payload to send up to a server for publishing even for a small HelloWorld application.

Meta Packages in 2.0

In .NET Core 2.0 and ASP.NET Core 2.0 this is addressed with system wide framework meta packages that can be referenced by an application. These packages are installed using either the SDK install or a 'runtime' installer and can then be referenced from within a project as a single package. So when you reference .NET Core App 2.0 in a project, it automatically includes a reference to the .NET Core App 2.0 meta package. Likewise if you have a class library project that references .NET Standard - that meta package with all the required .NET Framework libraries is automatically referenced. You don't need to add any of the micro-dependencies to your project. The runtimes reference everything in the runtime, so in your code you only need to add namespaces, not extra package references.

There's also an ASP.NET Core meta package that provides all of the ASP.NET Core framework references. Each of these meta packages has a very large predefined set of packages that are automatically referenced and available to your project's code.

In your projects this means you can reference .NET Standard in a class library project and get references to all the APIs that are part of .NET Standard via the NetStandard.Library reference in the screenshot below. In applications, you can reference Microsoft.NETCoreApp, which is essentially a reference to .NET Core 2.0 - here you're specifying a very specific runtime instance for the application. For ASP.NET the Microsoft.AspNetCore.All package brings in all ASP.NET and EntityFramework related references in one simple reference.

Here's an example of a two project solution that has an ASP.NET Core Web app and a .NET Standard targeted business logic project:

Figure 1: Package references are manageable again in V2.0

Notice that the project references look very clean overall - I only explicitly add references to third party NuGet packages - all the system refs come in automatically via the single meta package. This is even less cluttered than a full framework project, which still needed some high level references. Here everything is automatically available for referencing.

This also is nice for tooling that needs to find references (Ctrl-. in VS or Alt-Enter for R#). Because everything is essentially referenced, Visual Studio or Omnisharp can easily find references and namespaces and inject them into your code as using statements. Nice.

Runtimes++

In a way these meta packages feel like classic .NET runtime installs, and in an indirect way they are. Microsoft now provides .NET Core and ASP.NET Core runtimes that can be downloaded from the .NET Download site and installed either via the plain runtime installer or the .NET SDK, which includes all the compilers and command line tools so you can build and manage a project.

You can install multiple runtimes side by side and they are available to many applications to share, so the same components don't have to be installed for each and every application as they had to with 1.x applications which makes deployments a heck of a lot leaner.

You can still fall back to local packages installed with the application that override globally installed packages, so if you want to run without installed runtimes you can. You can also selectively override packages by installing packages locally that override packages in the meta package.

In short you now get to have your cake and eat it too, and you get to choose which route to take. The default is to use the runtimes, which is the path of least resistance.

Publishing Applications

In most cases with 2.0 publishing an application to a Web server is much leaner than in prior versions. Your publish folder and what needs to get sent to the server amounts to just your code plus any explicit third party dependencies you added to the project. You are no longer publishing runtime files to the server.

Here's the publish folder of the Solution shown above:

Figure 2: Published output contains just your code and your explicit dependencies

This means publishing your application is much more lightweight - after the initial runtime installation. It's still possible to deploy full runtimes just as you could in the 1.x releases, it's just no longer the default and you have to explicitly specify the runtime to publish.

.NET SDK Projects

One of the nicest features of 2.0 (actually introduced in 1.6) is the new SDK style .csproj Project format. This project format is very lean and easily readable - quite in contrast to the verbose and cryptic older .csproj format.

For example, it's not hard to glean what's going on in this .csproj project file:

<Project Sdk="Microsoft.NET.Sdk.Web"><PropertyGroup><TargetFramework>netcoreapp2.0</TargetFramework>      </PropertyGroup><ItemGroup><PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" /><PackageReference Include="Serilog.Extensions.Logging" Version="2.0.2" /><PackageReference Include="Serilog.Sinks.RollingFile" Version="3.3.0" />		</ItemGroup><ItemGroup><ProjectReference Include="..\AlbumViewerBusiness\AlbumViewerBusiness.csproj" /></ItemGroup><ItemGroup><Content Update="wwwroot\**\*;Areas\**\Views;appsettings.json;albums.js;web.config"><CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory></Content>    </ItemGroup><ItemGroup><Compile Remove="logs\**" /><Content Remove="logs\**" /><EmbeddedResource Remove="logs\**" /><None Remove="logs\**" /></ItemGroup><ItemGroup><DotNetCliToolReference Include="Microsoft.DotNet.Watcher.Tools" Version="2.0.0" /></ItemGroup><ItemGroup><None Update="albums.js"><CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory></None></ItemGroup></Project>  

Notice that file references are all but gone in the project file - projects now assume all files are included except those you explicitly exclude which drastically reduces the file references in a project. The other benefit here is that you can simply drop files in a folder to become part of a project - you no longer have to add files to a project explicitly.

Compared to the morass that was the old .csproj format this is very clean and lean.

Additionally the new project format supports multi-targeting to multiple .NET framework versions. I've talked a few times about porting existing libraries to .NET Standard, and using the new project format it's quite easy to set up a library to target both .NET 4.5 and .NET Standard for example.

Here's an example of my Westwind.Utilities library that does just that:

<Project Sdk="Microsoft.NET.Sdk"><PropertyGroup><TargetFrameworks>netstandard2.0;net45;net40</TargetFrameworks><RuntimeIdentifiers>win7-x86;win7-x64</RuntimeIdentifiers><Authors>Rick Strahl</Authors><Version>3.0.2</Version><AssemblyVersion>3.0.2.0</AssemblyVersion><FileVersion>3.0.2.0</FileVersion>  <PackageId>Westwind.Utilities</PackageId><RootNamespace>Westwind.Utilities</RootNamespace>
    ...Nuget info block omitted</PropertyGroup><PropertyGroup Condition="'$(Configuration)'=='Debug'"><DefineConstants>TRACE;DEBUG;</DefineConstants></PropertyGroup><PropertyGroup Condition=" '$(Configuration)' == 'Release' "><NoWarn>$(NoWarn);CS1591;CS1572;CS1573</NoWarn><GenerateDocumentationFile>true</GenerateDocumentationFile><IncludeSymbols>true</IncludeSymbols><DefineConstants>RELEASE</DefineConstants></PropertyGroup><ItemGroup><PackageReference Include="Newtonsoft.Json" Version="10.0.3" /></ItemGroup><ItemGroup Condition=" '$(TargetFramework)' == 'netstandard2.0'"><PackageReference Include="System.Data.SqlClient" Version="4.4.0" /></ItemGroup><PropertyGroup Condition=" '$(TargetFramework)' == 'netstandard2.0'"><DefineConstants>NETCORE;NETSTANDARD;NETSTANDARD2_0</DefineConstants></PropertyGroup><ItemGroup Condition=" '$(TargetFramework)' == 'net45' "><Reference Include="mscorlib" /><Reference Include="System" /><Reference Include="System.Core" /><Reference Include="Microsoft.CSharp" /><Reference Include="System.Data" /><Reference Include="System.Web" /><Reference Include="System.Drawing" /><Reference Include="System.Security" /><Reference Include="System.Xml" /><Reference Include="System.Configuration" /></ItemGroup><PropertyGroup Condition=" '$(TargetFramework)' == 'net45'"><DefineConstants>NET45;NETFULL</DefineConstants></PropertyGroup><ItemGroup Condition=" '$(TargetFramework)' == 'net40' "><Reference Include="mscorlib" /><Reference Include="System" /><Reference Include="System.Core" /><Reference Include="Microsoft.CSharp" /><Reference Include="System.Data" /><Reference Include="System.Web" /><Reference Include="System.Drawing" /><Reference Include="System.Security" /><Reference Include="System.Xml" /><Reference Include="System.Configuration" /></ItemGroup><PropertyGroup Condition=" '$(TargetFramework)' == 'net40'"><DefineConstants>NET40;NETFULL</DefineConstants></PropertyGroup></Project>

The project defines three framework targets:

<TargetFrameworks>netstandard2.0;net45;net40</TargetFrameworks>

and then uses conditional target framework filtering to add dependencies. Visual Studio can visualize these dependencies for each target as well:

Figure 3 - Multiple targets displayed in Visual Studio

Visual Studio 2017.3+ also has a new Target drop down that lets you select which target is currently used to display code and errors in the environment:

Figure 4 - Active target compiler constants are evaluated in the code editor so code excluded for a given target is low-lighted.
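Those per-target DefineConstants from the .csproj above translate directly into conditional compilation in code. Here's a minimal sketch (the class itself is made up) of how the NETSTANDARD2_0, NET45 and NET40 constants might be used:

public static class PlatformInfo
{
    // Uses the compiler constants defined per target in the .csproj above
    public static string Describe()
    {
#if NETSTANDARD2_0
        return ".NET Standard 2.0 build";
#elif NET45
        return "Full framework .NET 4.5 build";
#elif NET40
        return "Full framework .NET 4.0 build";
#else
        return "Unknown target";
#endif
    }
}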

There are other features in Visual Studio that make it target aware:

  • Intellisense shows warnings for APIs that don't exist on any given platform
  • Compiler errors now show the target platform for which the error applies
  • Tests using the Test Runner respect the active target platform (VS2017.4)

When you compile this project, the build system automatically builds output for all three targets which is very nice if you've ever tried to create multi-target projects with the old project system (hint: it sucked!).

It can also create a NuGet package that wraps up all targets into the package. If you look back at the project file you'll note that the NuGet properties are now stored as part of the .csproj file.

Here's what the build output from my 3 target project looks like:

Figure 5 - Multi-target projects automagically build for all target platforms and can create a NuGet package.

This, friends, is pretty awesome to me and frankly something that should have been done a long time ago in .NET!

Easier ASP.NET Startup Configuration

Another nice improvement and a sign of growing up is that the ASP.NET Core startup code in 2.0 is a lot more streamlined and there's quite a bit less of it.

The absolute minimal ASP.NET Web application you can build is just a few lines of code:

public static void Main(string[] args)
{
    // The simplest thing possible!
    WebHost.Start(async (context) =>
    {
        await context.Response.WriteAsync("Hello World. Time is: " + DateTime.Now);
    })
    .WaitForShutdown();
}

Notice that this code works without any dependencies whatsoever, and yet has access to an HttpContext instance - there's no configuration or additional setup required, the framework now uses a set of common defaults for bootstrapping an application. Hosting options, configuration, logging and a few other items are automatically set with common defaults, so these features no longer have to be explicitly configured unless you want to change the default behavior.

The code automatically hooks up hosting for Kestrel and IIS, sets the startup folder, allows for host url specification and provides basic configuration features - all without any custom configuration required. All of these things needed to be configured explicitly previously. Now - all of that is optional. Nice!

To be realistic though, if you build a real application that requires configuration, authentication, custom routing, CORS etc. those things still have to be configured and obviously that will add code. But the point is that ASP.NET Core now has a default configuration that out of box lets you get stuff done without doing any configuration.

The more common configuration setup looks like this:

public static void Main(string[] args)
{
    WebHost.CreateDefaultBuilder(args)
        .UseUrls(hostUrl)          // optional - override the host URL(s)
        .UseStartup<Startup>()
        .Build()
        .Run();
}

with a Startup configuration class that handles minimal configuration for an MVC/API application:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, IConfiguration configuration)
    {
        app.UseStaticFiles();
        
        app.UseMvcWithDefaultRoute();  // adds the default {controller=Home}/{action=Index}/{id?} route plus attribute routes
    }
}

You can then use either RazorPages (loose Razor files that can contain code) or standard MVC or API controllers to handle your application logic.

A controller of course is just another class you create that optionally inherits from Controller or simply has a Controller postfix:

[Route("api")]
public class HelloController 
{
    [HttpGet("HelloWorld/{name}")]
    public object HelloWorld(string name)
    {
        return new
        {
            Name =  name,
            Message = $"Hello World, {name}!",
            Time = DateTime.Now
        };
    }
}

In short, basic configuration for a Web application is now a lot cleaner than in 1.x versions.

One thing that has bugged me in ASP.NET Core is the dichotomy between the ConfigureServices() and Configure() methods. In 1.x ASP.NET Core seemed to have a personality crisis about where to put configuration code for various components. Some components were configured in ConfigureServices() using the AddXXX() methods, others in the Configure() method using the UseXXX() methods. In 2.0 Microsoft seems to have moved most configuration behavior into ConfigureServices() using options objects (via Action delegates that actually get called later in the pipeline), so now things like CORS, Authentication and Logging all use a similar configuration pattern.

So for example, in the following code, DbContext, Authentication, CORS and Configuration are all configured in the ConfigureServices() method:

public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<AlbumViewerContext>(builder =>
    {
        string useSqLite = Configuration["Data:useSqLite"];
        if (useSqLite != "true")
        {
            var connStr = Configuration["Data:SqlServerConnectionString"];
            builder.UseSqlServer(connStr);
        }
        else
        {
            // Note this path has to have full access for the Web user in order
            // to create the DB and write to it.
            var connStr = "Data Source=" +
                          Path.Combine(HostingEnvironment.ContentRootPath, "AlbumViewerData.sqlite");
            builder.UseSqlite(connStr);
        }
    });

    services
        .AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
        .AddCookie(o =>
        {
            o.LoginPath = "/api/login";
            o.LogoutPath = "/api/logout";
        });

    services.AddCors(options =>
    {
        options.AddPolicy("CorsPolicy",
            builder => builder
                .AllowAnyOrigin()
                .AllowAnyMethod()
                .AllowAnyHeader()
                .AllowCredentials());
    });

    // Add support for strongly typed configuration and map to a class
    services.AddOptions();
    services.Configure<ApplicationConfiguration>(Configuration.GetSection("Application"));
}

The Configure() method generally then only enables the behaviors configured above by using various .UseXXXX() methods like .UseCors("CorsPolicy"), .UseAuthentication(), UseMvc().
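For completeness, the matching Configure() method for the services set up above might look roughly like this - a sketch only; middleware order matters and a real application will add more:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseCors("CorsPolicy");      // CORS policy defined in ConfigureServices()
    app.UseAuthentication();        // cookie authentication configured above
    app.UseStaticFiles();
    app.UseMvc();                   // or app.UseMvcWithDefaultRoute()
}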

While this still seems very disjointed at least the configuration logic is now mostly kept in a single place in ConfigureServices().

Ironically I've been struggling with this same issue in porting another library - Westwind.Globalization - to .NET Core 2.0 and I needed to decide how to configure my component. I chose to follow the same pattern as Microsoft using ConfigureServices with an Action delegate that handles option configuration:

services.AddWestwindGlobalization(opt =>
{
    // Resource Mode - Resx or DbResourceManager                
    opt.ResourceAccessMode = ResourceAccessMode.DbResourceManager;  // ResourceAccessMode.Resx

    opt.ConnectionString = "server=.;database=localizations;integrated security=true;";
    opt.ResourceTableName = "localizations_DEVINTERSECTION10";
    opt.ResxBaseFolder = "~/Properties/";

    // Set up security for Localization Administration form
    opt.ConfigureAuthorizeLocalizationAdministration(actionContext =>
    {
        // return true or false whether this request is authorized
        return true;   //actionContext.HttpContext.User.Identity.IsAuthenticated;
    });

});

implemented as an extension method with an Action delegate:

public static IServiceCollection AddWestwindGlobalization(this IServiceCollection services,
            Action<DbResourceConfiguration> setOptionsAction = null)
{
    // create the configuration object, let the caller's delegate set options on it,
    // then register it plus any additional services the component needs in DI
    // (implementation details omitted)
    return services;
}

I'm not a fan of this (convoluted) pattern of indirect referencing and deferred operation, especially given that ConfigureServices() seems like an inappropriate place for component configuration when there's a Configure() method where I'd expect to be doing any configuring...

But I have to admit that once you understand how Microsoft uses the delegate-option-configuration pattern, and if you can look past the consistent inconsistency, it is easy to implement and work with, so I'm not going to rock the boat and do something different.

IRouterService - Minimal ASP.NET Applications

MVC and API applications are typically built on top of the MVC framework. As you've seen above, it's a lot easier with 2.0 to get an API application configured and up and running. But MVC carries a bit of overhead internally.

If you want something even simpler - perhaps a quick, one-off minimal microservice - or you are a developer who wants to build a custom framework on top of the core ASP.NET middleware pipeline, you can now do that pretty easily by taking advantage of IRouterService.

Here's another very simple single file self contained ASP.NET application that returns a JSON response of a routed request:

public static class Program
{
    public static void Main(string[] args)
    {

        WebHost.CreateDefaultBuilder(args)
            //.UseStartup<Startup>()
            .ConfigureServices(s => s.AddRouting())
            .Configure(app => app.UseRouter(r =>
            {
                r.MapGet("helloWorldRouter/{name}", async (request, response, routeData) =>
                {
                    var result = new
                    {
                        name = routeData.Values["name"] as string,
                        time = DateTime.UtcNow
                    };
                    await response.Json(result);
                });
                r.MapPost("helloWorldPost" async (request, response, routeData) => {
                  ...  
                };
            }))
            .Build()
            .Run();
    }

    public static Task Json(this HttpResponse response, object obj, 
                            Formatting formatJson = Formatting.None)
    {
        response.ContentType = "application/json";

        JsonSerializer serializer = new JsonSerializer
            { ContractResolver = new CamelCasePropertyNamesContractResolver() };
        serializer.Formatting = formatJson;

        using (var sw = new StreamWriter(response.Body))
        using (JsonWriter writer = new JsonTextWriter(sw))
        {
            serializer.Serialize(writer, obj);                
        }

        return Task.CompletedTask;
    }
}

The key here is the router service that lets you directly map URLs to actions that have a request and a response you read from and write to. This is obviously a bit more low level than using MVC/API controllers: there's no controller infrastructure or model binding, and you have to handle serializing inbound and outbound data yourself. But it gives you complete control over request handling and the ability to create very, very small services with minimal overhead.

With a few simple helper extension methods you can provide a lot of functionality using just this very simple mechanism. This is very cool for publishing simple one-off 'handlers'. It can also be a good starting point if you ever want to build your own custom not-MVC MVC Web framework 😃

IRouterService functionality is primarily for specialized use cases where you need one or more very simple notification requests. It is very similar to where you might employ serverless Web Functions (like Azure Functions, AWS Lambda) for handling simple service callbacks or other one off operations that have few dependencies.

I've also found IRouterService useful for custom route handling that doesn't fall into the application space, but is more of an admin feature. For example, recently I needed to configure an ASP.NET Core app to allow access for Let's Encrypt's domain validation callbacks and I could just use a route handler to handle a special route in the server's Configure() code:

app.UseRouter(r =>
{
    r.MapGet(".well-known/acme-challenge/{id}", async (request, response, routeData) =>
    {
        var id = routeData.Values["id"] as string;
        var file = Path.Combine(env.WebRootPath, ".well-known","acme-challenge", id);
        await response.SendFileAsync(file);
    });
});

app.UseMvcWithDefaultRoute();

Http.Sys Support

For Windows, ASP.NET Core 2.0 now also supports Http.sys as another Web server in addition to the Kestrel and IIS/IIS Express servers that are supported by default. http.sys is the kernel driver used to handle HTTP services on Windows. It's the same driver that IIS uses for all of its HTTP interaction, and now you can host your ASP.NET Core applications directly on Http.sys using the Microsoft.AspNetCore.Server.HttpSys package.

The advantage of using Http.sys directly is that it relies on the Windows http.sys infrastructure, a hardened Web server front end that provides SSL handling, content caching and many security related features not currently available with Kestrel.

For Windows the recommendation has been to use IIS as a front end reverse proxy in order to provide features like static file compression and caching, SSL management and rudimentary connection protections against various HTTP attacks against the server.

By using the Http.sys server you can get most of these features without having to front Kestrel with a reverse proxy, which adds a bit of overhead.

To use HttpSys you need to explicitly declare it using the .UseHttpSys() configuration added to the standard startup sequence (in program.cs):

WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseHttpSys(options =>
        {
            options.Authentication.Schemes = AuthenticationSchemes.None;
            options.Authentication.AllowAnonymous = true;
            options.MaxConnections = 100;
            options.MaxRequestBodySize = 30000000;
            options.UrlPrefixes.Add("http://localhost:5002");
        })
        .Build();

You then configure the local port so the server is accessible both locally and remotely (by opening up a port on the firewall). When you do, you should now see the Http.sys server:

Figure 6 - The http.sys hosting in ASP.NET Core provides efficient Windows server hosting without a proxy front
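If you bind to a non-localhost URL prefix, you typically also have to reserve that prefix with http.sys and open the firewall port. Something along these lines from an elevated command prompt should do it - the port and rule name are just examples:

netsh http add urlacl url=http://+:5002/ user=Everyone
netsh advfirewall firewall add rule name="AspNetCore HttpSys 5002" dir=in action=allow protocol=TCP localport=5002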

Microsoft has a well done and detailed document in the official ASP.NET Core documentation that describes how to set up http.sys hosting.

I haven't had a chance to try this in production, but if you are running on Windows this might be a cleaner and more easily configurable way to run ASP.NET Core applications than doing the Kestrel->IIS dance. Some quick, informal performance tests with WebSurge show that running with raw Http.sys is a bit faster than running IIS->Kestrel.

For a public facing Web site you're probably better off with full IIS, but for raw APIs or internal applications Http.sys is a great option for Windows hosted server applications.

RazorPages

In ASP.NET Core 2.0 Microsoft is rolling out RazorPages. RazorPages is something completely new, although it's based on concepts that should be familiar to anybody who's used either ASP.NET WebPages or - gasp - WebForms (minus Viewstate).

When I first heard about RazorPages a while back I had mixed feelings about the concept. While I think a script-based framework is an absolute requirement for many Web sites that deal primarily with content, I also felt like requiring a full ASP.NET Core application setup - with a full deployment process - just to run script pages is a bit of an oxymoron. After all, one of the advantages of tools like WebPages and WebForms is that you don't have to 'install' an application - you just drop a new page onto the server and run it.

RazorPages are different - they depend on ASP.NET Core and are an intrinsic part of the ASP.NET Core MVC platform. RazorPages use the same concepts and share the same Razor components as MVC Views and Controllers, so for all intents and purposes RazorPages is a different repackaging of MVC.

So why use it? Think about how much clutter there is involved in MVC to get a single view fired up in the default configuration ASP.NET MVC projects use:

  • Controller Class with a Controller Method (Controllers folder)
  • View Model (Models Folder)
  • View (View/Controller folder)

IOW, code in MVC is scattered all over the place. Some of this can be mitigated with Feature folders where all related files are stored in a single folder, but you still essentially have view html, view model code and controller code scattered across 3 different files.

RazorPages provides much of the same functionality in a much simpler package. In fact, with Razor Pages you can create single pages that include both HTML, Model and Controller code:

@model IndexModel
@using Microsoft.AspNetCore.Mvc.RazorPages
@functions {

    public class IndexModel : PageModel
    {
        
        [MinLength(2)]
        public string Name { get; set; }

        public string Message { get; set; }

        public void OnGet()
        {
            Message = "Getting somewhere";
        }


        public void OnPost()
        {
            TryValidateModel(this);            

            if (this.ModelState.IsValid)
                Message = "Posted all clear!";
            else
                Message = "Posted no trespassing!";
        }
    }
}
@{
    Layout = null;
}
<!DOCTYPE html>
<html>
<head>
</head>
<body>
    <form method="post" asp-antiforgery="true">
        <input asp-for="Name" type="text" />
        <button type="submit" class="btn btn-default">Show Hello</button>
        @Model.Name
    </form>
    <div class="alert alert-warning">
        @Model.Message
    </div>
</body>
</html>

Although I really like the fact that you can embed a model right into the Razor page as shown for simple pages, this gets messy quickly. More commonly you pull the PageModel out into a separate class, and the default template that creates a RazorPage in Visual Studio does just that. When you create a new RazorPage in Visual Studio you get a .cshtml file and a nested .cshtml.cs file:

Figure 7 - RazorPage Code Behind uses a hybrid View/Controller class

RazorPages Runtime Compilation

Before you completely dismiss inline code in the .cshtml template, consider that code inside the RazorPage is dynamically compiled at runtime, which means you can make changes to the page without having to recompile and restart your entire application!

The PageModel subclass in this scenario becomes a hybrid of controller and model code very similar to the way many client side frameworks like Angular handle the MV* operation which is more compact and easier to manage than having an explicit controller located in yet another external class.

PageModel supports a few well known methods like OnGet(), OnPost() etc. - one for each of the supported verbs - that handle HTTP operations just like you would in a controller. A feature called page handlers, driven by the asp-page-handler="First" attribute, lets you further customize which method is fired via a method suffix like OnPostFirst(), so that you can handle multiple forms (or multiple buttons) on a single page, as shown below.
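Here's a rough, hypothetical sketch of what that looks like - the handler names and messages are made up for the example. Two buttons on the same form post to two different handler methods:

<form method="post">
    <button type="submit" asp-page-handler="First">First Action</button>
    <button type="submit" asp-page-handler="Second">Second Action</button>
</form>

and in the PageModel:

public void OnPostFirst()
{
    Message = "First handler fired";
}

public void OnPostSecond()
{
    Message = "Second handler fired";
}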

While traditional MVC feels comfortable and familiar, I think RazorPages offers a clean alternative with page based separation of concerns in many scenarios. Keeping View and View specific Controller code closely associated usually makes for an easier development workflow and I'd already moved in this direction with feature folder setup in full MVC anyway. If you squint a little, the main change is that there are no more explicit multi-concern controllers, just smaller context specific classes.

RazorPages is not going to be for everyone, but if you're like me and initially skeptical I would encourage you to check them out. It's a worthwhile development flow to explore and for the few things I still use server generated HTML for I think RazorPages will be my tool of choice on ASP.NET Core.

The Dark Underbelly

So, I've talked about a lot of things that are improved and that make 2.0 a good place to jump into the fray for .NET Core and ASP.NET Core.

But it's not without its perils - there are still a lot of loose ends especially when it comes to tooling. Let's look at a few things that feel like they still need work.

SDK Projects and Cycle Time

The biggest issues I've had with development under .NET Core in general is the tooling.

While I love the fact that command line tooling is available to build, run and compile applications using the various dotnet commands, these tools are considerably slower than the old compilers in classic full framework .NET projects. I'm not sure where the slowness comes from exactly, but the start/debug/stop/restart cycle is dreadfully slow with anything .NET Core related.

When building Web applications with ASP.NET Core I tend to use dotnet watch run which uses additional build tooling to automatically recompile your applications when a change is made, and then automatically restarts the application.
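For reference, in the 2.0 timeframe dotnet watch is wired up as a per-project CLI tool reference rather than being built into the SDK, so the setup looks roughly like this (adjust the version to whatever is current):

<ItemGroup>
  <DotNetCliToolReference Include="Microsoft.DotNet.Watcher.Tools" Version="2.0.0" />
</ItemGroup>

and then from the project folder:

dotnet watch run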

When working on a SPA application I often end up making a change on the server and switching back to my browser to see the result. Client side changes show up almost instantly, but a server side API change still takes 20-30 seconds to rebuild and restart even for a small project, which is very slow. That's especially frustrating next to the client side content, which live-reloads nearly instantly in the browser.

The slowdown appears to be in the build process, because if I run a published application like this:

dotnet albumviewer.dll

it fires up in less than a couple of seconds, and that includes some db initialization.

However, running:

dotnet run

is achingly slow and takes upwards of 20 seconds. dotnet run builds the project again, and that seems to be where the issue is, as .NET goes through re-packaging the application in order to run it.

The slowness of the cycle time to restart an application is a noticeable drag on productivity for me which makes me very wary of running the application or running tests for that matter, which have the same issues.

Tests

Another problematic area is tests, which run considerably slower under SDK projects than in full framework projects. I'm also frequently seeing test runs that just stop randomly in the middle of the run.

As I mentioned earlier, I moved a couple of projects from full framework to the new .NET SDK projects with multi-targeting in place, and I can compare performance side by side: the full framework tests run 3-4 times faster and reliably run through, whereas the SDK project frequently stops mid-run.

Another problem with tests in particular is running multi-target tests when running inside of Visual Studio. It's never clear which target you are actually running, nor does the test output tell you.

To be fair, if you run tests from the command line you can specify which framework target is used, and you can easily narrow the run down to a namespace, class or even a single method. In fact, in many cases I had to use the command line, because that was the only way I could get tests to run at all.
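For example, something along these lines works - the test class and method names here are just placeholders, and the exact filter syntax depends on your test framework:

dotnet test -f net45 --filter "FullyQualifiedName~DataUtilsTests"
dotnet test -f netcoreapp2.0 --filter "FullyQualifiedName~DataUtilsTests.GetDbProviderFactoryTest"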

I find myself waiting on builds and application startups a lot more than I used to with full framework projects. In fact, I work with both, and whenever I switch back to a full framework project from an SDK based project I go "Damn, that's so much better". That's not an experience I like to have with a new platform.

Slow Editing in Visual Studio

Another annoying issue is working with projects in Visual Studio. The tooling quite frequently goes out to lunch and finds errors that aren't errors, not letting the application run. I also frequently see the editor show perfectly valid code as invalid while a full compilation of the project shows no errors. The only workaround for this often is to close the solution and re-open it.

Visual Studio also often slows down drastically when working on .NET Core projects. In fact, a few minutes after opening a .NET Core project, the fans of my MacBook Pro go into hyperdrive, and after an hour of this it's not unusual for my computer to actually throttle down due to heat. This does not happen with full framework projects, so there's something specific to .NET Core or SDK projects that causes this madness.

On a related note, I also use Resharper as a crutch I can't live without, and it too seems to have a lot of problems validating code properly. Razor content especially seems to be a problem for both the raw Visual Studio editor and Resharper.

Just to be clear though I've also run extended periods without Resharper to make sure it's not R# that's causing issues. Resharper causes its own set of problems, but even 'raw' Visual Studio throws up the same issues.

You'd think using Command line tools and shelling out would be very efficient and if nothing else offload a lot of the workload external from Visual Studio, but currently that's not the case. The difference between classic and SDK projects is extremely noticeable and very productivity sapping.

The alternative is to use Visual Studio Code with OmniSharp and the C# addin, or JetBrains Rider which fare much better when it comes to editing performance and getting stuff done, but even then the external compile process, and running of tests is just dreadfully slow.

Rider especially looks very promising, but there are still a number of issues related to .NET Core projects that are deal breakers for me at the moment. Testing in particular is a problem. I've tried working in both VS Code and Rider for a while, and while I can get some work done, some workflows are just too much of a pain. Rider comes close, but it probably needs a few more iterations before it becomes a viable choice for me.

So many choices, but none of them are really satisfying at the moment. All have some very strong things going for them, but every single one has also major drawbacks.

I am also hopeful that the tooling mess will sort itself out in time, but I think we as developers really need to make it clear to Microsoft that this is a big concern and not give them an easy pass. I know I can be overly critical at times, but I've heard this very same feedback from quite a few capable developers, so much so that many have simply given up and gone back to full framework, where you get dependable tooling and tool performance. I think the tooling needs to be as first rate as the framework, and there's a ways to go to achieve that goal.

Microsoft knows how to build great tools and I'm sure it's technically feasible, but the Achilles heel for the tooling has always been getting the final polish right. Right now we could use a massive shoe shine, (Achilles) heel cleaning kit 😃

Summer of 2.0

I don't want to end on this downer note, so let me make clear that I think overall the entire 2.0 train of upgrades is a huge improvement over what came before and the progress of change has been pretty rapid and amazing. The new features and improvements in the framework finally provide enough of a surface to make it realistic to jump in and build applications with .NET Core 2.0 and ASP.NET Core 2.0.

Another thing that I find extremely promising is that Scott Hunter recently mentioned that .NET Core 2.0 and .NET Standard 2.0 will stay put for a while, without major breaking changes moving forward. I think this is a great call - I think we all need a breather instead of chasing the next pie in the sky. Some time to try to catch up to the platform, figure out best practices and also see what things still need improving.

It's been a rough ride getting to 2.0 and I for one would appreciate a little stretch of smooth road.

Figure 8 - Smooth road - for maybe a little while

Go ahead and Jump

I have been very hesitant to jump in with pre 2.0 versions, but with the release of 2.0 I've decided the time has come to build apps with this platform. My first work has been around libraries, which has been a great learning experience and overall a good one. In fact, the new project system and multi-targeting have been a big improvement over older versions.

The support for pre-installed runtimes makes it much easier to manage deployment of Web applications, and proper multi-target support in the project system is a big bonus for library developers.

I'm in the middle of upgrading a couple of really old internal ASP.NET applications to .NET Core 2.0 and so far the process has been relatively smooth. Yes I struggle with the slow tooling quite a bit, but as annoying as that can be it's workable. And I am confident that Microsoft (and perhaps also JetBrains for both R# and Rider) can work out the kinks on that end as the tooling becomes more mature. I do hope they hurry though.

So what about you? Are you ready to give .NET Core and ASP.NET Core 2.0 a spin if you've been sitting on the fence like me? Sound off in the comments with your experience.

Resources

this post created and published with Markdown Monster.
© Rick Strahl, West Wind Technologies, 2005-2017

Dev Intersection 2017 Session Slides and Samples Posted


I've posted my Session Slides and code samples from last week's DevIntersection conference. It's been a while since I've been at a .NET Conference and as always after all the toil and tension getting ready for sessions, it all ends up being a blast as was catching up with friends after hours.

Thanks to those of you that attended my sessions and filled the session rooms so nicely 😃. There were also a lot of good questions and discussions after all sessions, which is always great. I was especially happy to see so many turn out for the Localization talk - which is a tough sell in the best of circumstances, and especially tough as the last session on the last day.

Here are the three sessions (or two if I count the Angular/ASP.NET one as a single long session).

  • Using Angular with ASP.NET Core
    Part 1: Getting started
    Part 2: Putting it all together

    Part 1 of this session was basically an all code demo for creating a first Angular app and then hooking it up to an ASP.NET Core backend API. Part 2 then looked at a more realistic, albeit small, application and dove into the details of how to integrate Angular and ASP.NET and manage many common aspects like error handling, user authentication, deployment and hosting and more.

    The slides for these sessions are combined into a single large deck containing many more slides than I used during the sessions, filling in details that were either covered by the code samples or handled in the live coding bits.

    Samples and Slides:
    https://github.com/RickStrahl/DI2017-AspNet-Core-Angular

  • Localization in ASP.NET Core
    This session introduced localization in .NET in general and then jumped into the specifics of how to use the new dependency injection based localization features in ASP.NET Core. Several sample pages are provided in the GitHub link below. The session also covered how to use Westwind.Globalization as a database driven resource localizer, along with a discussion of how to implement a custom Localizer implementation in .NET Core.

    Samples and Slides:
    https://github.com/RickStrahl/DI2017-ASP.NET-Core-Localization

Hope some of you find these materials useful. Enjoy.

this post created and published with Markdown Monster
© Rick Strahl, West Wind Technologies, 2005-2017
Posted in ASP.NET Core   Angular   Localization  

Using Gists for General Purpose Web Pages with Markdown


Many of you probably know that GitHub has a GitHub Gist Site that can be used to post and share snippets of code easily. Gists are great for sharing longer code snippets on social media sites like Twitter, where you can link to a Gist for discussion. You can even attach multiple files to a single Gist, which is handy if you need to show client and server code, an implementation and an example, or a whole slew of related configuration files. Each file can be associated with its own syntax for highlighting, which is nice:

Gists are like a single file Git Repository - Gists can be cloned and forked. There's also support for comments that allows for discussion of the code/text using the same comment system you use in Github issues.

One missing feature is support for Pull Requests - which would be a really nice addition for interactively updating content.

Gists support Markdown

Gists as code snippets are pretty cool and powerful all by themselves, but what's actually even more exciting is that Gists support Markdown. Yeah, you probably knew that, but are you using this functionality as much as you should?

More Detailed Code Discussions

If you're creating Gists for discussion purposes - especially when sharing the discussion on Twitter - I'd argue that creating a Markdown file is almost always better than creating one or more source files individually, because you can actually say something useful about the code you are sharing and frame the code in the context of the discussion (or vice versa). The comment support allows you to continue the discussion past the point when the Twitter conversation has scrolled off the feed.

I also like Markdown because trying to cram a description into a text box usually sucks, especially if you have multiple bits of code. With Markdown's capability of embedding code snippets with syntax highlighting, I think it's an infinitely more approachable way to share code along with explanation.

For example here's a bug I ran into and shared on Twitter via a Gist link while back:

Notice there are multiple snippets (which can all be using different syntax).

As I went through this, I updated the text and eventually ended up with a mini-blog like entry. The thought process is there as well as the solution. Because it's Markdown I can now also pick up the Markdown - as is - and stick it into my Markdown editor for a future blog post which saves some additional time.

Gist as Mini Blogs

Because Gists support Markdown that is turned into HTML, you can use it to create one off HTML pages that you can easily share. If you don't have a blog and you want to write up something you found during development this is a great way to put out something public. Even if you have a blog you might have content that's too small or not a good fit for a blog and you can write it as a Gist instead. Sort of a blog away from your blog 😄

There's a difference from GitHub Issues though - you can't embed images into the document by pasting them in and having GitHub store them which is a bummer. All images you use have to be externally stored somewhere else.

Gists to share 'Secret' Text

Gists can also be 'secret' which merely means the Gist isn't linked anywhere or shown in your profile. Anybody can access the Gist if the URL is known, but the URL isn't shared unless you explicitly give it out to somebody. I've found this useful in many situations where I needed to share some short lived information that wouldn't transfer over email (like emails with links to download files that are blocked for example) and Gists work just fine for that.

Gists for Configuration Settings

I also use Gists for storing non-sensitive configuration settings. Machine configuration scripts, short registry scripts - I have them shuttled away as private Gists that I reuse when paving a machine.

There are also a number of tools that store their configuration settings as Gists. There's a VS Code syncing utility that creates a shared backup configuration as Gist. Tools like BoxStarter store Chocolatey scripts to run as Gists.

Gists also have a relatively simple API that is easy to integrate with, so it's pretty straightforward to post and retrieve Gists from the Web.
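As a rough sketch of how simple that is, creating a Gist boils down to a single authenticated POST of a JSON payload to the https://api.github.com/gists endpoint. The token, file name and content below are placeholders, and the call assumes it runs inside an async method:

var gist = new
{
    description = "Shared via the Gist API",
    @public = true,
    files = new Dictionary<string, object>
    {
        ["notes.md"] = new { content = "# Hello Gist\nCreated through the API." }
    }
};

var http = new HttpClient();
http.DefaultRequestHeaders.UserAgent.ParseAdd("gist-sample");   // GitHub requires a User-Agent header
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("token", "<personal access token with gist scope>");

var json = JsonConvert.SerializeObject(gist);
var response = await http.PostAsync("https://api.github.com/gists",
                   new StringContent(json, Encoding.UTF8, "application/json"));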

Gists as a shared Document Store

I use my own Markdown Monster as my Markdown editor and there is a Gist Integration Addin which makes it very easy to open and save documents using Gist support that's directly integrated into MM.

This makes it easy to store things like Blog posts in progress in a Gist in a central location. If I want to pick up editing that post later from a different machine I can just grab the Gist, edit and save when I'm done. It works from anywhere whether I use MM or not as I can even do my edits directly on the Gist Web site.

Using a Gist also allows me to share my post with others for review or input in an easy fashion. Reviewers can fork the Gist and make edits in their own copy, although - as I mention below - without Pull Request support, merging those changes back is a manual affair.

Markdown Monster also allows saving individual code snippets as Gists that are then directly embedded into the Markdown document and rendered from Github:

I'm not necessarily trying to sell you on Markdown Monster here - the main winner is the Gist workflow - but I bring it up because these additions to Markdown Monster have proven incredibly valuable and have changed how I work in some ways.

Markdown Monster also allows editing of non-Markdown code Gists, although it will only edit a single file.

What I would love to see

As nice as Gists are, it would be really nice if there was support for Pull Requests or some easy way to pull in changes from a fork. This really would make it a killer platform for hosting blog posts and letting users make edits for typos and clarifications for example.

Image support would be awesome as well - the ability to paste images directly into the editor and have them show up is such a useful feature in GitHub Issues that I sorely miss it in Gists. I suspect this has to do with people abusing Gists in all sorts of unexpected ways, but it still would be nice if that was supported in some capacity (maybe with some image size limits).

Summary

Gists - and other sharing sites like PasteBin - are pretty awesome, but Gists to me feel immediately familiar because I already use GitHub and have an account there, so it's all pretty transparent. If you haven't been using Gists (or some other code sharing site), think about how you can utilize this functionality and integrate it into your workflow. Whether it's sharing code on social media or storing documents for multi-machine access, Gists make it easy. Even if you just use the online editor it's easy to create content - and very familiar if you're already using GitHub.

What unconventional uses are you using Gists for?

Resources

this post created and published with Markdown Monster
© Rick Strahl, West Wind Technologies, 2005-2017
Posted in Markdown  

Working around the lack of dynamic DbProviderFactory loading in .NET Core


I've been writing a lot about how nice it is to have .NET Core 2.0 and .NET Standard 2.0 support the majority of the full framework .NET, which makes it easy to port existing full framework libraries. This post is a little different and I talk about one thing that is not compatible in .NET Standard/Core.

Specifically I am talking about missing support for DbProviderFactories, which provide support for dynamically loading a DbProviderFactory without having to take a dependency on the underlying provider you are loading. This to me seems an incredibly important piece of ADO.NET that is missing in .NET Core and Standard 2.0.

Without DbProviderFactories there's no built-in support for using ADO.NET components without taking a dependency on the underlying data access provider library (such as System.Data.SQLite or MySql.Data etc.).

To be clear, what I'm talking about is fairly low level, and of no concern in most application level code, because no matter what at some point an application has to take a dependency of the providers it wants to use. However, generic data access libraries or libraries that generically use database code with different providers typically don't know what providers might be used and so shouldn't have to take all the data provider dependencies that might be used.

If you're building a re-usable library that works with multiple data providers, the missing DbProviderFactories.GetFactory() method is a huge hole in ADO.NET and it's a pain to work around.

The missing DbProviderFactories in Westwind.Globalization

First a little background...

I recently updated my West Wind Globalization library to .NET Core 2.0. I probably sound like a broken record, but I was thrilled to see that the vast majority of the code of this 10+ year old full framework library ported with only very minimal changes required. I was even able to port the base SQL Server based ADO.NET code that is used to handle the very simple database access that's required to manage localization resources with no changes.

But the full framework library uses a helper library Westwind.Utilities and its data access layer to provide simple ADO data access. The library internally used DbProviderFactories.GetFactory() to allow loading up various different data base providers by name which in turn is used by Westwind.Globalization to access multiple providers like Sql Server, SqLite, MySql and SqlCompact. This makes it possible to load a DbProviderFactory instance without taking a reference to the underlying provider in Westwind.Globalization.

Eventually your application needs to take a hard dependency on the data provider you want to use, and that's fine. But inside of a generic library you definitely don't want to have to take that dependency. In full framework, as long as the provider library is loaded and registered, DbProviderFactories.GetFactory() gives you that generic interface without the hard dependency.

In full .NET Framework you can use the following code:

var dbProvider = DbProviderFactories.GetFactory("System.Data.SqLite")

using(var connection = dbProvider.CreateConnection()) {
    connection.ConnectionString = ConnectionString;
    
    var cmd = dbProvider.CreateCommand("select * from customers",connection);
    connection.Connection.Open();
    cmd.CommandText = sql;
    var reader = cmd.ExecuteReader();
    // ... off you go 
    reader.Close();
}

If you want to use a different provider - you simply switch the first line and provide a different provider:

var dbProvider = DbProviderFactories.GetFactory("System.Data.MySql")

and the rest still works because the DbProviderFactory exposes most of the ADO.NET data objects via CreateXXXX() methods:

Figure 1 - DbProviderFactory gives access to ADO.NET objects in a provider independent way.

The important thing is that the code in the data access library doesn't have to take a dependency on System.Data.SqLite or System.Data.MySql, which makes it very clean for a data access library to not have dependencies on all the individual libraries it supports.

This all has always worked just fine on full framework.

.NET Standard/.NET Core 2.0 - Where's my DbProviderFactories, Dude?

Unfortunately in .NET Core/Standard 2.0, the DbProviderFactories class is completely missing, and there's no direct replacement available to dynamically retrieve a DbProviderFactory dynamically based on a provider name.

I don't know what the .NET team was thinking to leave this important feature out.

A hint of why can be found in the DbProviderFactories class interface which includes the GetFactoryClasses() method that returns a DataTable of available providers. There's also the issue that .NET Core doesn't have a concept of registered providers (nor classic .config configuration support), so there's no central repository that gives access to the providers.

I would be perfectly happy if there was only a replacement for the crucial GetFactory() function - everything else can be worked around relatively easily or accommodated by some simple up front requirements like the parent application has to have the dependency added.

GetFactory() retrieves a DbProviderFactory using a provider name string. It assumes the provider library is registered and its assembly is loaded, so (on full .NET at least) you can just load the provider. In .NET Core the only direct alternative is to take a hard dependency on the data provider and access the factory's .Instance property directly.

For example here's the SqLite instance:

var dbProvider = Microsoft.Data.Sqlite.SqliteFactory.Instance;

This works - but of course this requires that you have a reference to the Microsoft.Data.SqLite assembly. That's no good.

So, the alternatives I thought about:

  • Eat it and take the dependency
    I could just add the dependencies for supported providers to my generic library, but that would seriously suck because now anybody using this 'generic' library that otherwise doesn't have any dependencies, now would have many dependencies. It also wouldn't allow for just any provider to work.

  • Add specialized assemblies for each provider implementation
    One pretty common approach is to add specialized versions of the library that make the provider specific dependencies available. Something along the lines of: Westwind.Globalization.SqLite or Westwind.Globalization.MySql etc. That would work fine, but it's an administrative nightmare as each provider requires a separate project and a separate set of dependencies to keep in sync. On the plus side it guarantees the right providers are always available and loaded.

  • Hack it using Reflection
    You can also use Reflection to dynamically access (and load if necessary) the providers. This makes the assumption that the top level application (or one of its dependencies higher up the stack) have added the required provider assemblies (ie. System.Data.SqLite or Microsoft.Data.SqLite) to the application. It also limits me to a known set of providers that I have to know about up front. Uncool but better than nothing.

Hacking it

As you might guess my chosen alternative is the last one which uses Reflection to dynamically instantiate various known providers. This code actually ended up in Westwind.Utilities and the DataUtils class as a helper function.

For now I added the providers I immediately needed to work with, but I suppose a few additional common providers might be useful as well.

What I implemented are essentially three static methods that return a provider factory:

  • GetDbProviderFactory(string dbProviderFactoryTypename, string assemblyName)
  • GetDbProviderFactory(DataAccessProviderTypes type)
  • GetDbProviderFactory(string providerName)

Of those three, the first one is the one that does all the work of using Reflection to try and retrieve a provider based on a type name and assembly name:

public static DbProviderFactory GetDbProviderFactory(string dbProviderFactoryTypename, string assemblyName)
{
    var instance = ReflectionUtils.GetStaticProperty(dbProviderFactoryTypename, "Instance");
    if (instance == null)
    {
        var a = ReflectionUtils.LoadAssembly(assemblyName);
        if (a != null)
            instance = ReflectionUtils.GetStaticProperty(dbProviderFactoryTypename, "Instance");
    }

    if (instance == null)
        throw new InvalidOperationException(string.Format(Resources.UnableToRetrieveDbProviderFactoryForm, dbProviderFactoryTypename));

    return instance as DbProviderFactory;
}

The code does three things:

  • Tries to load the Instance property of a provider accessed dynamically
  • If not found, it tries to load the providers assembly
  • Tries again to load the Instance property

It's ugly, but it works. The assembly load gets around the problem of the provider assembly not having been loaded yet. Unlike classic ASP.NET, ASP.NET Core does not pre-load assemblies when the application starts, so unless the provider was otherwise used previously, the type is not going to be available. Loading the assembly explicitly loads the necessary dependencies and should then work. Incidentally this also works for full framework .NET and allows loading of providers without having to register the DbProvider in the .config file.

The code uses a ReflectionUtils helper class from WestWind.Utilities to help with accessing the static property and loading the assembly, which keeps this code simple.
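If you're curious what those helpers do (or don't want to pull in Westwind.Utilities), they boil down to plain reflection along these lines - a simplified sketch without the extra error handling the real implementations have:

// roughly what the helpers do (System.Reflection, System.Linq)
static object GetStaticProperty(string typeName, string propertyName)
{
    // look for the type in any assembly that's currently loaded
    var type = AppDomain.CurrentDomain.GetAssemblies()
                        .Select(a => a.GetType(typeName, false))
                        .FirstOrDefault(t => t != null);

    return type?.GetProperty(propertyName, BindingFlags.Public | BindingFlags.Static)
               ?.GetValue(null);
}

static Assembly LoadAssembly(string assemblyName)
{
    try   { return Assembly.Load(new AssemblyName(assemblyName)); }
    catch { return null; }
}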

Other Helpers

The above method is the low level interface that requires you to know which assembly and class to load. To make that a little easier I added a couple of additional methods that provide easy access via an enum, or support for the old provider name syntax where applicable.

Since I'm already usurping the existing API, the first version uses a few known provider types through an Enum to specify a provider. This certainly is a lot easier than trying to run around looking for the right provider name to use which was always a pain in the ass since there's no consistency across providers there. You can look at the implementation in the helper class.

Here's the Enum:

public enum DataAccessProviderTypes
{
    SqlServer,
    SqLite,
    MySql,
    PostgreSql,
#if NETFULL    
    OleDb,
    SqlServerCompact
#endif    
}

The first of the two methods just uses this enum to retrieve a specific DbProviderFactory instance. Since the provider types and assemblies are pretty much fixed this method is mostly a map to a few well-known providers:

public static DbProviderFactory GetDbProviderFactory(DataAccessProviderTypes type)
{
    if (type == DataAccessProviderTypes.SqlServer)
        return SqlClientFactory.Instance; // this library has a ref to SqlClient so this works

    if (type == DataAccessProviderTypes.SqLite)
    {
#if NETFULL
        return GetDbProviderFactory("System.Data.SQLite.SQLiteFactory", "System.Data.SQLite");
#else
        return GetDbProviderFactory("Microsoft.Data.Sqlite.SqliteFactory", "Microsoft.Data.Sqlite");
#endif
    }
    if (type == DataAccessProviderTypes.MySql)
        return GetDbProviderFactory("MySql.Data.MySqlClient.MySqlClientFactory", "MySql.Data");
    if (type == DataAccessProviderTypes.PostgreSql)
        return GetDbProviderFactory("Npgsql.NpgsqlFactory", "Npgsql");
#if NETFULL
    if (type == DataAccessProviderTypes.OleDb)
        return System.Data.OleDb.OleDbFactory.Instance;
    if (type == DataAccessProviderTypes.SqlServerCompact)
        return DbProviderFactories.GetFactory("System.Data.SqlServerCe.4.0");                
#endif

    throw new NotSupportedException(string.Format(Resources.UnsupportedProviderFactory,type.ToString()));
}

To provide some semblance of backwards compatibility, especially for full framework I also provide an overload for the old provider names. For full framework this method also allows loading of any provider using the provider name just as you could before, while in .NET Standard/Core only the supported providers work.

public static DbProviderFactory GetDbProviderFactory(string providerName)
{
#if NETFULL
    return DbProviderFactories.GetFactory(providerName);
#else
    var providername = providerName.ToLower();

    if (providername == "system.data.sqlclient")
        return GetDbProviderFactory(DataAccessProviderTypes.SqlServer);
    if (providername == "system.data.sqlite" || providername == "microsoft.data.sqlite")
        return GetDbProviderFactory(DataAccessProviderTypes.SqLite);
    if (providername == "mysql.data.mysqlclient" || providername == "mysql.data")
        return GetDbProviderFactory(DataAccessProviderTypes.MySql);
    if (providername == "npgsql")
        return GetDbProviderFactory(DataAccessProviderTypes.PostgreSql);

    throw new NotSupportedException(string.Format(Resources.UnsupportedProviderFactory,providerName));
#endif
}

Note in order for this last method to work with full framework, any providers have to be registered in the .config file.

For now I just added a few 'known' providers that I actually needed to work with. It's easy enough to add additional providers (Postgres, OleDb etc.)
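Using the helper then looks just like the original DbProviderFactories based code, only going through DataUtils instead. The connection string and table name here are just examples:

var dbProvider = DataUtils.GetDbProviderFactory(DataAccessProviderTypes.SqLite);

using (var connection = dbProvider.CreateConnection())
{
    connection.ConnectionString = "Data Source=./AlbumViewerData.sqlite";   // example connection string
    connection.Open();

    var cmd = dbProvider.CreateCommand();
    cmd.Connection = connection;
    cmd.CommandText = "select * from Albums";    // example table

    var reader = cmd.ExecuteReader();
    // ... off you go
}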

Mixed Targets

As you can see the code uses a bunch of framework specific compiler directives that perform different tasks depending on which platform the code compiles against. For example:

if (type == DataAccessProviderTypes.SqLite)
{
#if NETFULL
    //return DbProviderFactories.GetFactory("System.Data.Sqlite");
    return GetDbProviderFactory("System.Data.SQLite.SQLiteFactory", "System.Data.SQLite");
#else
    return GetDbProviderFactory("Microsoft.Data.Sqlite.SqliteFactory", "Microsoft.Data.Sqlite");
#endif
}

NETFULL isn't a default compiler constant provided by the C# compiler or the default project setup. Rather, I explicitly define this constant myself as part of my .NET SDK project.

A few definitions I tend to always create in multi-targeted projects:

  • NETFULL - full framework
  • NETSTANDARD and NETCORE - .NET Core/Standard (mostly used interchangeably)

These are defined as compiler constants in the .csproj file:

<PropertyGroup Condition=" '$(TargetFramework)' == 'netstandard2.0'">
  <DefineConstants>NETSTANDARD2_0;NETCORE;NETSTANDARD;</DefineConstants>
</PropertyGroup>

<PropertyGroup Condition=" '$(TargetFramework)' == 'net45'">
  <DefineConstants>NET45;NETFULL</DefineConstants>
</PropertyGroup>

Note that Visual Studio and dotnet new automatically define constants for the explicit framework targets you use (NET45 and NETSTANDARD2_0). But I find the higher level distinction between full framework and .NET Standard/Core is usually the more useful way to differentiate.

SqLite Differences

One thing to watch out for is that Microsoft has created its own .NET Standard Microsoft.Data.Sqlite implementation, which behaves quite differently from the System.Data.SQLite provider, which in turn hasn't been updated to support .NET Core/Standard. This means that different libraries are used for full framework (System.Data.SQLite) vs. .NET Standard (Microsoft.Data.Sqlite).

Not only are the packages and assemblies different, but the Microsoft provider is much more low level than the old full framework driver from the SQLite team. The Microsoft provider only exposes the minimal types that SQLite supports, so for example dates are returned as ISO strings rather than being automatically converted to dates.

This can bite you in unexpected ways. I ran into it with generic data-to-object mapping from a data reader, where the code would blow up because the mapper expected a date and got a string instead.

On my Way

The code in this post is all I needed to get things working, but I am still baffled why Microsoft decided to not include some way to load a provider factory dynamically - it's a key requirement for just about any data access component that wants to access multiple providers without having to resort to messy workarounds.

The solution I provide here is a yucky helper, but it gets the job done and I was able to make it work in Westwind.Globalization. It's a shame that it took this much discovery effort to make it happen. Here's to hoping Microsoft changes its mind and either provides an implementation of DbProviderFactories.GetFactory() or some other API that makes it possible to load a provider more generically.

Resources

© Rick Strahl, West Wind Technologies, 2005-2017
Posted in .NET Core   ADO.NET  

Easy Configuration Binding in ASP.NET Core - revisited


A long while back I wrote a detailed and still relevant post that discusses ASP.NET Core's new configuration model and binding of configuration values to .NET types. In it I discussed the configuration system and specifically how to set up configuration injection using IOptions<T>.

I really like the new model, which is much more flexible than the old, static ConfigurationManager in full framework because it provides strong typing on configuration settings out of the box. In the past I'd been using my own Westwind.Utilities.Configuration setup from Westwind.Utilities that provided much of the same functionality (interchangeable providers) - with .NET Core there's no need for a third (or for me first) party solution as the in-the-box implementation provides most of those same features. Nice.

In the process of setting up a new application over the weekend I stumbled across an even simpler and - to me at least, cleaner - approach to configuring and injecting configuration into an application without using IOptions<T>.

Let's take a look.

Create your Configuration Object

ASP.NET Core's configuration system allows binding object properties to a series of providers. By default there's a JSON provider that looks at appsettings.json file, environment variables and the UserSecrets store. The config system can bind values from all these providers (and any others you might add) into a typed configuration object which can even include nested sub-objects.

I'm working on updating my blog to .NET Core - it's time: The blog is 12+ years old and still running WebForms. For that app the beginnings of my configuration object look like this:

public class WeblogConfiguration
{
    public string ApplicationName { get; set; }
    public string ApplicationBasePath { get; set; } = "/";
    public int PostPageSize { get; set; } = 10000;
    public int HomePagePostCount { get; set; } = 30;
    public string PayPalEmail { get; set; }
    public EmailConfiguration Email { get; set; } = new EmailConfiguration();
}

public class EmailConfiguration
{
    public string MailServer { get; set; }
    public string MailServerUsername { get; set; }
    public string MailServerPassword { get; set; }
    public string SenderName { get; set; }
    public string SenderEmail { get; set; }
    public string AdminSenderEmail { get; set; }
}

Note that you can easily nest the configuration objects which helps organizing complex configuration settings into easily segregated blocks. Here I separate out the email settings into a separate nested class.

I tend to use appsettings.json for most settings, and then use either user secrets for dev (so the values don't get shared to source control) or environment variables in production to feed in the sensitive values like passwords. Here's the relevant appsettings.json that has all the fields from my configuration mapped to a Weblog property key:

{"Logging": { ... }
  },"Weblog":
  {"ApplicationName": "Rick Strahl's WebLog (local)""ApplicationBasePath": "/","ConnectionString": "server=.;database=WeblogCore;integrated security=true;MultipleActiveResultSets=True","PostPageSize": 7600,"HomePagePostCount": 25,"Email": {"MailServer": "mail.site.com","MailServerUsername": "name@site.com","MailServerPassword": "nicetry","SenderEmail": "admin@site.com","SenderName": "West Wind Weblog","AdminSenderEmail": "admin Administration"
    }
  }
}

Inject in Startup.cs

To start we need an IConfiguration instance which is the configuration root object. As of .NET Core 2.0 IConfiguration is a default service that gets injected automatically without adding any explicit Configuration services. The default .NET Core template now provides IConfiguration in the Startup constructor:

public Startup(IConfiguration configuration)
{
    Configuration = configuration;
}
public IConfiguration Configuration { get; }

So here's the part that isn't written about much: you can easily bind a configuration instance (or interface) explicitly without having to go through the IOptions<T> interface.

Instead you can simply do the following in ConfigureServices():

services.AddConfiguration();  // enable Configuration Services

var config = new WeblogConfiguration();
Configuration.Bind("Weblog", config);
services.AddSingleton(config);

This provides you with a filled config object instance that has values set from the various configuration stores. I can take the configured object and add it to the DI provider as a singleton. From then on I can inject the configured WeblogConfiguration instance into other components or views.

Injecting the configuration singleton added above is now simply a matter of requesting WeblogConfiguration in the constructor of any class that needs it:

public class PostsController : Controller
{
    PostRepository PostRepo { get; }
    WeblogConfiguration Config  { get; }
    public PostsController(PostRepository postRepo, 
                           WeblogConfiguration config)
    {
        PostRepo = postRepo;
        Config = config;
    }
    ...
}

Likewise the repository constructor also receives an instance of the configuration object:

public PostRepository(WeblogContext context, 
                      WeblogConfiguration config) : base(context)

I much prefer this over injecting IOptions<T> because it's more direct and specifies the actual dependency that's needed by the components, plus it's easier to set up tests that now don't have to get an instance of IOptions<T> from somewhere.

Compare to using IOptions<T>

Just so you know what I'm talking about when I say IOptions<T> implementation: Here's an example of how to set up the same behavior using IOptions<T> instead of the configuration singleton.

services.AddOptions();

var section = Configuration.GetSection("Weblog");
services.Configure<WeblogConfiguration>(section);

You can then inject into a controller's constructor like this:

public PostsController(PostRepository postRepo, 
                       IOptions<WeblogConfiguration> options)
{
    PostRepo = postRepo;
    Config = options.Value;  // note the indirect reference :-(
}

Obviously this isn't any more complicated, but it does require an extra layer of abstraction that doesn't really add any value. IOptions<T> is just that - an abstraction wrapper without any real feature benefits.

For application level code this is perfectly fine, but if I do this for my repository, which lives in a separate business project independent of the main application:

public PostRepository(WeblogContext context, 
                      IOptions<WeblogConfiguration> config) : base(context)

you now have a dependency on IOptions<T> there as well, along with having to provide an IOptions<T> implementation in order to test the component. It's a lot easier to just create an instance and stuff customized values into it when needed, as the sketch below shows.
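
To make that concrete, here's a rough sketch of what the two test setups look like side by side. This assumes a WeblogContext instance is available from your test fixture; Options.Create() comes from the Microsoft.Extensions.Options package.

// Plain configuration object: new it up and set only what the test needs
var config = new WeblogConfiguration { PostPageSize = 5 };
var repo = new PostRepository(context, config);

// IOptions<T> version: you first have to manufacture an IOptions<T> wrapper
var options = Options.Create(new WeblogConfiguration { PostPageSize = 5 });
var optionsRepo = new PostRepository(context, options);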

IOptionsSnapshot can reload changed Config Settings

One advantage to using IOptions<T> or more specifically IOptionsSnapshot is that it can detect changes to the configuration source and reload configuration as the application is running.
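
Here's a minimal sketch of what that looks like, assuming the Weblog section was registered via services.Configure<WeblogConfiguration>() as shown above; the controller is just an example:

public class SettingsController : Controller
{
    private readonly WeblogConfiguration Config;

    public SettingsController(IOptionsSnapshot<WeblogConfiguration> options)
    {
        // .Value is re-bound from the configuration sources per request,
        // so edits to appsettings.json show up without restarting the app
        // (provided the JSON provider was added with reloadOnChange enabled)
        Config = options.Value;
    }

    public IActionResult Index() => Content(Config.ApplicationName);
}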

Plain Configuration Binding

For completeness' sake, note that you can also retrieve string values by configuration path using the Configuration object's indexer.

var connectionString = Configuration["Weblog:ConnectionString"];
var mailServer = Configuration["Weblog:Email:MailServer"];

To get access to the Configuration object in your code you can inject IConfiguration - as of .NET Core 2.0 it is automatically registered in the DI container of every ASP.NET Core application.
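
For example, here's a quick sketch of injecting IConfiguration into a controller and reading values by path (the controller is hypothetical):

public class InfoController : Controller
{
    private readonly IConfiguration Configuration;

    public InfoController(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IActionResult MailServer()
    {
        // read individual values by their configuration path
        var mailServer = Configuration["Weblog:Email:MailServer"];
        return Content(mailServer ?? "not configured");
    }
}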

Static Instance?

As many of you know I'm not the biggest fan of DI for all things. I think there are a few things - and especially Configuration - that need to be easily and universally accessible in all situations. Especially in older versions of .NET Core it was painful to get at the configuration objects say inside of business or a system level component since configuration wasn't automatically injected by default. That's been fixed, but it can still be difficult to get access to the DI service context in some cases.

If you're building a generic component that needs to be reused in many different environments, there's no guarantee that DI is available and configured. Configuration is a critical component that is often needed deep in the bowels of other components, or in necessarily static logic, where it's not easy to get access to DI injected components. It never made sense to me to force DI onto simple few-line helpers that have no other dependencies and work perfectly well as static functions.

Long story short: It is sometimes useful to be able to get at a static instance of configuration and while I try to avoid introducing singleton statics like this in most cases, I think configuration is one of those cases where it makes sense (at least to me).

So, I decided to create my Configuration instance with a static property that holds the Current instance:

 public class WeblogConfiguration
 {
    public static WeblogConfiguration Current;

    public WeblogConfiguration()
    {
        Current = this;
    }
}

Since the configuration object is a singleton anyway, the Current property is implicitly set only once, when the code in ConfigureServices() initially binds the config instance. After that you can use DI whenever possible - i.e. most cases - and the static property in those few special cases when it's difficult to get access to the DI context.

In my Weblog for example, I'm copying over a lot of old helper code from the old application and there are static function helpers that generate a bunch of small HTML bits like this:

public static HtmlString ShareOnFacebook(string url)
{
    var baseUrl = WeblogConfiguration.Current.ApplicationBasePath;
    string link =
$@"<a href=""https://www.facebook.com/sharer/sharer.php?u={url}&display=popup"" target=""_blank""><img src=""{baseUrl}images/shareonfacebook.png"" style=""height: 20px;padding: 0;"" /></a>";

    return new HtmlString(link);
}

The static property makes this code work easily without having to refactor all of the ported functions in this class to a (pointless) class wrapper.

This gives me the best of both worlds: I get the ability to inject configuration where I can get at the DI context (just about all new code), and use the static in the few cases when DI is not easily available (legacy code).

Many Options - Choice is good

In ASP.NET Core there are many ways to handle configuration and that's a good thing. As I mentioned I ditched my own custom solution in favor of the new configuration system in .NET Core which is good enough out of the box. The only thing I'm really missing is an easy way to update configuration stores and force initial values to be written, but overall that's a minor concern and doesn't make sense for all the supported providers (how do you update Environment variables loaded from a startup script, for example?).

The good news is that the configuration engine is very flexible and provides a number of different ways to get at configuration objects.

  • Raw Configuration[key]
  • IOptions<T> binding to a Config Section
  • Configuration.Bind() to bind to a Config Section
  • Static Props if you want it

IOptions<T> is what the Microsoft documentation recommends, but I've shown another way - directly binding and injecting a configuration instance - that to me is cleaner to work with.

Resources

this post created and published with Markdown Monster
© Rick Strahl, West Wind Technologies, 2005-2017
Posted in ASP.NET Core   .NET Core  

Code Magazine Article: Securing IIS Web Sites with Let’s Encrypt Certificates


I'm happy to point at my new CoDe Magazine article Securing IIS Web Sites with Let's Encrypt Certificates which is in the January/February issue:

I've written a few times about Let's Encrypt, which is an open source platform and protocol that provides free TLS certificates along with an API to facilitate automation of the certificate generation process. Additional tools provided by third parties then provide support features that make it drop dead simple to automatically create certificates and install them into Web Servers of choice.

This article summarizes the hows and whys of Let's Encrypt and provides a quick start on how you can use Let's Encrypt with standalone IIS servers. There really is no excuse any more for running even that small, public facing hobby site without TLS.

Go check it out and while you're at it check out the rest of this issue of CODE Magazine.

© Rick Strahl, West Wind Technologies, 2005-2017
Posted in Security  

Flexing your HTML Layout Muscles with Flexbox

Flexbox is a CSS based technology that makes it much easier to create structured layouts with HTML and CSS. Based on a containership hierarchy, Flexbox combines the structured features of tables with the free form layout capabilities of arbitrary HTML elements, making it possible to create complex, yet flexible HTML designs much more easily than was previously possible. My article in CoDe Magazine describes the reasons for Flexbox, the basics of operation and a few practical examples you can use today to put Flexbox to use.

Distributing Content and Showing a ReadMe file in a .NET Core Nuget Package


When you use NuGet with the new .NET SDK project format, NuGet packages can no longer deploy content into the target project. In classic projects and full framework projects, you could add a content folder to your NuGet package and NuGet would install that content into the project's root folder.

There are good reasons why this change and removal happened:

  • Bad package etiquette - polluting projects with extra files
  • NuGet wouldn't remove content added through the package (since it can be changed)
  • Keeping content updated and versioned along with the package is a pain

Nevertheless, I have one package - Westwind.Globalization.Web - where having Content shipped as part of the package is very useful. West Wind Globalization's Web components include a Localization Administration UI, and the UI's HTML, CSS and script were previously shipped as Content in the old NuGet package.

This still works for the full framework package:

Figure 1 - Full framework packages still support Content folders that are expanded when installed

When I recently ported this library to ASP.NET Core - Westwind.Globalization.AspNetCore - I found out that I can no longer ship my Localization Admin UI via Content bundling inside of the NuGet package as the new .NET SDK projects that are required for .NET Core/Standard development no longer load the content.

What used to work in Classic Projects

NuGet packages for Full Framework projects can still package Content and Tools folders.

Content and Tools Folders do not work in .NET SDK Projects

Just keep in mind that the following sections apply only to full framework projects. There is no longer support for these in the new .NET SDK projects, and while you can have the folders, they are ignored.

Content

The Content folder can hold arbitrary content that's dumped into the project's root folder. You can also use limited templating to expand text expressions like the project name, default namespace and class names inside of text documents.

Tools

Additionally you can also put a Powershell script into a Tools folder and Visual Studio will execute that install.ps1 script. The script has access to the Visual Studio IDE COM objects and with that you can bring up a Web browser window inside of Visual Studio to display more information, or open an external browser to show more information.

In lieu of embedding content directly this is the next best alternative. The Newtonsoft.Json package does just this, and you can see the result when you install the package:

Figure 2 - Newtonsoft.JSON is an example of a post-installer that displays a Web Page

As you can see automating Visual Studio from Powershell is a sucky affair, but it works, although only for full framework.

.NET SDK Projects - No more Content and Tools

So in .NET Core/Standard projects, which only support the new .NET SDK style project format, content or tools can no longer be distributed as part of a NuGet package. Well, you can distribute them, but they won't get installed.

As you probably know by now, .NET SDK projects can optionally build a NuGet package as part of the project compilation process:

Figure 3 - .NET SDK packages now allow you to generate a NuGet package for each target platform supported by the project.

By default the package picks up the output binaries and xml doc files (and optionally pdb files) for the library for each of the targets defined, which is an awesome feature if you've ever built multi-targeted projects with classic .csproj projects. For multi-targeted projects, the process of creating output and a NuGet package is drastically easier than the myriad of steps required in classic projects.

Here's an example of a multi-targeted NuGet package of Westwind.Globalization which supports .NET 4.5+ and .NET Standard:

Figure 4 - Multi-targeting in NuGet Packages from project build output is drop dead simple.

But - no content.

Externalizing the Content

So for my ASP.NET Core Westwind.Globalization.AspNetCore package I can no longer distribute the LocalizationAdmin folder as part of the package. Instead I opted for putting the content into my GitHub repo and offering it as a downloadable Zip file with instructions on how to install it.

Package the Zip and Tag in Git

My build process optionally regenerates this content for each release, and it gets tagged with the Git version tag applied to each release. This allows matching up NuGet releases to a specific version of the content zip file.

RTFM

Instructions in the Getting Started guide are usually not enough - people still come back and ask where the Localization UI dependencies can be found. Yeah it's in the install instructions, but we all know we often only skim those instructions and I'm as guilty of that as the next guy. Sigh.

In your Face: Show me the Instructions

In the end, the goal for the NuGet package is to display some sort of information to make it obvious that you still need to download the zip file if you want to use the Administration UI.

Embedding a Readme.txt File Works!

.NET SDK projects do support embedding of a readme.txt file in the root folder of the package. By making a special entry into the .csproj file you can specify that the readme.txt file gets embedded into the package, and more importantly, the file gets displayed by Visual Studio in a tab:

Figure 5 - Getting a readme.txt to display for a package is fairly easy.

The readme.txt file displays only for top level packages - if the package is referenced as a dependent package the readme doesn't display. If it's a dependency the host package needs to handle the display of any messages necessary.

To get the readme.txt file into the project:

  • Add a readme.txt file into the project root
  • Add a file inclusion entry with a pack="true" attribute into the .csproj

Use this in the .csproj file to include the readme.txt:

<ItemGroup>
  <None Include="readme.txt" pack="true" PackagePath="." />
</ItemGroup>

Figure 6 - Embedding the readme.txt involves a custom entry in the .csproj file.

Summary

While it's a bummer that Content no longer works, I can see how removing that feature was probably a good idea. Even when I had my localization UI in the package for full framework, there were issues where package updates wouldn't update the files that were already there. I had to delete the files, then update the package, to make sure I got the latest.

At least with this approach the content is explicit, and the user installing it can choose whether to keep the old files or install the new ones. The install process is a bit tedious - download and unzip - and you have to do it each time the package updates (if there are updates to the UI), but it's not unreasonable to expect this.

It would be really nice if you could use a readme.html or readme.md instead to make the content look a little more interesting, but then again that's a potential security issue - arbitrary HTML could be used to mine information from anybody who installs a package.

The readme.txt is a minimalistic compromise and with the clickable links it's reasonably easy to download the files and link to richer information as needed.

Good to go!

this post created and published with Markdown Monster
© Rick Strahl, West Wind Technologies, 2005-2018
Posted in NuGet   .NET Core  

Accessing Configuration in .NET Core Test Projects


If you've been following my blog you know I've written a bit about how the configuration system works in .NET Core and specifically in ASP.NET Core using the dependency injection system:

Both posts describe how you can set up configuration using the various now automatically configured Configuration services in the ASP.NET Startup class and its ConfigureServices() method.

But how do you do this in a Test or any other ASP.NET Core project where configuration isn't automatically configured?

Why Configuration? My simple Use Case - Sensitive Values

When running test projects it's often possible to get away without having to configure a configuration class and just provide explicit values. But often you actually need full dependency injection to be hooked up in order to get Configuration injected into dependencies which brings up two issues:

  • How to get access to the Configuration Provider
  • Hooking up Dependency Injection so Configuration can be injected

My first use case is the simple one and doesn't require dependency injection: I simply need configuration to read some settings in order to test sending an email. I explicitly want to avoid hardcoding the sensitive email values so they don't end up in my Git repo. So it would be nice to use UserSecrets as well as get the values from the already existing configuration in appsettings.json - the same configuration used by the Web application that actually runs this code.

The second scenario involves using a business object that also uses this email sending logic in an integration test. Here the configuration object is injected into the business object so I need to have dependency injection available.

Let's take a look at both of these scenarios.

IConfiguration in non-ASP.NET Projects

ASP.NET Core 2.0 now automatically provides an IConfiguration provider that handles input from appsettings.json (including the .Development file), UserSecrets and environment variables, which is great. Configuration is such a core thing that almost every application needs it, and with ASP.NET Core 2.0 you don't have to worry about setting up the configuration system manually.

However in a test project that onus falls on you. Unfortunately it's not quite so easy to do this as in ASP.NET because test projects don't automatically configure either a Dependency injection container with common objects, or a configuration provider, so this has to be handled manually.

Fortunately the process to do this is pretty straight forward.

Setting up and Retrieving a Raw Configuration Object

In my test projects I generally add a TestHelper class that provides a few commonly used values, but I also add a few helper methods, and one of the methods I typically create is GetApplicationConfiguration(). In this application I have a configuration class called KavaDocsConfiguration which contains a bunch of values along with a nested Email object that holds the configuration values I need for my mail test code.

Here's what my configuration in appsettings.json looks like:

{"Logging": {...},"KavaDocs": {"ApplicationName": "KavaDocs","ConnectionString": null,    "ApplicationBasePath": "/","ApplicationHomeUrl": "https://localhost:5000","Email": {"MailServer": null,"MailServerUsername": null,"MailServerPassword": null,"SenderName": "Kava Docs Administration","SenderEmail": "support@kavadocs.com","AdminSenderEmail": "support@kavadocs.com","UseSsl": true
    }
  }
}

To access the configuration I have to build an IConfigurationRoot explicitly, which is the part that ASP.NET normally handles for you. Once I have the config root I can then bind it to an object instance.

Here are a couple of helpers that configure configuration root and provide an instance of a configuration object - we'll use both of these methods for different purposes later:

public static IConfigurationRoot GetIConfigurationRoot(string outputPath)
{            
    return new ConfigurationBuilder()
        .SetBasePath(outputPath)
        .AddJsonFile("appsettings.json", optional: true)
        .AddUserSecrets("e3dfcccf-0cb3-423a-b302-e3e92e95c128")
        .AddEnvironmentVariables()
        .Build();
}

public static KavaDocsConfiguration GetApplicationConfiguration(string outputPath)
{
    var configuration = new KavaDocsConfiguration();

    var iConfig = GetIConfigurationRoot(outputPath);

    iConfig
        .GetSection("KavaDocs")
        .Bind(configuration);

    return configuration;
}

Notice that the code needs a basepath in order to find the appsettings.json file which is going to be the output path for the file in the test project. I copied this file from my Web Project so I get the same configuration settings and then make sure I mark it as copy to the output folder:

In order for UserSecrets to work in a test project a little extra effort is required, since test projects don't let you just edit the value in Visual Studio as you can in a Web project. I added my UserSecrets key from the Web project into the test project's .csproj file:

<PropertyGroup>
  <TargetFramework>netcoreapp2.0</TargetFramework>
  <UserSecretsId>e4ddcccf-0cb3-423a-b302-e3e92e95c128</UserSecretsId>
</PropertyGroup>

so now I'm also looking at the same UserSecrets values that my Web project is looking at. Yay!

Using the Configuration Object Explicitly

In my test project using NUnit I can now pull this value out as part of the initialization and store it as a property on my test object:

[TestFixture]
public class SmtpTests
{
    private KavaDocsConfiguration configuration;

    [SetUp]
    public void Init()
    {
        configuration = TestHelper.GetApplicationConfiguration(TestContext.CurrentContext.TestDirectory);
    }
    
    [Test]
    public async Task SendEmailTest()
    {
        var smtp = new SmtpClientNative();

        // this code here uses the configuration
        smtp.MailServer = configuration.Email.MailServer;
        smtp.Username = configuration.Email.MailServerUsername; 
        smtp.Password = configuration.Email.MailServerPassword; 

        smtp.SenderEmail = "West Wind Technologies <info@west-wind.com>";
        smtp.Recipient = "test@gmail.com";

        smtp.Message = "Hello from Mail Gun. This is a test";
        smtp.Subject = "Mailgun Test Message";

        Assert.IsTrue(await smtp.SendMailAsync(),smtp.ErrorMessage);
    }

}    

The test method then uses the configuration values and I'm off to the races. The values are read from both appSettings.json and from UserSecrets.

This works great and if all you need is a configuration object to read a few values this approach is easy and sufficient.


Setting up Dependency Injection

For the second use case I mentioned in the intro I need Configuration to come from dependency injection in order to inject it into child objects in the dependency chain. To do this I need to do a little more work setting up the dependency provider in the test. The business object in this case has dependencies on an EF DbContext as well as the configuration.

In order to do this I can set up the dependency injection in the initialization of the class:

public class SmtpTests 
{
    private ServiceProvider serviceProvider;
    private KavaDocsConfiguration configuration;        
    private UserBusiness userBusiness;
    [SetUp]
    public void Init()
    {
       configuration = TestHelper.GetApplicationConfiguration(TestContext.CurrentContext.TestDirectory);
       var services = new ServiceCollection();
       // Simple configuration object injection (no IOptions<T>)
       services.AddSingleton(configuration);
       // configure EF Core DbContext - using the configuration
       services.AddDbContext<KavaDocsContext>(builder =>
       {
           var connStr = configuration.ConnectionString;
           if (string.IsNullOrEmpty(connStr))
               connStr = "server=.;database=KavaDocs; integrated security=true;MultipleActiveResultSets=true";
           builder.UseSqlServer(connStr, opt =>
           {
               opt.EnableRetryOnFailure();
               opt.CommandTimeout(15);
           });
       });
       // has a dependency on DbContext and Configuration
       services.AddTransient<UserBusiness>();
       // Build the service provider
       serviceProvider = services.BuildServiceProvider();
       // create a userBusiness object with DI    
       userBusiness = serviceProvider.GetRequiredService<UserBusiness>();
    }
}

The code creates a services collection and adds the various dependencies needed for this particular test class. If you end up doing this for a bunch of classes this configuration code could also be moved into the test helper which could return an object with all the dependencies.

So this code adds the configuration, a DbContext, and a business object into the service provider.
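
If you go that route, a helper along these lines keeps the individual test fixtures small. This is just a sketch - the method name and the simplified DbContext setup are placeholders for whatever your tests actually need:

public static ServiceProvider GetServiceProvider(string outputPath)
{
    var configuration = GetApplicationConfiguration(outputPath);

    var services = new ServiceCollection();

    // raw configuration object available for injection
    services.AddSingleton(configuration);

    // EF Core DbContext using the configured connection string
    services.AddDbContext<KavaDocsContext>(builder =>
        builder.UseSqlServer(configuration.ConnectionString));

    // business object that depends on the DbContext and configuration
    services.AddTransient<UserBusiness>();

    return services.BuildServiceProvider();
}

Then the test's Init() collapses to:

serviceProvider = TestHelper.GetServiceProvider(TestContext.CurrentContext.TestDirectory);
userBusiness = serviceProvider.GetRequiredService<UserBusiness>();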

In my business object I have a method that handles the email sending I showed earlier internally and I can now load a user and run an integration test sending a validation key:

[Test]
public void UserSendEmail()
{
    // connection string should be set from config
    var user = userBusiness.GetUser(TestHelper.UserId1);
    var validationKey = user.ValidationKey;

    Assert.IsTrue(userBusiness.ValidateEmail(validationKey));
}

And there you have it - injected values in your tests.

IOptions instead of raw Configuration

If you'd rather inject IOptions<T> rather than the raw configuration instance you can change the Init() code slightly and use the following:

var services = new ServiceCollection();

// IOption configuration injection
services.AddOptions();

var configurationRoot = TestHelper.GetIConfigurationRoot(TestContext.CurrentContext.TestDirectory);
services.Configure<KavaDocsConfiguration>(configurationRoot.GetSection("KavaDocs"));
...

serviceProvider = services.BuildServiceProvider();

// to use (or store on the test class)
var iConfig = serviceProvider.GetRequiredService<IOptions<KavaDocsConfiguration>>();
var server = iConfig.Value.Email.MailServer;

Usually I try to avoid IOptions<T> for the sheer ugliness of the intermediate interface, and unless you need the specific features of IOptions<T> (see my previous article) I'd much rather just use the raw configuration object.

Summary

Using Configuration in non-ASP.NET projects is not really obvious - or at least it wasn't for me - so hopefully this post provides a simple overview of how you can get the same configuration you might be using in your main application to also work inside of your test or other non-ASP.NET projects.

this post created and published with Markdown Monster
© Rick Strahl, West Wind Technologies, 2005-2018
Posted in .NET Core   ASP.NET Core  

Creating an ASP.NET Core Markdown TagHelper and Parser


A few months ago I wrote about creating a literal Markdown Control for WebForms, where I described a simple content control that takes the content from within a tag and parses the embedded Markdown and then produces HTML output in its stead. I created a WebForms control mainly for selfish reasons, because I have tons of semi-static content on my content sites that still live in classic ASP.NET ASPX pages.

Since I wrote that article I've gotten a lot of requests to write about an ASP.NET Core version for something similar and - back to my own selfishness - I'm also starting to deploy a few content heavy sites that have mostly static HTML content that would be well served by Markdown using ASP.NET Core and Razor Pages. So it's time to build an ASP.NET Core version by creating a <markdown> TagHelper.

There are already a number of implementations available, but I'm a big fan of the MarkDig Markdown Parser, so I set out to create an ASP.NET Core Tag Helper that provides the same functionality as the WebForms control I previously created.

Using the TagHelper you can render Markdown like this inside of a Razor Page:

<markdown>
    #### This is Markdown text inside of a Markdown block

    * Item 1
    * Item 2
 
    ### Dynamic Data is supported:
    The current Time is: @DateTime.Now.ToString("HH:mm:ss")

    ```cs
    // this c# is a code block
    for (int i = 0; i < lines.Length; i++)
    {
        line1 = lines[i];
        if (!string.IsNullOrEmpty(line1))
            break;
    }
    ```</markdown>

The Markdown is expanded into HTML to replace the markdown TagHelper content.

You can also easily parse Markdown both in code and inside of Razor Pages:

string html = Markdown.Parse(markdownText);

Inside of Razor code you can do:

<div>@Markdown.ParseHtmlString(Model.ProductInfoMarkdown)</div>

Get it

The packaged component includes the TagHelper and a simple way to parse Markdown in code or inside of a Razor Page.

It's available as a NuGet Package:

PM> Install-Package Westwind.AspNetCore.Markdown

And you can take a look at the source code on Github:

Why do I need a Markdown Control?

Let's take a step back - why would you even need a content control for Markdown Parsing?

Markdown is everywhere these days and I for one have become incredibly dependent on it for a variety of text scenarios. I use it for blogging, for documentation both for code on Git repos and actual extended documentation. I use it for note keeping and collaboration in Gists or Github Repos, as well as a data entry format for many applications that need to display text content a little bit more richly than using plain text. Since I created the Markdown control I've also been using that extensively for quite a bit of my static content and it's made it much easier to manage some of my content this way.

What does it do?

The main reason for this component is the ability to embed Markdown into content with a simple tag that gets parsed into HTML at runtime. This is very useful for content pages that contain a lot of raw static text. It's a lot easier to write Markdown text in content pages than it is to write HTML tag soup consisting of <p>,<ul> and <h3> tags. Markdown is a heck of a lot more comfortable to type and maintain and this works well for common text content. It won't replace HTML for markup for an entire page, but it can be a great help with large content blocks inside of a larger HTML page.

In this post I'll create <markdown> TagHelper that can convert inline Markdown like this:

<h3>Markdown Tag Helper Block</h3><markdown>
    #### This is Markdown text inside of a Markdown block

    * Item 1
    * Item 2
 
    ### Dynamic Data is supported:
    The current Time is: @DateTime.Now.ToString("HH:mm:ss")

    ```cs
    // this c# is a code block
    for (int i = 0; i < lines.Length; i++)
    {
        line1 = lines[i];
        if (!string.IsNullOrEmpty(line1))
            break;
    }
    ```</markdown>

The content of the control is rendered to HTML at runtime which looks like this:

The above renders with the default Bootstrap styling of a stock ASP.NET Core MVC Web site plus highlightjs for the code highlighting. You can check out the full Markdown.cshtml page on Github. The code of that page also includes the highlightjs hookup code to make the source code sample look nice.

It's important to understand that rendered Markdown is just HTML - there's nothing in Markdown that handles styling of the content; that's left up to the host site or tool that displays the final HTML output. Any formatting comes from the host application, in this case the stock ASP.NET Core template for sample purposes.

Using this control allows you to easily create content areas inside of HTML documents that are rendered from Markdown. You write Markdown, the control renders HTML at runtime.

As part of this component I'll also provide a simple way to parse Markdown in code and inside of @RazorPages.

Creating a Markdown TagHelper

Before we dive in let's briefly discuss what TagHelpers are for those of you new to ASP.NET Core and then look at what it takes to create one.

What is a TagHelper?

TagHelpers are a new feature for ASP.NET Core MVC, and it's easily one of the nicest improvements for server side HTML generation. TagHelpers are self contained components that are embedded into a @Razor page. TagHelpers look like HTML tags and unlike Razor expressions (@Expression) feel natural inside of standard HTML content in a Razor page.

Many of the existing Model binding and HTML helpers in ASP.NET have been replaced by TagHelpers and TagHelper behaviors that allow you to directly bind to HTML controls in a page.

For example, here is an input element bound to a model value:

<input type="email" asp-for="Email" 
       placeholder="Your email address"
       class="form-control"/>

where asp-for extends the input element with an extension attribute to provide the model binding to the value property. This replaces:

@Html.TextBoxFor(model => model.Email, 
                 new { @class = "form-control",
                      placeholder = "your email address", 
                      type = "email" })

Which would you rather use? 😃 TagHelpers make it easier to write your HTML markup by sticking to standard HTML syntax which feels more natural than using Razor expressions.

Make your own TagHelpers

Another important point is that it's very easy to create your own TagHelpers, which is the focus of this post. The interface to create a TagHelper is primarily a single method that takes a context input to get at element, tag and content information, plus an output object to which the actual TagHelper output is written. Using this approach feels very natural and makes it easy to create your own tag helpers with minimal fuss.

A TagHelper encapsulates rendering logic via a very simple ProcessAsync() interface that renders a chunk of HTML content into the page at the location the TagHelper is defined. The ProcessAsync() method takes a TagHelper Context as input to let you get at the element and attributes for input, and provides an output that you can write string output to generate your embedded content. As we'll see it takes very little code to create a very useful TagHelper.

In order to use TagHelpers they have to be registered with MVC, either in the page or more likely in the _ViewImports.cshtml page of the project.

To create a Tag Helper these are the things you typically need to do:

  • Create a new Class and Inherit from TagHelper
  • Create your TagHelper implementation via ProcessAsync() or Process().
  • Register your TagHelper in _ViewImports.cshtml
  • Reference your TagHelper in your pages
  • Rock on!

Creating the MarkdownTagHelper Class

For the <markdown> TagHelper I want to create a content control whose content can be retrieved and parsed as Markdown and then converted into HTML. Optionally you can also use a Markdown property to bind Markdown for rendering - so if you have Markdown as part of data in your model you can bind it to this property/attribute in lieu of static content you provide.

Here's the base code for the MarkdownTagHelper that accomplishes these tasks:

[HtmlTargetElement("markdown")]
public class MarkdownTagHelper : TagHelper
{
    [HtmlAttributeName("normalize-whitespace")]
    public bool NormalizeWhitespace { get; set; } = true;

    [HtmlAttributeName("markdown")]
    public ModelExpression Markdown { get; set; }

    public override async Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        await base.ProcessAsync(context, output);

        string content = null;
        if (Markdown != null)
            content = Markdown.Model?.ToString();

        if (content == null)            
            content = (await output.GetChildContentAsync()).GetContent();

        if (string.IsNullOrEmpty(content))
            return;

        content = content.Trim('\n', '\r');

        string markdown = NormalizeWhiteSpaceText(content);            

        var parser = MarkdownParserFactory.GetParser();
        var html = parser.Parse(markdown);

        output.TagName = null;  // Remove the <markdown> element
        output.Content.SetHtmlContent(html);
    }

}

Before you can use the TagHelper in a page you'll need to register it with the MVC application by sticking the following into the _ViewImports.cshtml:

@addTagHelper *, Westwind.AspNetCore.Markdown

Now you're ready to use the TagHelper:

<markdown>This is **Markdown Text**. Render me!</markdown>

As you can see the code to handle the actual processing of the markdown is very short and easy to understand. It grabs either the content of the <markdown> element or the markdown attribute and then passes that to the Markdown Parser to process. The parser turns the Markdown text into HTML which is then written out as HTML content using output.Content.SetHtmlContent().

The code uses an abstraction for the Markdown Parser so the parser can be more easily replaced in the future without affecting the TagHelper code. I've gone through a few iterations of Markdown Parsers before landing on MarkDig, and I use this code in many places where I add Markdown parsing. I'll come back to the Markdown Parser in a minute.

Normalizing Markdown Text

One issue with using a TagHelper or Control for Markdown is that Markdown expects the text it processes to be free of leading indentation.

If you have Markdown like this:

<markdown>
    #### This is Markdown text inside of a Markdown block

    * Item 1
    * Item 2
 
    ### Dynamic Data is supported:
    The current Time is: @DateTime.Now.ToString("HH:mm:ss")

    ```cs
    // this c# is a code block
    for (int i = 0; i < lines.Length; i++)
    {
        line1 = lines[i];
        if (!string.IsNullOrEmpty(line1))
            break;
    }
    ```</markdown>

and leave this Markdown in its raw form with the indent, the Markdown parser would render the entire Markdown text as a code block, because the text is indented with 4 spaces, which constitutes a code block in Markdown. Not what we want here!

This is where the NormalizeWhiteSpace property comes into play. This flag, which is true by default, determines whether leading repeated white space is stripped from the embedded Markdown block.

Here's the code to strip leading white space:

string NormalizeWhiteSpaceText(string text)
{
    if (!NormalizeWhitespace || string.IsNullOrEmpty(text))
        return text;

    var lines = GetLines(text);
    if (lines.Length < 1)
        return text;

    string line1 = null;

    // find first non-empty line
    for (int i = 0; i < lines.Length; i++)
    {
        line1 = lines[i];
        if (!string.IsNullOrEmpty(line1))
            break;
    }

    if (string.IsNullOrEmpty(line1))
        return text;

    string trimLine = line1.TrimStart();
    int whitespaceCount = line1.Length - trimLine.Length;
    if (whitespaceCount == 0)
        return text;

    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < lines.Length; i++)
    {
        if (lines[i].Length > whitespaceCount)
            sb.AppendLine(lines[i].Substring(whitespaceCount));
        else
            sb.AppendLine(lines[i]);
    }

    return sb.ToString();
}

string[] GetLines(string s, int maxLines = 0)
{
    if (s == null)
        return null;

    s = s.Replace("\r\n", "\n");

    if (maxLines < 1)
        return s.Split(new char[] { '\n' });

    return s.Split(new char[] { '\n' }).Take(maxLines).ToArray();
}

This code works by looking at the first non-empty line and checking for leading White space. It captures this white space and then removes that same leading whitespace from all lines of the content. This works as long as the Markdown Block uses consistent white space for all lines (ie. all tabs or all n spaces etc.).

If normalize-whitespace="false" in the document, you can still use the TagHelper, but you have to ensure that the text is left justified in the saved Razor file. This is hard if you're using Visual Studio as it'll try to reformat the doc and re-introduce the whitespace, so the default for this attribute is true.

To look at the complete code for this class you can check the code on Github:

Razor Expressions in Markdown

If you look back at the Markdown example above you might have noticed that the embedded Markdown includes a @Razor expression inside of the <markdown> tag.

The following works as you would expect:

<markdown>
The current Time is: **@DateTime.Now.ToString("HH:mm:ss")**</markdown>

Razor processes the expression before it passes the content to the TagHelper, so in this example the date is already expanded when the Markdown parsing is fired.

This is pretty cool - you can essentially use most of Razor's features in place. Just make sure that you generate Markdown compatible text from your Razor expressions and code.

Markdown Parsing with Markdig

The TagHelper above relies on a customized MarkdownParser implementation. As mentioned this component uses the MarkDig Markdown parser, but I added some abstraction around the Markdown Parser as I've switched parsers frequently in the past before settling pretty solidly on MarkDig.

Parsing Markdown with Markdig is pretty simple, and if you want to be quick about it, you can easily create a function that does the following to parse Markdown using MarkDig:

public static class Markdown
{
    public static string Parse(string markdown) 
    {
        var pipeline = new MarkdownPipelineBuilder()
                             .UseAdvancedExtensions()
                             .Build();

        // fully qualify Markdig's static Markdown class since this wrapper class shares its name
        return Markdig.Markdown.ToHtml(markdown, pipeline);
    }
}

MarkDig uses a configuration pipeline of support features that you can add on top of the base parser. The example above adds a number of common extensions (like Github Flavored Markdown, List Extensions etc.), but you can also add each of the components you want to customize exactly how you want Markdown to be parsed.

The code above is not super efficient as the pipeline needs to be recreated for each parse operation and that's part of the reason that I built a small abstraction layer around the Markdown parser so the parser can be easily switched without affecting the rest of the application and so that the generated Pipeline can be cached for better performance.

A MarkdownParserFactory

The first thing is a Markdown Parser factory that provides an IMarkdownParser interface, which has little more than a Parse() method:

public interface IMarkdownParser
{
    string Parse(string markdown);
}

The factory then produces that interface - at this point with a hardcoded MarkDig implementation in place. The factory also caches the parser instance so it can be reused without reloading the entire parsing pipeline on each parse operation:

/// <summary>
/// Retrieves an instance of a markdown parser
/// </summary>
public static class MarkdownParserFactory
{
    /// <summary>
    /// Use a cached instance of the Markdown Parser to keep alive
    /// </summary>
    static IMarkdownParser CurrentParser;

    /// <summary>
    /// Retrieves a cached instance of the markdown parser
    /// </summary>                
    /// <param name="forceLoad">Forces the parser to be reloaded - otherwise previously loaded instance is used</param>
    /// <param name="usePragmaLines">If true adds pragma line ids into the document that the editor can sync to</param>
    /// <returns>Markdown Parser Interface</returns>
    public static IMarkdownParser GetParser(bool usePragmaLines = false,
                                            bool forceLoad = false)                                                
    {
        if (!forceLoad && CurrentParser != null)
            return CurrentParser;
        CurrentParser = new MarkdownParserMarkdig(usePragmaLines, forceLoad);

        return CurrentParser;
    }
}

Finally there's the actual MarkdownParserMarkdig implementation that's responsible for handling the configuration of the parser pipeline and parsing the Markdown to HTML. The class inherits from a MarkdownParserBase class that provides a few optional pre and post processing features such as Font Awesome icon embedding, YAML front matter stripping (for parsers that don't handle YAML natively) etc.

/// <summary>
/// Wrapper around the MarkDig parser that provides a cached
/// instance of the Markdown parser. Hooks up custom processing.
/// </summary>
public class  MarkdownParserMarkdig : MarkdownParserBase
{
    public static MarkdownPipeline Pipeline;

    private readonly bool _usePragmaLines;

    public MarkdownParserMarkdig(bool usePragmaLines = false, bool force = false, Action<MarkdownPipelineBuilder> markdigConfiguration = null)
    {
        _usePragmaLines = usePragmaLines;
        if (force || Pipeline == null)
        {                
            var builder = CreatePipelineBuilder(markdigConfiguration);                
            Pipeline = builder.Build();
        }
    }

    /// <summary>
    /// Parses the actual markdown down to html
    /// </summary>
    /// <param name="markdown"></param>
    /// <returns></returns>        
    public override string Parse(string markdown)
    {
        if (string.IsNullOrEmpty(markdown))
            return string.Empty;

        var htmlWriter = new StringWriter();
        var renderer = CreateRenderer(htmlWriter);

        Markdig.Markdown.Convert(markdown, renderer, Pipeline);

        var html = htmlWriter.ToString();
        
        html = ParseFontAwesomeIcons(html);

        //if (!mmApp.Configuration.MarkdownOptions.AllowRenderScriptTags)
        html = ParseScript(html);  
                  
        return html;
    }

    public virtual MarkdownPipelineBuilder CreatePipelineBuilder(Action<MarkdownPipelineBuilder> markdigConfiguration)
    {
        MarkdownPipelineBuilder builder = null;

        // build it explicitly
        if (markdigConfiguration == null)
        {
            builder = new MarkdownPipelineBuilder()                    
                .UseEmphasisExtras(Markdig.Extensions.EmphasisExtras.EmphasisExtraOptions.Default)
                .UsePipeTables()
                .UseGridTables()
                .UseFooters()
                .UseFootnotes()
                .UseCitations();


            builder = builder.UseAutoLinks();        // URLs are parsed into anchors
            builder = builder.UseAutoIdentifiers();  // Headers get id="name" 

            builder = builder.UseAbbreviations();
            builder = builder.UseYamlFrontMatter();
            builder = builder.UseEmojiAndSmiley(true);
            builder = builder.UseMediaLinks();
            builder = builder.UseListExtras();
            builder = builder.UseFigures();
            builder = builder.UseTaskLists();
            //builder = builder.UseSmartyPants();            

            if (_usePragmaLines)
                builder = builder.UsePragmaLines();

            return builder;
        }
        
        // let the passed in action configure the builder
        builder = new MarkdownPipelineBuilder();
        markdigConfiguration.Invoke(builder);

        if (_usePragmaLines)
            builder = builder.UsePragmaLines();

        return builder;
    }

    protected virtual IMarkdownRenderer CreateRenderer(TextWriter writer)
    {
        return new HtmlRenderer(writer);
    }
}

The key bit about this class is that it can be used to configure how the Markdown Parser renders to HTML.
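
For example, here's a sketch of creating a parser with a custom pipeline via the markdigConfiguration action the constructor accepts. Because the pipeline is cached statically, force: true is needed to rebuild it if a parser was created previously; the extension methods used are standard MarkDig pipeline builder extensions:

// build a parser with a minimal, explicitly configured pipeline
var customParser = new MarkdownParserMarkdig(
    force: true,
    markdigConfiguration: builder => builder
        .UsePipeTables()
        .UseAutoLinks()
        .UseEmojiAndSmiley());

string customHtml = customParser.Parse("Hello **world** :smile:");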

That's a bit of setup, but once it's all done you can now do:

var parser = MarkdownParserFactory.GetParser();
var html = parser.Parse(markdown);

and that's what the Markdown TagHelper uses to get a cached MarkdownParser instance for processing.

Standalone Markdown Processing

In addition to the TagHelper there's also a static class that lets you easily process Markdown in code or inside of a RazorPage, using a static Markdown class:

public static class Markdown
{
    /// <summary>
    /// Renders raw markdown from string to HTML
    /// </summary>
    /// <param name="markdown"></param>
    /// <param name="usePragmaLines"></param>
    /// <param name="forceReload"></param>
    /// <returns></returns>
    public static string Parse(string markdown, bool usePragmaLines = false, bool forceReload = false)
    {
        if (string.IsNullOrEmpty(markdown))
            return "";

        var parser = MarkdownParserFactory.GetParser(usePragmaLines, forceReload);
        return parser.Parse(markdown);
    }

    /// <summary>
    /// Renders raw Markdown from string to HTML.
    /// </summary>
    /// <param name="markdown"></param>
    /// <param name="usePragmaLines"></param>
    /// <param name="forceReload"></param>
    /// <returns></returns>
    public static HtmlString ParseHtmlString(string markdown, bool usePragmaLines = false, bool forceReload = false)
    {
        return new HtmlString(Parse(markdown, usePragmaLines, forceReload));
    }
}

In code you can now do:

string html = Markdown.Parse(markdownText);

Inside of Razor code you can do:

<div>@Markdown.ParseHtmlString(Model.ProductInfoMarkdown)</div>

Summary

As with the WebForms control none of this is anything very new, but I find that this is such a common use case that it's worth having a reusable and easily accessible component for this sort of functionality. With a small NuGet package it's easy to add Markdown support both for content embedding as well as simple parsing.

As Markdown is getting ever more ubiquitous, most applications can benefit from including some Markdown features. For content sites especially Markdown can be a good fit for creating the actual text content inside of pages and the <markdown> control discussed here actually makes that very easy.

I was recently helping my girlfriend set up a landing page for her Web site and using Markdown I was able to actually set up a few content blocks in the page and let her loose on editing her own content easily. No way that would have worked with raw HTML.

Enjoy...

Resources

this post created and published with Markdown Monster
© Rick Strahl, West Wind Technologies, 2005-2018
Posted in ASP.NET Core   Markdown  

Getting the .NET Core Runtime Version in a Running Application


One thing I like to do as part of my applications that are running is to have an information page that gives some basic information about the runtime environment the application is running under.

For example, here's what I add to my info page in my AlbumViewer Sample:

I find it useful especially these days with SDKs and runtimes switching so frequently that I can quickly determine what versions the application actually runs under and where it's hosted (Window/Linux/container) without having to actually look at the source code.

This should be easy right? Well, it may not be difficult, but obvious it is not.

Getting the Runtime Version

You would think it would be easy to get runtime version information - after all the runtime is... well... it's running. But nooooo... Microsoft has never made it easy to get proper runtime version information that's suitable for display in an application. Hell, in full framework you had to resort to checking the registry and then translating magic partial version numbers to an official release version number (like 4.7.1). You know you're doing it wrong when you can't tell what version of a runtime you have installed without looking in the registry and looking at an obscure lookup table to resolve the actual version everyone expects to look at.

This trend continues in .NET Core. There's no direct API that returns a version number like 1.1, 2.0 or 2.1 etc. Because why make the obvious easy?

There are lots of APIs that you might think would work, but they either don't return anything or return some unrelated version number like the system installed full framework version - not the version of the runtime that's actually running.
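
A couple of examples of the kind of thing I mean - both of these compile and run fine, but neither tells you which .NET Core release you're on (exact values vary by machine and patch level):

// reports the CLR version (something like 4.0.30319.42000),
// not the .NET Core release the app is running on
Console.WriteLine(Environment.Version);

// reports the CLR image runtime version of an assembly (e.g. v4.0.30319) - same story
Console.WriteLine(typeof(object).Assembly.ImageRuntimeVersion);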

I'm not going to bore you with all the false starts I've had here. If you want to find the .NET Core runtime that your application is targeting you can look at the TargetFrameworkAttribute on your application's startup assembly:

var framework = Assembly
    .GetEntryAssembly()?
    .GetCustomAttribute<TargetFrameworkAttribute>()?
    .FrameworkName;

var stats = new
{                
    OsPlatform = System.Runtime.InteropServices.RuntimeInformation.OSDescription,
    AspDotnetVersion = framework
};

It seems pretty hacky but it should be fairly reliable since every application has to have a TargetFramework associated with it. This value comes from the project file:

<Project Sdk="Microsoft.NET.Sdk.Web"><PropertyGroup><TargetFramework>netcoreapp2.1</TargetFramework></PropertyGroup></project>

and the project build process turns that into an attribute attached to the startup assembly.
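
The raw FrameworkName value is a string along the lines of ".NETCoreApp,Version=v2.1", so for display purposes you may want to prettify it a bit. Here's a quick sketch that assumes that general format:

// ".NETCoreApp,Version=v2.1"  ->  ".NET Core 2.1" (display-only formatting)
static string FriendlyFrameworkName(string frameworkName)
{
    if (string.IsNullOrEmpty(frameworkName))
        return "Unknown";

    var parts = frameworkName.Split(new[] { ",Version=v" }, StringSplitOptions.None);
    var name = parts[0] == ".NETCoreApp" ? ".NET Core" : parts[0];
    var version = parts.Length > 1 ? parts[1] : string.Empty;

    return $"{name} {version}".Trim();
}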

It's a shame that this isn't exposed somewhat more logically like the property that actually exists but is not set:

string runtime = System.Runtime.InteropServices.RuntimeInformation.FrameworkDescription;

Sadly that value very usefully returns null.

So, the above hack is it - it works. For now 😃

© Rick Strahl, West Wind Technologies, 2005-2018
Posted in .NET Core  ASP.NET Core  

Creating a generic Markdown Page Handler in ASP.NET Core


I'm in the process of re-organizing a ton of mostly static content on several of my Web sites in order to make it easier to manage the boatload of ancient content I have sitting around in many places. Writing content - even partial page content - as Markdown is a heck of a lot more comfortable than writing HTML tag soup.

So to make this easier I've been thinking about using Markdown more generically in a number of usage scenarios lately, and I wrote last week's post on Creating a Markdown TagHelper for AspNetCore and an earlier one on doing the same thing with classic ASP.NET WebForms pages. These controls allow for embedding Markdown content directly into ASP.NET Core MVC Views or Pages and WebForms HTML content respectively.

Serving Markdown Files as HTML

But in a lot of scenarios even these controls add a lot of unnecessary cruft - it would be much nicer to simply dump some Markdown files and serve those files as content along with a proper content template so those pages fit into the context of the greater site. This typically means access to a layout page by way of a generic template into which the Markdown content is rendered.

By using plain Markdown files it's easier to edit the files, and when you host them in a repo like Github they can just be displayed as rendered Markdown. In short it's a similar use case, but meant for content-only displays - ideal for documentation sites or even things like a file-only Blog.

So in this post I'll describe a generic Middleware implementation that allows you to drop Markdown files into a folder and get them served - either as .md extension files, or as extensionless Urls based on the filename without the extension.

Get it

If you want to try out the middleware I describe in this post, you can install the NuGet package from here:

PM> Install-Package Westwind.AspNetCore.Markdown

or take a look at the source code on Github:

Generic Markdown Processing Middleware

The idea to process Markdown files directly is nothing new - it's a common feature in standalone documentation and CMS/Blog generators.

But wouldn't it be nice to have this functionality as a simple, drop-in feature that you can attach to any folder that is part of your existing Web application? In many of my dynamic Web sites, I often have a handful of information pages (like About, Terms of Service, Contact us, Support etc.) that are essentially static pages. And for those simple Markdown formatting is a perfect fit.

Additionally many sites I work on also need documentation, and a separate area to document the site with simple Markdown files is a natural fit. You use only Markdown text, and leave the site chrome to a generic configured template that renders the reusable part of the site. When creating content all you do is write Markdown - you can focus on content and forget the layout logistics.

What do we need to serve Markdown Pages?

Here are the requirements for serving 'static' markdown pages:

  • A 'wrapper' page that provides the site chrome
  • A content area into which the markdown gets dropped
  • The actual rendered Markdown text from the file
  • Optional Yaml Parsing for title and headers
  • Optional title parsing based on a header or the file name

So, today I sat down to build the start of some generic middleware that processes Markdown content from disk and renders it directly into a configurable MVC View. That View provides the 'container' page with the styling and site chrome you are likely to need in order to display your Markdown. The template can contain self-contained HTML page content, or it can reference a _Layout page to provide the same site chrome that the rest of your site uses.

The idea is that I can set up one or more folders (or the entire site) for serving markdown files with an .md extension or extensionless Urls and then serve the Markdown files into a configurable View template.

The middleware is a relatively simple implementation that looks for a configured folder and extensionless urls within (think Docs for documentation or Posts folder for Blog posts) or .md files in the configured folder. When it finds either, the URL is processed by loading the underlying Markdown file, rendering it to HTML and simply embedding it into the specified View template.

Getting Started With the MarkdownPageProcessorMiddleWare

To use this feature you need to do the following:

  • Create a Markdown View Template (default is: ~/Views/__MarkdownPageTemplate.cshtml)
  • Use AddMarkdown() to configure the page processing
  • Use UseMarkdown() to hook up the middleware
  • Create .md files for your content

Basic Configuration

The first step is to configure the MarkdownPageProcessor by telling it which folders to look at. You specify a site relative folder, an optional MVC View or Page Template (the template has to exist) and a few optional parameters.

As usual for ASP.NET Core Middleware, you need to both hook up ConfigureServices() configuration and engage the Middleware in Configure().

The following configures the /posts/ folder for Markdown file processing:

public void ConfigureServices(IServiceCollection services)
{
    // this is required since we hook into custom routing
    services.AddRouting();

    services.AddMarkdown(config =>
    {
        // Simplest: Use all default settings - usually all you need
        config.AddMarkdownProcessingFolder("/posts/", "~/Pages/__MarkdownPageTemplate.cshtml");
    });

    // We need MVC so we can use a customizable Razor template page
    services.AddMvc();
}

You then also need to hook up the Middleware in the Configure method:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseMarkdown();

    app.UseStaticFiles();
    
    // we need MVC for the customizable Razor template
    app.UseMvc();
}

Create a Razor Host Template

Next we need a Razor template that will host the rendered Markdown. This template is the "site chrome" that surrounds a rendered Markdown page. Each folder you configure can have its own template, so it's possible to vary the template. The template is just a Razor page that receives MarkdownModel which includes among other things a Model.RenderedMarkdown that you can embed into the page.
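For reference, here's a rough sketch of what that model looks like, based on the properties used throughout this post - the actual class in Westwind.AspNetCore.Markdown may carry additional members:

public class MarkdownModel
{
    // Title extracted from the YAML front matter or the first # header (when title parsing is enabled)
    public string Title { get; set; }

    // The original Markdown text as read from disk
    public string RawMarkdown { get; set; }

    // The parsed HTML, ready to embed into the Razor template
    public Microsoft.AspNetCore.Html.HtmlString RenderedMarkdown { get; set; }
}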

The simplest template you can create looks like this:

@model Westwind.AspNetCore.Markdown.MarkdownModel
@{
    ViewBag.Title = Model.Title;
    Layout = "_Layout";
}
<div style="margin-top: 40px;">
    @Model.RenderedMarkdown
</div>

The template has really nothing in it except the rendered markdown. All the rest of the 'site chrome' is picked up by the _Layout.cshtml page which provides the overall look and feel of the page.

Note that you can do whatever you want in the template. You don't have to use a _Layout page - you can create a standalone page, or a page with partials and sections or whatever you want. All you have to make sure of is:

  • Make sure you have a @model Westwind.AspNetCore.Markdown.MarkdownModel
  • Make sure you call @Model.RenderedMarkdown to embed the rendered HTML
  • Pick up the page title from Model.Title

Note that the title parsing is optional, but it is enabled by default. The middleware checks for a YAML header with a title: property, or for a # Header tag in the top 10 lines of content.
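For example, a Markdown file along these lines (a made-up sample) gets its title from the YAML front matter; without the YAML block, the # header would be used instead:

---
title: Creating a Markdown Page Handler
---
# Creating a Markdown Page Handler

This is the actual Markdown content of the page...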

Test it out

With this basic configuration code in place you should now be able to place a Markdown file with an .md extension anywhere into the /posts/ folder and render it. I took my last Weblog post's Markdown file and simply dumped it into a folder like this:

I can now go to:

http://localhost:59805/posts/2018/03/23/MarkdownTagHelper.md

or the extensionless version:

http://localhost:59805/posts/2018/03/23/MarkdownTagHelper

The default configuration works both with an .md extension or no extension. When no extension is specified the middleware looks at each extensionless request and tries to append .md and checks if a file exists then renders it.

With this in place you can now render the page like this:

Keep in mind this is pretty much a stock ASP.NET Core project - it uses the stock Bootstrap template and I haven't made any other changes to the layout or page templates, yet the markdown file just works as a drop in file.

Cool, n'est-ce pas?

More Cowbell

OK, the above covers the basics, so let's look at a few more configuration and customization options. You can:

  • Customize the Razor template
  • Configure folders that are handled
  • Configure each folder's options

Let's take a look

A better Template: Adding Syntax Coloring

Most likely you'll want to spruce things up a little bit. If you're doing software related content like documentation or blog posts, one of the first things you'll want is syntax highlighting.

I'm a big fan of highlightjs which comes with most common syntax languages I care about, and provides a number of really nice themes including vs2015 (VS Code Dark), visualstudio, monokai, twilight and a couple of github flavors.

The code below explicitly uses the Visual Studio (Code) Dark theme (vs2015):

@model Westwind.AspNetCore.Markdown.MarkdownModel
@{
    Layout = "_Layout";
}
@section Headers {
    <style>
        h3 {
            margin-top: 50px;
            padding-bottom: 10px;
            border-bottom: 1px solid #eee;
        }
        /* vs2015 theme specific */
        pre {
            background: #1E1E1E;
            color: #eee;
            padding: 0.7em !important;
            overflow-x: auto;
            white-space: pre;
            word-break: normal;
            word-wrap: normal;
        }
        pre > code {
            white-space: pre;
        }
    </style>
}
<div style="margin-top: 40px;">
    @Model.RenderedMarkdown
</div>

@section Scripts {
    <script src="~/lib/highlightjs/highlight.pack.js"></script>
    <link href="~/lib/highlightjs/styles/vs2015.css" rel="stylesheet" />
    <script>
        setTimeout(function () {
            var pres = document.querySelectorAll("pre>code");
            for (var i = 0; i < pres.length; i++) {
                hljs.highlightBlock(pres[i]);
            }
        });
    </script>
}

HighlightJs from CDN

The provided highlight JS package includes a customized set of languages that I use most commonly and it also includes a custom language (FoxPro) that doesn't ship on the CDN. You can however also pick up HighlightJs directly off a CDN with:

<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/highlight.min.js"></script>
<link href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/vs2015.min.css" rel="stylesheet" />

Here's what the page looks like with the Syntax highlighting enabled:

Configuration Options

If you want more control over how the Markdown processing is done you can explicitly configure each of the folders you set up for Markdown handling. You can:

  • Configure .md file and extensionless processing
  • Specify whether you want to extract the title from the Markdown content
  • Hook in pre-processing code that is passed to the host template
  • Configure the Markdown Parser (Markdig)

The following sets up the /docs/ folder with default settings and the /posts/ folder with some of the options explicitly set:

services.AddMarkdown(config =>
{
    // Simplest: Use all default settings - usually all you need
    config.AddMarkdownProcessingFolder("/docs/", "~/Pages/__MarkdownPageTemplate.cshtml");
    // Customized Configuration: Set FolderConfiguration options
    var folderConfig = config.AddMarkdownProcessingFolder("/posts/", "~/Pages/__MarkdownPageTemplate.cshtml");

    // Optional configuration settings
    folderConfig.ProcessExtensionlessUrls = true;  // default
    folderConfig.ProcessMdFiles = true; // default

    // Optional pre-processing
    folderConfig.PreProcess = (folder, controller) =>
    {
        // controller.ViewBag.Model = new MyCustomModel();
    };

    // optional custom MarkdigPipeline (using MarkDig; for extension methods)
    config.ConfigureMarkdigPipeline = builder =>
    {
        builder.UseEmphasisExtras(Markdig.Extensions.EmphasisExtras.EmphasisExtraOptions.Default)
            .UsePipeTables()
            .UseGridTables()                        
            .UseAutoIdentifiers(AutoIdentifierOptions.GitHub) // Headers get id="name" 
            .UseAutoLinks() // URLs are parsed into anchors
            .UseAbbreviations()
            .UseYamlFrontMatter()
            .UseEmojiAndSmiley(true)                        
            .UseListExtras()
            .UseFigures()
            .UseTaskLists()
            .UseCustomContainers()
            .UseGenericAttributes();
    };
});

If you want to improve performance a little, don't use extensionless URLs for the Markdown files. The way the implementation currently works, extensionless URLs require intercepting every extensionless request and checking whether a Markdown file with an .md extension exists. Using just .md files only affects requests that actually have an .md extension.

This can be mitigated with some caching behavior - I come back to that a bit later in this post.

The default Markdig configuration has most of the pipeline extensions enabled so most things just work, but if you want optimal performance for your Markdown processing explicitly whittling the list down to just what you need can yield better performance.

Creating the Markdown File Middleware

So how does all of this work? As you might expect the process of creating this is actually not very difficult, but it does involve quite a few moving pieces as is fairly standard when you're creating a piece of middleware.

Here's what is required

  • The actual Middleware implementation that handles the request routing
  • Middleware extensions that hook into Startup's ConfigureServices() and Configure()
  • An MVC Controller that handles the actual render request
  • The Razor template that renders the Markdown HTML

A quick review of Middleware

The core bit is the actual Middleware component that is hooked into the ASP.NET Core middleware pipeline. Middleware is simply a class that implements a Task InvokeAsync(HttpContext context) method. Alternately, Middleware can also be implemented directly in Startup or as part of a Middleware Extension using app.Use(), or - for terminating middleware - using app.Run().

The idea behind Middleware is quite simple: you implement a middleware handler that receives a context object and calls next(context), which passes the context forward to the next middleware defined in the chain, which calls the next one, and so on until all of the middleware components have been called. Then the chain reverses and each of those calls returns its task status back up the chain.

image credit: Microsoft Docs

If middleware wants to terminate the pipeline it can simply not call next() - the chain reaction ends and the pipeline reverses out.
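To make the pattern concrete, here's a minimal inline middleware hooked up with app.Use() in Configure() - just a sketch of the call-through behavior described above, not part of the Markdown library:

app.Use(async (context, next) =>
{
    // work that happens on the way 'in', before the rest of the pipeline runs

    await next();   // pass control to the next middleware in the chain

    // work that happens on the way 'out', after the downstream middleware completed
});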

In this scheme the order of middleware components is very important since they fire in order declared. For example, it's crucial that things like the Authentication, Routing and CORS middleware bits are hooked up before the MVC middleware executes.

Implementing a dedicated middleware component usually involves creating the actual middleware component as well as a couple of middleware extensions that allow for being called in ConfigureServices() for configuration and Configure() for actually attaching the middleware to the pipeline. Yeah I know - talk about misnamed events: configuration usually happens in ConfigureServices() where you configure the dependency injected components, either directly or via callbacks that fire on each request.

Implementing Markdown Page Handling as Middleware

OK, now that you have an idea how middleware works, let's look at the actual implementation.

Let's start with the actual middleware. The primary job of the middleware is to figure out whether an incoming request is a Markdown request by checking the URL. If the request is for an .md Markdown file, the middleware effectively rewrites the request URL and routes it to a custom Controller that is provided as part of this component library.

public class MarkdownPageProcessorMiddleware
{
    private readonly RequestDelegate _next;
    private readonly MarkdownConfiguration _configuration;
    private readonly IHostingEnvironment _env;

    public MarkdownPageProcessorMiddleware(RequestDelegate next, 
                                           MarkdownConfiguration configuration,
                                           IHostingEnvironment env)
    {
        _next = next;
        _configuration = configuration;
        _env = env;
    }

    public Task InvokeAsync(HttpContext context)
    {
        var path = context.Request.Path.Value;
        if (path == null)
            return _next(context);

        bool hasExtension = !string.IsNullOrEmpty(Path.GetExtension(path));
        bool hasMdExtension = path.EndsWith(".md");
        bool isRoot = path == "/";
        bool processAsMarkdown = false;

        var basePath = _env.WebRootPath;
        var relativePath = path;
        relativePath = PathHelper.NormalizePath(relativePath).Substring(1);
        var pageFile = Path.Combine(basePath, relativePath);

        // process any Markdown file that has .md extension explicitly
        foreach (var folder in _configuration.MarkdownProcessingFolders)
        {
            if (!path.StartsWith(folder.RelativePath, StringComparison.InvariantCultureIgnoreCase))
                continue;

            if (isRoot && folder.RelativePath != "/")
                continue;

            if (context.Request.Path.Value.EndsWith(".md", StringComparison.InvariantCultureIgnoreCase))
            {
                processAsMarkdown = true;
            }
            else if (path.StartsWith(folder.RelativePath, StringComparison.InvariantCultureIgnoreCase) &&
                 (folder.ProcessExtensionlessUrls && !hasExtension ||
                  hasMdExtension && folder.ProcessMdFiles))
            {
                if (!hasExtension && Directory.Exists(pageFile))
                    continue;

                if (!hasExtension)
                    pageFile += ".md";

                if (!File.Exists(pageFile))
                    continue;

                processAsMarkdown = true;
            }

            if (processAsMarkdown)
            {             
                context.Items["MarkdownPath_PageFile"] = pageFile;
                context.Items["MarkdownPath_OriginalPath"] = path;
                context.Items["MarkdownPath_FolderConfiguration"] = folder;

                // rewrite path to our controller so we can use _layout page
                context.Request.Path = "/markdownprocessor/markdownpage";
                break;
            }
        }

        return _next(context);
    }
}

Middleware constructors can inject requested components via Dependency Injection and I capture the active Request delegate (next) in order to call the next middleware component. I also capture the Markdown configuration that was set up during startup (more on that when we look at the middleware extension). The configuration holds a few global settings as well as the configuration for each of the folders mapped during setup.

The code looks at the URL and first checks for a .md extension. If it finds that it simply forwards the request to the controller by rewriting the URL to a fixed path that the controller is generically listening on.

context.Request.Path = "/markdownprocessor/markdownpage";

If the URL is an extensionless URL things are a bit trickier. The code has to first check whether the request is for a physical directory - if it is, it's not a Markdown file. It then has to append the .md extension and check for the file's existence to determine whether the file can be found. If not, the request is ignored and passed on in the middleware pipeline. If there is a matching Markdown file then it too gets rewritten to the Markdown controller's route path.

If the URL is to be processed, the original, un-rewritten URL and the actual filename are written into Context.Items along with the folder configuration that was matched, which makes these values available to the controller.

The Generic Markdown Controller

The request is forwarded to a controller that's implemented in the library. The controller has a single Action method that has a fixed and well-known attribute route:

[Route("markdownprocessor/markdownpage")]
public async Task<IActionResult> MarkdownPage()

This fixed route is found even though it lives in a library. Note that this route only works in combination with the middleware because it depends on the Context.Items that were stored by the middleware earlier in the request.

Here's the main action method in the controller (full code on Github):

public class MarkdownPageProcessorController : Controller
{
    public MarkdownConfiguration MarkdownProcessorConfig { get; }
    private readonly IHostingEnvironment hostingEnvironment;

    public MarkdownPageProcessorController(IHostingEnvironment hostingEnvironment,
        MarkdownConfiguration config)
    {
        MarkdownProcessorConfig = config;
        this.hostingEnvironment = hostingEnvironment;
    }

    [Route("markdownprocessor/markdownpage")]
    public async Task<IActionResult> MarkdownPage()
    {            
        var basePath = hostingEnvironment.WebRootPath;
        var relativePath = HttpContext.Items["MarkdownPath_OriginalPath"] as string;
        if (relativePath == null)
            return NotFound();

        var folderConfig = HttpContext.Items["MarkdownPath_FolderConfiguration"] as MarkdownProcessingFolder;
        var pageFile = HttpContext.Items["MarkdownPath_PageFile"] as string;
        if (!System.IO.File.Exists(pageFile))
            return NotFound();
        // string markdown = await File.ReadAllTextAsync(pageFile);
        string markdown;
        using (var fs = new FileStream(pageFile, FileMode.Open, FileAccess.Read))
        using (StreamReader sr = new StreamReader(fs))
        {                
            markdown = await sr.ReadToEndAsync();                
        }
        if (string.IsNullOrEmpty(markdown))
            return NotFound();

        var model = ParseMarkdownToModel(markdown);
    
        if (folderConfig != null)
        {
            folderConfig.PreProcess?.Invoke(folderConfig, this);
            return View(folderConfig.ViewTemplate, model);
        }
        
        return View(MarkdownConfiguration.DefaultMarkdownViewTemplate, model);
    }

    private MarkdownModel ParseMarkdownToModel(string markdown, MarkdownProcessingFolder folderConfig = null)
    {
        var model = new MarkdownModel();

        if (folderConfig == null)
            folderConfig = new MarkdownProcessingFolder();

        if (folderConfig.ExtractTitle)
        {
            var firstLines = StringUtils.GetLines(markdown, 30);
            var firstLinesText = String.Join("\n", firstLines);

            // Assume YAML 
            if (markdown.StartsWith("---"))
            {
                var yaml = StringUtils.ExtractString(firstLinesText, "---", "---", returnDelimiters: true);
                if (yaml != null)
                    model.Title = StringUtils.ExtractString(yaml, "title: ", "\n");
            }

            if (model.Title == null)
            {
                foreach (var line in firstLines.Take(10))
                {
                    if (line.TrimStart().StartsWith("# "))
                    {
                        model.Title = line.TrimStart(new char[] {' ', '\t', '#'});
                        break;
                    }
                }
            }
        }

        model.RawMarkdown = markdown;
        model.RenderedMarkdown = Markdown.ParseHtmlString(markdown);

        return model;
    }
}

The main controller code reads the path from Context.Items and then checks to ensure the file exists. If it does, it reads the Markdown from disk and passes it to a helper that populates the model.

The ParseMarkdownToModel() helper tries to extract a title and parses the markdown to HTML and stores those values on the model. The resulting model is then fed to the view specified in the folder configuration.

Et voilà! We have rendered Markdown documents.

Performance

As I mentioned earlier this middleware has some overhead because it has to effectively look at every request for the folders you have configured and check either for the .md extension or - worse, for extensionless URLs - check whether a matching file exists on disk. Therefore I recommend that you are very specific about the folders you set up to serve Markdown from rather than making this a global hookup in the root folder. Use specific directories like /docs/ or /posts/ etc. rather than just setting the entire site to use /.

There's some opportunity for optimization here as well. Output caching on the controller is one thing that would help, but I couldn't actually get this to work with server side caching - ResponseCache seems to only set headers and not actually cache anything server side any more. Something I haven't looked at with Core yet.

It would also help to cache file lookups to avoid the disk hit for file existence checks which are relatively slow. Keeping track of files that were previously checked could avoid that process. One advantage of the way things work now is that you don't have to worry about updating Markdown files on the server because currently there is no caching. Change the file and it will be picked up immediately in the next request.
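As a rough idea of what such a lookup cache could look like - this is not part of the library, just an illustrative sketch - you could memoize the existence checks and accept (or periodically clear) stale entries:

using System;
using System.Collections.Concurrent;
using System.IO;

// Illustrative only: caches File.Exists() results so repeated requests for the
// same URL skip the disk hit. Trade-off: newly added or deleted .md files are
// not picked up until the cache is cleared.
static readonly ConcurrentDictionary<string, bool> MarkdownFileExistsCache =
    new ConcurrentDictionary<string, bool>(StringComparer.OrdinalIgnoreCase);

static bool MarkdownFileExists(string pageFile)
{
    return MarkdownFileExistsCache.GetOrAdd(pageFile, File.Exists);
}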

Summary

There's still stuff to do with this library, but I've thrown this into a few internal projects and so far it works great. These projects are applications that have lots of dynamic content, but also have several sections that are mostly static text which previously was hand coded HTML. I was able to throw out a bunch of these HTML pages and convert them to Markdown in Markdown Monster, as they were Markdown friendly simple HTML to start with. It greatly simplifies editing and I've been able to pass off these documents to other non-coder types to edit, where previously it was just easier for me or somebody else on my team to write the HTML ourselves.

This is nothing overly complex, but I find this drop in Markdown functionality incredibly useful and I'm sure I'll be using it extensively in the future. I hope some of you find this useful as well. Enjoy.

this post created and published with Markdown Monster
© Rick Strahl, West Wind Technologies, 2005-2018
Posted in ASP.NET Core  Markdown  

Updating my AlbumViewer to ASP.NET Core 2.1 and Angular 6.0


I've been keeping my old AlbumViewer ASP.NET Core and Angular sample application up to date, and today I decided to take a little time to update the application to the latest ASP.NET Core 2.1 RC and Angular 6.0 bits.

The good news is that the ASP.NET Core update was almost painless. Updating Angular from version 5.2 to 6.0 on the other hand took quite a bit more work because Angular has introduced a new way to deal with rxJS which pretty much affects all observables in the application. Additionally, I ran into a weird WebPack/Angular CLI build issue that was causing a last minute pain point when trying to upload the application.

Getting the Code

If you want to play with this stuff, the sample and all code is up on Github. Until ASP.NET Core 2.1 releases the code lives on a NETCORE21 branch in Github - I'll move the code over to master when the final build is out. The final release is supposed to come out by the end of the month, as was announced at Build yesterday.

Upgrading to .NET Core 2.1

The upgrade to .NET Core 2.1 from 2.0 and an earlier 2.1 preview was pretty uneventful.

.NET Core 2.1 Updates

Updating to .NET Core 2.1 from 2.0 was pretty much a no-brainer. Essentially the whole process involved installing the new .NET Core SDK and updating package references. There were no code changes required over the two updates I did for the preview and then to RC.

For this small project the package updates basically involve going into the NuGet Package Manager for both projects and looking for out of date packages. The main update is the Microsoft.AspNetCore.App meta package.

The two changes I had to make in the Web project file:

<PackageReference Include="Microsoft.AspNetCore.App" />
<PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" 
                  Version="2.1.0-rc1-final" />

Note that there's no explicit version specifier required on the Microsoft.AspNetCore.App reference as I'm using the latest version, but essentially this resolves to the same 2.1.0-rc1-final version.

The business project too needed an update for the EntityFramework reference:

<PackageReference Include="Microsoft.EntityFrameworkCore" 
                  Version="2.1.0-rc1-final" />

Here's what this looks like in this project:

Note also that .NET Standard shows as .NET Standard Library 2.0.3 which is the latest version. There's no direct option to configure the version, but this is determined by the SDK and build tools.

<TargetFramework>netstandard2.0</TargetFramework>

Configuration - Bleed For Me Mode

ASP.NET Core 2.1 includes a new configuration option that lets you opt in for bleeding edge changes - changes that might break behavior in some edge cases. The SetCompatibilityVersion() call on AddMvc() allows you to opt in to changes that might break backwards compatibility, and you can specify either a specific version that you're willing to go with, or Latest.

// Add framework services
services
    .AddMvc()
    .SetCompatibilityVersion(Microsoft.AspNetCore.Mvc.CompatibilityVersion.Version_2_1);

The default is CompatibilityVersion.Version_2_0.
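If you'd rather always opt into the newest behaviors - with the understanding that behavior may change between releases - you can pass Latest instead. A small sketch:

// Opt into whatever the newest compatibility behaviors are
services
    .AddMvc()
    .SetCompatibilityVersion(Microsoft.AspNetCore.Mvc.CompatibilityVersion.Latest);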

Remove Tooling

One very welcome change in .NET Core 2.1 is that you no longer have to explicitly add development time build tools for dotnet watch, User Secrets or the Entity Framework console commands as these are now built-in and always available.

So if you have a command like this from .NET Core 2.0 or older you can remove it:

<ItemGroup>
  <DotNetCliToolReference Include="Microsoft.DotNet.Watcher.Tools" Version="2.0.0" />
</ItemGroup>

Build Speed Improvements

In testing out .NET Core 2.1 on a couple of additional projects that are quite a bit larger than my sample here, the big thing that stands out is that build performance is much improved at least on Windows. Build speed now seems to be at least on par with full framework, and maybe even a little better. Even full rebuilds which were horribly slow before are now working much faster. Yay!

Updating to Angular 6.0

Angular is keeping to its bi-yearly release schedule and we're now up to Angular 6. The upgrades from version 2 all the way up to 5 have been largely uneventful - most updates were simply a matter of upgrading all the dependencies.

It's been a while since I'd done updates so I actually did a two step update process this time around. I was running a late 4.x version, and initially jumped straight to 6.0 before it was released. However, that did not go well... Because it wasn't released yet, upgrade information was a bit spotty and I didn't realize that there were going to be major changes in Angular 6.0 that would end up breaking my application rather hard.

After spending a couple of hours fighting and not finding decent docs to point me in the right direction I decided to wait until 6.0 was released. Which - in true Mr. Murphy fashion - happened the day after I rolled back to 5.2. Argh.

So the upgrade from 4.x to 5.2.4 was very quick. Moving to 6.0 took a bit longer, but after 6.0 was released there was actually decent documentation.

ng update

One of the big new features in Angular 6.0 is support for ng update in the Angular CLI. ng update updates the Angular CLI configuration (angular.json) and package.json files to bring the core Angular and Angular dev time dependencies up to date.

This worked reasonably well for me but not before I made a few funky mistakes I hope I can help you avoid.

The proper order to do the upgrade is:

  • Remove the global Angular CLI
    npm uninstall -g @angular/cli
  • Remove the local Angular CLI (if installed)
    npm uninstall @angular/cli
  • Install the latest Angular CLI globally
    npm install -g @angular/cli
  • Nuke the node_modules folder
  • Nuke the package-lock.json file (IMPORTANT!)
  • Run ng update
  • npm install to ensure you get latest

package-lock.json was the one that got me - I did everything else, but because I didn't delete the package-lock.json file at just the right time I ended up restoring the wrong versions. It took me a few go-arounds to make sure that the final npm install was pulling all the right files.

Although I had trouble with it, I am really glad to see ng update is available now, because previously I'd go and manually create a new project and compare my old and new package.json and angular.json files and try to synchronize them. With this new functionality a lot of that is handled for me automatically.

rxJs 6.0

Probably the biggest change you need to deal with in Angular 6.0 is the changeover to rxJS 6.0. Recall that rxJS is used for the Observable objects you use for any kind of event handling or HTTP processing.

rxJS has always been a funky implementation and has undergone many changes in how it's referenced - the access APIs keep changing. rxJS 6 introduces yet another backwards incompatible change that requires making syntax changes.

There are two ways you can upgrade your existing rxJS code to rxJS 6.0:

  • Use the rxjs-compat module
  • Make the rxJS changes explicitly

rxjs-compat

ng update doesn't touch your rxJS code or imports in any way, so if you just want your old code to work you can simply add the rxjs-compat module:

  • npm install --save rxjs-compat

That's all that you need - this package basically provides the same structure as the old rxjs syntax did and then provides shims to the new functionality.

The Angular folks have mentioned that rxjs-compat is a temporary fix and that you should try and move your code to the new syntax.

Manual rxjs Updates

The other option is to go through your project and explicitly update to the new syntax.

The reason for the new syntax apparently is:

  • Much simpler imports for operations and operators
  • Improved ability to include just what you need of the rxjs bundle

The former is a big deal and addresses one of the major beefs I've had with this library in that it was always very difficult to figure out exactly what you needed to import and the imports were basically undiscoverable. The new import syntax just has a couple of high level module paths to import from and once imported Code Completion/Intellisense can help with finding the right operators.

So here's what's required

Imports

Imports have been simplified and typically come only from rxjs or rxjs/operators.

Here's the old syntax:

import {Observable}  from 'rxjs/Observable';

import 'rxjs/add/operator/map';
import 'rxjs/add/observable/of';
import 'rxjs/add/operator/catch';
import 'rxjs/add/observable/throw';

And here is the simpler new syntax:

import {Observable, of} from "rxjs";
import {map, catchError} from "rxjs/operators";

The core components like Observable and Subject and some core operations like of are found in the rxjs namespace. All operators can now be retrieved from rxjs/operators - no longer do you have to have individual imports for each operator. Yay!

Note that some operation names have changed to protect the innocent - eh, to avoid naming conflicts with reserved words in JavaScript. So catch becomes catchError, throw becomes throwError and so on.

A more complete list of changes can be found in the rxJS upgrade guide.

.pipe()

In previous versions you used chained operations to call operators. You'd combine operators like .filter() and .map() to string together Observable operations. In rxJS 6.0 this changes to a parameter based syntax using the .pipe() function.

The following is code inside of a service that returns an Observable<Album[]> for an HTTP call to the caller.

Old Syntax:

getAlbums(): Observable<Album[]> {
    return this.httpClient.get<Album[]>(this.config.urls.url("albums"))
        .map(albumList => this.albumList = albumList)
        .catch(new ErrorInfo().parseObservableResponseError);
}

New Syntax:

getAlbums(): Observable<any> {
    return this.httpClient.get<Album[]>(this.config.urls.url("albums"))
            .pipe(
                map(albumList => this.albumList = albumList),
                catchError( new ErrorInfo().parseObservableResponseError)
            );
}

The new syntax replaces chainable functions with explicit function calls passed as parameters to the .pipe() function.

<rant>

Now personally I think this is crazy ugly syntax and a big step back, but according to the rxJS team this is supposed to make it much easier for build tools to optimize the actual bits that are pulled from the full rxJS library, resulting in a much smaller footprint.

My beef with this is that rxJS now basically creates global functions that can pollute the module namespace, and you can see that by the fact that some things needed to be renamed so they could work at all and not conflict with JavaScript functions. You also lose some discovery context at the call level via code completion because there's really no call context as these functions live in the module's namespace.

That said, I can get used to this I suppose, but it still seems like some very odd decisions for a mature library to make to essentially solve a build issue. But alas it is what it is.

</rant>

Migrating to the new syntax is not difficult, as you can see when you compare the two versions. You usually just wrap the .pipe() function around the existing operators, remove the leading dots and separate the operators with commas. Easy, but time consuming.

The good news is in Angular the compiler catches all of these if you don't have rxjs-compat installed so the usage is easy to find and fix. In my smallish application this took about 10 minutes, but obviously in a larger app this will take time.

There's also a rxjs-tslint update you can grab from npm:

npm i -g rxjs-tslint
rxjs-5-to-6-migrate -p src/tsconfig.app.json

and this will try to automatically fix up the imports and change to .pipe() operations or offer them as fixup options in the IDE.

I didn't have a chance to try it as my sample app is small but for larger apps this looks very useful.

Summary

All in all, this two step update process to .NET Core 2.1 and also Angular 6.0 was relatively painless especially since I had let this app sit idle over a number of update releases.

For .NET Core 2.1 the process was a breeze, but the Angular update required a bit of work primarily to deal with the major rxjs changes.

For now the code on Github remains on a non-master branch (NETCORE21) until .NET Core 2.1 is released to RTM, which apparently won't be very long.

Resources

this post created and published with Markdown Monster
© Rick Strahl, West Wind Technologies, 2005-2018
Posted in ASP.NET Core  Angular  rxJs  

Web Code is a solved Problem: How about fixing Web UI next?


If you're a Web developer, you probably have noticed that our industry is thriving on extremely rapid change. You step away from the Web world for a month and you come back and there are 20 new things you need to look at. The pace of change is exhilarating and frustrating both at the same time.

But these days most of the focus in Web development is on code - JavaScript code in particular. By comparison, the Web UI - HTML and CSS and the browser DOM and support features - feels like it has been stuck in the mud and stagnating for a long time. We now have all the advanced coding tools to do cool stuff, but it seems that HTML and the Web Browser's feature set are really what is holding us back.

Code Über Alles

Most of the focus in Web Development in recent years has been on the code side of things: JavaScript Frameworks, ever more complex build systems along with the tools that facilitate creating code efficiently have gotten all the attention. Huge improvements have been made in this space in the span of just a few short years, and it's now possible for most developers to build even very large applications in a modular and manageable fashion without having to piece together all the needed pieces themselves. There are established solutions for building Web applications from the code perspective and it's relatively approachable and consistent.

It's not so long ago that this really wasn't the case when you had to piece together your own - mostly inadequate - frameworks and hoped for the best. Today we have established frameworks and at least some general best practice guidelines for the average developer to build applications using modular development techniques.

Client side JavaScript code development is mostly a solved problem that isn't holding us back anymore.

That isn't to say that we're done - there's lots that can be improved in the development process, the frameworks, and especially the build processes involved, which has spiraled out of control in complexity and size. But overall, the tools and processes to build Web applications efficiently are available and readily accessible by developers of all skill levels.

Web Assembly - A Promise for Change?

As a result of the complexity of the JavaScript build tool chain, there's also been a movement to bring some language diversity to Web development with a push to dethrone JavaScript as the only language that can play in Web dev by way of WebAssembly.

WebAssembly promises to bring alternate languages to client side Web development by providing a low level execution layer that compilers and language services can produce output for, effectively bypassing JavaScript as the execution engine. WebAssembly might help break the choke hold of the crazy complex JavaScript build systems based on a million untraceable Node modules, into more traditional compiler based approaches that produce single source output directly and offer some choices for client side code.

Whether this pans out, or is any easier and less complex, remains to be seen, but considerable effort has been invested in WebAssembly as a technology by the big language players in the industry. This is something way overdue. While JavaScript has matured drastically and has become much more flexible with the ES 2015 and later releases, it's still a good idea for the Web as a platform to have choices in language usage, just as you do with other platforms.

A mono-culture is never a good idea for innovation. Web Assembly brings a glimmer of hope that the mono-culture days of JavaScript are numbered.

Coding for the Web is a Solved Problem

Putting WebAssembly aside for a moment, the progress that has been made for code based solutions using JavaScript has been nothing short of amazing. Today you have a choice of a large array of frameworks like Angular, React, VueJs, Ember, Aurelia and many more to build applications in a consistent manner. The process of doing so once you get to a working build setup is relatively easy, consistent and maintainable.

Building JavaScript applications not so long ago used to be a real Wild West experience. But today, by way of frameworks and consistent approaches to modularizing applications, the process of building complex applications is much easier and more consistent. With ES 2015 modules it's now even possible to get by without frameworks, although I'd argue that for bigger applications frameworks provide so much support you'd be crazy not to take advantage of their core features.

While there surely are more areas that can be improved, overall the code bit of Web application is a mostly solved problem.

Let's talk about HTML and Web Browser

Ask most Web developers about the biggest pain point in Web applications today, and they will likely tell you that Web UI is their biggest time sink. I know this is true for me. To get an application to look right and professional, to get the common input controls that most applications need, to be able to customize or create custom controls beyond the basics, to create a well rounded responsive UI - none of that just arrives as part of a single framework or tool set.

And it sure as hell does not come in the box via raw HTML and CSS.

HTML and DOM: The ugly StepChild

In my mind the weak point in Web development now is HTML, CSS and the DOM, which are not keeping up with the change and ambitions we are seeing in the code space. Compared to the rapid advances we've seen in the JavaScript world, HTML, CSS and the DOM are stuck in the mud. HTML still seems like it's 90's technology, so far behind the advanced features of everything else around it.

HTML5 was a LOOOOONG time ago

Many years ago HTML5 arrived, and it was supposed to be the panacea that would deliver us the rich platform that would finally banish other platforms. HTML for everything, it was said. A new era of rapid improvements, new APIs etc. was upon us. Soon we would be able to build mobile apps, talk to native APIs and find unicorns and rainbows that lead us to that pot of gold...

Cue the record scratching off the turn-table...

Scratch That

HTML5 did bring a number of much needed improvements to the pitiful HTML4/XHTML standard: semantic HTML tags and a handful of much needed DOM APIs including Geolocation, various local storage solutions, and maybe most importantly a consistent model respected by all browsers - including, at the time, Microsoft's browsers, Internet Explorer 10 and 11 in particular.

HTML5 improved Web development dramatically with more consistency and explicit rules for browser vendors to follow. But it wasn't exactly an earth shaking change or advancement of HTML. The biggest feature boosts came from CSS3 improvements with many much needed new CSS attributes. But again - most of those had been in most browsers (except IE) for years, so when all of this finally landed it was kind of a ho hum moment.

Around the same time HTML5 finally was ratified there were also a ton of new proposals for new integrations especially related around mobile device features. The future looked bright...

And then... Crickets!

This is especially troubling with all this talk about Progressive Web Apps (PWAs) providing more app like features. While PWAs have a whole new set of features that are aimed at making sure that network (via Service Workers) and home screen features can be managed better, there's little else to actually support better platform integration.

To make PWAs a more realistic use case for replacing native applications much needs to happen to improve browser integration with host platforms.

HTML What have you done for me since 5.0?

Now we're 8ish years past HTML5 adoption. Look at the original HTML5 specification from ~8 years ago and honestly think about what's changed since then. Not much...

What can you think of?

Here are a few I can think of for UI (not anything JavaScript or code related):

  • Flexbox
  • CSS Grid
  • Navigation and History Improvements

Crickets? Yes?

Heck, we still have only the same basic 8 input controls HTML started out with 20+ years ago. Now there's progress for 'ya.

Note I'm deliberately excluding big non-UI enhancements that are major components but don't directly affect UI:

  • Service Worker (not directly UI related though)
  • Web Assembly
  • ES2015/2016

With the minimal UI improvements in HTML, think about where we've gone with JavaScript and in the browser code space in general in the last 8 years by comparison.

Where have HTML, CSS and the DOM gone by comparison? Practically nowhere. The same issues we had 8 years ago when building applications that needed to interact with host OSs/platforms are no better today. You still can't build a decent mobile application with just a Web browser that interacts with native phone hardware or software APIs. Doing something as simple as accessing your contacts, or sending an SMS (all with permissions of course), is still impossible. Controlling the camera or microphone beyond the very basics is still not possible, any more than it was back then. You still can't effectively save files without popping up a Save As prompt box every single freaking time, even if re-saving a file that you previously opened or saved. Camera and audio access is absolutely minimal and barely supported on some browsers.

whatwebcando.today shows an overview of various specs supported by various browsers, and while there are a lot of features listed, many of them are not widely enough adopted to be used for general purpose Web access.

There's a lot of red on this feature list and even some of the checked off items don't work in all browsers (here in Firefox):

and it's worse if you do this on mobile phones. Check out that list above on an iOS device and look at the sea of red.

But...

Just wait another 2 years and everything will be awesome!

Bluetooth and USB support? Yeah right. Those are nice experiments, but don't expect those to be usable in all browsers in the next 5 years in general purpose applications. It's the same with many other APIs shown above. There are breathless articles about how wonderful this or that new experimental feature is, except, well... you can't use it because no browser actually supports it. Most of those experimental 'APIs' have been around in spec form for years and in experimental mode behind developer flags in some browsers. But released as a widely adopted standard? Not anytime soon.

And so we wait. Specifications are available and have been proposed years ago. And they sit and sit and languish.

Change is possible: GeoLocation

Not surprisingly rapid change is possible when there is commercial interest. It happened with the browser GeoLocation API, which got into browsers very rapidly and was also ratified relatively quickly. Browser vendors had a vested interest in that technology (Google and Microsoft both have map solutions to sell and push advertising on) and so it got pushed hard and was adopted very quickly in all browsers.

GeoLocation is also a good example of how security can and should work in terms of asking for permissions in the browser, caching permissions for some time without re-prompting for a given time. Geo location just works and demonstrates that it's possible to integrate with security sensitive features without being totally obnoxious and making the tech unusable due to security limitations.

Finally GeoLocation has also been one of the first APIs that required use of SSL/TLS for all API access and so has become a driver for moving forward towards a secure Web where most if not all (eventually) traffic runs over a secure connection.

In other words: Where there is a will there's a way and GeoLocation was one of those cases where there was a lot of will. Unfortunately, the rest of the Web APIs under review and in proposal status seem to get none of that same love.

Don't we have everything we need?

When it comes to HTML, I often hear - "we have everything we need in HTML" so it doesn't need to change rapidly.

Really? There are so many shortcomings and hacks required to make even some of the most basic design features work in HTML today. If you've been doing HTML development for a while you may just have forgotten how kludgey and funky a lot of HTML behavior is, especially when it comes to more complex interactive or input components. Yes there are (often elaborate) workarounds, and if you've been doing it for a while those hacks are second nature. But workarounds are a sign of a platform problem, not a badge of honor for knowing them.

There are a lot of other small enhancements in CSS but most of those are also still a ways away because... browser support, even in evergreen browsers, is not there yet. Microsoft lags behind as usual, waiting for official ratification which as usual is going at a snail's pace. Chicken and egg.

Just wait another 2 years and everything will be awesome!

Doing Awesome Things? Yes, but at what Cost?

Now I realize all of this is not keeping people from doing awesome stuff on the Web. Many applications may never need integration with mobile or native features. A typical data over forms application may not need fancy UI interactions. And that's great.

Maybe you don't need local access to a folder to save a file without constantly having to throw up a dialog. Maybe you don't need access to the mobile phone's address book, or the SMS app to send (after validating access) legit messages out of your app. Or maybe the applications that you build don't require anything beyond using a UI framework like Bootstrap, Material Design or something more app-like such as Kendo UI or DevExtreme etc.

But if we really want the Web to become the platform that takes over the desktop or mobile platforms and displace native applications, there's more to life than corporate forms over data applications. To realize the real promise of the Web and for things like PWA to become a real force as an application platform - those things have to be there.

If we plan on using the browser as an application platform that can address the needs of modern applications, why do we continue to hobble it by not advancing the core feature set - both the UI abstractions that address common use cases and the integration that would provide access to the same native features that native applications enjoy?

Reinventing the Wheel over and over again

The reality is that building professional looking applications is very hard for the average developer because there's no consistent path for building Web UIs. Even if you choose one of the big frameworks, that only gets you so far. You'll have to customize and hack your way around to fill in the gaps that a framework does not fill - which is usually quite complex due to the scattered dependencies and nested CSS styling nightmares most of these frameworks impose.

Again please understand that I'm not saying that you can't build applications with good UI, but I'm saying that the UI creation process is so fragmented that it's often difficult to make an educated choice of what tools to use or even whether to use a library or build your own.

There's a shit-ton of wasted effort reinventing the wheel over and over by individual developers. Reusability for UI on the Web is deplorable.

I've always been a big advocate of Web technology. Most of the paid work I do revolves around Web technology. If I have a choice I much rather build applications for the Web than a native desktop or mobile app. But the longer I sit here looking at where HTML is going the more I gnash my teeth and think to myself: Why the heck is this not ever getting better?

I've been feeling extremely frustrated with the Web space because on almost every new project I find myself in this place of not having a straight go-to answer on what tools to use to start a new application with. There are lots and lots of choices out there - but most of them have lots and lots of holes in them that need to be filled with time consuming busy work of reinventing that wheel.

I am getting frustrated waiting, and hearing the just give it another 2 years and then things will be awesome mantra. Because it never actually arrives.

Failure of Imagination: HTML

If you're a Web developer, raise your hand if you have ever struggled with putting together a UI feature that seems relatively simple without immediately going to a UI framework. Maybe you needed a custom formatted list box, a masked input box, a dynamic tree, or something as simple as an editable 'combo box'. Without a framework there are no good built-in ways to make that happen.

If you build typical Web applications for business customers like me, you probably use a UI framework like Bootstrap or Material Design, or a UI toolkit like Kendo UI, Wijmo etc. that provides you a base set of features and 'controls' (air quotes that!). I rarely have the privilege of working with a 'designer', so the task of dealing with Web UI falls on developers, and Web frameworks are usually the baseline to start from.

But when that base framework doesn't have what you need, which happens regularly to me, you have to roll up your sleeves and start building a custom control from low level DOM infrastructure or using proprietary, and usually non-trivial and unintuitive, framework abstractions. The fact that customization can be difficult is not really a vendor issue; it's more that each framework has a completely different set of implementation details, so if you've built a customized component for one framework you can't use it anywhere else, nor can you just port over the logic to create that component anywhere else.

Because there's essentially no component platform in the DOM - only a bunch of HTML controls that have been there since the mid-nineties when the first Web browsers were created - 'controls' are built through simulation using other HTML elements and those base controls. To build a serviceable combobox you draw boxes around an input control to simulate an input box. To display a drop down list you manually draw a box, position it at the mouse cursor, and hope the algorithm is correct. And that it handles the browser edge (ha ha) cases.

The basic building blocks of HTML controls are just not there to provide for more complex controls in a more consistent fashion.

The biggest shortcomings in HTML is the lack of forward movement in a few areas:

  • Input controls. We basically still have the same 12 input controls HTML 1 had
  • Integration with the browser host OS/Platform

Input controls

The first and maybe biggest failing of HTML is that it has a pitiful set of input controls.

  1. input
  2. textarea
  3. file (upload)
  4. checkbox
  5. radio
  6. radiogroup
  7. button
  8. select
  9. datalist

The <input> tag has quite a few additional variations for things like inputs for password, date and datetime, number and so on, although most of these are widely shunned because not all are supported on all browsers and because the more complex ones like the DatePicker are implemented in absolutely terrible ways and are not stylable.

The only even remotely complex control in native HTML is the <select> list control which is used for Listboxes and Dropdowns. This control is notoriously un-stylable. The API for selection handling in the list controls is so primitive it doesn't even track or allow setting the selected item(s) directly - you have to traverse the DOM children to find or mark selected items explicitly.

No Complex Input Controls

There are no other complex input controls. There's no combobox that you can type into. There is no autocomplete control, no (usable) date picker, no editable grid or heck even a scrollable readonly grid. The List control that is available can barely be styled, so much so that most frameworks simply discard the native controls and use HTML primitives to redraw lists completely and then set values in a hidden input or list control. How silly is that in this day and age?

Not only is the set of input controls really small, but these controls have almost no functionality associated with them.

Instead we keep re-inventing the wheel over and over again for each application.

Extensibility for the Web

The other issue is that integration with native features is slow in coming, and at this point it really feels like it may never actually happen. We often hear that extensibility is difficult to implement for browsers because Web applications run across many browsers and OS platforms, and implementations have to be built for each platform.

But we also know that tools like Cordova have existed for many moons now and have made it possible to build extensions and integrations into native features. Why couldn't that sort of extensibility be built right into the browser platform itself? This alone would potentially open a whole slew of features and drive a boat load of innovation.

Extensibility is a feature!

Security is obviously a concern with any sort of extensibility, but that's something that browsers will have to address one way or another anyway going forward. We grant our native mobile platforms a lot of rights and yet we are supposed to feel secure when we install apps from an 'app store'.

Whatever the native security model is, it can also be used for Web applications. Restricting features behind permission prompts is the first line. If it takes some sort of registry with checks and a review process, then so be it. But to just dismiss extensibility as something that isn't going to happen stifles so many possibilities that could help drive innovation.

Again - you don't hear anything about extensibility because it is outright dismissed. Should it be? Security is hard, but it's not an unsolvable problem.

HTML Layout

To this day HTML has had a plethora of different layout engines, none of which have made the page level layouts most sites are built from even reasonably intuitive. In the old days HTML tables were the only way, then floats and fixed or absolutely positioned elements came into vogue. More recently there's been Flexbox, which I've spent quite a bit of time with in the last few years. Flexbox works reasonably well, but its funky syntax and inconsistent layout concepts and language have made that technology an uphill battle - as has the fact that broad browser support took 6 years to come to fruition, just as the next new thing was showing up on the radar.

That next new thing in layout is CSS Grid. With CSS Grid it finally looks like there will be a standard that actually addresses most of the concerns around layout. It inherits most of the behavioral concepts of Flexbox (with better syntax) plus templated and named layout sections that can be rearranged via CSS, typically in media queries. CSS Grid is really what Flexbox should have been.

What's frustrating here is that Flexbox and CSS Grid happened nearly side by side, with Flexbox reaching broader adoption first. And now... well, you'll want to throw out your Flexbox code and use CSS Grid instead. It took an extremely long time before Flexbox was usable on mainstream Web sites due to browser support, and now it looks like everything is pivoting to CSS Grid. In the meantime frameworks were updated to use Flexbox and will have to be updated again to use CSS Grid.

Just wait another 2 years and everything will be awesome.

The good news is Flexbox has pretty good browser support now. CSS Grid works with all major evergreen browsers, but Internet Explorer - which sadly still has significant browser share - does not support it.

All of this points to how bad the W3C 'design by committee' process is at bringing new features to the browser. Not only is it slow, it also produces nearly duplicated features that, for the casual observer, are not even easy to differentiate at a glance. It would have been nice to have CSS Grid and Flexbox as a single spec since there is so much feature overlap, but no - two separate but similar standards, just to make things more difficult to decide between.

Common Components

If you've ever worked with any other type of UI framework you immediately realize that HTML has a tiny API for controlling rendering and interacting with its input controls. Input controls are one thing, but even beyond that, common features are simply not provided by HTML. There's no support for menus or even popups. There's no high level list control that can be customized with custom templates to render specialized content. Need a tree display? You're on your own.

Granted, all of this can be done by hand. Building a tree display is not that much more work manually than it is with a native control - except when you also want to edit those elements, track selection, and add the slew of features that are expected to just work in a tree. It can be done, but building controls that behave the way you expect them to is a lot of work. Work that usually has little to do with the business goals of the applications developers are actually building.

As a result of this you either have to build your own set of 'controls' (which is possible but it takes time) or you might decide to use a UI framework of some sort.

Building your own is no trivial task, because there's no underlying API. We all know what happens when people just 'build their own' - all we have to do is look at the haphazard world of jQuery components. There are jQuery components for just about any type of functionality, but man oh man, each and every one of these components looks different, uses its own set of styling rules, and nothing really fits together. You might want to use a component, but it doesn't play nice with the framework you're trying to plug it into.

So much time is wasted because there's no coherent standard, just a very limited API that provides no constraints and essentially encourages everyone to do something different!

UI Frameworks

To fill this need of even the most basic UI constructs most Web developers use some sort of UI framework. Whether it's Bootstrap, Material Design, Ionic, or more component based libraries like KendoUI, Wijmo and so on.

UI frameworks fill that basic need, but in many cases they also fall short of providing complete solutions. I use Bootstrap for most apps I work with, and it has a fairly minimal set of UI 'controls'. I typically add some custom styling or use a purchased theme (which in most cases is an absolute nightmare to customize and barely deserves the title of 'theme').

Once you get beyond the basic controls of the framework, however, the hunt is on to find a separate control that provides that functionality for said framework.

How many times can I build or try to find a Bootstrap date picker, autocomplete control or grid that works with each version? What if the next project uses Material Design? I get to start all over and re-integrate. Each and every time you end up searching out or building a new component and trying to integrate it into the framework of choice.

And it's not like there's any standardization amongst frameworks. For example, Bootstrap is easy to use, but extending its styling in order to integrate custom controls that fit the UI style is no walk in the park. If you end up building your own, you're likely to spend a day getting the details right. And even if you find an open source component that fits the bill, third party integrations tend to be finicky and often prone to small little bugs, because they usually tackle more ambitious controls than the simple ones packaged in the main framework in the first place.

No doubt, UI frameworks make life easier but they are not an end-all solution to the more fundamental problem, which is that HTML lacks the core design essentials to build extensible components in a meaningful and consistent way.

Web Components

Ah yes, Web Components: the mythical solution to all of the Web's problems for the last 7 years.

Web Components sure sound promising. Always have since there was the initial discussion in the early 2011 timeframe.

Much like application frameworks such as Angular and React, Web Components are meant to create small self-contained islands of UX and functionality that can be reused. Web Components are more low level and focus on raw DOM interactions for things like input or display controls, where JavaScript frameworks focus more on high level application components that contain app logic.

Web Components provide an isolated DOM space (Shadow DOM) in which controls can live so that they are optionally not affected by CSS rules and transformations from the host page except for explicitly pulled in styling and page logic. The idea is that you can build components that will behave consistently no matter where you drop them into a page or which framework you use.
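As a rough idea of what that authoring model looks like, here's a minimal custom element sketch (the element name and markup are made up for illustration):

// A minimal Web Component with an isolated Shadow DOM
class FancyLabel extends HTMLElement {
    constructor() {
        super();
        // Styles inside the shadow root don't leak in or out of the host page
        const shadow = this.attachShadow({ mode: 'open' });
        shadow.innerHTML = `
            <style>span { font-weight: bold; color: steelblue; }</style>
            <span><slot></slot></span>`;
    }
}
customElements.define('fancy-label', FancyLabel);

// Usage in markup: <fancy-label>Hello Web Components</fancy-label>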

That sounds great, and it might solve the problems I've described so far by providing a richer baseline of base controls that can be reused. Even so, Web Components sure could benefit from a larger set of built-in base controls to start from.

But...

Just wait another 2 years and everything will be awesome!

The way things stand only Chrome has support for all the required WebComponents sub-features. Most other browsers are a sea of red:


Figure - WebComponents all sub-features browser support summary

Again, I have to ask: we know Web Components are a vital feature that everybody wants and that the Web needs to move forward. All camps have been advocating Web Components for years. So why the heck are we 7 years into this technology without even a plausible adoption date in sight?

Custom Drawn Controls

As a result of this minimal feature set, HTML relies on custom controls that aren't really controls at all, but a bunch of HTML elements laid out in composition around the limited controls above, meant to mimic the real controls that live in a native operating system.

I can just hear the cries now:

But, but... HTML - I can draw whatever I want with HTML, right? Right?

Sure given an unlimited amount of time I'm sure you can hand code any custom control you like, but the fact is most of us don't have that time. And we especially don't have it when we are switching to the UI and JavaScript framework du jour on the first full moon of every new year.

If you need custom input controls you're also still confined by the extremely limited feature set of the few native input controls available.

How many times have you hunted around for a date picker control that works with jQuery UI, then jQuery Mobile, then Bootstrap, and then with your custom framework? The horror of it all is that even if you find something that works, it usually only works in the context of the framework it was designed for. Throw it into a different UI context and the whole shebang no longer looks right, or worse, no longer works.

So maybe you're one of the 'no frameworks' folks who builds everything by hand. That's awesome and I really admire it if you're sticking with it (most don't), because I have done that in years past. But at some point I realized that maintaining my own Web framework is just too damn difficult for a single developer or even a small team to keep up with and manage. And in the end it's a tough sell to the clients you do work for, who generally want a more standardized solution they can find developers for.

Building UI components that display content is perhaps a reasonable endeavor. HTML is infinitely flexible with display layout but it absolutely sucks when it comes to input behavior because there are no behavior standards at all.

When you account for behavior you quickly realize how complex it is to build even a reasonably simple control and make it behave the way a control is expected to behave. For example, think of implementing selection behavior on a hand drawn list control, or handling expansion in a hand drawn tree control - these are not trivial implementation details. You need to account for mouse and keyboard behavior, hover and selection state, input searching, multi-select, accessibility, localization... and the list goes on.

Developers who specialize in control development know all of these details, but the average application dev usually never gives these things a second thought - until they try to implement them themselves.

Building user friendly input controls that have common behavior, look professional, support accessibility standards, support OS shortcuts and behaviors is hard.

Building re-usable controls is hard and it's not something that application developers should have to do.

Yet with HTML, application developers are often forced to do just that, because there are no decent built-in alternatives and no ready made components available that match the framework they happen to be working with.

What's missing in HTML is the underlying support platform - that common object model that provides the core semantics upon which you can then reasonably build new components in a consistent manner.

The unfulfilled lure of Third Parties

"Aha", I hear you say. "Why don't use a third party control, or control framework?"

There are powerful third party frameworks available from the big component vendors, from smaller vendors, and even some free ones. Frameworks like Kendo UI, Wijmo and DevExtreme provide a huge set of controls. But these frameworks tend to be rather expensive, with often complex licensing schemes and maintenance contracts, and if you do go that route you are really buying into a specific framework's look and feel.

If the framework can serve all of your needs - that's great. But as many controls as these frameworks contain, you may still run into some feature that's not there. When you need something above and beyond, your task now is to match the look, feel and behavior of that framework, which complicates custom control creation even more.

Additionally, these frameworks implement their own object models that are often very complex to extend. While usage of frameworks is often well documented, extending them usually is not.

I also think that the extreme pricing on some of these frameworks is due to the sheer economics of competing with... free. The component vendors once enjoyed wide adoption of their frameworks at more reasonable prices. Now they are fighting against the tide of free open source frameworks (like Bootstrap) that often have far fewer features and provide mediocre functionality. The only way these companies can stay afloat is by charging an arm and a leg to make up for the enormous development cost, and by sticking that cost to Enterprise customers with deep pockets. The little guy is pretty much priced out of the market for most of these frameworks.

This is a nasty devaluation side effect of OSS that has driven out the middle market - you now see either free (and often mediocre) or high end expensive components. There's little middle ground.

Does it have to be this way?

I know I'm dating myself, but I come from a background of Windows desktop development, long before there even was 'Web development'. Say what you will about desktop development (or native development for devices these days), but when it comes to providing consistent APIs and tooling that make it easier to build sophisticated UIs, native apps do a much better job.

HTML is not like the desktop so it can't be expected to behave the same, but compared to desktop applications and APIs HTML is just very, very sparse. Some will take that as a positive, but I'm pretty sure that many are realizing that this lack of an underlying platform architecture causes a lot of the friction I describe above in the development process.

It sucks to have to continually re-invent the same thing over and over again when you have no real baseline on which to build a custom implementation. You can't extend functionality that isn't there in the first place, so you often have to literally build from scratch.

Desktop Apps: APIs are what made them productive

I got my start in FoxPro, worked some in VB6 and MFC/C++ and then worked in .NET WinForms and WPF which I still use on occasion to this day for things that simply work better on the desktop - mostly tools or applications that need to interface with hardware.

When I think back on those days, one thing that stands out is how easy and fast it was to develop functional applications, thanks to a plethora of pre-made components and easy to use visual tools that allowed quick placement and visualization of content. And maybe more importantly, a well defined underlying UI API that supported creation of common controls addressing the most common use cases.

HTML based UI development is lacking in all of these areas.

In these desktop environments you had lots of native controls built into the base framework, along with extensive tooling that allowed for a rich design time experience. Controls like date pickers, comboboxes, grids, tree viewers, auto-selectors, menus, context menus, popups, and even more specialized things like masked input filters, validators and scroll viewers were just there, because they are part and parcel of the platform.

But even more importantly, these UI frameworks came with something that HTML can only dream about: an actual well-defined and extensible object model that let you extend existing controls or create new controls of your own relatively easily. And because it was relatively easy to build on the base components, a thriving third party control market existed, with tons of controls available both for pay and for free (remember, this was long before OSS and free became the norm).

None of that exists in HTML today. In HTML the only model you have is basically extend by composition of the limited base controls that you have.

Having a base set of components provides a more solid baseline for building applications without having to run out and build or find a third party component every time you need even a slightly complex control.

I often hear arguments that HTML is different than desktop because HTML layout is very fluid and that's why the model has to stay lean.

I don't really buy that argument. WPF on Windows also uses a compositional layout model and it's quite capable of supporting a rich component API along with a base set of controls. I'm not a huge fan of WPF and XAML, but it is a good example of what is possible in terms of a rich API that works as a compositional layout engine, provides the core needed for extensibility, and still ships a lot of built-in components.

There's no reason that HTML can't do something similar.

HTML ain't the Desktop, Dude!

Lest you think I'm advocating building desktop applications: Not at all - I'm a Web developer at heart and I've built Web applications for well over 20 years now. I love what the Web stands for in terms of rapid deployment and making things publicly accessible without having to manage 'installations'. These days hot reloading and live building also make the development flow very smooth and yes I wish it could be that smooth for desktop apps as well (some parts of WPF support something similar).

But, I would like Web UI and DOM to move forward more rapidly and actually provide new functionality that seems appropriate for the types of applications that we are building today.

The current form of HTML/CSS feels like it's built for the platform and applications we had 10 years ago.

I also build the occasional desktop application and in fact have spent a lot of time over the last year and a half building Markdown Monster in WPF on the side, so I've been working in both desktop and Web applications. I invariably think to myself, "Why can't I do <insert feature here> on the Web?" (and also vice versa). The thing that sticks with me when doing desktop work is that if something isn't built in, it's usually easy to build something that works with relatively minimal effort.

With HTML I dread hitting the point in any application where I need a component or UI bit that isn't built in, because it usually means I'll go on a treasure hunt for something that probably isn't going to solve my problem completely. Alternately I end up building something from scratch. Either way, I'm bound to lose a shit-ton of time doing work that has nothing to do with my problem domain. I don't mind building stuff - that's what we do - but doing it so often and with such a limited baseline is what gnaws on me.

Shouldn't the 'Web Platform' have built-in support for 'platform' features so that extensibility isn't something that I have to dread?

Ruffling Feathers

My goal is to ruffle some feathers and get people thinking about the future of HTML and the Web as a platform and where we want it to go. If we keep up the current pace of the last 8 years or so, we'll continue to do the old dance:

Just wait another 2 years and everything will be awesome!

Do we really want to keep doing that? There are always promises and more promises, but things just never seem to move forward...

If you are doing Web development, you can probably relate to at least a few of the pain points I'm pointing out here. Yet - it's very rare to hear people voice their concerns about these issues.

Most of the interesting and forward thinking discussion around Web development these days is about JavaScript, not about the UI or the browser as a platform that integrates with mobile devices or desktop platforms. For all the talk of the browser 'winning', very little is happening to actually drive the browser into those places where it's 'not winning'.

I think that's a problem. It's clear that the Web is here to stay as the main platform for application development. But it seems that most Web developers have given up caring about what the UI platform looks like going forward. The prevailing attitude seems to center around put up and shut up. I'm sure there will be plenty of comments to that effect on this post.

But do you really want this to continue indefinitely:

Just wait another 2 years and everything will be awesome!

I don't think that's healthy. If there are known pain points, they should be out in the open and should be discussed. Change in this space is going to be slow no matter what, but it starts with some discussion of what is needed to drive the platform forward, and frankly I don't see much of that.

I realize I'm making a request without being qualified to effect change myself. But I think at the very least we need to have this discussion more often:

Where are we going with HTML and Web technologies? Right now it really doesn't seem clear where they are headed.

Crank it up

Am I giving up on Web Development? Of course not. And for now I also have to put up and shut up and continue to use what's available, because that's what I need to do to get the job done.

But that's not what this post is about - it's not about saying Web development sucks, but realizing that it could be so much better and hopefully getting a few people to think along the same lines and post their thoughts in their own blogs or online discussions elsewhere.

The last thing I want to see is us going back to native development as the first choice for application development. The Web has always felt like the future of application development to me, and I believe it will go all the way in the end. I believe in the Web as a platform and I want it to stay the dominant platform.

I just would like to do more with it.

I want to see improvements that make it easier, more consistent, and more integrated, so we can tackle the things for which you traditionally still need native applications - either because the browser's security UI gets in the way, or because features that exist on the device simply aren't accessible to generic (or even specific) browser APIs. To write all of that off because of security is short sighted. Security is important, but if that's what's holding improvements back, we need to figure out ways to make security work for us, not against us.

My reason for writing this post is that I'm frustrated with the state of client side Web UI development, and I'm voicing my concerns in the hope that it might spark some discussion.

I know I am not the only one, because I hear similar complaints from others. To be fair, I often work with developer customers and clients who are more of the "just get 'er done" type rather than the kind who keep up with the latest fad du jour.

But maybe I'm just in an echo chamber and I need a reality check.

So let's hear it - do you share some of these concerns or do you feel the Web as it is is just doing fine? Leave a comment.

© Rick Strahl, West Wind Technologies, 2005-2018
Posted in HTML  

Which .NET Core Runtime Download do you need?


.NET Core has a number of different runtime downloads that you can grab to install the runtimes and the SDK. It's not immediately obvious what you need, and since I just went through this myself and had a discussion with a few folks at Microsoft (thanks @DamianEdwards and @RowanMiller), I thought I'd summarize it, if for nothing else than my own reference in the future, since I seem to forget what I figured out for the last release 😃.

Checking what's installed

The first thing you should probably know is what versions of the runtime and SDKs you have installed, if any. The easiest way to do this is to run the following from a command prompt:

dotnet --info

If that doesn't work and you get an error, it means that .NET Core is not installed at all. dotnet.exe installs as part of a runtime install and puts itself on the path so you should be able to do dotnet --info if it is installed.

dotnet.exe installs with a runtime install, but it only provides the core features needed to run an application and report info about the install: dotnet mydll.dll and dotnet --info. To build, publish or do anything else you need to install the SDK.
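To make that split concrete, here's a minimal sketch (the application dll name is just a placeholder):

# works with just the .NET Core Runtime installed
dotnet --info
dotnet MyApp.dll

# these require the SDK
dotnet restore
dotnet build
dotnet publish -c Release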

If .NET Core is installed dotnet --info produces the following output (here with the SDK installed):
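Roughly, it looks like this (abbreviated; the version numbers and install paths shown here are illustrative and will differ on your machine):

.NET Core SDK (reflecting any global.json):
 Version:   2.1.300

Runtime Environment:
 OS Name:     Windows
 OS Version:  10.0.17134
 RID:         win10-x64

.NET Core SDKs installed:
  2.1.300 [C:\Program Files\dotnet\sdk]

.NET Core runtimes installed:
  Microsoft.AspNetCore.All 2.1.0 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
  Microsoft.AspNetCore.App 2.1.0 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
  Microsoft.NETCore.App 2.1.0 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]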

The output tells you:

  • The installed SDK version
  • The active runtime version that's running this dotnet command
  • A list of all installed runtimes and SDKs

It's important to understand that you can have multiple runtimes and multiple SDKs installed and each project can use a different one. The runtime is determined by your project's runtime specifier in the .csproj file:

<TargetFramework>netcoreapp2.1</TargetFramework>

The SDK used is the latest globally installed SDK by default, or you can explicitly override it with a global.json placed in the solution root folder. The following explicitly forces my project to use the last RC SDK instead of the RTM version:

{"sdk": {"version": "2.1.300-rc.31211"
  }
}

Generally, there should be no need to pin to a specific lower SDK version, as the SDK is backwards compatible and can compile .NET Core applications all the way back to v1.0. IOW, it's OK to use the latest SDK in almost all cases.

Downloadable Runtimes available

Let's get back to the downloadable installs. There are a number of different things you can download that install .NET Core:

  • .NET Core Runtime
  • .NET Core SDK
  • .NET Core Hosting Bundle
  • Visual Studio

Visual Studio

If you're on Windows you're very likely to be using Visual Studio, and if you have the latest version of Visual Studio installed you likely already have the latest SDK and runtime, as well as the required IIS hosting components.

If you're using Visual Studio you typically only need to update the components below if you need to target a specific version of .NET Core that is not already installed. If you're doing active development, the most likely scenario is that you'll be upgrading to the latest version anyway which is most likely going to match what Visual Studio installed.

.NET Core SDK - Install for a Dev Machine

The SDK is meant for non-Visual Studio build and management tasks. That's for command line use or if you're not on Windows specifically. The SDK basically provides what you need for a development setup to build and run .NET Core and all dependencies. The SDK is the largest download and it contains everything you need for a given platform.

Effectively it installs the dotnet.exe build tools along with support components. The SDK also installs a fixed version of the .NET Runtime with it which is required to run the SDK tooling. In other words if you download the latest SDK you typically also get the latest runtimes and you don't have to install the matched runtimes separately.

The versions are .NET Core SDK 2.1.300 and .NET Runtime 2.1.0 as shown in the figure above.

Here's what you see after a clean install of the .NET SDK:

What it contains

  • Specific version of the .NET Core Runtime (ie. 2.1.0)
  • ASP.NET Runtime Packages (Microsoft.AspNetCore.App/All)
  • Specific version of the .NET Core SDK Tools (ie. 2.1.300)
  • The IIS Hosting Components on Windows
  • Platform specific install (ie. x64)

When to install

  • On development machines (all you need typically)
  • On a server or container where you need to run dotnet build/publish or other SDK commands
  • On a server if the server builds the application

.NET Core Runtimes

The .NET Core Runtimes are the smallest self-contained and specific component and contain the absolute minimum to run just .NET Core on a specific platform.

Note that a runtime install does not include the ASP.NET Core meta package runtime dependencies, so if your application references Microsoft.AspNetCore.App or Microsoft.AspNetCore.All you have to separately download the ASP.NET Core package. However, if you explicitly reference all ASP.NET Core NuGet packages rather than using the meta packages, those packages are deployed as part of your application and it can run with just the runtime.

Essentially you are trading installation package size vs. a runtime pre-install requirement.
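To illustrate the two approaches, here's roughly what they look like in the .csproj (the package names and versions are only examples):

<!-- Meta package: relies on the shared ASP.NET Core runtime being
     installed on the target machine -->
<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.App" />
</ItemGroup>

<!-- Explicit references: these packages are published with the app, so the
     plain .NET Core Runtime is enough (versions are illustrative) -->
<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.1.0" />
  <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="2.1.0" />
</ItemGroup>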

The runtime alone gives dotnet.exe no support beyond running an application and showing install info, so you can't build or publish - whatever you use the runtime for has to be completely pre-compiled and able to run as is.

Here's what you see after a clean install:

Note that with just the Runtimes, running an ASP.NET Core application fails...

What it contains

  • Specific Runtime for the given platform (ie 2.1.0 for x64)
  • Does not include the ASP.NET Runtimes!

When to use

  • For production installs that include all dependencies
  • For installs that do not use the ASP.NET Meta packages

ASP.NET Core Installer

This package installs the ASP.NET Core runtime meta packages that the base .NET Core Runtime package described in the previous section is missing. In other words, it adds support for the ASP.NET Core meta packages.

What it contains

  • The ASP.NET Runtime Meta Packages
  • Microsoft.AspNetCore.App
  • Microsoft.AspNetCore.All

When to use

  • When you need ASP.NET Meta Packages
  • Install on top of a raw .NET Core Runtime install

.NET Core Windows Hosting Pack

As you can see, the SDK and runtimes by themselves are usually not the right choice for deployed applications because they don't include everything you need. For this reason - at least on Windows - there's a special Hosting Bundle download that contains everything you need to run an ASP.NET Core application on Windows.

This is perhaps the most confusing of the packages available because the naming doesn't really describe what it provides. You can think of this package as EVERYTHING except the dotnet SDK tools. It installs both the 32 bit and 64 bit runtimes, the ASP.NET Core runtimes, as well as the IIS hosting components on Windows.

If you need the SDK tools you're better off just installing the SDK instead of this package.

Non-Windows Installs

The Windows Hosting Bundle is specific to Windows and there are no comparable bundles for Linux or the Mac. On Linux or macOS use the SDK download for dev machines, and the .NET Core Runtime plus the ASP.NET Core Runtime for typical production installs.
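As a rough example - assuming a Debian/Ubuntu style distro with Microsoft's package feed already registered, and noting that exact package names vary by distro and version - the installs look something like this:

# dev machine: the SDK (brings a matching runtime with it)
sudo apt-get install dotnet-sdk-2.1

# production server: runtime + ASP.NET Core runtime packages
sudo apt-get install aspnetcore-runtime-2.1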

What it includes

  • 32 bit and 64 bit .NET Core Runtimes
  • ASP.NET Runtime Packages (Microsoft.AspNetCore.App/All)
  • IIS Hosting Components

When to use

  • When deploying on Windows Servers and using IIS
  • Includes both 64 and 32 bit runtimes
  • Includes ASP.NET Core Meta Packages
  • Includes the IIS Hosting dependencies
  • When you don't need the dotnet SDK tooling

Download Sizes

To give a quick perspective on the relative size of these downloads (on Windows), here's a screen shot of all three packages:

Download Page

That's a lot of options and frankly every time I install or update a version I forget what exactly I should install.

Microsoft recently updated the download page to make things a little bit easier to understand.

Note the info icons and the 'included in the .NET Core SDK' notes, which highlight that if you install the SDK you pretty much get everything.

Summary

To summarize what works best for Windows installs:

For Server Installs

  • Windows: Use the Windows Server Hosting Bundle
  • Mac/Linux: Install the .NET Core Runtime + ASP.NET Core Runtimes

For Development Machines

  • Install the SDK
  • or on Windows: Visual Studio

For absolutely minimal .NET Core Installs

  • Install the Runtime only

If you also use the ASP.NET Runtime Meta packages

  • Install the ASP.NET Runtimes

Hope this helps some of you out.

this post created and published with Markdown Monster
© Rick Strahl, West Wind Technologies, 2005-2018
Posted in ASP.NET Core  .NET Core  