Markdown Monster 2.0 is here

It’s been a long road, but I’ve finally released Markdown Monster 2.0. Markdown Monster is a sophisticated, yet easy to use Markdown Editor for Windows. If you’ve followed this blog and my Twitter feed, you’ve probably seen some of the discussions around my (mis)adventures around the process of building the update for this version.

It’s a big release in terms of underlying architecture and foundations. It updates several technologies to provide some future proofing, moving away from legacy components and with that opening up new functionality that will make it easier to enhance the Web based editor and preview in the future.

Some of the key internal features addressed by this release:

  • Switch all Web Interactions to WebView2 (Chromium)
  • 64 bit Application (now that IE is removed)
  • Async through much of the code base
  • Completely re-written Table Editor

If these don’t sound very exciting from an end user perspective, you are right. Most of these features are internal and they affect the underlying foundation that Markdown Monster sits on. Building out these changes took a lot of effort – a lot more than I expected. I’ve written quite a bit about the conversion from the Internet Explorer based WebBrowser views to the Chromium based Edge WebView2 control, which touched a lot of the code base. That was a pretty major change as it is, but as part of that conversion it also required an intense refactoring to move the mostly sync application to mostly async. This proved to be a much more involved process than the browser conversion, as it ended up touching a huge swath of the codebase. All in all this process took up a couple of months between the actual implementation and tracking down lots of small behavior changes and bugs that had crept in as a result of the async changeover.

Changes to Make Improvements Easier Going Forward

All of these ‘internal’ changes don’t bring much in the way of new features (with the exception of the new Table Editor perhaps), but the switch to Chromium based controls for the editor, preview, table editor and various other dialogs brings many improvements.

For end users the immediate benefit is that the Preview now uses a modern browser as opposed to the old IE based browser. While the preview templates were designed to work well with the IE based HTML features, there were always some edge cases that didn’t work in the past. For example, displaying Mermaid graphs or rendering math equations with MathJax didn’t work in the preview – they now work natively.

Some of the support libraries that MM uses also no longer work in IE, so we can now update to more recent versions of these libraries with additional features. The ACE Editor too was starting to show some cracks with IE behavior divergences, where features did not work correctly in IE but worked just fine in Chromium (or Firefox) browsers. All this is to say, the IE browser control was becoming a burden on Markdown Monster, which so heavily relies on Web technology for both the editor and preview.

Why All This Work?

v1 has been working fine and it has been a stable application, so why did I do all this work?

Modern Web

I hinted at this in the last section: For a hybrid application that so strongly depends on Web technologies it’s important to have a stable and forward looking environment to run these Web interfaces in to take advantage of modern features and also of new features that may come down the pike. The compatibility reason is perhaps the strongest case here as about half the JS libraries used in MM no longer support IE in their latest versions. This update also updates some of these libraries to their latest improved versions.

Additionally the current JavaScript code base of Markdown Monster is written for EcmaScript 5, which is the last JavaScript version that IE11 supports. With the switch to Chromium I can finally take advantage of ES6+ features to reduce code complexity and improve performance. This won’t happen overnight – small changes come first, with more refactoring and simplification of interfaces yet to come. But already there are a number of small features that have been updated to take advantage of modern JavaScript features, especially in the previewer and the new HTML based Table Editor.

Easy Debugging

Additionally the new WebView2 control makes it ridiculously easy to debug JavaScript and HTML code, compared to the old IE Web Browser control. Chromium based browsers include the built-in Chrome Developer Tools, which can be popped up quickly and easily to step through code.

Being able to pop up the debugger like this:

and step through code as part of the desktop application is a huge step up from the rigamarole we had to go through to debug the IE Web Browser control (and the VS route no longer works in VS 2019).

There are debug options in the Markdown Monster settings that optionally enable opening the dev tools, which is incredibly useful for debugging.

As simple as both the debugging and modern features sound, this opens up a lot of scope for feature improvements in Markdown Monster. Previously some things were just too complex to take on without an easy way to debug code. However, with the ability to easily step through code and examine running state, it’s much easier to take on more complex scenarios that require lots of interaction. For example, I remember having to debug the highly interactive spell checking code against IE before – it’s now a breeze with the debugging tools and console, experimenting with live values and state in the actual running application. Modern JavaScript features also make it easier to break out code into modules, although that has not happened yet. That’s on the list for the next rounds of internal updates sometime down the road.

The new Web based Table Editor in MM is the first beneficiary, taking advantage of some ES6+ features from the get go.

Sync to Async Struggles

But the biggest struggle in this entire conversion was the move from sync to async in the code base.

The conversion from mostly sync to async in Markdown Monster was not voluntary – it was forced on me by the introduction of the WebView2 control, which requires async operation for interaction with the DOM. In Markdown Monster DOM interaction is done via JavaScript calls into a wrapper library that I created around the editor with common operations. These calls are a key part of Markdown Monster and touch most parts of the application, both at the core layer and in the UI. In the previous version this process was straightforward through the HTML DOM COM automation that is built into the MsHtml interfaces.

The WebView2 has no direct interface to the DOM of a loaded document. Interop between .NET and Chromium occurs over internal service interfaces that essentially pass messages back and forth, with a wrapper provided around that. While DOM operations don’t ‘feel like’ async operations, the way that .NET interacts with the DOM loaded inside of the WebView control is actually via an async messaging interface. As a result the WebView2 requires all calls through ExecuteScriptAsync() to be async.

While there aren’t a huge number of calls that actually require waiting for results – a lot of calls can be async Fire and Forget – there are a few very frequently accessed calls that do things like retrieving and setting the document content, setting selections, and updating editor state that require waiting for results, or at minimum require synchronized operation.
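To illustrate the two patterns, here’s a minimal sketch – the method names and the simulated interop call are hypothetical, not MM’s actual wrapper API. A fire-and-forget call simply kicks off the script, while a value-returning call has to await the round trip:

```csharp
using System;
using System.Threading.Tasks;

public static class EditorInterop
{
    // Stand-in for the WebView2 script interop (simulated with a delay)
    public static async Task<string> ExecuteScriptAsync(string script)
    {
        await Task.Delay(10);  // simulated async round trip into the browser
        return script == "editor.getvalue()" ? "# Hello World" : null;
    }

    // Fire and forget: no result needed, the caller doesn't wait
    public static void RefreshPreview()
    {
        _ = ExecuteScriptAsync("previewer.refresh()");
    }

    // Result required: the caller must await the interop round trip
    public static async Task<string> GetMarkdownAsync()
    {
        return await ExecuteScriptAsync("editor.getvalue()");
    }

    public static async Task Main()
    {
        RefreshPreview();                     // returns immediately
        var text = await GetMarkdownAsync();  // waits for editor content
        Console.WriteLine(text);
    }
}
```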

But even these relatively few required async calls ended up cascading out into a major set of changes that affected more than 80% of Markdown Monster’s code base. I call this the Async Cascade and talk about it in great detail in this recent post.

Long story short, this conversion ended up being an extremely long winded affair that took much longer and caused many more side effects than I expected.

All of this is the reason that this ended up being a long road to getting this release out.

Responsiveness Improvements

But… the async conversion also has some nice benefits in the way of an overall more responsive UI. While the initial loading of the controls is actually slower and more janky than in the old version, once the controls are fully loaded the editor and preview are much more responsive. Because much of the editor interaction is now async, there’s much better responsiveness between editor and preview, and MM now works much better with large documents.

The editor also works a little faster and smoother now with the WebView, thanks to the efficiency of the Chromium engine.

Interesting side note: It seems that on initial load the editor is a little bit slower, but after a few minutes of usage the performance and ‘feel’ of each editor window improves quite drastically. I suspect this has to do with recent changes in the Chromium engine, which does a quick pre-compilation of code first and an optimized compilation once there’s some idle time available.

Loose Ends

Although I’m just releasing Markdown Monster now, I’ve been using v2 full time for nearly two months and it’s been solid for me. It’s also been out in pre-release for a couple of weeks, and looking at the logs – other than some WebView irregularities (due to the control internals, it looks like) – there aren’t any unusual issues with this update, which is great.

There are likely to be a few odds and ends that I’ve overlooked, but so far it’s looking good. I’m keeping a close eye on the logs and reviewing any hard errors that are making it through as quickly as they come. If you run into any issues using v2, please file an issue on GitHub so it can be resolved quickly. You can expect a few rapid fire updates popping up in the downloads and on Chocolatey.

Table Editor Update

While this release doesn’t have a ton of big new features, perhaps the biggest new feature is the updated Table Editor, which has switched over from a native WPF form to using a browser based interface.

Markdown Monster Table Editor

The HTML based interface is significantly faster than the WPF interface and much more dynamic, with the ability to very quickly insert, move and remove rows and columns. It’s easy to sort columns and simply tab through the table, including the ability to create new rows as you tab past the end. The table editor picks up the layout from the currently active preview theme, so as you switch themes the table editor follows suit.

Related editor features include the ability to paste tables from the clipboard and from CSV files, or for editing existing Pipe, Grid or HTML tables from within editor content.

Most of the functionality existed previously in v1, but due to the IE limitations performance was not great, and key handling in the HTML interface was not working quite right due to some IE key handling weirdness. With the WebView all these issues disappeared, plus the rendering and previewing of tables is much faster, especially once you start working with larger tables – which oddly some people do. I got an email from a user who was editing a table with 2,000 rows. Really? In Markdown? But OK – if you can do it, somebody likely will 😄

Other Updates

There have also been a number of other small changes in this release, more on par with minor version updates.

  • Allow Swapping Editor and Preview Location
You can now swap the editor and preview location via a new View->Swap Editor and Preview Location menu option and via the Editor/Preview Splitter context menu.

  • New Splitter Context Menu
    Added a new context menu that displays options for swapping editor and preview, entering presentation mode and toggling the preview display.

  • Track Active Document in Folder Browser
    As a heavily requested feature, we’ve added support for optional document tracking in the folder browser. Using the FolderBrowser.TrackDocumentInFolderBrowser configuration switch (also via a toggle button in the Folder Browser) any time you change the document the Folder Browser navigates to that file.

  • Improved Folder Browser Navigation
Folder browser navigation now shows previews for most text type documents in a temporary ‘inactive’ mode that lasts until the next document is accessed. Documents become ‘active’ once you edit the document or double click to explicitly open it for editing. Single click now also previews any non-edit formats externally, like PDFs, Office docs, etc. Executables open selected in Explorer but are not executed. Drag and Drop start operations are now less twitchy.

  • Move Support Binaries out of Root Folder
Support binaries have been moved out of the root folder into a BinSupport subfolder to avoid ending up on the user’s path and causing naming conflicts. The only applications that remain visible on the user path are: MarkdownMonster, mm and mmcli.

  • Make Settings HelpText Selectable
    You can now select the help text associated with a configuration setting in the Settings window. This allows picking up URLs and other fixed values more easily. (#817)

  • Dev: Add Debug Editor and Preview Template Paths
Added configurable Editor and Preview Template paths that allow pointing the template folders to the original development folders, rather than the deployed application’s folders. This allows making changes to the Html/Web templates without having to recompile code. The settings are System.DebugEditorHtmlTemplatesPath and System.DebugPreviewHtmlTemplatesPath and they default to .\Editor and .\PreviewThemes, which are fixed up at runtime.

As always there’s a more complete list of recent changes in the What’s new document.

No More Source Code in the Open

In case you missed it, about a month ago I pulled the source code on the Markdown Monster repository due to rampant abuse of the code for bypassing licensing, and blatant rip-offs and re-branding of the software. Markdown Monster was never free (FOSS), but it did have its code out in the open using a source open licensing scheme, which unfortunately got taken advantage of. I don’t want to rehash all of the issues here, but if you’re interested I posted a long blog post in May on the hows and whys:

Taking down the Markdown Monster Source code

For those that are interested you can still get access to the source code in a private repository by requesting access to the private repo explicitly. The idea is that I have at least some level of control using the private repo, and I can revoke or pull access if it should turn out the abuses continue. But my feeling is that simply requiring acknowledgement is enough to keep most of the riff raff out.

This doesn’t solve all the piracy problems of course. Markdown Monster after all is a .NET application, and as such can be easily decompiled. That’s always a risk, but removing the source will at least prevent the drive-by code editing on each release that I’ve seen in the past and which was way too easy before. Those determined will likely still continue, but it serves both as an obvious statement that this is not a free product, and makes it at least a little bit more difficult to hijack Markdown Monster lock, stock and barrel as some assholes have done.

Markdown Monster now also has a new licensing system that matches individual registrations to licenses so there’s a bit more control on my end. I hate having to resort to this. It means more work on my end and a little more complication for users. But it’s the only way that I can think of to at least stem some of the bleeding that had been going on.

Infrastructure

These last few months have been busy and a bit frustrating for me, as most of the work has focused on internal aspects of Markdown Monster that don’t have any immediately visible benefits. Some of these changes are important for going forward with new features though, especially as they relate to the Web based code components.

This process involved both Markdown Monster itself as well as building out a new license server and updating my custom Point of Sales app to support the licensing directly. The end result is that there have been huge changes in the way MM works under the hood and how it’s administered, while outwardly showing very little change that affects end users.

Upgrading

Version 2.0 is a full version and therefore a paid upgrade from v1. If you ordered Markdown Monster on or after January 1st, 2021, you can get a free upgrade. There’s more information on the Markdown Monster site regarding the upgrade process:

Upgrading Markdown Monster to v2

Upgrade Process – please be patient

The upgrade process requires manual review of previous licenses, so upgrades are not immediately confirmed. I’ll turn these around as quickly as I can, but it takes a little time.

Free upgrade processing requires putting in an upgrade order anyway; if you qualify, the order will be completed uncharged (although an authorization may still show for a couple of days). Please leave a note in the order’s Notes field on checkout as an extra reminder for the free upgrade.

Where do we go from here?

I think the infrastructure bits are done now, and I can get back to focusing on features and usability improvements that provide more tangible benefits to Markdown Monster users.

I hope those of you that are using Markdown Monster can help me in that respect by providing ideas and feature suggestions that could make your life with Markdown Monster better. If you have ideas please use GitHub Issues.

But first – a short break to catch a breath and reset sentiment and maybe bask in the satisfaction of finally pulling the trigger and putting out this Markdown Monster release.


this post created and published with the
Markdown Monster Editor

Thoughts on Async/Await Conversion in a Desktop App

If you’ve been following this blog and my Twitter feed, you know I’ve been going through a lengthy process of updating the Markdown Monster WPF desktop application from a WebBrowser control based editor and preview interface to the new WebView2 control. This new control provides a modern browser that uses the Chromium engine, which provides much better compatibility with modern Web standards than the old Internet Explorer based WebBrowser control.

The new control has strict requirements to use asynchronous access for calls into the Web Browser control, which Markdown Monster uses a lot to interact with the editor. The original application wasn’t built with async in mind – especially not when it comes to common operations that you typically don’t associate with asynchronous operation, like setting or getting the document text. The process of converting Markdown Monster from mostly sync to mostly async has been frustrating as heck and turned out to be a much bigger job than I anticipated – not only in terms of the actual work involved in making the conversions, but also because of the many, many strange side effects that resulted from code going from sync to async, despite making use of the async/await infrastructure in .NET.

So in this post I’ll go over a few of the issues I ran into, what I tried, what worked and what didn’t (hint: lots!). This isn’t a comprehensive post about best practices or how to approach async, so if that’s what you’re expecting or looking for this is not the right place. And frankly I’m not qualified to provide any advice in regards to async processing in desktop applications, as I’m still grappling with properly and efficiently utilizing async in non-server applications where UI interactions are much more susceptible to race conditions than (typically) more linear server applications.

As such this post is a bit rambling in regards to a few random issues that I ran into in my conversion process, as it’s taken from my notes I took along the way. I hope it’ll spark some discussion on experiences or ideas others have had in similar situations.

The Markdown Monster Async Scenario

The base scenario is that Markdown Monster started out as a mostly synchronous application. The application used standard non-async .NET code to handle most event processing because – well, most of the processing in this application is synchronous. Interacting with text in one or more documents is by its nature very synchronous, so that makes good sense, right?

There are a few exceptions for longer running operations in Markdown Monster, like Weblog publishing, downloading posts, Git commits and a few others, and those operations are explicitly started with async and then run in the background. These operations report results back as needed using strictly asynchronous messages via events or Dispatcher UI updates to synchronize back to the UI thread as needed. To me this makes good sense and it has served me well over the years in this application, rather than the prescribed advice of "make all the things async".

Accordingly Markdown Monster went down that path: Essentially a sync app with a few controlled async operations along the way. And it’s worked well up to very recently.

Async is Viral – whether you like it or not

But – as I describe in this post – it turns out that async is viral. Once you start with it, and especially when you are forced into it via libraries that force async only, you often don’t have a choice but to turn perfectly good sync code into async just to make async code further down the call stack work properly. 😢

When I started using the new WebView2 control it was initially for the Markdown Previewer, which was isolated in an addin. I immediately ran into issues with the control’s async requirements, which mandate that all interactions with the DOM are handled asynchronously. Specifically, the main interaction mechanism is ExecuteScriptAsync(), which is used to make scripting calls into the DOM. Unlike the WebBrowser control, the WebView has no direct access to the HTML DOM, and the only way to interact is via ExecuteScriptAsync(). For Markdown Monster this means any interactions with the DOM require either async access or a message based API, which is even worse for managing common (and typically very quick) operations like retrieving or setting document content. No way around it!
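For reference, a call through the real WebView2 API looks roughly like this – a sketch only, where `webView` is assumed to be an initialized WebView2 control and `textEditor.getvalue()` is a hypothetical editor wrapper function:

```csharp
// ExecuteScriptAsync() returns the script result as a JSON-encoded string,
// so even a plain string result needs to be deserialized.
public async Task<string> GetDocumentTextAsync()
{
    string json = await webView.CoreWebView2.ExecuteScriptAsync("textEditor.getvalue()");
    return System.Text.Json.JsonSerializer.Deserialize<string>(json);
}
```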

Right away this introduced a good chunk of asynchronous interaction. The initial conversion for the previewer went reasonably well because it was isolated in a separate addin, and mostly dealt with operations that are one-way and of the ‘fire and forget’ variety, namely refreshing the preview which isn’t time critical or doesn’t require picking up a result.

However, as I started the conversion with the main Markdown editor interface (which runs in JavaScript code) I now found myself requiring access to the Editor’s primary JavaScript API wrapper I created. This wrapper provides the application interface into the common operations that Markdown Monster uses to fire into the ACE Editor JavaScript component that drives the editor. The scenario is that all editor interactions are fired from WPF into the JavaScript and in quite a few cases the editor returns data back to WPF. And with one stroke all of these interactions with the editor now became forcibly asynchronous.

While the Previewer was mostly passive interaction, the editor interface requires constant two-way access between the editor and the WPF application, and it’s woven into the code base in lots of places as editor interactions are the core feature of… well an editor.

The Async Cascade

What happens is that all of a sudden an application that involves a handful of asynchronous calls cascades into most of the application having to move to async. Yikes. What looked like a small integration turned into a major migration that touched a good 80% of the code base.

The async cascade works like this:

  • You have a method in the API you have to call async
  • Now the calling method that calls the API needs to be async
  • Now the method that calls the calling method needs async
  • … rinse and repeat until you get to the top of the hierarchy
  • … rinse and repeat for every dependency of code touched…

It gets outta hand – very, very quickly!
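The steps above can be sketched in code (all names are illustrative): one async call at the bottom forces every method above it in the call chain to become async as well:

```csharp
using System;
using System.Threading.Tasks;

public static class CascadeDemo
{
    // Bottom: the API that is async by requirement (think ExecuteScriptAsync)
    public static async Task<string> GetSelectionAsync()
    {
        await Task.Delay(10);
        return "selected text";
    }

    // Middle: was 'string GetSelectionForToolbar()' – now forced to async
    public static async Task<string> GetSelectionForToolbarAsync()
        => await GetSelectionAsync();

    // Top: was 'void UpdateToolbar()' – forced to async too,
    // and so on up to the event handler at the top of the hierarchy
    public static async Task UpdateToolbarAsync()
    {
        var selection = await GetSelectionForToolbarAsync();
        Console.WriteLine($"Toolbar shows: {selection}");
    }

    public static Task Main() => UpdateToolbarAsync();
}
```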

No Easy way to call Async Code from Sync Code

This cascade occurs because there’s no reliable way in .NET to call asynchronous code synchronously. And yes, I tried to go down that path, but – as anybody who’s tried it or who understands the Task based async APIs in .NET will tell you, (they did!) there are no reliable solutions for calling async code and wait for it to complete built into the framework.

Seems crazy right? There are properties (.Result) and methods (.Wait(), .GetAwaiter().GetResult()) to wait synchronously on async operations, but they are not actually safe to use in busy environments and they are prone to deadlocks. They work in some scenarios where operations are isolated and infrequent, but if you need to repeatedly call code using these async->sync transitions they are very likely to deadlock.
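The deadlock mechanics in a nutshell: `.Result` blocks the calling thread, while the `await` continuation inside the async method tries to resume on that same thread’s captured SynchronizationContext, so neither side can proceed. This console sketch runs fine because a console app has no synchronization context to capture; the comments mark where the same code hangs in WPF:

```csharp
using System;
using System.Threading.Tasks;

public static class DeadlockDemo
{
    public static async Task<string> GetContentAsync()
    {
        // In WPF this await captures the UI SynchronizationContext and
        // schedules its continuation back onto the UI thread.
        await Task.Delay(10);
        return "content";
    }

    public static void Main()
    {
        // On the console thread pool this succeeds...
        var result = GetContentAsync().Result;
        Console.WriteLine(result);

        // ...but called from a WPF UI thread, .Result blocks that thread,
        // the continuation can never run on it, and .Result never returns.
    }
}
```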

There are a few hacks that can make this better, using GetAwaiter().GetResult() and a number of others, but ultimately these workarounds only marginally reduce the probability of a failure, and the problem of deadlocks remains.

You can use Task.Run() or Dispatcher.InvokeAsync() to asynchronously start the async chain. That allows you to call and run the asynchronous code using await, but if you need to actually wait on the result of the aggregate operation in Task.Run() – because you need a result or have to sequence operations – you’re right back to Square One of unreliably waiting synchronously on an async operation.

The closest I’ve come to actually making this work is to run a timer thread and check for a result value that gets set in the async operation. This is not reusable, very inefficient and very unnatural to write.
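A rough sketch of that workaround (illustrative only – it burns a thread on sleeps and adds latency, which is exactly why it isn’t reusable):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class PollingDemo
{
    public static string RunSyncByPolling(Func<Task<string>> asyncFunc, int timeoutMs = 2000)
    {
        string result = null;
        bool done = false;

        // Start the async work on the thread pool and flag completion
        _ = Task.Run(async () =>
        {
            result = await asyncFunc();
            Volatile.Write(ref done, true);  // publish the result
        });

        // Poll for the completion flag – inefficient and unnatural by design
        int waited = 0;
        while (!Volatile.Read(ref done) && waited < timeoutMs)
        {
            Thread.Sleep(10);
            waited += 10;
        }
        return result;
    }

    public static void Main()
    {
        var value = RunSyncByPolling(async () =>
        {
            await Task.Delay(50);
            return "done";
        });
        Console.WriteLine(value);
    }
}
```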

And this is exactly where I found myself in Markdown Monster, and it wasn’t for lack of trying. At first I thought I could maybe get away with using a few isolated sync over async operations. But it quickly became apparent that none of the solutions were reliable and they would either lock up or slow down the application drastically.

I fired off a Twitter thread innocently asking whether this isn’t something that could be solved internally by the runtime. Hundreds of messages later, with various experts all chiming in with a few band aid workarounds, the overwhelming consensus was that the only way to fix async reliability is by making all calling code asynchronous. All the other workarounds are still going to break and deadlock.

Seems like there should be a solution, but… well, see the responses to the Twitter thread:

Yikes!

If you absolutely must call async as sync

So if you really have to call async code and get a sync result back or must wait for the return of an async call, there are a few slightly more reliable ways to do this. But… keep in mind that these still are not guaranteed to work. I played around with both of these solutions in Markdown Monster briefly and they both resulted in hanging code. So take these with a grain of salt and only consider them for specialty, one-off scenarios – if they are used in high traffic code it’s still almost guaranteed to fail.

With all of that anti-climactic stuff out of the way, here are the two ways that worked ‘better’ for me than other approaches:

The first approach works outside of WPF in a semi generic way: it uses a very specific sequence of Task operations to run a task, unwrap the result or exception, and then await the already completed result, which minimizes the amount of time the code spends blocking:

Thanks to Andrew Nosenko (@noseratio)

/// <summary>
/// Helper class to run async methods within a sync process.
/// Source: https://www.ryadel.com/en/asyncutil-c-helper-class-async-method-sync-result-wait/
/// </summary>
public static class AsyncUtils
{
    private static readonly TaskFactory _taskFactory = new
        TaskFactory(CancellationToken.None,
            TaskCreationOptions.None,
            TaskContinuationOptions.None,
            TaskScheduler.Default);

    /// <summary>
    /// Executes an async Task method which has a void return value synchronously
    /// USAGE: AsyncUtil.RunSync(() => AsyncMethod());
    /// </summary>
    /// <param name="task">Task method to execute</param>
    public static void RunSync(Func<Task> task)
        => _taskFactory
            .StartNew(task)
            .Unwrap()
            .GetAwaiter()
            .GetResult();

    /// <summary>
    /// Executes an async Task method which has a void return value synchronously
    /// USAGE: AsyncUtil.RunSync(() => AsyncMethod());
    /// </summary>
    /// <param name="task">Task method to execute</param>
    public static void RunSync(Func<Task> task, 
                CancellationToken cancellationToken, 
                TaskCreationOptions taskCreation = TaskCreationOptions.None,
                TaskContinuationOptions taskContinuation = TaskContinuationOptions.None,
                TaskScheduler taskScheduler = null)
    {
        if (taskScheduler == null)
            taskScheduler = TaskScheduler.Default;

        new TaskFactory(cancellationToken,
                taskCreation,
                taskContinuation,
                taskScheduler)
            .StartNew(task)
            .Unwrap()
            .GetAwaiter()
            .GetResult();
    }

    /// <summary>
    /// Executes an async Task&lt;T&gt; method which has a T return type synchronously
    /// USAGE: T result = AsyncUtil.RunSync(() => AsyncMethod&lt;T&gt;());
    /// </summary>
    /// <typeparam name="TResult">Return Type</typeparam>
    /// <param name="task">Task&lt;T&gt; method to execute</param>
    /// <returns></returns>
    public static TResult RunSync<TResult>(Func<Task<TResult>> task)
        => _taskFactory
            .StartNew(task)
            .Unwrap()
            .GetAwaiter()
            .GetResult();


    /// <summary>
    /// Executes an async Task&lt;T&gt; method which has a T return type synchronously
    /// USAGE: T result = AsyncUtil.RunSync(() => AsyncMethod&lt;T&gt;());
    /// </summary>
    /// <typeparam name="TResult">Return Type</typeparam>
    /// <param name="func">Task&lt;T&gt; method to execute</param>
    /// <returns></returns>
    public static TResult RunSync<TResult>(Func<Task<TResult>> func,
        CancellationToken cancellationToken,
        TaskCreationOptions taskCreation = TaskCreationOptions.None,
        TaskContinuationOptions taskContinuation = TaskContinuationOptions.None,
        TaskScheduler taskScheduler = null)
    {
        if (taskScheduler == null)
            taskScheduler = TaskScheduler.Default;

        return new TaskFactory(cancellationToken,
                taskCreation,
                taskContinuation,
                taskScheduler)
            .StartNew(func, cancellationToken)
            .Unwrap()
            .GetAwaiter()
            .GetResult();
    }
}
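The core trick the helper relies on, reduced to a runnable sketch: `StartNew()` with a `Func<Task<T>>` yields a nested `Task<Task<T>>`, `Unwrap()` flattens it, and `GetAwaiter().GetResult()` blocks for the result while rethrowing the original exception rather than wrapping it in an AggregateException:

```csharp
using System;
using System.Threading.Tasks;

public static class RunSyncDemo
{
    public static async Task<string> GetValueAsync()
    {
        await Task.Delay(20);
        return "hello";
    }

    public static void Main()
    {
        var result = Task.Factory
            .StartNew(() => GetValueAsync())  // Task<Task<string>>
            .Unwrap()                         // flattened to Task<string>
            .GetAwaiter()
            .GetResult();                     // blocks; rethrows raw exceptions

        Console.WriteLine(result);
    }
}
```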

While this is way better than using .Result or .Wait(), it can still result in deadlock hangs – and it did for me, mostly in a UI context.

If you’re using WPF, then there are some other more flexible options using the Dispatcher. The Dispatcher is aware of the event loop in WPF and so has the ability to yield so that UI can continue to respond while in a wait state.

public static TResult RunSync<TResult>(this Dispatcher disp, Func<Task<TResult>> del)
{
    var frame = new DispatcherFrame() {Continue = true};

    var dispOp = disp.InvokeAsync<Task<TResult>>(
        async ()=>  {
            try
            {
                return await del.Invoke();
            }
            finally
            {
                frame.Continue = false;
            }
        });

    // waits synchronously for frame.Continue = false
    // while pushing the message loop
    Dispatcher.PushFrame(frame);
    
    var task = dispOp.Task.Unwrap();
    return task.GetAwaiter().GetResult();
}

This is promising mainly because the Dispatcher in WPF has the ability to wait while still letting the UI process messages during Dispatcher.PushFrame(). But even this solution – while much better than raw Task based APIs or even RunSync() – still ended up deadlocking Markdown Monster occasionally.

After many attempts at making this work I eventually had to throw in the towel.

The bottom line for me was that waiting on async code synchronously was simply not going to work in Markdown Monster, due to how frequently the async code in question was called – potentially by simultaneously executing operations.

It seems crazy that there is no reliable way to make an application wait for an async result other than a Task continuation – and in fact some of the architects/designers of the Task library have outright said so. So here we are. Bottom line: sync over async is not a solution.

Async All the Things is the Only Way

As David Fowler so unceremoniously announced on the Twitter thread:

So after all of this async->sync experimentation essentially failed, the only way to make this work is to ensure that code can run asynchronously, by making methods along the call hierarchy async (or at least returning Task or Task&lt;T&gt;). For me this meant converting not just the handful of critical calls that get and set values from the editor, but also many of the support functions, including some of the generic access wrappers.

What was supposed to be a simple migration just exploded the scope of changes required.

Walk to the Top Level: Event Handlers and async void

When adding async code, you have to walk from the point of the await back up the hierarchy until you hit code that naturally runs async. In WPF this typically ends up being an event handler or Command object, where you can replace void event handler methods with async void.

private async void WebLogStart_Loaded(object sender, System.Windows.RoutedEventArgs e)
{
    ...
    
    // code that gets markdown from WebView (async)
    var markdown = await editor.GetMarkdown();
    
    ...    
}

Similarly using command objects:

OpenRecentDocumentCommand = new CommandBase(async (parameter, command) =>
{ ...  }

or programmatic event handlers assigned in code:

var mi2 = new MenuItem()
{
    Header = "Add to dictionary", HorizontalContentAlignment = HorizontalAlignment.Right
};
mi2.Click += async (o, args) =>
{
    dynamic range = JObject.Parse(jsonRange);
    string text = range.misspelled;

    model.ActiveEditor.AddWordToDictionary(text);
    await model.ActiveEditor.EditorHandler.JsInterop.ReplaceSpellCheckRange(jsonRange, text);

    model.Window.ShowStatus("Word added to dictionary.", mmApp.Configuration.StatusMessageTimeout);
};

Basically anywhere an event handler or command is used, it’s possible to replace the void signature with async void, or to use expression syntax with an async prefix on the method (async () => { }). This is a relatively simple way to get async code rolling from the top of the call hierarchy.

Note that async void generally should be avoided in favor of async Task, as async void can cause exceptions to bubble out of the call context and fire unpredictably in another context. This may or may not be a problem depending on how the application runs and disposes of code, but it can end up causing unexpected crashes especially if exceptions are not caught before a shutdown.

The code in the samples above uses async void because event handlers have to use async void due to the delegate signatures used to call them. Using async void lets you keep existing event handlers while using the async prefix to start off the async chain. Note that async Task does not work for event handlers, so async void is the only way to deal with them!
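One common way to limit the risks of async void – not code from Markdown Monster, just a sketch with made-up method names – is to keep the handler as a thin shim over an async Task method that does the actual work:

```csharp
// the event delegate signature forces async void here...
private async void Save_Click(object sender, RoutedEventArgs e)
{
    try
    {
        // ...but the real work lives in an awaitable async Task method
        await SaveDocumentAsync();
    }
    catch (Exception ex)
    {
        // catch here: exceptions can't safely bubble out of async void
        mmApp.Log("Save failed", ex);
    }
}

// callable and awaitable from anywhere else in the app
private async Task SaveDocumentAsync()
{
    ...
}
```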

async void event handlers can also behave quite differently than non-async event handlers. The reason is that the events – although async – are not actually awaited by the caller. The event handler appears to complete immediately, while any awaited code continues to run in the background. This can result in out-of-order execution in cases where timing or sequencing is critical.

Whether they are user interactions like button clicks or Command objects, or control events like load, activation or size changed, these can easily be turned into async code and serve as the top level async operation you need to trace back to.

async void Gotchas

async void is a ‘quick’ way to get a top level async chain running, but keep in mind that the events are not actually invoked in a Task aware manner. Event sources still call these handlers as void methods that are not awaited. Using async void is a hack that lets you use a top level async method to handle an essentially non-async event.

This has subtle implications for the behavior of the event. The event handler returns to the event caller as soon as it encounters an await, while the async code continues to run in the background. The event has completed, but the code responding to it is still executing. IOW, the event caller is no longer guaranteed that the handler code has finished when the event returns.

For most UI events this isn’t a problem, because events are meant to fire out of order anyway. But it can introduce subtle differences in the behavior and timing of how events get processed.

For a subtle example, check out this code in Markdown Monster that handles a document type selection change. This value is databound and may fire in very quick succession as documents are changed and values are unset and then reset.

private async void DocumentType_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    if (Model.ActiveEditor == null)
        return;

    await Model.ActiveEditor.SetEditorSyntax(Model.ActiveEditor.MarkdownDocument.EditorSyntax);

    // this frequently fails - despite the null check above!
    SetTabHeaderBinding(TabControl.SelectedItem as TabItem, Model.ActiveEditor.MarkdownDocument);
}

In this code a selection event fires and the code then makes some async modifications to the document – in this case it explicitly sets the document’s syntax (which calls into the WebView, hence the async requirement). That call is async, which means the event handler appears to complete as soon as the await is encountered.

The code then returns from the await, but by now other events may have already fired and changed Model.ActiveEditor via another selection. This can happen with quick (accidental) clicks on a tree selection, for example. With the asynchronous code, SetTabHeaderBinding() now essentially fires completely outside the scope of the original event, and with the changed state it blows up.

To fix this the code needs to either check for null again, or explicitly capture the Model.ActiveEditor.MarkdownDocument reference and update the captured reference rather than the databound value that might be changing.

The safe thing to do is:

private async void DocumentType_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    var doc = Model.ActiveEditor?.MarkdownDocument;
    if (doc == null)
        return;

    var tab = TabControl.SelectedItem as TabItem;

    await Model.ActiveEditor.SetEditorSyntax(doc.EditorSyntax);

    // due to async this may change on quick click throughs so explicitly check AGAIN
    if (tab == TabControl.SelectedItem)
        SetTabHeaderBinding(tab, doc);
}

Easy to do, but even easier to simply overlook!

This was never an issue with the sync code, which ensured that the event handler completed before the next selection could occur.

Bottom line: async void can introduce subtle changes in behavior when events fire rapidly and stack on top of each other, because the event handler appears to complete as soon as an await is encountered.
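If a handler really can’t tolerate overlapping invocations, one more option – a sketch, not code from Markdown Monster – is a simple reentrancy guard, which is safe here because all invocations run on the UI thread:

```csharp
private bool _selectionChangeBusy;

private async void DocumentType_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    // drop re-entrant calls while an earlier invocation is still awaiting
    if (_selectionChangeBusy || Model.ActiveEditor == null)
        return;

    _selectionChangeBusy = true;
    try
    {
        await Model.ActiveEditor.SetEditorSyntax(Model.ActiveEditor.MarkdownDocument.EditorSyntax);
    }
    finally
    {
        // always clear the flag, even if the awaited call throws
        _selectionChangeBusy = false;
    }
}
```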

Another subtle difference is that await essentially lets WPF catch up on pending events, which results in much more frequent UI processing than sync code. This is usually considered one of the benefits of async – code doesn’t get tied up in long blocking execution. But in cases like this one it can cause race conditions, because the sequencing is now very different from the equivalent sync code.

I wouldn’t say this is a common issue – most event handlers are not so time critical that this is a problem. The big thing is that these types of errors are hard to catch: they show up only in rare instances at runtime and are difficult to duplicate and track down.

Top Level: Task.Run(), Dispatcher.InvokeAsync()

The other scenario is code that can’t be made async all the way to the top of the hierarchy. The option here (in WPF) is to kick off async code using Dispatcher.InvokeAsync(), which lets you start an async operation from anywhere a Dispatcher is accessible. Likewise you can explicitly call Task.Run() to kick off an async operation when no WPF Dispatcher is available.

These operations work great for fire-and-forget operations that need to be async but don’t need to be waited on. Thankfully a lot of code in Markdown Monster is of this variety, and it’s relatively simple to convert by simply wrapping it in an InvokeAsync() handler:

The following is an example of a scenario where code coming in from an external interface – Named Pipes in this case – cannot be async at the top level. Dispatcher.InvokeAsync() to the rescue:

private void HandleNamedPipe_OpenRequest(string filesToOpen) =>
    Dispatcher.InvokeAsync(async () =>
    {
        if (!string.IsNullOrEmpty(filesToOpen))
        {
            var parms = StringUtils.GetLines(filesToOpen.Trim());

            var opener = new CommandLineOpener(this);
            await opener.OpenFilesFromCommandLine(parms);

            BindTabHeaders();
        }

        Topmost = true;

        if (WindowState == WindowState.Minimized)
            WindowState = WindowState.Normal;

        WindowUtilities.SetForegroundWindow(Hwnd);

        // needs to fire out of band
        _ = Dispatcher.InvokeAsync(() => Topmost = false, DispatcherPriority.ApplicationIdle);
    }).Task.FireAndForget();

You can use this as a complete method wrapper as I do above, or partially in the middle of a method:

public void OpenFavorites(bool noActivate = false)
{
   // sync code
   else if (!noActivate)
   {
       SidebarContainer.SelectedItem = FavoritesTab;

       // fire and forget
       Dispatcher.InvokeAsync(async () =>
       {
           var control = FavoritesTab.Content as FavoritesControl;
           var searchBox = await control.Search();
           ...
       });
   }
}

But… neither of these approaches works if you need to get a result value back from the async call.

Although you can use these methods to start an async operation, you end up with the same problem described above if you need to synchronously wait for the operation to complete or return a result. So these approaches tend to be useful only for fire-and-forget operations.

Watch for Exceptions in Fire and Forget

One issue to consider with async code in general – and in fire-and-forget scenarios in particular – is that exceptions that occur inside async code may fire on a separate, non-UI thread.

If you run code like this:

Dispatcher.InvokeAsync( async ()=> {
    ...
    await SomeOperation();  // an awaited call that throws
});

and an exception occurs, the exception is not actually observed or handled anywhere. In past versions of .NET this would cause the application to shut down, since an unhandled exception on a non-main thread would immediately terminate the application. More recent versions are more forgiving, but the exception still is not observed, and depending on the context it may not get released, or get released at some indeterminate time later – often when the application shuts down. Exceptions get released when the finalizer runs on the wrapping code or task, and in some cases that may not happen until the app shuts down. The end result is that you effectively have a memory leak.

Long story short you’ll want to make sure to either:

  • Wrap any async code into an exception handler to ensure that code doesn’t leave an exception hanging
  • Actually continue the task if an error occurs

Here’s some code – thanks to Joe Albahari (@linqpad) – that provides FireAndForget() functionality. It explicitly continues a task if an exception occurs, thereby observing the exception immediately:

public static void FireAndForget(this Task t)
{
    t.ContinueWith(tsk => tsk.Exception,
        TaskContinuationOptions.OnlyOnFaulted);
}
public static void FireAndForget(this Task t, Action<Exception> del)
{
    t.ContinueWith( (tsk) => del?.Invoke(tsk.Exception), TaskContinuationOptions.OnlyOnFaulted);
}
Dispatcher.InvokeAsync( async ()=> {
    ...	
}).Task.FireAndForget();

or if calling async code in general:

public void UpdateDocument() 
{
    ... 
    editor.JsInterop.UpdateStats().FireAndForget();
}

// or in a Task returning method
public Task UpdateDocument() 
{
    ... 
    editor.JsInterop.UpdateStats().FireAndForget();
    return Task.CompletedTask;
}

This is useful because it ensures that Tasks are cleaned up. It’s also useful for avoiding the overhead of async methods in scenarios where you don’t actually need to wait for an async result: each async method adds a state machine, so unless you really need it, skipping it reduces both code size and call overhead – and this is an easy and clear way of doing so.

Another way to do this globally is via a TaskScheduler.UnobservedTaskException handler:

TaskScheduler.UnobservedTaskException += (s, e) => {
	// Error logging
	mmApp.HandleApplicationException(e.Exception as Exception, ApplicationErrorModes.TaskExecution);
	
	// mark as observed so it can release
	e.SetObserved();
};

This captures any Task exceptions that are not otherwise observed. Calling SetObserved() allows the exceptions to be cleaned up immediately. Note that this event only fires when the faulted Task is finalized, so it may trigger long after the actual failure. Although this is a quick fix, you probably want to capture exceptions closer to the source – this is similar to application level ‘hail mary’ error handlers, which are meant only as a failure handler of last resort.

Converting to Async

All of the above is good for a lot of things, but in a complex application conversion it’s only going to get you so far. At some point you are going to end up with code that needs to run asynchronously and either be sequenced where one async task runs after another in the right order, or where an async task returns a value that you need to work with before code can continue.

This is what async/await is made for in the first place, of course, but this is also where the async cascade starts and *has to be implemented up the call hierarchy*.

So for Markdown Monster I ended up biting the bullet and going down the all-async rabbit hole, because in MM there are a number of rapid fire interactions with the editor document that have to be made asynchronously. This precludes using async->sync result conversions, which would cause hangs as discussed earlier.

At the end of the day, the entire interface API to the WebView control had to be created with async calling methods, which were my ‘patient zero’ starting point from which to work backwards.

The most prominently used methods in the API deal with getting and setting the editor content, and setting and replacing selections with customized markup and so on. As you can imagine there are a lot of places in the code where these async methods are accessed.

Which in turn meant that a ton of code hosting these calls needed to be converted as well. As mentioned earlier this snowballs very quickly: I started with an initial 20-30 methods that needed to be converted to async, but once I walked the conversion up through the entire hierarchy I ended up with closer to 250 methods that actually changed. Holy crap!

And then once you go down this path you realize that your APIs and any code touched are now inconsistent, so you make even more code async to provide a more consistent interface for the application. When it was all said and done, nearly 400 methods ended up changing signatures to async.

To put this into perspective, let me give you an example of how a single async call ripples up to the top, with every method along the way needing to become async:

  • WebView interop access method: JsInterop.GetValue()
  • MarkdownDocumentEditor.GetValue()
  • MarkdownDocumentEditor.OpenDocument()
  • Window.OpenDocument() (wraps some UI behavior)
  • OpenDocumentCommand button handler command

So for a single call to GetValue() in the WebView control there are 5 methods that are affected to get to an async root.

This code then hits about 15 other methods that call either of the OpenDocument() methods. And then 5 of those methods… and so on and so on. You can see how this gets out of control quickly.

The process is basically to go from:

public void EventHandler_Method(object s, EventArgs e) 

to

public async void EventHandler_Method(object s, EventArgs e) 

Or for anonymous methods like command handlers:

OpenDocumentCommand = new CommandBase(async (parameter, command) =>
{ ... });

Code then can use await instead of straight calls:

OpenDocumentCommand = new CommandBase(async (parameter, command) =>
{
	var file = parameter as string;
	if (!string.IsNullOrEmpty(file) && File.Exists(file))
	{
	    await Model.Window.OpenTab(file, rebindTabHeaders: true);
	    return;
	}
});

Often that’s as simple as prepending await to a call, but be careful there.

It’s not just a matter of converting sync methods to async and adding await statements. Once method signatures are changed, you end up with code that in many instances is broken but won’t actually cause a compiler error: calls that now return an un-awaited Task only produce a warning and silently run out of band.

For these calls you have a few choices, all of which are fire and forget:

  • await AsyncCall()
  • var task = AsyncCall()
  • _ = AsyncCall() (fire and forget)
  • AsyncCall().FireAndForget() (best option)

If you don’t care about execution order for a method call – i.e. fire and forget – then just calling the method with a discard (_ =) is probably the easiest and most efficient, although you have to worry about the exception handling mentioned earlier. Using .FireAndForget() (or continuing the task manually) is the safe way to ensure exceptions don’t trigger unexpectedly.

Tracking all of this down takes time and while not difficult, it’s tedious as heck. And when you first start on this task, making the first few changes is frustrating as hell as you end up with more errors than before you made the change in the first place!

At one point while working on this I had nearly 500 errors showing in the Visual Studio error list! Talk about a mountain out of a mole hill!

Finally, be careful with the null propagation operator in code like this, which compiles but fails at runtime if a null is encountered:

public async Task SetEditorFocus()
{
    try
    {
    	// This!
        await EditorHandler?.JsEditorInterop?.SetFocus();
    }
    catch (Exception ex)
    {
        mmApp.Log("Handled: AceEditor.setfocus() failed", ex, logLevel: LogLevels.Warning);
    }
}

If any of the ?. values are null, an exception is thrown because await expects a Task rather than an object – and since null is not a valid Task to await, the code blows up. You can read more in this detailed post on how to fix and work around this issue.
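The straightforward fix – one way to write it; the linked post discusses other options – is to capture the reference and null-check it explicitly, so the await never sees a null Task:

```csharp
public async Task SetEditorFocus()
{
    // capture and check explicitly - never await a potentially null Task
    var interop = EditorHandler?.JsEditorInterop;
    if (interop != null)
        await interop.SetFocus();
}
```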

Does the Async Conversion work?

Funny question, but after going through the error cascade – having to fix so many errors all at once without being able to run the code in between – I wasn’t sure whether the application was going to run at all on the other end of that process.

The good news is that for Markdown Monster while the async conversion was a huge undertaking, none of it was difficult, just incredibly time consuming. It’s full on whack-a-mole where you fix one thing, and 10 more spring up until you get towards the top of the call chain.

But after all that the application came up just fine and worked with now mostly async processing for a good chunk of it!

Timing Problems

Running is one thing, but running well is another. And here I ran into some serious issues that ended up resulting in another long stretch of work to tweak startup operation, and reduce jankiness of the UI.

Once the application had been converted and all the async code had been walked to the top, the app ran – but it now behaved very differently, with UI operations happening more haphazardly.

When you convert synchronous code to async, it’s more than just a change in syntax: although the code generally maintains its original logic flow, the actual framework UI execution may happen in a different order.

In Markdown Monster this caused some big problems with UI jankiness, especially during startup, with tabs and content bouncing around wildly. For example, when MM starts up it may load a bunch of tabs into the editor, retrieving disk content and feeding it into the editor – asynchronously. All of these operations are now async, where before they were purely sequential.

This has two consequences:

  • Internally the async load behavior (especially of the WebView) is unpredictable
  • There’s no ‘safe’ way to detect final load completion

In the pre-async code I could put off making the form visible until everything was completely loaded and there was good confidence that this could happen at the right time. With purely async loading of the Web View there’s no such guarantee. In fact, even with mitigations that I’ve put into place there are still some scenarios where the browser has to reload content because other parts of the application are still busy creating the content to be displayed.

The problem here is subtle timing issues that were no problem in sync code, but can result in out-of-order execution of events, even when using await to queue things one after the other.

For example, here’s the code that opens the last open documents on startup:

private async Task<TabItem> OpenRecentDocuments()
{
    var conf = Model.Configuration;
    TabItem selectedTab = null;

    foreach (var doc in conf.OpenDocuments.Take(mmApp.Configuration.RememberLastDocumentsLength))
    {
        if (doc.Filename == null)
            continue;

        if (File.Exists(doc.Filename))
        {
			// async tab call here
            var tab = await OpenTab(doc.Filename, selectTab: false,
                batchOpen: true,
                initialLineNumber: doc.LastEditorLineNumber);

            if (tab == null)
                continue;

            var editor = tab.Tag as MarkdownDocumentEditor;
            if (editor == null)
                continue;

            if (doc.IsActive)
                selectedTab = tab;
        }
    }
    return selectedTab;
}

So even though the OpenTab() calls are awaited, the load behavior does not appear to be completely sequential. Because the individual calls are async, what happens in the context of those calls may happen out of band and perhaps not in exactly sequential order. I double checked that all the UI operations affecting the initial load are indeed awaited all the way down the hierarchy – and they are. Stepping through, I can also see the await code waiting for completion.

But because the WebView’s initialization is async as well, the initial control load can be delayed. Which in turn delays the content loading which in turn delays the visibility activation.
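One partial mitigation – a sketch using the WebView2 initialization API, where `webView` and `html` are placeholders – is to explicitly await the control’s initialization before feeding it content, rather than relying on the implicit on-demand startup:

```csharp
// explicitly await WebView2 initialization before loading content
await webView.EnsureCoreWebView2Async();

// only now is CoreWebView2 guaranteed to be available
webView.CoreWebView2.NavigateToString(html);
```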

Yet I still got crazy, janky, jumping-jack window behavior, because behind the scenes the WebView preview is refreshing out of band and some operations are essentially not running in the exact order they did before.

It literally took me a couple of days of tweaking to strike a balance between horribly slow loads and getting a jank free display to come up.

Bleh!

Async Artifacts in Applications

To be fair, this is not an async problem in general, but a specific problem with the way the WebView2 control handles async activation, which has all sorts of behavior quirks. But it demonstrates that underlying async implementations can have unexpected behavior effects on your application, even when you are seemingly using await to sequence code.

What I’m getting at is that even when you use await to sequence async calls, things in the UI framework may now behave differently than they did with sync code, just by the very nature of the underlying async operations. In desktop UI applications in particular, where the Dispatcher often takes liberties with the order in which UI operations fire in the first place, this can introduce extra havoc into an already chaotic event sequence.

To be fair Markdown Monster is a bit of a unique scenario because it loads several WebView2 controls simultaneously and that control has asynchronous startup behavior which can be unpredictable due to its interaction with the UI thread. This may not be as much of a problem for ‘normal’ WPF UI code. But nevertheless, be aware that async introduces subtle differences in timing and behavior that can change the otherwise autonomous behavior of the UI.

I’ve been able to mitigate some of this during the initial load by simply hiding the controls until the documents are ready, but even that appears to be elusive, as the DOMContentLoaded event can fire before the document is actually completely loaded. In some instances content still ends up rendering before the HTML of the page is ready.

Finding the Odds and Ends

I’m now mostly through the async conversion in Markdown Monster. I’ve been running MM using this new codebase for a few weeks with additional changes coming in and it’s working well. But I still find little places where code is broken due to async conversions – a missed result value or a FireAndForget() that should have waited instead of just blasting on.

It’s an ongoing process.

On the flip-side while startup perf is now slower due to the timing issues, once the editor runs it has better performance and generally ‘feels smoother’ in operation. Before there were occasional short hangs or stutters, especially with larger documents as the preview was refreshing (which tied up the UI thread). Those issues are pretty much gone now. Even very large documents now work better as the preview rendering happens completely out of band.

So while it was a big effort, in the long run it’s been worth it. I also expect the WebView control to eventually iron out some of its current problems, which hopefully will allow removing some of the delay mitigations I have in place for a smooth startup.

Summary

I don’t claim to be an expert when it comes to async usage, as should be clear from my flailing around and experimenting with different angles to find what works. So this post isn’t meant as guidance, but more as a starting point for discussion – a review of things you are likely to run into when converting an application from sync to async.

The conversion from sync to async in Markdown Monster was a long and painful journey for me – it sucked! It took way longer than I expected it to, and ended up causing a lot of run-on issues – especially the timing issues – that were very time consuming with trial and error resolutions. I must have tried a thousand different startup combinations before arriving at a non-optimal compromise. There were a few times when I was considering just going back to the old code 😄

I went through with it though, and it turned out OK, but there are still rough edges – although to be fair most of these have to do with the funky async behavior of the WebView than anything else.

At the end of the day porting an existing sync application to async is not trivial, and it’s much preferable to build an application from the ground up using async rather than retrofitting an existing sync app to async!

Start with async and work it into the application properly right from the start. At that point it’s manageable: you can see the effects of async behavior as you build your application and adjust appropriately, and you can build async from the top down rather than from the bottom up, which is what a conversion typically ends up being.

When I started Markdown Monster in 2015 all the async functionality existed and I could have started with async from the get-go. I didn’t, because frankly there was no direct need. I’ve never had issues with mostly sync code in UI applications, using async only selectively where it makes sense for long running operations (downloads, output generation, searches etc.). In most cases the potential UI-hanging operations are easily isolated, and if they are long running processes it’s unlikely you’d be awaiting them anyway, opting instead for events or notifications to signal completion, which can be managed with one-off Tasks or other background operations.

I can’t help but think that a lot of this pain could have been avoided if the developers of the WebView2 had just provided the ability to call into the DOM synchronously. There’s nothing inherently async about DOM access: there’s no IO you’re waiting on, and DOM interactions calling into code tend to be universally fast. What’s slow is not the DOM code calls, but the DOM UI updates, which happen in the background, separately, anyway. In short, there’s no realistic reason that DOM access should have to be async. If there was a non-async ExecuteScript() function in addition to ExecuteScriptAsync(), it would have allowed me to keep the two or three absolutely critical and very fast operations in Markdown Monster synchronous, and avoid most of the pain I describe in this post walking the async cascade to the top. With both ExecuteScript() and ExecuteScriptAsync() available, I could have selectively used the async interfaces where they actually make sense – waiting on slow running DOM operations. I consider this a deep design flaw in the WebView2 control, especially in light of the WebView2 being positioned as a replacement for the WebBrowser control.

But I was forced into this conversion by the switch to the WebView2 control which exposes async-only interfaces for interop.

I really wish that developers of tools and libraries would think long and hard about providing async-only APIs. There’s no reason every application should be built all-async – with all the associated overhead and difficulties of side-by-side code – just to support the few functions that actually require async functionality. Case in point: 90% of the async work I did in Markdown Monster was just to make a tiny bit of code 10 levels down the call stack run – not because there’s some divine app improvement that comes with async code.

Despite all this – and despite my feelings about using async selectively only where it’s needed – I’m going to think long and hard before starting a new application without async. I think with new projects I will immediately jump into going async from the top down. I’m not a fan, but these days there are just too many (inconsiderate) libraries that are async-only, and I don’t want to end up in a situation where a dependency forces my hand again after the fact.

If you want to try out Markdown Monster 2.0 that’s using the new code I’m talking about here, you can download the latest preview release from the download site.


this post created and published with the
Markdown Monster Editor

Dev Intersection 2017 Session Slides and Samples Posted

I’ve posted my Session Slides and code samples from last week’s DevIntersection conference. It’s been a while since I’ve been at a .NET Conference and as always after all the toil and tension getting ready for sessions, the conference and sessions end up being a blast as was catching up with friends after hours.

Thanks to those of you that attended my sessions and filled the session rooms so nicely 😃. There were also a lot of good questions and discussions after all sessions, which is always great. I was especially happy to see so many turn out to the Localization talk – a tough sell in the best of circumstances, and especially tough as the last session on the last day.

Here are the three sessions (or two if I count the Angular/ASP.NET one as a single long session).

  • Using Angular with ASP.NET
    Part 1: Getting started
    Part 2: Putting it all together

    Part 1 of this session was basically an all-code demo for creating a first Angular app and then hooking it up to an ASP.NET Core backend API. Part 2 then looked at a more realistic, albeit small, application and dove into the details of how to integrate Angular and ASP.NET and manage many common aspects like error handling, user authentication, deployment and hosting, and more.

    The slides for these sessions are combined into a single large deck that contains many more slides than I used during the sessions, filling in the details that were either covered by code samples or handled in the live coding bits.

    Samples and Slides:
    https://github.com/RickStrahl/DI2017-AspNet-Core-Angular

  • Localization in ASP.NET Core
    This session introduced localization in .NET in general and then jumped into the specifics of how to use the new dependency injection based localization features in ASP.NET Core. Several sample pages are provided in the Github link below. The session also covered how to use Westwind.Globalization as a database driven resource localizer, along with a discussion of how to implement a custom Localizer in .NET Core.

    Samples and Slides:
    https://github.com/RickStrahl/DI2017-ASP.NET-Core-Localization

Hope some of you find these materials useful. Enjoy.



JavaScript Debugging in a Web Browser Control with Visual Studio

Debugging an embedded Web Browser control in a Windows application can be a pain. The Web Browser control is essentially an embedded instance of the Internet Explorer engine, but it lacks any of the support tooling for debugging.

A few months ago I posted about using Firebug Lite to provide at least Console output for your JavaScript/HTML based logic. This basically provides an integrated console – based on inline JavaScript – with full support for console logging output, including deep object tree access for variables. If all you need is to access a few simple values to check state or other informational settings, this is certainly a quick and easy way to go.

Visual Studio HTML Debugging

But if you need to debug more complex code, using Console based output can only get you so far. A few days ago I had introduced a regression bug into the spell checking code in Markdown Monster and man was it tricky to debug. Console debugging had me running in circles.


Right around the same time I got a comment on the Firebug Debugging post that casually mentioned that you can use Script debugging in an EXE application by externally attaching a debugger to the EXE and then choosing Script Debugger. A bit of experimenting ensued…

I vaguely knew that Visual Studio can debug Internet Explorer code, but didn’t put the pieces together to see how to do this with my own applications like Markdown Monster that are running an embedded Web Browser control. It didn’t occur to me because standalone Windows projects like a WPF app don’t offer script debugging as part of the debugging UI.

However, it is possible to use Visual Studio to debug Web Browser control code – you just need to explicitly attach the debugger to do it. And since you are attaching to a process, it works with any kind of EXE application, not just .NET applications.

To set this up:

  • Start up your application from Explorer or Command Line
  • In Visual Studio use Tools->Attach to Process
  • Attach the Debugger to Script Code
  • Pick your EXE from the Process list

Here’s what the Attach Dialog should look like:

Once the debugger is attached, Visual Studio automatically tracks any scripts that are running in a Script Documents section in the Solution Explorer and you can open the document from there.

Markdown Monster uses HTML for the Markdown editor and also the preview so both of these pages and their scripts show up in the Script Documents section immediately.

Now to debug code:

  • Open the script file from Script Documents (not from your project!)
  • Set a breakpoint anywhere
  • Run your code to the breakpoint
  • Examine variables by hovering
  • Fix broken sheiit
  • Go on with your bad self!

To open the JavaScript Console and DOM Explorer:

  • Type JavaScript Console into Quick Launch
  • Type DOM Explorer into Quick Launch

Here’s what all of this (minus the DOM Explorer) looks like in Visual Studio:

Once you have a breakpoint set you can examine variables and drill into objects just like you’d expect to do in Visual Studio.

You can also open the JavaScript Console which gives you interactive access to the document and script code running in it – just as you would with the regular F12 tools in Internet Explorer. There’s also a DOM Explorer that lets you drill into any open document’s DOM tree and CSS. These two features use the same F12 tools that you use in full Internet Explorer, just planted into Visual Studio. Usually this wouldn’t be so exciting since the F12 tools work fine in IE, but since the Web Browser Control doesn’t have a shell and hence no F12 tools support, this fills a big glaring hole in Web Browser Control development.


Watch where you make Code Changes!

If you look at my screen shot you can see that the script file debugged is open and I can edit this file and make changes – if I reload the page (or open a new document in Markdown Monster for example) the change shows up in the executing code.

But be aware that the script file you’re debugging lives in the application’s deployment folder – it’s not the copy that lives in your project.

In Markdown Monster the Editor and Preview JavaScript code is part of my .NET project as content files that are copied to the output folder when the project builds.

When you debug with the Visual Studio debugger, the files in Script Documents are the actual files running, which are in the deployment folder (i.e. \bin\Release). So if you make changes to a script file, make sure you copy those changes back to your project folders after you’re done debugging, or else the changes are simply overwritten the next time you compile your project.

You’ve been warned! 😃

I say this because I’ve done this more than a few times in the past – debugged my files, made some changes, then recompiled the .NET project and: Faaaark! I just overwrote my changes. Don’t let that happen to you.

Debug me!

Having full debugging support for the Web Browser Control in my own Windows applications is going to make my life a lot easier. I’ve made do with Console output based debugging for a long time and while it’s a huge step up from no debug support at all it can be tedious. Using the full debugger is a huge leg up when dealing with more complex code in JavaScript.

After I hooked up the debugger in Visual Studio I found my spellcheck issue in Markdown Monster in a matter of a couple of minutes, after previously spending well over an hour trying to find the right console.log() calls to trace down the bug.

Using Attach to Process is a little cumbersome, and it makes it difficult to debug startup code, but if you really need to debug a complex issue, this little bit of extra work is well worth the effort.

This works with any executable – it doesn’t have to be a .NET application like my WPF Markdown Monster app. I have an ancient FoxPro application that also uses the Web Browser control, and I can debug its HTML/JavaScript code the same way in Visual Studio. Heck, even an old MFC application’s Web Browser would be debuggable this way.

For .NET projects designed for Visual Studio, it would be nice, though, if standard EXE debugging had an additional option to start script debugging alongside the .NET (or native) debugger, but I can live with Attach to Process.

I didn’t know about this until – well today. And I’m excited – this will make my life a lot easier when I run into JavaScript integration problems. Awesome! Thanks to @Donnchadha for pointing me in the right direction.



Updating my AlbumViewer Sample to ASP.NET Core 1.1 and Angular 4

As those of you that come here frequently know, I’ve been building and updating an ASP.NET Core sample API project called AlbumViewer. It’s a small application that tracks artists, albums and tracks, and it represents a typical small app with CRUD operations as well as authentication and application management features. It’s been my ‘reference project’ that I use to experiment with ASP.NET Core as well as Angular, and I’ve dragged it through all the many versions, starting with early previews of ASP.NET Core all the way up to .NET Core 1.1 and .csproj, as well as an original Angular 1 application carried all the way to Angular 4.0.

In this post I want to briefly touch on the latest set of updates, which are:

  • Moving from .NET project.json based projects to .csproj projects
  • Moving from Angular 2.x to Angular 4
  • Moving from an Angular Starter template to the Angular CLI
  • Rethinking how to set up an Angular project in an ASP.NET Core solution

The AlbumViewer Application

The sample AlbumViewer application is available on Github:

and you can check it out online at:

The application is a mobile friendly Web application that browses Albums, Artists and Tracks. It also supports editing of the data, gated behind a simple client and server based authentication mechanism.

The application runs in full browser mode:

Album Viewer Desktop Browser View

as well as in mobile mode:

Album Viewer Mobile View

The UI is Bootstrap based and feels a bit dated by now (after all, this app is now going on nearly 4 years old), but it’s certainly functional as a responsive Web app on all devices.

There’s more info on the repo’s home page on features and how to set this up if you want to play with it.

Moving to .csproj from project.json – Uneventful

As I’ve mentioned I’ve carried this project forward from the early ASP.NET Core betas to the current version. The latest update moves the application to the new .csproj project system and .NET Core 1.1.

I had some trepidation before I started given how old this project was, but surprisingly this process went very smoothly.

I used Visual Studio to update by:

  • Installing VS 2017
  • Opening my old project.json project
  • VS offers an upgrade

Done!

As part of the update process my code has moved to .NET Standard 1.6.1 which has significantly reduced the Package clutter seen in .NET Core projects:

Notice that the Westwind.Utilities project has no dependencies outside of .NET Standard which is very nice compared to the nasty clutter that occurred in older versions before .NET Standard!

So the update process worked the first time and, surprisingly, the application came up immediately and just ran! There was one problem with Entity Framework (discussed later) that forced me to roll back the EF packages, but otherwise the application worked on the first try. Again – yay!

If you’re not using Visual Studio, or you’re running on mac or linux you can also install the latest .NET Core SDK then:

dotnet migrate

which performs the same task as Visual Studio’s project migration – Visual Studio simply uses this command line tooling under the covers. For kicks I rolled back my project, did the upgrade with dotnet migrate, and ended up with an identical configuration and a running application.

So Kudos to the .NET project folks – the migrate functionality seems to be working very well.

Update project.json to 1.1 first then Migrate

One piece of advice though: If you plan on doing the project.json migration, move your project.json based project to the latest version (1.1) first and make sure everything runs.

Then upgrade the project to .csproj using dotnet migrate or Visual Studio’s migration. This is one less thing to worry about. FWIW, I migrated from 1.0.1 and everything still worked, but I still would recommend going as far as you can with project.json before moving over.

.csproj only works in Visual Studio 2017

Remember that once you switch to .csproj based projects, you can no longer use Visual Studio 2015 – you have to use VS 2017, as the new tooling is not going to be back ported to VS 2015. This feels like a step back to the old days of .NET where Visual Studio versions had to be matched with .NET versions, but at least the incompatibility only goes one way: Visual Studio 2017 supports all old project types, so it’s easy to move development forward to VS 2017. I’ve been on VS 2017 for nearly half a year and VS 2015 is no longer installed.

VS 2017 also requires migration to .csproj – you can’t open project.json based projects in Visual Studio 2017 without migrating them first.

Another thing to keep in mind is that when .NET Core 1.2 (or whatever the next full version ends up being named/numbered) ships, the dotnet migrate command will be retired, so it’s meant as an intermediary tooling feature. project.json too will be discontinued and no longer supported – 1.1 is the last version that works with it.

So the time to migrate is now if you have older .NET Core projects.

If you’re curious what exactly changed in migrating from project.json to .csproj here are the two Github commits that include all the changes in the AlbumViewer project:

Entity Framework Regression Bug

I ran into an issue with Entity Framework 1.1.1 right away in the AlbumViewer application. Specifically, I was unable to run a query that projects a result property based on .Count() or other aggregate operations:

public async Task<List<ArtistWithAlbumCount>> GetAllArtists()
{
    return await Context.Artists
        .OrderBy(art => art.ArtistName)
        .Select(art => new ArtistWithAlbumCount()
        {
            ArtistName = art.ArtistName,
            Description = art.Description,
            ImageUrl = art.ImageUrl,
            Id = art.Id,
            AmazonUrl = art.AmazonUrl,
            
            // THIS LINE HERE IS THE PROBLEM IN EF 1.1.1
            AlbumCount = Context.Albums.Count(alb => alb.ArtistId == art.Id)
        })
        .ToListAsync();
}

The problem in the code above is the AlbumCount property and Albums.Count() – removing the count makes the query work. This was found by quite a few people and is discussed in some detail here on Github:

The only way to work around that issue was to roll back to EntityFramework 1.1.0:

<PackageReference Include="Microsoft.EntityFrameworkCore" Version="1.1.0" />
<PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="1.1.0" />
<PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="1.1.0" />
<PackageReference Include="Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore" Version="1.1.0" />

The bug is acknowledged but there’s no 1.1.x update. It’ll be fixed for the 1.2.x release. For now using the older 1.1.0 packages is the way to get around this unless you want to jump into pre-release packages. Sure wish the 1.1.x package was updated.

And that’s it for the server side .NET updates

Moving to the Angular CLI

Once the ASP.NET application had been moved, I decided to also move the Angular front end application to the Angular CLI. The Angular CLI is the ‘official’ command line interface for creating and running new Angular projects and building final production output for your project. The CLI has a ton of additional features, like running tests and adding new components of all types, but so far I’ve focused on its core features: creating new projects, running the application with the development/live-reload server, and building final production output.

I’ve struggled for quite a while with finding the right solution for creating new Angular projects. I’ve gone through a bunch of different starter templates, and even used a custom one I created for a while. While this worked, all these solutions have been problematic once Angular rev’d: getting Angular and its dependencies properly updated was always a major undertaking that usually involved creating a new project with the latest starter template and then diffing the two configurations.

The Angular CLI has been around for quite a while, but until recently I wouldn’t touch it because early iterations produced some pretty horrible starter projects. However, in recent versions ramping up to the 4.0 release of Angular, the Angular CLI started to improve drastically, producing clean starter projects that closely follow the Angular style guide and provide a minimal template that provides most of what you need and no more.

There are still a few pain points – like the lack of automatic routing hookups – but overall the CLI provides a very usable and productive project after running ng new. Routing is missing a sample route hookup, so you have to find the right dependencies and set up your first routes manually, which is not the most obvious thing. Luckily the Angular documentation is very good and shows how to add the necessary pieces.
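For reference, the manual route hookup the CLI leaves out boils down to something like this minimal routing module sketch (HomeComponent here is a placeholder for one of your own components, not something the CLI generates):

```typescript
import { NgModule } from "@angular/core";
import { RouterModule, Routes } from "@angular/router";
// Placeholder component for illustration.
import { HomeComponent } from "./home/home.component";

const routes: Routes = [
  { path: "", component: HomeComponent },
  // Send unknown paths back to the root route.
  { path: "**", redirectTo: "" }
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule {}
```

Import AppRoutingModule into app.module.ts and drop a router-outlet into the root component’s template and routing is live.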

Upgrading to the Angular CLI from a Starter Template

All that said about the CLI, I had to migrate my existing WebPack based template installation to a CLI project. The good news is that this wasn’t as much of a pain as I thought it would be, because the Angular CLI produces a project that has most of the dependencies already defined.

The approach I took was:

  • Create a new CLI project
  • Copy my existing application’s App folder contents
  • Add any non-referenced dependencies to angular-cli.json
  • Add all dependencies and declarations to app.module.ts
  • Copy routes from old route config to new config

To my surprise the application fired right up and ran after getting the module configuration properly set up. The process was a lot easier than I thought.

This brings home a big point I’ve been making about Angular for a long while: While setup and configuration can be a bit overwhelming, once you get into the guts of building an application the process is very straightforward and logical, and… crazy productive. Upgrading to the Angular CLI reflects that as well – the main application code ran without any modifications whatsoever. The move to the Angular CLI only required getting the configuration properly set up to map to the setup the CLI creates.

Upgrading to Angular 4

Angular 4 was released a few weeks back and there are many improvements, most of which live behind the scenes and provide improved performance and smaller runtime packages for the base vendor bundles and your own transpiled production code.

I’d been running Angular 2.4 previously, and the upgrade to Angular 4 was almost a non-event. The Angular site has update instructions (scroll down a little) that show the NPM commands to update all packages, including the CLI. After that, simply run the application just as you did before and you’re off to the races.

Very nice.

Since upgrading to Angular 2.0 final, I’ve not had to do any upgrade-specific code fixes in the AlbumViewer project, which is pretty cool, especially given all the major pre-release pain that those of us crazy enough to play with Angular before release (for a loooong time) went through. Upgrades have been smooth with no hiccups for me personally. Each release includes some relatively small breaking changes, but these are usually very minor and affect uncommon use cases. It appears the Angular team is trying very hard not to break backwards compatibility, even with the major 4.0 release.

Managing ASP.NET Core and Angular together

One thing I’ve been going back and forth on with various projects has been how to break out the client and server sides of the application. In previous iterations of the AlbumViewer I had used a custom starter template and shoehorned it into my ASP.NET application’s wwwroot folder. The end result was that all my source files lived under wwwroot/app and the final build output went into the root /wwwroot folder.

While this worked, it has always been a pain, because it’s a customized setup that requires moving things around after using any type of starter template. These templates – including the Angular CLI – want to run out of a dedicated folder, and making that happen inside of the ASP.NET Core project itself is not really clean.

The ASP.NET Core Repository does provide JavaScript Services:

This library consists of a bunch of Yeoman project templates plus some pretty cool ASP.NET Core backend services that interface with NodeJs to provide server side pre-rendering of JavaScript/Angular code.

While this stuff is pretty cool, it still doesn’t really fit the project layout that Angular projects naturally want to use.

Ideally I want to use the official solution for creating new projects – the Angular CLI – with its supported mechanism for updating to the latest versions. As cool as the ASP.NET JavaScript services are, when it comes time to update you’re back on your own.

Breaking out the Angular Project

So, rather than continuing down the ‘everything Web in one project’ approach, I decided to break out the Angular project into its own folder and ‘Web Site’ project. The Angular application now lives in its own folder on disk and is accessible in Visual Studio as an old school Web Site project.

Separated Angular and ASP.NET Core Projects

I don’t actually edit the Angular project in Visual Studio as I use WebStorm for my client side development. But it’s nice to have the Angular project part of the solution so I can see the code there and make the occasional quick edit from VS. The key thing is that the Server Web API and Angular Front End projects are separate.

I like this because while developing I already run a separate server for my front end project, using the Angular CLI server on port 4200 and connecting to the Web API server on port 5000. This means I have to build my Web API in such a way that it supports remote connections with CORS anyway.

This means I can potentially run my backend API and my front end as completely separate Web sites.

To make switching between sites a bit easier I use a client side configuration service class in which I define a baseUrl that determines where the server will be accessed:

@Injectable()
export class AppConfiguration {
  constructor(){
      this.setToastrOptions();
      console.log("AppConfiguration ctor");

      if (location.port && (location.port == "3000" || location.port == "4200"))
        this.urls.baseUrl = "http://localhost:5000/"; // kestrel

      //this.urls.baseUrl = "http://localhost:26448/"; // iis Express
      //this.urls.baseUrl = "http://localhost/albumviewer/"; // iis
      //this.urls.baseUrl = "https://samples.west-wind.com/AlbumViewerCore/";  // online
  }

  urls = {
    baseUrl: "./",
    //baseUrl: "http://localhost/albumviewer/",
    //baseUrl: "http://localhost:5000/",
    //baseUrl: "https://albumviewer2swf.west-wind.com/",
    albums: "api/albums",
    album: "api/album",
    artists: "api/artists",
    artist: "api/artist",
    artistLookup: "api/artistlookup?search=",
    saveArtist: "api/artist",
    login: "api/login", //"api/login",
    logout: "api/logout",
    isAuthenticated: "api/isAuthenticated",
    reloadData: "api/reloadData",
    url: (name,parm1?,parm2?,parm3?) => {
      var url = this.urls.baseUrl + this.urls[name];
      if (parm1)
        url += "/" + parm1;
      if (parm2)
        url += "/" + parm2;
      if (parm3)
        url += "/" + parm3;

      return url;
    }
  };
}

Using this approach it’s very easy to switch the server between different locations by injecting the AppConfiguration service and then using the config.urls.url() function to build up a url:

@Injectable()
export class AlbumService {
  constructor(private http: Http,
              private config:AppConfiguration) {
  }

  albumList: Album[] = [];
  
  getAlbums(): Observable<Album[]> {
    return this.http.get(this.config.urls.url("albums"))
        .map((response)=> {
          this.albumList = response.json();
          return this.albumList;
        })
        .catch( new ErrorInfo().parseObservableResponseError );
  }
  ...
}

This makes it super easy to switch between backends. For example, I sometimes find it useful during development to just use the static backend on my live samples server because I know it’s always up and running. Other times I want to use my local development server because I know I’ll be making changes to the server as I work.
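To make the composition concrete, here’s the url() lookup logic extracted into a small standalone sketch, with the endpoint table trimmed down from the AppConfiguration service above:

```typescript
// Trimmed-down endpoint table from the AppConfiguration service.
const urls: { [key: string]: string } = {
  baseUrl: "http://localhost:5000/",
  albums: "api/albums",
  artist: "api/artist"
};

// Build a full URL from a named endpoint plus optional path segments,
// mirroring the url() helper shown earlier.
function url(name: string, parm1?: any, parm2?: any, parm3?: any): string {
  let result = urls["baseUrl"] + urls[name];
  if (parm1) result += "/" + parm1;
  if (parm2) result += "/" + parm2;
  if (parm3) result += "/" + parm3;
  return result;
}

console.log(url("albums"));     // http://localhost:5000/api/albums
console.log(url("artist", 12)); // http://localhost:5000/api/artist/12
```

Swapping the baseUrl entry is the only change needed to repoint every endpoint at a different server.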

Building Output into wwwroot

If you don’t want to run two separate Web sites/virtuals for your front end and backend code you can still generate all of your output into the server’s wwwroot folder easily enough. In fact I set up my angular-cli.json file to point at the wwwroot folder in the server project:

{
  "apps": [
    {
      "outDir": "../AlbumViewerNetCore/wwwroot"
      ...
    }
    ...
}

This always builds the application into the wwwroot folder when I build a production build with:

> ng build --prod

This puts the final packaged Angular application output into the Web app’s wwwroot folder, so I can run everything as a single Web site:

Combined API and Angular Web Site

Summary

This project has been an interesting one for me, and the exercise of keeping it up to date with the latest versions is a great reference for understanding some of the core features of both the server side ASP.NET Core and .NET Core frameworks as well as the Angular bootstrapping code.

I’m happy to say that the latest set of updates for both the .NET and Angular pieces has been spectacularly painless, compared to the crazy churn that was happening with both frameworks not so long ago. Both Microsoft and Google seem to have gotten the memo that breaking code with each minor release is going to piss people off and eventually drive them away from your products.

And it looks like it’s working: both the ASP.NET and Angular updates to major new versions and build tools happened with minimal effort.

Here’s to many more updates that are as relatively painless as this.

