I just read this article by Joe Duffy about dealing with asynchronous exceptions.
He provides an interesting recommendation:
"Start paired operations inside a try block when possible."
In this context, 'paired operations' are ones where if the first operation executes, you always want the second one to execute too. A good example would be lock acquisition and release - if you acquire a lock, you will want to release it at some point. This is exactly the scenario in which you use the C# using statement, or a try...finally pair of blocks.
Joe's recommendation says that the first operation of the pair should be inside the try block. In the context of a using statement, that means inside the block - the using statement expands the block of code you supply into a try block, and it generates the finally block for you.
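To make that expansion concrete, here is roughly what the compiler generates for a using statement (a sketch - the real expansion also deals with casting to IDisposable and with value types; SomeResource is an invented name):

```csharp
using System;

class SomeResource : IDisposable
{
    public bool Disposed;
    public void Dispose() { Disposed = true; }
}

class Program
{
    static void Main()
    {
        // A sketch of what the compiler generates for:
        //     using (SomeResource r = new SomeResource()) { ... }
        SomeResource r = new SomeResource();  // acquisition is BEFORE the try
        try
        {
            Console.WriteLine("Body runs here");
        }
        finally
        {
            if (r != null)
            {
                ((IDisposable)r).Dispose();   // the generated cleanup
            }
        }
        Console.WriteLine("Disposed: " + r.Disposed);
    }
}
```

The important point for this discussion is the first line of Main: the acquisition sits outside the try block, which is exactly the placement Joe's recommendation argues against.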
The reason for putting the first of the paired operations into the try block is entirely to deal with the possibility of asynchronous exceptions. (Principally thread aborts, which as you know, are evil. But they are a fact of life at AppDomain shutdown time.) To see why putting the code outside of the try block could be a problem, consider this example:

    First();   // What if a thread abort occurs here?
    try
    {
        DoStuff();
    }
    finally
    {
        Last();
    }
Asynchronous exceptions can occur at any time. If one occurs on the comment line above, then clearly the First() method will have been executed, but Last() will not have been - the exception occurred before the try block was entered, so the finally block will never run.
Joe's article recommends this instead:
    try
    {              // (2)
        First();   // (1)
        DoStuff();
    }
    finally
    {
        Last();
    }
This solves the problem of the previous example. If an asynchronous exception occurs immediately after First() runs on line (1), Last() still runs.

But what if an exception occurs on line (2)? Or if one occurs in the middle of First()? In this case Last() will run even though First() may not have run, or may have run only partially. So to use this idiom, you need to make sure that the second of your paired operations behaves correctly if it is called even when the first operation was not.
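To use this idiom, then, the release operation has to be defensive. A minimal sketch of what that might look like (First, Last, and the Acquired flag are invented names for illustration, not anything from Joe's article):

```csharp
using System;

class Program
{
    public static bool Acquired;

    public static void First()
    {
        Acquired = true;            // e.g. take ownership of some resource
        Console.WriteLine("First ran");
    }

    // Last() must behave correctly even when First() never ran,
    // or only ran partially.
    public static void Last()
    {
        if (!Acquired)
        {
            return;                 // nothing to release
        }
        Acquired = false;
        Console.WriteLine("Last ran");
    }

    static void Main()
    {
        try
        {
            First();                // an abort could strike before this line
            Console.WriteLine("Doing stuff");
        }
        finally
        {
            Last();                 // safe whichever way we got here
        }
    }
}
```

Calling Last() when First() never executed is now a harmless no-op, which is the property the idiom demands. (A fully abort-proof version would need the flag to be set atomically with the acquisition itself - this sketch glosses over that.)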
The most inconvenient aspect of trying to follow this style is that there is no compiler support. We usually get the using keyword to generate finally blocks for us whenever deterministic cleanup is required, but the using statement puts the initialisation code before the try block. The only way to stick to Joe's recommendation here is to go back to coding try...finally blocks by hand - the using statement can't help us here.
Joe points out that his recommendation "deviate[s] from guidance given in the past". And the using block construct is based on the old guidance, not this new guidance. But curiously, Joe goes on to recommend that you use using or lock blocks. (And to be fair, he also calls out the fact that this appears to contradict the previous recommendation.)
The lock block turns out to be particularly interesting. By all rights it should suffer from the same problem as the using statement, because it generates very similar code. In particular, it puts the lock acquisition outside of the try block. But apparently there is a "JIT hack" which recognizes when you are using a lock statement (or equivalent code - it looks for the pattern of code that lock generates, so it is not really a C#-specific thing; it will work with VB.NET's SyncLock too, for example). The JIT guarantees that for any code that uses this construct, it will eliminate any window between acquiring the lock and entering the try block!
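For reference, here is the pattern the JIT is reportedly looking for - a sketch of the expansion the compiler produces for a lock statement in this era:

```csharp
using System;
using System.Threading;

class Program
{
    static readonly object sync = new object();

    static void Main()
    {
        // What the compiler generates for: lock (sync) { ... }
        // Note that Monitor.Enter is OUTSIDE the try block - the gap
        // between these two lines is the window the JIT is said to close.
        Monitor.Enter(sync);
        try
        {
            Console.WriteLine("Holding the lock");
        }
        finally
        {
            Monitor.Exit(sync);
        }
    }
}
```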
This makes me slightly queasy for a couple of reasons. First, I don't really like the magic special-case handling of this in the JIT. But more importantly, I'm not a fan of the lock statement. I've written about this a few times before, but in short, my main problem with lock is that you can't specify a timeout. Blocking indefinitely means your program is irretrievably hosed if it deadlocks.
So I don't use lock. But using apparently doesn't get this special treatment, as far as I can tell. (And it probably shouldn't - what if I'm writing code that depends on the current behaviour?)
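The timeout-friendly alternative to lock that I have in mind looks something like this - Monitor.TryEnter with an explicit timeout (a sketch; the 30-second value is an arbitrary choice):

```csharp
using System;
using System.Threading;

class Program
{
    static readonly object sync = new object();

    static void Main()
    {
        // Unlike lock, TryEnter lets us give up rather than deadlock forever.
        if (!Monitor.TryEnter(sync, TimeSpan.FromSeconds(30)))
        {
            throw new TimeoutException(
                "Failed to acquire lock - possible deadlock");
        }
        try
        {
            Console.WriteLine("Holding the lock");
        }
        finally
        {
            Monitor.Exit(sync);
        }
    }
}
```

Of course, written this way the acquisition still sits outside the try block, so this idiom gets neither the JIT's special treatment nor compiler-generated cleanup - which is rather the point of my discomfort.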
Joe tells us that this isn't really a big deal, because this only really matters for thread abort scenarios. Those should really only ever happen to you at AppDomain shutdown time. (If you use them for anything else, stop doing that!) And in that particular case, critical finalizers in Whidbey provide a 'good enough' form of cleanup.
It still leaves me feeling rather uncomfortable. The fact that they felt the need to build in this hack for the lock keyword makes it hard to believe it when Joe says this "shouldn't be a concern to you". That and the new experimental idioms Joe illustrates towards the end of the article - if they are considering these things, doesn't that mean this is important? And the observation near the end that there is no good solution for the IDisposable idiom doesn't fill me with warmth.
I think the most important thing to take away is the "Never initiate asynchronous thread aborts in the Framework" rule. Or, as I've always said: Thread.Abort is evil.