Deadlocks are a tricky problem that can occur whenever you write multithreaded code. Every time you acquire some kind of lock, such as a Monitor or a Mutex, you may be risking deadlock: you are asking your thread to sit and wait for some external event. If that external event cannot occur until your thread completes some other work, you have deadlock - the thread is blocked and will therefore be unable to do the work required to unblock it.
Many people tend to think of deadlocks as being a phenomenon specific to the acquisition of locks. However, any number of resources may come into play in deadlock scenarios. The essence of deadlock is a paradox: deadlock occurs when a thread is not able to proceed until a particular operation is performed, but that operation cannot be performed until the thread is able to proceed.
Looking at a deadlock in those terms, the classic 'deadly embrace' example seems somewhat convoluted, since it contains two instances of this paradox. (In the unlikely event that you're not familiar with this style of deadlock, here's a short summary. We have two threads and two locks. Each thread has acquired one of the locks and is blocked waiting to acquire the other. The first thread is unable to proceed until the lock currently held by the second thread is released. But the second thread won't release that lock until after it has acquired both locks and has done whatever work it needs to do, and that can't happen until the second thread has acquired the lock held by the first thread. In other words, the lock the first thread is waiting for cannot become available until the lock already held by the first thread is released. So there we have an instance of the paradox, but of course the second thread is an exact mirror image.)
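The deadly embrace boils down to very little code. Here is a minimal sketch (the names are mine, purely illustrative); the Sleep calls are only there to make the fatal interleaving reliable rather than a matter of luck:

```csharp
using System;
using System.Threading;

class DeadlyEmbrace
{
    static object lockA = new object();
    static object lockB = new object();

    static void ThreadOne()
    {
        lock (lockA)
        {
            Thread.Sleep(100);   // Give ThreadTwo time to take lockB.
            lock (lockB)         // Blocks forever: ThreadTwo holds lockB...
            {
                Console.WriteLine("ThreadOne got both locks");
            }
        }
    }

    static void ThreadTwo()
    {
        lock (lockB)
        {
            Thread.Sleep(100);   // Give ThreadOne time to take lockA.
            lock (lockA)         // ...and won't release it until it gets
            {                    // lockA, which ThreadOne holds.
                Console.WriteLine("ThreadTwo got both locks");
            }
        }
    }

    static void Main()
    {
        Thread t1 = new Thread(new ThreadStart(ThreadOne));
        Thread t2 = new Thread(new ThreadStart(ThreadTwo));
        t1.Start();
        t2.Start();
        t1.Join();   // Never returns - the two threads are deadlocked.
    }
}
```

Note that the fix for this particular shape of deadlock is well known: make every thread acquire the locks in the same order, and the cycle cannot form.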
There are examples of deadlock which are simpler. And not all involve locks - since the essence of deadlock revolves around the requirement that a particular thread perform a particular operation, any system involving thread affinity tends to be fertile ground for deadlocks. Indeed it is possible to induce thread affinity based deadlock without locking on any shared resources at all.
Consider this example from a Windows Forms application. (Windows Forms controls have thread affinity: almost anything that you can do to them must be done on the same thread on which they were created. This thread affinity is essentially a consequence of how Win32 itself works.)
private Stream rx, tx;

private void btnSend_Click(object sender, System.EventArgs e)
{
    byte[] message = { 0x46, 0x6f, 0x6f, 0x21 };
    tx.Write(message, 0, message.Length);
}

private void btnReceive_Click(object sender, System.EventArgs e)
{
    byte[] message = new byte[4];
    rx.Read(message, 0, message.Length);
    string messageText = Encoding.UTF8.GetString(message);
    txtDisplay.Text = messageText;
}
Suppose in this example that data will only emerge from the rx stream if some data has been sent to the tx stream. One very simple way of doing this is to set up a TCP connection to ourselves, and make tx and rx the two ends of the connection - that way, whatever bytes we write into tx will emerge from rx. While this is a simple concept, it requires a little unattractive boilerplate:
// Set up TCP connection to ourselves.
//
// First, listen for a connection.
int port = 54321;
Socket listen = new Socket(AddressFamily.InterNetwork,
    SocketType.Stream, ProtocolType.Tcp);
listen.Bind(new IPEndPoint(IPAddress.Any, port));
listen.Listen(1);

// Next make an 'outbound' connection attempt.
Socket txSock = new Socket(AddressFamily.InterNetwork,
    SocketType.Stream, ProtocolType.Tcp);
IAsyncResult connIar = txSock.BeginConnect(
    new IPEndPoint(IPAddress.Loopback, port), null, null);

// Accept this attempt on the listening socket, and
// then complete the connection.
Socket rxSock = listen.Accept();
txSock.EndConnect(connIar);

// Wrap each end of the TCP connection in a stream.
rx = new NetworkStream(rxSock, FileAccess.Read, true);
tx = new NetworkStream(txSock, FileAccess.Write, true);
Assume the btnSend_Click and btnReceive_Click functions above are click handlers for buttons on the form. If the Send button is clicked, this writes some data into the TCP connection. If the Receive button is then clicked, it retrieves this data and displays it. But what if Receive is clicked first? The call to rx.Read will block until data is available. The only thing that is going to make data available is for us to send some data down the TCP connection, which is what happens when the Send button is clicked. But because Windows Forms has thread affinity, click handlers can only run on the thread on which the control was created. If the Send and Receive buttons are on the same form, they must have been created on the same thread, which means that their click handlers must run on the same thread. The upshot of this is that the Send click handler cannot run until the Receive click handler has finished.
And there's that paradox: the UI thread cannot proceed until some data is sent, but data will not be sent until the UI thread is allowed to proceed. We have deadlock. What's more, we deadlocked in a single-threaded program with no locks in sight! Be careful out there...
(You might argue that this is just a hang rather than technically a deadlock, since it doesn't appear to meet common definition in which two processes are involved. But it depends on what you think process means. The term 'deadlock' is freely used for single processes involving multiple threads, so a 'process' doesn't have to be an OS process. If you consider it to be some task consisting of a sequence of operations, then it could be a thread, an OS process, the process of sending data, or the process of receiving data. So by that definition, this example has two 'processes' - a send process and a receive process. They just happen to run on the same thread. But hey, on a single-processor machine, all your processes run on the same processor, so ultimately, what's the difference?)
First of all, you should avoid blocking the UI thread if you can possibly help it. The btnReceive_Click function is doing a bad thing - NetworkStream.Read won't return until data is available, which means the UI will not respond to user input until data is available. Even if I hadn't contrived a deadlock this would still be a bad thing. This kind of work should either be done asynchronously or on a different thread in order to keep the UI responsive. In this particular case it would also avoid the deadlock. For example:
private void btnReceive_Click(object sender, System.EventArgs e)
{
    byte[] message = new byte[4];
    rx.BeginRead(message, 0, message.Length,
        new AsyncCallback(ReadComplete), message);
}

private void ReadComplete(IAsyncResult iar)
{
    if (InvokeRequired)
    {
        // Get called back on UI thread.
        object[] args = { iar };
        BeginInvoke(new AsyncCallback(ReadComplete), args);
        return;
    }
    rx.EndRead(iar);
    byte[] message = (byte[]) iar.AsyncState;
    string messageText = Encoding.UTF8.GetString(message);
    txtDisplay.Text = messageText;
}
You might be thinking that now we've gone multi-threaded, it would be necessary to synchronize access to the NetworkStream objects, since these are not designed to be accessed concurrently from multiple threads. However, the code never touches these objects from anything other than the UI thread - the only point at which another thread gets involved is the first time the ReadComplete completion function is called, at which point it uses Control.BeginInvoke to pass control back to the UI thread. All we've done here is avoid blocking the UI thread while we wait for incoming data. All of the useful work is still done on the UI thread.
So that's the first rule for avoiding deadlock in UI apps: avoid blocking the UI thread. Happily, this rule also promotes the responsiveness of your application.
The second rule is a little more subtle. You should also avoid making other threads wait for the UI thread to do something. For example, you should prefer Control.BeginInvoke to Control.Invoke.
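To make the distinction concrete, here's a hypothetical worker-thread callback (OnDataArrived and UpdateDisplay are illustrative names, not from the example above). Control.BeginInvoke merely queues the delegate for the UI thread and returns immediately, whereas Control.Invoke blocks the worker until the UI thread has actually executed it:

```csharp
// Hypothetical completion callback that runs on a worker thread;
// txtDisplay is a control created on the UI thread.
private void OnDataArrived()
{
    // BeginInvoke queues the delegate for the UI thread and returns
    // at once - this worker thread never waits for the UI thread,
    // so it cannot become part of a wait cycle.
    txtDisplay.BeginInvoke(new MethodInvoker(UpdateDisplay));

    // Invoke, by contrast, would block right here until the UI
    // thread had run UpdateDisplay. If the UI thread happens to be
    // waiting for this worker (directly, or via a lock we hold),
    // that wait never ends:
    //
    //     txtDisplay.Invoke(new MethodInvoker(UpdateDisplay));
}

private void UpdateDisplay()
{
    // Runs on the UI thread, so touching the control is safe.
    txtDisplay.Text = "data arrived";
}
```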
The problem with blocking a worker thread while you wait for the UI thread to do something is that it's very difficult to be sure that the UI thread isn't waiting for your worker thread to do something. For example, consider the call to BeginInvoke in the ReadComplete method. Would it have been safe to replace it with a call to the synchronous Invoke?
I don't actually know the answer to that question, because it will depend on the internal implementation of NetworkStream. While I suspect that it will probably work, I can't say with absolute certainty that it will, which seems reason enough to avoid it. Indeed it's not that hard to imagine a reason why it might not work.

During the first call to ReadComplete, the bit of code that is calling us is the NetworkStream. I have no way of knowing whether the NetworkStream is doing some internal locking that will mean further calls to BeginRead or EndRead may end up blocking. (Although these APIs are the asynchronous versions, that simply means I won't be made to wait for data to become available; it doesn't mean I won't be made to wait until the NetworkStream is ready for me.) What if the NetworkStream has some internal code like this:
// Note: it's almost certainly rather more complex than
// this in reality...
private void OnSocketReceiveComplete(IAsyncResult iar)
{
    lock (this)
    {
        int c = sock.EndReceive(iar);

        // Retrieve the call object we set up for this
        // async read operation, and set its status
        // to complete. (This means its IsCompleted flag
        // will return true, and any thread waiting on
        // its AsyncWaitHandle will be unblocked.)
        MyCallObject s = (MyCallObject) iar.AsyncState;
        s.Complete();

        // Get the completion handler for the call
        // to NetworkStream.BeginRead
        AsyncCallback endReadCallback = s.Callback;
        if (endReadCallback != null)
        {
            endReadCallback(s);
        }
    }
}
If it worked like that, the worker thread would own the NetworkStream's monitor for the duration of the first ReadComplete callback. If ReadComplete used Invoke, that means we'd be in possession of that monitor while waiting for the UI thread. What if the UI thread is also trying to use the NetworkStream? If it calls some other method on NetworkStream which acquires the same lock internally, then we have a problem: the UI thread is blocked until that lock is released, but the lock will not be released until the UI thread is able to proceed, because the worker thread that owns the lock is waiting for the UI thread. It's that same old paradox: the UI thread is blocked waiting on something that cannot happen until the UI thread proceeds, so we will get deadlock.
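That failure mode can be distilled into a few lines. In this hypothetical sketch, streamLock stands in for whatever lock the NetworkStream might take internally, and the method names are illustrative:

```csharp
// Stands in for the NetworkStream's hypothetical internal lock.
private object streamLock = new object();

// Runs on a worker (thread-pool) thread.
private void WorkerCallback()
{
    lock (streamLock)
    {
        // DANGER: blocks this thread until the UI thread runs the
        // delegate - while we are still holding streamLock.
        this.Invoke(new MethodInvoker(UpdateDisplay));
    }
}

// Meanwhile, a click handler running on the UI thread:
private void btnSend_Click(object sender, System.EventArgs e)
{
    lock (streamLock)   // Blocks forever if WorkerCallback holds the
    {                   // lock and is waiting for this very thread.
        // ... write to the stream ...
    }
}

private void UpdateDisplay()
{
    // Never runs: the UI thread is stuck in btnSend_Click.
}
```

Replace the Invoke in WorkerCallback with BeginInvoke and the cycle disappears: the worker queues the delegate, releases streamLock, and the UI thread can then take the lock, finish its handler, and process the queued delegate.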
I always recommend Control.BeginInvoke over Control.Invoke because it is often very difficult to be certain of which locks you are holding, or what other work in progress might be waiting for your function to return. If you don't control all of the code above you on the call stack, you can never be really sure what the implications of blocking will be.
In general, completion handlers for asynchronous operations should try to avoid performing blocking operations. Preferring Control.BeginInvoke in such handlers is just one example of this idea.