So given

std::promise<void> p;                // promise for
                                     // communications channel

the detecting task's code is trivial,

…                                    // detect event
p.set_value();                       // tell reacting task

and the reacting task's code is equally simple:

…                                    // prepare to react
p.get_future().wait();               // wait on future
                                     // corresponding to p
…                                    // react to event
Like the approach using a flag, this design requires no mutex, works regardless of whether the detecting task sets its std::promise before the reacting task waits, and is immune to spurious wakeups. (Only condition variables are susceptible to that problem.) Like the condvar-based approach, the reacting task is truly blocked after making the wait call, so it consumes no system resources while waiting. Perfect, right?
Not exactly. Sure, a future-based approach skirts those shoals, but there are other hazards to worry about. For example, Item 38 explains that between a std::promise and a future is a shared state, and shared states are typically dynamically allocated. You should therefore assume that this design incurs the cost of heap-based allocation and deallocation.

Perhaps more importantly, a std::promise may be set only once. The communications channel between a std::promise and a future is a one-shot mechanism: it can't be used repeatedly. This is a notable difference from the condvar- and flag-based designs, both of which can be used to communicate multiple times. (A condvar can be repeatedly notified, and a flag can always be cleared and set again.)
The one-shot restriction isn't as limiting as you might think. Suppose you'd like to create a system thread in a suspended state. That is, you'd like to get all the overhead associated with thread creation out of the way so that when you're ready to execute something on the thread, the normal thread-creation latency will be avoided. Or you might want to create a suspended thread so that you could configure it before letting it run. Such configuration might include things like setting its priority or core affinity. The C++ concurrency API offers no way to do those things, but std::thread objects offer the native_handle member function, the result of which is intended to give you access to the platform's underlying threading API (usually POSIX threads or Windows threads). The lower-level API often makes it possible to configure thread characteristics such as priority and affinity.
Assuming you want to suspend a thread only once (after creation, but before it's running its thread function), a design using a void future is a reasonable choice. Here's the essence of the technique:
std::promise<void> p;

void react();                        // func for reacting task

void detect()                        // func for detecting task
{
  std::thread t([]                   // create thread
                {
                  p.get_future().wait();  // suspend t until
                  react();                // future is set
                });

  …                                  // here, t is suspended
                                     // prior to call to react

  p.set_value();                     // unsuspend t (and thus
                                     // call react)

  …                                  // do additional work

  t.join();                          // make t unjoinable
}                                    // (see Item 37)
Because it's important that t become unjoinable on all paths out of detect, use of an RAII class like Item 37's ThreadRAII seems like it would be advisable. Code like this comes to mind:
void detect()
{
  ThreadRAII tr(                       // use RAII object
    std::thread([]
                {
                  p.get_future().wait();
                  react();
                }),
    ThreadRAII::DtorAction::join       // risky! (see below)
  );

  …                                    // thread inside tr
                                       // is suspended here

  p.set_value();                       // unsuspend thread
                                       // inside tr

  …
}
This looks safer than it is. The problem is that if in the first "…" region (the one with the "thread inside tr is suspended here" comment) an exception is emitted, set_value will never be called on p. That means that the call to wait inside the lambda will never return. That, in turn, means that the thread running the lambda will never finish, and that's a problem, because the RAII object tr has been configured to perform a join on that thread in tr's destructor. In other words, if an exception is emitted from the first "…" region of code, this function will hang, because tr's destructor will never complete.
There are ways to address this problem, but I'll leave them in the form of the hallowed exercise for the reader. [19] A reasonable place to begin researching the matter is my 24 December 2013 blog post at The View From Aristeia, "ThreadRAII + Thread Suspension = Trouble?"
Here, I'd like to show how the original code (i.e., not using ThreadRAII) can be extended to suspend and then unsuspend not just one reacting task, but many. It's a simple generalization, because the key is to use std::shared_futures instead of a std::future in the react code. Once you know that std::future's share member function transfers ownership of its shared state to the std::shared_future object produced by share, the code nearly writes itself. The only subtlety is that each reacting thread needs its own copy of the std::shared_future that refers to the shared state, so the std::shared_future obtained from share is captured by value by the lambdas running on the reacting threads:
std::promise<void> p;                  // as before

void detect()                          // now for multiple
{                                      // reacting tasks
  auto sf = p.get_future().share();    // sf's type is
                                       // std::shared_future<void>

  std::vector<std::thread> vt;         // container for
                                       // reacting threads

  for (int i = 0; i < threadsToRun; ++i) {
    vt.emplace_back([sf]{ sf.wait();   // wait on local
                          react(); }); // copy of sf; see
  }                                    // Item 42 for info
                                       // on emplace_back

  …                                    // detect hangs if
                                       // this "…" code throws!

  p.set_value();                       // unsuspend all threads

  …

  for (auto& t : vt) {                 // make all threads
    t.join();                          // unjoinable; see Item 2
  }                                    // for info on "auto&"
}
The fact that a design using futures can achieve this effect is noteworthy, and that's why you should consider it for one-shot event communication.
Things to Remember
• For simple event communication, condvar-based designs require a superfluous mutex, impose constraints on the relative progress of detecting and reacting tasks, and require reacting tasks to verify that the event has taken place.
• Designs employing a flag avoid those problems, but are based on polling, not blocking.
• A condvar and flag can be used together, but the resulting communications mechanism is somewhat stilted.
• Using std::promises and futures dodges these issues, but the approach uses heap memory for shared states, and it's limited to one-shot communication.