public:                             // block in their dtors
  …

private:
  std::shared_future<double> fut;
};
Of course, if you have a way of knowing that a given future does not satisfy the conditions that trigger the special destructor behavior (e.g., due to program logic), you're assured that that future won't block in its destructor. For example, only shared states arising from calls to std::async qualify for the special behavior, but there are other ways that shared states get created. One is the use of std::packaged_task. A std::packaged_task object prepares a function (or other callable object) for asynchronous execution by wrapping it such that its result is put into a shared state. A future referring to that shared state can then be obtained via std::packaged_task's get_future function:
int calcValue();                    // func to run

std::packaged_task<int()>           // wrap calcValue so it
  pt(calcValue);                    // can run asynchronously

auto fut = pt.get_future();         // get future for pt
At this point, we know that the future fut doesn't refer to a shared state created by a call to std::async, so its destructor will behave normally.
Once created, the std::packaged_task pt can be run on a thread. (It could be run via a call to std::async, too, but if you want to run a task using std::async, there's little reason to create a std::packaged_task, because std::async does everything std::packaged_task does before it schedules the task for execution.)
std::packaged_tasks aren't copyable, so when pt is passed to the std::thread constructor, it must be cast to an rvalue (via std::move — see Item 23):
std::thread t(std::move(pt)); // run pt on t
This example lends some insight into the normal behavior for future destructors, but it's easier to see if the statements are put together inside a block:
{                                   // begin block

  std::packaged_task<int()>
    pt(calcValue);

  auto fut = pt.get_future();

  std::thread t(std::move(pt));

  …                                 // see below

}                                   // end block
The most interesting code here is the “…” that follows creation of the std::thread object t and precedes the end of the block. What makes it interesting is what can happen to t inside the “…” region. There are three basic possibilities:
• Nothing happens to t. In this case, t will be joinable at the end of the scope. That will cause the program to be terminated (see Item 37).
• A join is done on t. In this case, there would be no need for fut to block in its destructor, because the join is already present in the calling code.
• A detach is done on t. In this case, there would be no need for fut to detach in its destructor, because the calling code already does that.
In other words, when you have a future corresponding to a shared state that arose due to a std::packaged_task, there's usually no need to adopt a special destruction policy, because the decision among termination, joining, or detaching will be made in the code that manipulates the std::thread on which the std::packaged_task is typically run.
Things to Remember
• Future destructors normally just destroy the future's data members.
• The final future referring to a shared state for a non-deferred task launched via std::async blocks until the task completes.
Item 39: Consider void futures for one-shot event communication.
Sometimes it's useful for a task to tell a second, asynchronously running task that a particular event has occurred, because the second task can't proceed until the event has taken place. Perhaps a data structure has been initialized, a stage of computation has been completed, or a significant sensor value has been detected. When that's the case, what's the best way for this kind of inter-thread communication to take place?
An obvious approach is to use a condition variable (condvar). If we call the task that detects the condition the detecting task and the task reacting to the condition the reacting task, the strategy is simple: the reacting task waits on a condition variable, and the detecting task notifies that condvar when the event occurs. Given
std::condition_variable cv; // condvar for event
std::mutex m; // mutex for use with cv
the code in the detecting task is as simple as simple can be:
… // detect event
cv.notify_one(); // tell reacting task
If there were multiple reacting tasks to be notified, it would be appropriate to replace notify_one with notify_all, but for now, we'll assume there's only one reacting task.
The code for the reacting task is a bit more complicated, because before calling wait on the condvar, it must lock a mutex through a std::unique_lock object. (Locking a mutex before waiting on a condition variable is typical for threading libraries. The need to lock the mutex through a std::unique_lock object is simply part of the C++11 API.) Here's the conceptual approach:
…                                      // prepare to react

{                                      // open critical section

  std::unique_lock<std::mutex> lk(m);  // lock mutex

  cv.wait(lk);                         // wait for notify;
                                       // this isn't correct!

  …                                    // react to event
                                       // (m is locked)

}                                      // close crit. section;
                                       // unlock m via lk's dtor

…                                      // continue reacting
                                       // (m now unlocked)
The first issue with this approach is what's sometimes termed a code smell: even if the code works, something doesn't seem quite right. In this case, the odor emanates from the need to use a mutex. Mutexes are used to control access to shared data, but it's entirely possible that the detecting and reacting tasks have no need for such mediation. For example, the detecting task might be responsible for initializing a global data structure, then turning it over to the reacting task for use. If the detecting task never accesses the data structure after initializing it, and if the reacting task never accesses it before the detecting task indicates that it's ready, the two tasks will stay out of each other's way through program logic. There will be no need for a mutex. The fact that the condvar approach requires one leaves behind the unsettling aroma of suspect design.
Even if you look past that, there are two other problems you should definitely pay attention to:
• If the detecting task notifies the condvar before the reacting task waits, the reacting task will hang. In order for notification of a condvar to wake another task, the other task must be waiting on that condvar. If the detecting task happens to execute the notification before the reacting task executes the wait, the reacting task will miss the notification, and it will wait forever.