• You need access to the API of the underlying threading implementation. The C++ concurrency API is typically implemented using a lower-level platform-specific API, usually pthreads or Windows' Threads. Those APIs are currently richer than what C++ offers. (For example, C++ has no notion of thread priorities or affinities.) To provide access to the API of the underlying threading implementation, std::thread objects typically offer the native_handle member function. There is no counterpart to this functionality for std::futures (i.e., for what std::async returns).
• You need to and are able to optimize thread usage for your application. This could be the case, for example, if you're developing server software with a known execution profile that will be deployed as the only significant process on a machine with fixed hardware characteristics.
• You need to implement threading technology beyond the C++ concurrency API, e.g., thread pools on platforms where your C++ implementation doesn't offer them.
These are uncommon cases, however. Most of the time, you should choose task-based designs instead of programming with threads.
Things to Remember
• The std::thread API offers no direct way to get return values from asynchronously run functions, and if those functions throw, the program is terminated.
• Thread-based programming calls for manual management of thread exhaustion, oversubscription, load balancing, and adaptation to new platforms.
• Task-based programming via std::async with the default launch policy handles most of these issues for you.
Item 36: Specify std::launch::async if asynchronicity is essential.
When you call std::async to execute a function (or other callable object), you're generally intending to run the function asynchronously. But that's not necessarily what you're asking std::async to do. You're really requesting that the function be run in accord with a std::async launch policy. There are two standard policies, each represented by an enumerator in the std::launch scoped enum. (See Item 10 for information on scoped enums.) Assuming a function f is passed to std::async for execution,
• The std::launch::async launch policy means that f must be run asynchronously, i.e., on a different thread.
• The std::launch::deferred launch policy means that f may run only when get or wait is called on the future returned by std::async.[16]
[16] This is a simplification. What matters isn't the future on which get or wait is invoked, it's the shared state to which the future refers. (Item 38 discusses the relationship between futures and shared states.) Because std::futures support moving and can also be used to construct std::shared_futures, and because std::shared_futures can be copied, the future object referring to the shared state arising from the call to std::async to which f was passed is likely to be different from the one returned by std::async. That's a mouthful, however, so it's common to fudge the truth and simply talk about invoking get or wait on the future returned from std::async.
That is, f's execution is deferred until such a call is made. When get or wait is invoked, f will execute synchronously, i.e., the caller will block until f finishes running. If neither get nor wait is called, f will never run.
Perhaps surprisingly, std::async's default launch policy — the one it uses if you don't expressly specify one — is neither of these. Rather, it's these or-ed together. The following two calls have exactly the same meaning:
auto fut1 = std::async(f);                    // run f using
                                              // default launch
                                              // policy

auto fut2 = std::async(std::launch::async |   // run f either
                       std::launch::deferred, // async or
                       f);                    // deferred
The default policy thus permits f to be run either asynchronously or synchronously. As Item 35 points out, this flexibility permits std::async and the thread-management components of the Standard Library to assume responsibility for thread creation and destruction, avoidance of oversubscription, and load balancing. That's among the things that make concurrent programming with std::async so convenient.
But using std::async with the default launch policy has some interesting implications. Given a thread t executing this statement,
auto fut = std::async(f); // run f using default launch policy
• It's not possible to predict whether f will run concurrently with t, because f might be scheduled to run deferred.
• It's not possible to predict whether f runs on a thread different from the thread invoking get or wait on fut. If that thread is t, the implication is that it's not possible to predict whether f runs on a thread different from t.
• It may not be possible to predict whether f runs at all, because it may not be possible to guarantee that get or wait will be called on fut along every path through the program.
The default launch policy's scheduling flexibility often mixes poorly with the use of thread_local variables, because it means that if f reads or writes such thread-local storage (TLS), it's not possible to predict which thread's variables will be accessed:
auto fut = std::async(f);      // TLS for f possibly for
                               // independent thread, but
                               // possibly for thread
                               // invoking get or wait on fut
It also affects wait-based loops using timeouts, because calling wait_for or wait_until on a task (see Item 35) that's deferred yields the value std::future_status::deferred. This means that the following loop, which looks like it should eventually terminate, may, in reality, run forever:
using namespace std::literals;       // for C++14 duration
                                     // suffixes; see Item 34

void f()                             // f sleeps for 1 second,
{                                    // then returns
  std::this_thread::sleep_for(1s);
}

auto fut = std::async(f);            // run f asynchronously
                                     // (conceptually)

while (fut.wait_for(100ms) !=        // loop until f has
       std::future_status::ready)    // finished running...
{                                    // which may never happen!
  …
}
If f runs concurrently with the thread calling std::async (i.e., if the launch policy chosen for f is std::launch::async), there's no problem here (assuming f eventually finishes), but if f is deferred, fut.wait_for will always return std::future_status::deferred. That will never be equal to std::future_status::ready, so the loop will never terminate.