• You need access to the API of the underlying threading implementation. The C++ concurrency API is typically implemented using a lower-level platform-specific API, usually pthreads or Windows' Threads. Those APIs are currently richer than what C++ offers. (For example, C++ has no notion of thread priorities or affinities.) To provide access to the API of the underlying threading implementation, std::thread objects typically offer the native_handle member function. There is no counterpart to this functionality for std::futures (i.e., for what std::async returns).
• You need to and are able to optimize thread usage for your application. This could be the case, for example, if you're developing server software with a known execution profile that will be deployed as the only significant process on a machine with fixed hardware characteristics.
• You need to implement threading technology beyond the C++ concurrency API, e.g., thread pools on platforms where your C++ implementations don't offer them.
These are uncommon cases, however. Most of the time, you should choose task-based designs instead of programming with threads.
Things to Remember
• The std::thread API offers no direct way to get return values from asynchronously run functions, and if those functions throw, the program is terminated.
• Thread-based programming calls for manual management of thread exhaustion, oversubscription, load balancing, and adaptation to new platforms.
• Task-based programming via std::async with the default launch policy handles most of these issues for you.
Item 36: Specify std::launch::async if asynchronicity is essential.
When you call std::async to execute a function (or other callable object), you're generally intending to run the function asynchronously. But that's not necessarily what you're asking std::async to do. You're really requesting that the function be run in accord with a std::async launch policy. There are two standard policies, each represented by an enumerator in the std::launch scoped enum. (See Item 10 for information on scoped enums.) Assuming a function f is passed to std::async for execution,
• The std::launch::async launch policy means that f must be run asynchronously, i.e., on a different thread.
• The std::launch::deferred launch policy means that f may run only when get or wait is called on the future returned by std::async.[16] That is, f's execution is deferred until such a call is made. When get or wait is invoked, f will execute synchronously, i.e., the caller will block until f finishes running. If neither get nor wait is called, f will never run.
[16] This is a simplification. What matters isn't the future on which get or wait is invoked, it's the shared state to which the future refers. (Item 38 discusses the relationship between futures and shared states.) Because std::futures support moving and can also be used to construct std::shared_futures, and because std::shared_futures can be copied, the future object referring to the shared state arising from the call to std::async to which f was passed is likely to be different from the one returned by std::async. That's a mouthful, however, so it's common to fudge the truth and simply talk about invoking get or wait on the future returned from std::async.
Perhaps surprisingly, std::async's default launch policy (the one it uses if you don't expressly specify one) is neither of these. Rather, it's these or-ed together. The following two calls have exactly the same meaning:
auto fut1 = std::async(f); // run f using
// default launch
// policy
auto fut2 = std::async( std::launch::async | // run f either
std::launch::deferred, // async or
f); // deferred
The default policy thus permits f to be run either asynchronously or synchronously. As Item 35 points out, this flexibility permits std::async and the thread-management components of the Standard Library to assume responsibility for thread creation and destruction, avoidance of oversubscription, and load balancing. That's among the things that make concurrent programming with std::async so convenient.
But using std::async with the default launch policy has some interesting implications. Given a thread t executing this statement,
auto fut = std::async(f); // run f using default launch policy
• It's not possible to predict whether f will run concurrently with t, because f might be scheduled to run deferred.
• It's not possible to predict whether f runs on a thread different from the thread invoking get or wait on fut. If that thread is t, the implication is that it's not possible to predict whether f runs on a thread different from t.
• It may not be possible to predict whether f runs at all, because it may not be possible to guarantee that get or wait will be called on fut along every path through the program.
The default launch policy's scheduling flexibility often mixes poorly with the use of thread_local variables, because it means that if f reads or writes such thread-local storage (TLS), it's not possible to predict which thread's variables will be accessed:
auto fut = std::async(f); // TLS for f possibly for
// independent thread, but
// possibly for thread
// invoking get or wait on fut
It also affects wait-based loops using timeouts, because calling wait_for or wait_until on a task (see Item 35) that's deferred yields the value std::future_status::deferred. This means that the following loop, which looks like it should eventually terminate, may, in reality, run forever:
using namespace std::literals; // for C++14 duration
// suffixes; see Item 34
void f() // f sleeps for 1 second,
{ // then returns
std::this_thread::sleep_for(1s);
}
auto fut = std::async(f); // run f asynchronously
// (conceptually)
while (fut.wait_for(100ms) != // loop until f has
std::future_status::ready) // finished running...
{ // which may never happen!
…
}
If f runs concurrently with the thread calling std::async (i.e., if the launch policy chosen for f is std::launch::async), there's no problem here (assuming f eventually finishes), but if f is deferred, fut.wait_for will always return std::future_status::deferred. That will never be equal to std::future_status::ready, so the loop will never terminate.