      …                                   // compute roots,
                                          // store them in rootVals
      rootsAreValid = true;
    }
    return rootVals;
  }

private:
  mutable bool rootsAreValid{ false };    // see Item 7 for info
  mutable RootsType rootVals{};           // on initializers
};
Conceptually, roots doesn't change the Polynomial object on which it operates, but, as part of its caching activity, it may need to modify rootVals and rootsAreValid. That's a classic use case for mutable, and that's why it's part of the declarations for these data members.
Imagine now that two threads simultaneously call roots on a Polynomial object:
Polynomial p;
…
/* ----- Thread 1 ----- */ /*------- Thread 2 ------- */
auto rootsOfP = p.roots(); auto valsGivingZero = p.roots();
This client code is perfectly reasonable. roots is a const member function, and that means it represents a read operation. Having multiple threads perform a read operation without synchronization is safe. At least it's supposed to be. In this case, it's not, because inside roots, one or both of these threads might try to modify the data members rootsAreValid and rootVals. That means that this code could have different threads reading and writing the same memory without synchronization, and that's the definition of a data race. This code has undefined behavior.
The problem is that roots is declared const, but it's not thread safe. The const declaration is as correct in C++11 as it would be in C++98 (retrieving the roots of a polynomial doesn't change the value of the polynomial), so what requires rectification is the lack of thread safety.

The easiest way to address the issue is the usual one: employ a mutex:
class Polynomial {
public:
  using RootsType = std::vector<double>;

  RootsType roots() const
  {
    std::lock_guard<std::mutex> g(m);     // lock mutex

    if (!rootsAreValid) {                 // if cache not valid
      …                                   // compute/store roots
      rootsAreValid = true;
    }

    return rootVals;
  }                                       // unlock mutex

private:
  mutable std::mutex m;
  mutable bool rootsAreValid{ false };
  mutable RootsType rootVals{};
};
The std::mutex m is declared mutable, because locking and unlocking it are non-const member functions, and within roots (a const member function), m would otherwise be considered a const object.
It's worth noting that because a std::mutex can be neither copied nor moved, a side effect of adding m to Polynomial is that Polynomial loses the ability to be copied or moved.
In some situations, a mutex is overkill. For example, if all you're doing is counting how many times a member function is called, a std::atomic counter (i.e., one where other threads are guaranteed to see its operations occur indivisibly; see Item 40) will often be a less expensive way to go. (Whether it actually is less expensive depends on the hardware you're running on and the implementation of mutexes in your Standard Library.) Here's how you can employ a std::atomic to count calls:
class Point {                                    // 2D point
public:
  …

  double distanceFromOrigin() const noexcept     // see Item 14
  {                                              // for noexcept
    ++callCount;                                 // atomic increment

    return std::sqrt((x * x) + (y * y));
  }

private:
  mutable std::atomic<unsigned> callCount{ 0 };
  double x, y;
};
Like std::mutexes, std::atomics can be neither copied nor moved, so the existence of callCount in Point means that Point is neither copyable nor movable, either.
Because operations on std::atomic variables are often less expensive than mutex acquisition and release, you may be tempted to lean on std::atomics more heavily than you should. For example, in a class caching an expensive-to-compute int, you might try to use a pair of std::atomic variables instead of a mutex:
class Widget {
public:
  …

  int magicValue() const
  {
    if (cacheValid) return cachedValue;
    else {
      auto val1 = expensiveComputation1();
      auto val2 = expensiveComputation2();

      cachedValue = val1 + val2;                 // uh oh, part 1
      cacheValid = true;                         // uh oh, part 2

      return cachedValue;
    }
  }

private:
  mutable std::atomic<bool> cacheValid{ false };
  mutable std::atomic<int> cachedValue;
};
This will work, but sometimes it will work a lot harder than it should. Consider:

• A thread calls Widget::magicValue, sees cacheValid as false, performs the two expensive computations, and assigns their sum to cachedValue.

• At that point, a second thread calls Widget::magicValue, also sees cacheValid as false, and thus carries out the same expensive computations that the first thread has just finished. (This “second thread” may in fact be several other threads.)
Such behavior is contrary to the goal of caching. Reversing the order of the assignments to cachedValue and cacheValid eliminates that problem, but the result is even worse:
class Widget {
public:
  …

  int magicValue() const
  {
    if (cacheValid) return cachedValue;
    else {
      auto val1 = expensiveComputation1();
      auto val2 = expensiveComputation2();

      cacheValid = true;                         // uh oh, part 1
      return cachedValue = val1 + val2;          // uh oh, part 2
    }
  }
  …
};
Imagine that cacheValid is false, and then:

• One thread calls Widget::magicValue and executes through the point where cacheValid is set to true.

• At that moment, a second thread calls Widget::magicValue and checks cacheValid. Seeing it true, the thread returns cachedValue, even though the first thread has not yet made an assignment to it. The returned value is therefore incorrect.
There's a lesson here. For a single variable or memory location requiring synchronization, use of a std::atomic is adequate, but once you get to two or more variables or memory locations that require manipulation as a unit, you should reach for a mutex. For Widget::magicValue, that would look like this: