… // compute roots,
// store them in rootVals
rootsAreValid = true;
}
return rootVals;
}
private:
mutable bool rootsAreValid{ false }; // see Item 7 for info
mutable RootsType rootVals{}; // on initializers
};
Conceptually, roots doesn't change the Polynomial object on which it operates, but, as part of its caching activity, it may need to modify rootVals and rootsAreValid. That's a classic use case for mutable, and that's why it's part of the declarations for these data members.
Imagine now that two threads simultaneously call roots on a Polynomial object:
Polynomial p;
…
/* ----- Thread 1 ----- */ /*------- Thread 2 ------- */
auto rootsOfP = p.roots(); auto valsGivingZero = p.roots();
This client code is perfectly reasonable. roots is a const member function, and that means it represents a read operation. Having multiple threads perform a read operation without synchronization is safe. At least it's supposed to be. In this case, it's not, because inside roots, one or both of these threads might try to modify the data members rootsAreValid and rootVals. That means that this code could have different threads reading and writing the same memory without synchronization, and that's the definition of a data race. This code has undefined behavior.
The problem is that roots is declared const, but it's not thread safe. The const declaration is as correct in C++11 as it would be in C++98 (retrieving the roots of a polynomial doesn't change the value of the polynomial), so what requires rectification is the lack of thread safety.
The easiest way to address the issue is the usual one: employ a mutex:
class Polynomial {
public:
using RootsType = std::vector<double>;
RootsType roots() const {
std::lock_guard<std::mutex> g(m); // lock mutex
if (!rootsAreValid) { // if cache not valid
… // compute/store roots
rootsAreValid = true;
}
return rootVals;
} // unlock mutex
private:
mutable std::mutex m;
mutable bool rootsAreValid{ false };
mutable RootsType rootVals{};
};
The std::mutex m is declared mutable, because locking and unlocking it are non-const member functions, and within roots (a const member function), m would otherwise be considered a const object.
It's worth noting that because std::mutex can be neither copied nor moved (its copy operations are deleted, and it declares no move operations), a side effect of adding m to Polynomial is that Polynomial loses the ability to be both copied and moved.
In some situations, a mutex is overkill. For example, if all you're doing is counting how many times a member function is called, a std::atomic counter (i.e., one where other threads are guaranteed to see its operations occur indivisibly; see Item 40) will often be a less expensive way to go. (Whether it actually is less expensive depends on the hardware you're running on and the implementation of mutexes in your Standard Library.) Here's how you can employ a std::atomic to count calls:
class Point { // 2D point
public:
…
double distanceFromOrigin() const noexcept // see Item 14
{ // for noexcept
++callCount; // atomic increment
return std::sqrt((x * x) + (y * y));
}
private:
mutable std::atomic<unsigned> callCount{ 0 };
double x, y;
};
Like std::mutexes, std::atomics can be neither copied nor moved, so the existence of callCount in Point means that Point, too, loses the ability to be copied and moved.
Because operations on std::atomic variables are often less expensive than mutex acquisition and release, you may be tempted to lean on std::atomics more heavily than you should. For example, in a class caching an expensive-to-compute int, you might try to use a pair of std::atomic variables instead of a mutex:
class Widget {
public:
…
int magicValue() const {
if (cacheValid) return cachedValue;
else {
auto val1 = expensiveComputation1();
auto val2 = expensiveComputation2();
cachedValue = val1 + val2; // uh oh, part 1
cacheValid = true; // uh oh, part 2
return cachedValue;
}
}
private:
mutable std::atomic<bool> cacheValid{ false };
mutable std::atomic<int> cachedValue;
};
This will work, but sometimes it will work a lot harder than it should. Consider:
• A thread calls Widget::magicValue, sees cacheValid as false, performs the two expensive computations, and assigns their sum to cachedValue.
• At that point, a second thread calls Widget::magicValue, also sees cacheValid as false, and thus carries out the same expensive computations that the first thread has just finished. (This “second thread” may in fact be several other threads.)
Such behavior is contrary to the goal of caching. Reversing the order of the assignments to cachedValue and cacheValid eliminates that problem, but the result is even worse:
class Widget {
public:
…
int magicValue() const {
if (cacheValid) return cachedValue;
else {
auto val1 = expensiveComputation1();
auto val2 = expensiveComputation2();
cacheValid = true; // uh oh, part 1
return cachedValue = val1 + val2; // uh oh, part 2
}
}
…
};
Imagine that cacheValid is false, and then:
• One thread calls Widget::magicValue and executes through the point where cacheValid is set to true.
• At that moment, a second thread calls Widget::magicValue and checks cacheValid. Seeing it true, the thread returns cachedValue, even though the first thread has not yet made an assignment to it. The returned value is therefore incorrect.
There's a lesson here. For a single variable or memory location requiring synchronization, use of a std::atomic is adequate, but once you get to two or more variables or memory locations that require manipulation as a unit, you should reach for a mutex. For Widget::magicValue, that would look like this: