In the earlier UnsafeCachingFactorizer class, we tried to use two AtomicReference variables to hold the most recent number and its factorization, but that approach is not thread-safe because we cannot read or update both related values in a single atomic operation. Holding these values in volatile variables would not be thread-safe either. However, in some situations immutable objects can provide a weak form of atomicity.
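For reference, a minimal sketch of that unsafe approach with the servlet plumbing stripped out (the class, method, and helper names here are mine, not the book's): each AtomicReference is atomic on its own, but nothing makes the pair of writes atomic, so another thread can observe one request's number paired with a different request's factors.

import java.math.BigInteger;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the unsafe caching idea referred to above (not thread-safe).
public class UnsafeCachingSketch {
    private final AtomicReference<BigInteger> lastNumber = new AtomicReference<>();
    private final AtomicReference<BigInteger[]> lastFactors = new AtomicReference<>();

    public BigInteger[] factorize(BigInteger i) {
        if (i.equals(lastNumber.get()))
            return lastFactors.get();      // may belong to a different number!
        BigInteger[] factors = computeFactors(i);
        lastNumber.set(i);                 // window between these two writes:
        lastFactors.set(factors);          // a reader can see mismatched number/factors
        return factors;
    }

    private BigInteger[] computeFactors(BigInteger i) {
        return new BigInteger[] { i };     // placeholder; real factoring elided
    }
}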
The factoring servlet performs two operations that must be atomic: updating the cached result, and checking whether the cached number matches the requested number so it can decide to return the cached factorization directly. Whenever a group of related data items must be operated on atomically, consider packaging them in an immutable class, such as OneValueCache in Listing 3-12.
import java.math.BigInteger;
import java.util.Arrays;

@Immutable
class OneValueCache {
    private final BigInteger lastNumber;
    private final BigInteger[] lastFactors;

    public OneValueCache(BigInteger i, BigInteger[] factors) {
        lastNumber = i;
        // Defensive copy in, so later changes to the caller's array cannot affect the cache.
        // The null check tolerates the empty initial cache, new OneValueCache(null, null),
        // used by VolatileCachedFactorizer in Listing 3-13 (without it that call would
        // throw a NullPointerException).
        lastFactors = (factors == null) ? null : Arrays.copyOf(factors, factors.length);
    }

    public BigInteger[] getFactors(BigInteger i) {
        if (lastNumber == null || !lastNumber.equals(i))
            return null;
        else
            return Arrays.copyOf(lastFactors, lastFactors.length);  // defensive copy out
    }
}
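A quick, hypothetical check of why the Arrays.copyOf calls matter (the demo class and its main are mine, assuming OneValueCache above is in the same package): the defensive copies keep the cached array isolated from the caller, so mutating either the array passed in or the array handed back cannot corrupt the cached state.

import java.math.BigInteger;
import java.util.Arrays;

public class OneValueCacheCopyDemo {
    public static void main(String[] args) {
        BigInteger[] factors = { BigInteger.valueOf(2), BigInteger.valueOf(7) };
        OneValueCache cache = new OneValueCache(BigInteger.valueOf(14), factors);

        // Mutating the caller's array after construction does not affect the cache...
        factors[0] = BigInteger.ZERO;
        // ...and mutating a returned array does not affect later reads either.
        BigInteger[] returned = cache.getFactors(BigInteger.valueOf(14));
        returned[1] = BigInteger.ZERO;

        System.out.println(Arrays.toString(cache.getFactors(BigInteger.valueOf(14))));
        // prints [2, 7]: the cached state is unchanged
    }
}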
Race conditions in accessing and updating multiple related variables can be eliminated by holding all of them in a single immutable object. With a mutable object you would have to use locking to make such compound operations atomic; with an immutable object, once a thread obtains a reference to it, it never has to worry about another thread modifying its state. To update the variables, you create a new container object, and threads still working with the old object continue to see it in a consistent state.
VolatileCachedFactorizer in Listing 3-13 uses OneValueCache to store the cached number and its factors. When one thread sets the volatile cache field to reference a new OneValueCache, the new cached data becomes immediately visible to other threads.
import java.math.BigInteger;

@ThreadSafe
public class VolatileCachedFactorizer implements Servlet {
    // volatile ensures that a newly published OneValueCache is immediately visible
    // to all threads; the cache object itself is immutable and never modified.
    private volatile OneValueCache cache = new OneValueCache(null, null);

    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = cache.getFactors(i);
        if (factors == null) {
            factors = factor(i);                    // cache miss: factor the number...
            cache = new OneValueCache(i, factors);  // ...and publish a new immutable cache
        }
        encodeIntoResponse(resp, factors);
    }
}
Operations involving the cache cannot interfere with one another, because OneValueCache is immutable and the cache field is read only once on each relevant code path.
By using an immutable container object to hold the group of state variables tied together by an invariant, and a volatile reference to that container to ensure visibility, VolatileCachedFactorizer is thread-safe even though it uses no explicit locking.
Listing 3-13 contains a "check-then-act" sequence, which is normally the shape of a race condition.
The immutability of OneValueCache only guarantees that its own state is consistent and published atomically.
volatile only guarantees visibility; by itself it does not guarantee thread safety.
Putting these together, object immutability plus volatile visibility does not in general eliminate a check-then-act race, so at first the book's conclusion in this passage looked wrong to me.
That doubt has since been resolved.
Conclusion:
In service(), the cache field has only one write (creating and publishing a new cache object); everything else is a read. This is exactly the scenario volatile is suited for: it guarantees that the cache object is visible to other threads, so concurrent reads are not a problem. The result returned to the caller is the factors array, which is a local variable and does not let the cache object escape, so this part is thread-safe as well.
My personal understanding:
(1). The cache field has only one write in service(), but many threads can perform that write. For example, thread A calls service with number a, after which the cache holds lastNumber = a and lastFactors = factors(a). Thread B then calls service with a and executes BigInteger[] factors = cache.getFactors(i), obtaining the cached factors of a; before B goes any further, thread C calls service with c and replaces the cache, which now holds c and factors(c). Strictly speaking, the value thread B read is already out of date, but because of the business meaning of a "cache", that stale value still answers B's request correctly and causes no error. In other words, this example is thread-safe for this particular business case (see the sketch after this list).
(2). OneValueCache's immutability also has its limits. The BigInteger[] array taken from the caller is copied with Arrays.copyOf so its contents cannot change after construction, and BigInteger elements are themselves immutable; if the array held some other, mutable type instead, you would also have to ensure that type is immutable (or deep-copy it).
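To make the interleaving in (1) concrete, here is a hedged sketch. The class, method, and factor() helper are my own simplified names, the interleaving is simulated sequentially for determinism, and it assumes OneValueCache from Listing 3-12 (with the null-tolerant constructor shown above) is in the same package. The value read before the overwrite is stale with respect to the cache, yet it is still the correct factorization for the request that read it, which is exactly why the race is benign here.

import java.math.BigInteger;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Stand-in for VolatileCachedFactorizer with the servlet plumbing removed.
public class StaleCacheDemo {
    private volatile OneValueCache cache = new OneValueCache(null, null);

    BigInteger[] service(BigInteger i) {
        BigInteger[] factors = cache.getFactors(i);
        if (factors == null) {
            factors = factor(i);
            cache = new OneValueCache(i, factors);  // publish a new immutable cache
        }
        return factors;
    }

    // Trivial trial-division factorization, just enough for the demo.
    static BigInteger[] factor(BigInteger n) {
        List<BigInteger> fs = new ArrayList<>();
        BigInteger d = BigInteger.valueOf(2);
        while (n.compareTo(BigInteger.ONE) > 0) {
            while (n.mod(d).signum() == 0) {
                fs.add(d);
                n = n.divide(d);
            }
            d = d.add(BigInteger.ONE);
        }
        return fs.toArray(new BigInteger[0]);
    }

    public static void main(String[] args) {
        StaleCacheDemo demo = new StaleCacheDemo();
        BigInteger a = BigInteger.valueOf(1234);

        demo.service(a);                               // "thread A": cache now holds (a, factors of a)
        BigInteger[] bRead = demo.cache.getFactors(a); // "thread B" reads the cache for a ...
        demo.service(BigInteger.valueOf(5678));        // ... then "thread C" overwrites it with (c, factors of c)

        // bRead is stale with respect to the cache, but still correct for request a:
        System.out.println(Arrays.equals(bRead, factor(a)));  // prints true
    }
}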
In short, I feel that for a beginner like me, a lock is still the safest tool to reach for. After all, optimization only makes sense once the code is free of bugs; if a misunderstanding on either of the points above leaves a hidden hole, you will pay for it later.
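For comparison, a minimal sketch of the lock-based route (my own names, loosely in the spirit of the book's later CachedFactorizer, not code taken from the book): one intrinsic lock guards both related fields, and the slow factoring step stays outside the synchronized blocks so the lock is held only briefly.

import java.math.BigInteger;

// Sketch of a lock-based alternative: guard the two related fields with one lock
// instead of packaging them in an immutable holder.
public class LockedCachingSketch {
    private BigInteger lastNumber;      // guarded by this
    private BigInteger[] lastFactors;   // guarded by this

    public BigInteger[] factorize(BigInteger i) {
        BigInteger[] factors = null;
        synchronized (this) {
            if (i.equals(lastNumber))
                factors = lastFactors.clone();   // cache hit: copy out under the lock
        }
        if (factors == null) {
            factors = computeFactors(i);         // potentially slow, done without the lock
            synchronized (this) {
                lastNumber = i;
                lastFactors = factors.clone();
            }
        }
        return factors;
    }

    private BigInteger[] computeFactors(BigInteger i) {
        return new BigInteger[] { i };           // placeholder; real factoring elided
    }
}

The trade-off is that every request now acquires the lock at least once, whereas the volatile-plus-immutable version above is entirely lock-free.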