Composing locks isn't a Java problem - it's a fundamental abstraction problem with locks. This is one of the reasons why you usually reach for higher level abstractions than mutexes.
> And computeIfAbsent can end up holding the lock for too long if the function is slow.
How is this different from any other lock-holding code written anywhere?
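For reference, the scenario being quoted looks something like the sketch below (the key, value, and delay are invented): `ConcurrentHashMap.computeIfAbsent` keeps the key's bin locked while the mapping function runs, so other threads touching colliding keys stall behind the slow load.

```java
import java.util.concurrent.ConcurrentHashMap;

public class SlowMappingFunction {
    // Illustrative cache; key and value types are arbitrary.
    private static final ConcurrentHashMap<String, String> CACHE = new ConcurrentHashMap<>();

    static String load(String key) {
        return CACHE.computeIfAbsent(key, k -> {
            // The bin holding this key stays locked while this function runs,
            // so other threads whose keys hash to the same bin block here too.
            try {
                Thread.sleep(5_000); // stand-in for a slow load (network call, disk read, ...)
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "value-for-" + k;
        });
    }
}
```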
I’m saying Java is exceptionally bad at this because every object is its own mutex.
And you end up having to trade single-core performance for multi-core by deciding to speculatively calculate the object. If there's no object to make, the critical section is very small. But as the object sprouts features you start smashing face first into Amdahl's law.
> And you end up having to trade single core performance for multi core by deciding to speculatively calculate the object.
What is the alternative you suggest? If you care about having the predicate actually hold, and you also don't want to have to hold the lock while constructing the object, then you're going to end up in an optimistic-concurrency scenario where you check the predicate under lock, compute the object, and check again before swapping the value in. You may end up having to throw your work away when you discover the predicate changed. Java is neither better nor worse at doing this than anything else.
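A minimal sketch of that check-compute-recheck flow, assuming the shared state can be modelled as a single reference (the `OptimisticCache` class and the `rebuild` parameter are invented for illustration):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

public final class OptimisticCache<T> {
    private final AtomicReference<T> current = new AtomicReference<>();

    /**
     * Recomputes the value outside of any lock, then publishes it only if
     * nobody else got there first. On a lost race the freshly built object
     * is thrown away and we retry against the new snapshot.
     */
    public T update(UnaryOperator<T> rebuild) {
        while (true) {
            T observed = current.get();                // "check the predicate"
            T candidate = rebuild.apply(observed);     // potentially slow, no lock held
            if (current.compareAndSet(observed, candidate)) { // "check again and swap"
                return candidate;
            }
            // Someone else changed the value in the meantime; discard candidate and retry.
        }
    }
}
```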
Do people actually use `synchronized` methods in Java these days? It was already being described as an anti-pattern (for all the reasons discussed upthread) two decades ago.
The more useful question is whether it has been expunged from the JDK and common libraries. I think it's been more like 10-12 years since it really started being talked about outside of certain subcommunities, and that's almost 20 years' worth of existing libraries.
OpenTelemetry is a fairly recent library. Even if you ignore some test fakes (where, let's face it, who cares), it still uses it in a few places, and uses lock objects in others. I don't see much evidence of recursion going on with the former. But that's how things always start and later there's running and screaming.
Some amount of legacy cruft is not unexpected, but it's sad that it can be seen in new code. In .NET, which has similarly problematic semantics with lock(), linters have been flagging lock(this) for ages.
I wonder where this patently bad idea of every object carrying its own publicly accessible mutex originated in the first place. Did Java introduce it, or did it also copy that from somewhere else? And what was the motivation?
Can't attest to the history of the `lock` statement off the top of my head, but the API shape of `lock` and the Monitor.Enter/Exit methods it is desugared to looks like Win32's EnterCriticalSection and LeaveCriticalSection. Other Monitor methods like Wait and Pulse look like pthread's condvar and mutex functions.
.NET also has MethodImplOptions.Synchronized like Java does. However, the only place I have ever seen this attribute was on TextWriter.Synchronized implementation in CoreLib and nowhere else.
Java itself has `Lock` and `Condition`. In the end, most synchronization primitives perform the same high-level actions and are bound to end up with similar APIs.
As for `lock(this)`, much like many other historically abused techniques that have become frowned upon, it's not bad per se if you own the type, know that it is internal and will not be observed outside of the assembly it is defined in, and the type is small enough. It's footgun-prone, but generally very few code paths will lock an arbitrary object instance at all, so most of the time it's something you see so rarely that it has become "just write a comment explaining why and move on". Of course this requires more deliberation, and it's easier to default to blanket policies that ignore context. It can be difficult to get people into a "use the appropriate tool" mentality.
.NET is also getting its own separate `Lock` type, on top of all the existing synchronization primitives, to move a little further away from the legacy aspects of `lock`ing on object instances.
It's not Monitor itself that's problematic. It's that every object is implicitly associated with one, and anyone who holds a reference to an object can lock it. It doesn't matter if the type is internal - it can still be upcast to System.Object and leaked that way.
In practice this means that unless you can guarantee that you never, ever leak a reference anywhere, you don't know who else might be locking it. Which makes it impossible to reason about possible deadlocks. So the only sane way to manage it is to have a separate object used just for locking, which is never ever passed outside of the object that owns the lock.
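In Java terms, that private-lock pattern is just a few lines (class and field names invented; the .NET equivalent with a private readonly field is the same idea):

```java
public final class Counter {
    // Private lock object: nothing outside this class can ever synchronize on it,
    // so every acquisition of this lock is visible in this one file.
    private final Object lock = new Object();
    private long value;

    public void increment() {
        synchronized (lock) {   // never synchronized(this)
            value++;
        }
    }

    public long get() {
        synchronized (lock) {
            return value;
        }
    }
}
```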
And yes, this is absolutely bad design. There's no reason why every object needs a lock, for starters - for the vast majority of them, it's just unnecessary overhead (and yes, I know the monitors are lazily created, but every object header still needs space to store the reference to it). Then of course the fact that it's there means that people take the easy path and just lock objects directly instead of creating separate locks, just because it's slightly less code - and then things break. It's almost always the wrong granularity, too.
Thing is, I haven't seen this design anywhere outside of Java and .NET (which copied it from Java along with so many other bad ideas). Everybody else uses the sane and obvious approach of creating locks explicitly if and when they are needed.
Right. Synchronized is the key word here. The vast majority of code doesn't involve synchronized, and therefore the vast majority of objects don't have locks associated with them. That's quite important.
Those classes which do use synchronized were just going to create a ReentrantLock held for the duration of the call anyway, in which case it's all monitorEnter and monitorExit, regardless.
> This is going to put a damper on any further conversation.
> in which case it's all monitorEnter and monitorExit, regardless.
Oops, I need to correct myself!
ReentrantLock doesn't depend upon monitorEnter/Exit, but rather AbstractQueuedSynchronizer and LockSupport, which ultimately delegate to Unsafe methods like park/unpark and CAS (*compareAndSet*). Don't know why I had that confused in my head.
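As a rough illustration of that style, here is a toy lock built only from CAS and park/unpark, with no monitors anywhere. It is not how ReentrantLock or AbstractQueuedSynchronizer are actually implemented, just the same building blocks:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

/** Toy non-reentrant lock built from CAS and park/unpark; no monitors involved. */
public final class ToyLock {
    private final AtomicBoolean held = new AtomicBoolean(false);
    private final ConcurrentLinkedQueue<Thread> waiters = new ConcurrentLinkedQueue<>();

    public void lock() {
        while (!held.compareAndSet(false, true)) {  // CAS to grab the lock
            waiters.add(Thread.currentThread());
            // Re-check before parking; an unpark that races with this park
            // is not lost, it just leaves a permit and park returns immediately.
            if (held.get()) {
                LockSupport.park(this);             // block without a monitor
            }
            waiters.remove(Thread.currentThread());
        }
    }

    public void unlock() {
        held.set(false);
        Thread next = waiters.peek();
        if (next != null) {
            LockSupport.unpark(next);               // wake one waiter to retry the CAS
        }
    }
}
```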
In any case, the point holds that `synchronized` as a language feature has essentially zero cost for code that doesn't use it. It's a red herring when discussing modern Java concurrency.