The concern is that if a process reads a partially updated ticket number from another process, it might end up with a ticket number that falsely represents its position in the queue. The algorithm is designed to handle such cases:
If the read value is too low (because of a partial update), the process will wait longer than necessary, which doesn’t violate mutual exclusion but may affect fairness temporarily.
If the read value is too high, the reading process only ends up taking a larger ticket for itself, so it can’t jump ahead in the queue unfairly.
The key point is that even if a process reads a partially updated value, it either waits longer than necessary or ends up with a number that still respects the ordering, because the algorithm relies on relative ordering (comparing (ticket, process id) pairs) rather than on absolute values.
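For reference, here’s a minimal sketch of the bakery lock that reasoning describes (my own C, with hypothetical names like `NUM_PROCS`, `entering`, `ticket`, `bakery_lock`); it uses plain `volatile` accesses and so quietly assumes loads and stores hit memory in program order, which is exactly the assumption questioned below.

```c
/* Minimal bakery-lock sketch, for illustration only. All names are
 * hypothetical. Assumes loads/stores are not reordered, which real
 * multiprocessors do not guarantee (see the fence discussion below). */
#include <stdbool.h>

#define NUM_PROCS 8

volatile bool entering[NUM_PROCS];
volatile int  ticket[NUM_PROCS];   /* 0 means "not interested" */

static void bakery_lock(int i)
{
    entering[i] = true;
    int max = 0;
    for (int j = 0; j < NUM_PROCS; j++)        /* take a ticket one higher    */
        if (ticket[j] > max) max = ticket[j];  /* than every ticket we saw    */
    ticket[i] = max + 1;
    entering[i] = false;

    for (int j = 0; j < NUM_PROCS; j++) {
        while (entering[j])                    /* wait out in-progress picks  */
            ;
        /* Relative ordering: (ticket, id) pairs are compared lexicographically,
         * so reading a "too high" ticket only keeps us behind that process,
         * and a "too low" one only makes us wait longer. */
        while (ticket[j] != 0 &&
               (ticket[j] < ticket[i] ||
                (ticket[j] == ticket[i] && j < i)))
            ;
    }
}

static void bakery_unlock(int i)
{
    ticket[i] = 0;
}
```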
Yeah, something wasn’t sitting well with me about this reasoning, and Wikipedia says this:
> Lamport's bakery algorithm assumes a sequential consistency memory model. Few, if any, languages or multi-core processors implement such a memory model. Therefore, correct implementation of the algorithm typically requires inserting fences to inhibit reordering
So yeah, you do need fences, because a superscalar multiprocessor is going to reorder accesses to the entering and ticket variables and break the exclusion guarantee. But then, if you have fences, don’t you by definition have what you need to implement a simpler mutual exclusion mechanism?
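For concreteness, here’s a rough sketch of what that fencing might look like for just the ticket-taking phase: the shared arrays from the sketch above (hypothetical names again) become C11 atomics, and the default seq_cst accesses make the compiler emit whatever barriers the target CPU needs to keep them in program order. This isn’t any particular implementation, just an illustration of where the ordering matters.

```c
/* Hypothetical excerpt: only the ticket-taking phase, with the shared
 * arrays declared _Atomic and accessed with the default seq_cst
 * ordering, so the compiler inserts the required fences for us. */
#include <stdatomic.h>
#include <stdbool.h>

#define NUM_PROCS 8

static _Atomic bool entering[NUM_PROCS];
static _Atomic int  ticket[NUM_PROCS];

static void take_ticket(int i)
{
    atomic_store(&entering[i], true);        /* must be visible before we scan */
    int max = 0;
    for (int j = 0; j < NUM_PROCS; j++) {
        int t = atomic_load(&ticket[j]);     /* must not be hoisted above the  */
        if (t > max) max = t;                /* store to entering[i]           */
    }
    atomic_store(&ticket[i], max + 1);
    atomic_store(&entering[i], false);       /* ticket must be published first */
}
```

Note the sketch still only needs atomic loads and stores, no read-modify-write instruction, which is the property the bakery algorithm was designed to preserve.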