When data is written to the database, Cosmos DB acknowledges the write only after it has been committed, which is what makes reads consistent.
We can be certain that the data we read is up to date, but there is always a wait while every replica acknowledges that it has applied the write. With Strong consistency, then, we always read the latest data: consistency is highest and so is latency, while availability and performance are lower, because every reader waits until the data is fully replicated before it can be served.
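A minimal sketch of that trade-off, using a toy replica set (the class, replica names, and fixed delay are illustrative, not the Cosmos DB API): the write blocks until every replica has applied it, so any replica is then safe to read.

```python
import time

# Toy model of strong consistency: a write completes only after EVERY
# replica has applied it, so a subsequent read from any replica is
# guaranteed to see the latest value. Names and delays are illustrative.

class StrongReplicaSet:
    def __init__(self, names):
        self.replicas = {name: None for name in names}

    def write(self, value, per_replica_delay=0.01):
        # The writer blocks until ALL replicas acknowledge: this is the
        # source of the higher write latency described above.
        for name in self.replicas:
            time.sleep(per_replica_delay)  # simulated replication lag
            self.replicas[name] = value

    def read(self, name):
        # Any replica is safe to read: it is guaranteed up to date.
        return self.replicas[name]

rs = StrongReplicaSet(["east-us", "west-europe", "south-india"])
rs.write("v1")
assert all(rs.read(r) == "v1" for r in rs.replicas)
```

The cost shows up in `write`: latency grows with the number (and distance) of replicas, which is exactly why availability and performance drop.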
Stale Reads? (Bounded Staleness)
A stale read means the data you are reading is out of date. Up to a point that is acceptable, but you must define exactly how out of date the data is allowed to be. In other words, stale reads are fine as long as the staleness stays within the bound you specify. Once that bound is exceeded, Cosmos DB stops serving stale reads and falls back to strong-consistency behaviour.
We can specify two bounds here to define how old data may be before we call it stale:
– Lag in time
– Update Operations
The time lag guarantees that reads return data no older than ‘t’ seconds, and the update-operations lag guarantees that reads are no more than ‘x’ update operations behind the writes.
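The two bounds can be sketched as a simple check, where a replica may serve a stale read only while it is within both limits (field and function names here are illustrative, not the actual Cosmos DB API):

```python
import time
from dataclasses import dataclass

# Toy bounded-staleness check: a replica may serve a stale read only
# while it is within BOTH bounds -- at most `max_lag_seconds` behind
# and at most `max_lag_operations` update operations behind.

@dataclass
class StalenessBounds:
    max_lag_seconds: float   # the 't' seconds bound
    max_lag_operations: int  # the 'x' update-operations bound

def can_serve_stale_read(bounds, last_sync_time, missed_operations, now=None):
    now = time.time() if now is None else now
    within_time = (now - last_sync_time) <= bounds.max_lag_seconds
    within_ops = missed_operations <= bounds.max_lag_operations
    # Outside either bound, reads fall back to strong-consistency behaviour.
    return within_time and within_ops

bounds = StalenessBounds(max_lag_seconds=5.0, max_lag_operations=100)
now = 1000.0
print(can_serve_stale_read(bounds, last_sync_time=998.0, missed_operations=10, now=now))   # True: within both bounds
print(can_serve_stale_read(bounds, last_sync_time=990.0, missed_operations=10, now=now))   # False: 10s lag exceeds 5s
```

Note that both bounds must hold at once: exceeding either the time lag or the operation lag is enough to disallow the stale read.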
Another concept that comes up when talking about consistency levels is Session consistency.
This is the default consistency level for any database. Session consistency maintains a session for each writer by issuing a unique session token. That token guarantees the writer always reads the correct, up-to-date data from any replica, never a stale version.
However, this guarantee applies only within the writer's own session; other users may still get stale reads.
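A minimal sketch of the read-your-writes guarantee (the class and the integer token are a toy model, not the Cosmos DB session-token format): each write returns a token, and a reader presenting that token is never served data older than their own last write.

```python
# Toy session consistency: each write returns a session token (here, a
# simple version counter). A reader presenting that token is never served
# data older than the version the token names -- read-your-writes for the
# writer -- while readers without a token may still get an older version.

class SessionStore:
    def __init__(self):
        self.version = 0
        self.history = {0: None}  # version -> value

    def write(self, value):
        self.version += 1
        self.history[self.version] = value
        return self.version  # the session token

    def read(self, replica_version, session_token=None):
        if session_token is not None:
            # Honour the token: never serve data older than the
            # writer's own last write.
            replica_version = max(replica_version, session_token)
        return self.history[replica_version]

store = SessionStore()
token = store.write("v1")
# A lagging replica (still at version 0) returns "v1" to the token holder...
assert store.read(replica_version=0, session_token=token) == "v1"
# ...but a reader without the token can still see the stale value.
assert store.read(replica_version=0) is None
```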
With Consistent prefix, you may still get stale reads, but one thing is guaranteed: whatever data you read has already been replicated to all the replicas, even though it may not be the most up-to-date version.
Also, if the same data is updated multiple times, we get only the latest version that has been replicated everywhere; versions not yet replicated are not visible. That is the guarantee Consistent prefix provides.
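This guarantee can be sketched as follows (a toy model under the assumption of one shared write log; class and method names are illustrative): every replica applies writes strictly in order, and a read only returns the latest version present on all replicas.

```python
# Toy consistent prefix: writes go into one ordered log, replicas apply
# that log strictly in order (never skipping ahead), and a read returns
# only the latest version that has reached EVERY replica.

class ConsistentPrefixStore:
    def __init__(self, replica_count):
        self.log = []                       # ordered list of versions
        self.applied = [0] * replica_count  # how far each replica has applied

    def write(self, value):
        self.log.append(value)

    def replicate(self, replica, upto):
        # Replicas apply the log in order up to position `upto`.
        self.applied[replica] = min(upto, len(self.log))

    def read_replicated(self):
        # Only the prefix present on ALL replicas is visible.
        prefix_len = min(self.applied)
        return self.log[prefix_len - 1] if prefix_len else None

store = ConsistentPrefixStore(replica_count=3)
for v in ["v1", "v2", "v3"]:
    store.write(v)
store.replicate(0, 3)
store.replicate(1, 2)
store.replicate(2, 2)
assert store.read_replicated() == "v2"  # "v3" is not on every replica yet
```

The read may be behind, but it can never jump ahead of a slower replica or see writes out of order.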
At the Eventual consistency level, stale reads are possible and there is no guarantee that the replicated data is up to date. The good part is that there is no waiting time, because we never wait for an acknowledgement that all the replicas have been updated.
This level is the exact opposite of Strong consistency: there is no guarantee at all that reads from the replicas are up to date.
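A final toy sketch of that absence of guarantees (class and method names are illustrative): the write acknowledges immediately, replicas catch up on their own schedule, and a read may land on any replica, stale or fresh.

```python
import random

# Toy eventual consistency: a write acknowledges immediately and each
# replica catches up on its own schedule ("gossip"), so a read may return
# any version, old or new -- no ordering or freshness guarantee at all.

class EventualStore:
    def __init__(self, replica_count):
        self.replicas = [None] * replica_count
        self.pending = None

    def write(self, value):
        # Acknowledge at once; replication happens "eventually".
        self.pending = value

    def gossip(self, replica):
        # A replica catches up whenever it gets around to it.
        self.replicas[replica] = self.pending

    def read(self):
        # Reads land on an arbitrary replica -- stale or fresh.
        return random.choice(self.replicas)

store = EventualStore(replica_count=3)
store.write("v1")
store.gossip(0)                      # only one replica has caught up
assert store.read() in (None, "v1")  # either answer is allowed
```

Compare this with the strong-consistency sketch at the top: here `write` returns immediately, which is exactly where the latency and availability advantage comes from.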