
When Trust is Faster: The Role of Optimistic Locking in High Scalability Architectures

Introduction

With the exponential growth of web and mobile applications, software architectures need to evolve to handle ever-increasing volumes of requests and data. In this context, scalability and efficiency become essential requirements. One of the most prominent strategies in this scenario is Optimistic Locking.

Unlike traditional locking-based approaches, this technique assumes that conflicts are exceptions, not the rule, allowing multiple transactions to access and modify data simultaneously. This article explores how and why trusting this approach can, paradoxically, make systems faster and more predictable in high-concurrency environments.

Fundamentals of Optimistic Locking

Definition and Principles

Optimistic Locking is a concurrency control technique that assumes most transactions occur without conflicts. Instead of locking resources during manipulation, the system allows multiple transactions to operate in parallel on the same data, resolving any inconsistencies only at the time of confirming changes. This strategy reduces wait time and increases throughput, becoming particularly efficient in distributed scenarios.

It is one of the most effective ways to enhance concurrency without degrading consistency, especially in high-load systems. Synchronization is deferred to the moments when it is truly necessary, maximizing parallelism and reducing resource idle time (GRAMOLI; KUZNETSOV; RAVI, 2012).

The Cost of Concurrency and the Role of Transactional Memory

The choice between Optimistic and Pessimistic Locking is not trivial. The cost of managing concurrency can, in some cases, outweigh the gains from parallelism, especially when the system heavily relies on transactional locks (KUZNETSOV; RAVI, 2011). Therefore, understanding the profile of operations and the frequency of conflicts is essential to define the ideal approach.

On the other hand, solutions based on version validation or timestamps can drastically reduce synchronization overhead while maintaining data integrity with less impact on performance.

Thus, Optimistic Locking emerges as a balance between freedom and security. It reduces contention without compromising integrity, especially when combined with automated retry mechanisms and asynchronous processing in distributed queues.
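The retry idea described above can be pictured with a minimal sketch. The example below is an assumption for illustration only: it uses an in-memory counter and .NET's Interlocked.CompareExchange as the validation step, rather than any particular database or framework. Each failed compare-and-swap means another thread committed first, so the operation simply re-reads and tries again.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class OptimisticCounter
{
    private int _value; // shared state, committed via compare-and-swap

    public int Value => Volatile.Read(ref _value);

    // Repeats the read-modify-validate cycle until the commit succeeds.
    // A failed CompareExchange is the "conflict" case: another writer
    // changed the value between our read and our commit.
    public int IncrementWithRetry()
    {
        while (true)
        {
            int observed = Volatile.Read(ref _value);   // read
            int desired = observed + 1;                 // modify locally
            // validate-and-commit: only succeeds if _value is still `observed`
            if (Interlocked.CompareExchange(ref _value, desired, observed) == observed)
                return desired;
            // conflict detected: loop and retry against the fresh value
        }
    }
}

class Demo
{
    static void Main()
    {
        var counter = new OptimisticCounter();
        // Many concurrent writers; no increment is ever lost, and no
        // thread ever blocks waiting for a lock.
        Parallel.For(0, 1000, _ => counter.IncrementWithRetry());
        Console.WriteLine(counter.Value);
    }
}
```

Note that no thread ever waits: contention costs only a retry, which is exactly the "optimize for the common case" trade-off the text describes.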

Latency, Predictability, and Modern Architectures

Modern solutions, such as the Plor system (CHEN et al., 2022), highlight that latency predictability is one of the biggest challenges in transactional systems. When correctly implemented, Optimistic Locking helps reduce tail latency, the response time in worst-case scenarios, resulting in more uniform and predictable operations.

Large-scale infrastructures, like those of ByteDance, apply optimistic control principles in their scheduling and resource management systems, exemplified by the Gödel project (XIANG et al., 2023). The philosophy is simple but powerful: synchronize as little as possible, trust in success statistics, and optimize performance for the common case.

Trusting Without Losing Consistency

In NoSQL databases, where eventual consistency is often adopted, the balance between performance and integrity is even more delicate. Kanungo and Morena (2024) emphasize that the consistency model directly impacts throughput and, consequently, the user experience.

Optimistic Locking acts as a bridge between the worlds of strong consistency and global scalability: it enables locally consistent operations without compromising large-scale performance.

Furthermore, in ecosystems based on messaging and independent microservices, this approach proves to be especially advantageous. Control is distributed, and the cost of an isolated rollback tends to be much lower than maintaining persistent locks between services.

How It Works

Basic Process

The functioning of Optimistic Locking can be divided into three main steps:

  1. Reading: the system obtains the current state of the resource.
  2. Modification: the application makes changes locally, in memory.
  3. Validation: before confirming the changes, the system checks whether the data has been altered by another transaction since it was read; if so, the update is rejected and can be retried.
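The three steps above can be sketched with a hypothetical in-memory record guarded by a version counter (an illustrative assumption, not tied to any specific database):

```csharp
using System;

// Hypothetical record whose commits are validated against a version counter.
class VersionedRecord
{
    public int Value { get; private set; }
    public int Version { get; private set; }

    // Step 1: Reading - capture the current state together with its version.
    public (int value, int version) Read() => (Value, Version);

    // Step 3: Validation - commit only if the version is unchanged since the
    // read; otherwise report a conflict so the caller can retry.
    public bool TryCommit(int newValue, int expectedVersion)
    {
        lock (this) // guards only the check-and-write, not the whole transaction
        {
            if (Version != expectedVersion)
                return false; // conflict: someone else committed first
            Value = newValue;
            Version++;
            return true;
        }
    }
}

class Demo
{
    static void Main()
    {
        var record = new VersionedRecord();

        var (value, version) = record.Read();  // Step 1: read
        int updated = value + 10;              // Step 2: modify locally, in memory

        Console.WriteLine(record.TryCommit(updated, version)); // True: no conflict
        Console.WriteLine(record.TryCommit(5, version));       // False: version is now stale
    }
}
```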

Comparison with Pessimistic Locking

The main advantage of Optimistic Locking is its ability to maximize concurrency. In systems where conflicts are rare, this approach allows for significantly superior performance as it avoids unnecessary locks and frees resources more quickly.

Pessimistic Locking, on the other hand, locks resources until the transaction is completed, which can create contention and degrade performance in high-load scenarios. The result is an increase in latency and wait time, especially in systems with a large volume of simultaneous transactions.
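For contrast, a pessimistic version of the same update holds an exclusive lock for the entire read-modify-write cycle, so concurrent writers queue up instead of proceeding in parallel. This sketch uses an in-process lock standing in for a database lock, purely as an illustration:

```csharp
using System;
using System.Threading.Tasks;

class PessimisticCounter
{
    private readonly object _gate = new object();
    private int _value;

    public int Value { get { lock (_gate) return _value; } }

    public int Increment()
    {
        lock (_gate) // exclusive lock held for the whole read-modify-write
        {
            int observed = _value;       // read
            int desired = observed + 1;  // modify
            _value = desired;            // write: no validation step needed,
                                         // because no one else could interleave
            return desired;
        }
    }
}

class Demo
{
    static void Main()
    {
        var counter = new PessimisticCounter();
        Parallel.For(0, 1000, _ => counter.Increment());
        Console.WriteLine(counter.Value);
    }
}
```

The result is equally correct, but every writer serializes on the lock even when no real conflict exists; under high load this waiting is precisely the contention and latency cost the text describes.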

Implementing Optimistic Locking in C#

Basic Example

Below is a simplified example of how to implement Optimistic Locking in C# using Entity Framework:


using System;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity; // Microsoft.EntityFrameworkCore for EF Core projects

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }

    [Timestamp] // concurrency token: EF adds this column to the UPDATE's WHERE clause
    public byte[] RowVersion { get; set; }
}

public void UpdateProduct(Product updatedProduct)
{
    using (var context = new ApplicationDbContext())
    {
        // Attach the detached entity and mark it as modified;
        // its original RowVersion is sent along for validation.
        context.Products.Attach(updatedProduct);
        context.Entry(updatedProduct).State = EntityState.Modified;

        try
        {
            context.SaveChanges(); // fails if RowVersion no longer matches the database
        }
        catch (DbUpdateConcurrencyException)
        {
            // Conflict: another transaction changed the row since it was read.
            Console.WriteLine("The product was modified by another transaction.");
        }
    }
}

Advanced Example with Transactions

For more complex scenarios, consider the following example that uses explicit transactions:


public void UpdateProductWithTransaction(Product updatedProduct)
{
    using (var context = new ApplicationDbContext())
    using (var transaction = context.Database.BeginTransaction())
    {
        try
        {
            context.Products.Attach(updatedProduct);
            context.Entry(updatedProduct).State = EntityState.Modified;
            context.SaveChanges();
            transaction.Commit(); // confirm changes
        }
        catch (DbUpdateConcurrencyException)
        {
            transaction.Rollback(); // revert the transaction on conflict
            Console.WriteLine("Conflict detected while updating the product.");
        }
    }
}

When to Use Optimistic Locking

Ideal Scenarios

Optimistic Locking is ideal in scenarios where:

  • Write operations are rare compared to read operations.
  • Data is rarely updated simultaneously by multiple transactions.
  • Latency and wait time need to be minimized.

Scenarios to Avoid

On the other hand, the use of Optimistic Locking should be avoided when:

  • Write transactions are frequent and occur under high concurrency.
  • Data requires strict concurrency control.

Conclusion

Optimistic Locking presents itself as an effective solution for improving performance and scalability in modern systems. By adopting this approach, developers can offer faster and more responsive user experiences, even under heavy load. Trusting that conflicts are the exception, while retaining the ability to detect and resolve them, is crucial for the success of contemporary high-concurrency applications.

References

  • GRAMOLI, Vincent; KUZNETSOV, Petr; RAVI, Srivatsan. Optimism for boosting concurrency. arXiv preprint arXiv:1203.4751, 2012.
  • KUZNETSOV, Petr; RAVI, Srivatsan. On the cost of concurrency in transactional memory. In: International Conference on Principles of Distributed Systems. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. p. 112-127.
  • CHEN, Youmin et al. Plor: General transactions with predictable, low tail latency. In: Proceedings of the 2022 International Conference on Management of Data. 2022. p. 19-33.
  • KANUNGO, Sonal; MORENA, Rustom D. Concurrency versus consistency in NoSQL databases. Journal of Autonomous Intelligence, v. 7, n. 3, 2024.
  • XIANG, Wu et al. Gödel: Unified large-scale resource management and scheduling at ByteDance. In: Proceedings of the 2023 ACM Symposium on Cloud Computing. 2023. p. 308-323.