The first generation of microservices was envisioned as stateless request-response endpoints, but it is now clear that microservices must often maintain state. For example, a microservice that runs machine learning models or performs statistical classification must keep its models and parameter weights available between requests. This raises one of the biggest challenges: where should that state live? Options such as RDBMSs are too slow, scale poorly, and impose inflexible schemas. Distributed in-memory caching, by contrast, is the only widely adopted enterprise technology that combines high speed, scalability, and dynamic schema evolution.
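As a rough illustration of keeping model state in memory rather than in an RDBMS, consider a cache-aside sketch. Everything below is hypothetical: the plain dict stands in for a distributed cache such as Redis or a data grid, and the model IDs, weights, and loader function are invented for the example, not any specific product's API.

```python
# Hypothetical sketch: model parameters kept in an in-memory cache (cache-aside).
# A plain dict stands in for a distributed in-memory store.

model_cache = {}  # model_id -> parameter weights, held in memory for fast reads

def load_weights_from_storage(model_id):
    # Placeholder for a slow backing load (database, object store, training job).
    return {"weights": [0.1, 0.2, 0.3], "bias": 0.05}

def get_model_weights(model_id):
    """Return cached weights, loading them from backing storage on first use."""
    if model_id not in model_cache:
        model_cache[model_id] = load_weights_from_storage(model_id)
    return model_cache[model_id]

def update_model_weights(model_id, new_weights):
    """Write updated parameters to the cache so subsequent reads see fresh state."""
    model_cache[model_id] = new_weights

# First call loads from storage; later calls hit the in-memory cache.
weights = get_model_weights("classifier-v1")
```

In a real deployment the dict would be replaced by a shared, replicated store so that every service instance sees the same model state; the access pattern stays the same.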
In this webinar, we will discuss: