Hazelcast Cloud Enterprise is a new cloud-native managed service that lets you quickly set up Hazelcast IMDG in a public cloud, fully managed for you by Hazelcast. This tutorial walks through deploying Hazelcast Cloud Enterprise on Amazon Web Services (AWS).
Deploying Hazelcast-powered applications in a cloud-native way is now even easier with the introduction of Hazelcast Cloud Enterprise, a fully managed service built on the Enterprise edition of Hazelcast IMDG. Can't attend the live session? You should still register! We'll send the recording to all registrants after the webinar.
This white paper discusses the value of cloud deployments and how Hazelcast Cloud Enterprise can help you get the most out of your investment in cloud applications.
Hazelcast Cloud is an on-demand managed service for the Hazelcast In-Memory Data Grid.
While a microservices architecture is more scalable than a monolith, it takes a direct hit on performance. One way to cope is to set up a cache, which can be configured for database access, for REST calls, or to store session state across a cluster of server nodes. In this demo-based talk, I'll show how Hazelcast In-Memory Data Grid can help in each of those areas and how to configure it.
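The database-access case above follows the cache-aside pattern: check the cache first, and fall through to the database only on a miss. A minimal sketch in Java, where a ConcurrentHashMap stands in for a distributed Hazelcast IMap so the snippet runs without a cluster, and loadFromDatabase is a hypothetical stand-in for a real query:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal cache-aside sketch. A ConcurrentHashMap stands in for a
// distributed Hazelcast IMap so this runs without a cluster;
// loadFromDatabase is a hypothetical stand-in for a real query.
public class CacheAsideDemo {
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static int databaseHits = 0; // counts how often we fall through to the "database"

    static String loadFromDatabase(String id) {
        databaseHits++;
        return "user-" + id; // pretend this was an expensive query
    }

    static String fetchUser(String id) {
        // Hit the cache first; load from the database only on a miss.
        return cache.computeIfAbsent(id, CacheAsideDemo::loadFromDatabase);
    }

    public static void main(String[] args) {
        fetchUser("42"); // miss: loads from the database
        fetchUser("42"); // hit: served from the cache
        System.out.println("database hits: " + databaseHits); // prints "database hits: 1"
    }
}
```

Swapping the map for a real Hazelcast IMap gives every node in the cluster the same view of the cache without changing the access pattern.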
In this talk, Marko will show an approach that lets you write a low-latency, auto-parallelized, distributed stream-processing pipeline in Java that seamlessly integrates a data scientist's work, taken almost unchanged from their Python development environment. The talk includes a live demo on the command line and walks through some Python and Java code snippets.
Fault tolerance can be a reason to choose a distributed system even when a single machine could handle the expected load: a distributed system can tolerate failures of its parts, while a system running on a single machine cannot. How can a stream-processing engine guarantee exactly-once semantics? Viliam will describe the Chandy-Lamport algorithm, which consistently snapshots the global state of a distributed system, and the special simplified case of it that's used in Jet.
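The core idea of the simplified case can be sketched in a few lines: a barrier in the stream marks a point where processor state is saved, and after a crash the state is restored and only the items after the barrier are replayed, so each item affects the state exactly once. This toy single-source, single-processor simulation (the case where there is nothing to align) is an illustration of the idea, not Jet's actual implementation:

```java
import java.util.List;

// Toy illustration of barrier-based snapshotting with one source and one
// counting processor. A sketch of the exactly-once idea, not Jet's code.
public class SnapshotDemo {
    static final String BARRIER = "#barrier#";

    static int processExactlyOnce(List<String> stream, int failAtIndex) {
        int count = 0;           // processor state: items counted so far
        int snapshotCount = 0;   // state saved at the last barrier
        int snapshotIndex = -1;  // replay restarts just after this position

        for (int i = 0; i < stream.size(); i++) {
            if (i == failAtIndex) {
                // Simulated crash: restore state from the last snapshot and
                // replay the stream from just after the barrier.
                count = snapshotCount;
                i = snapshotIndex;   // the loop's i++ moves past the barrier
                failAtIndex = -1;    // don't fail again on replay
                continue;
            }
            if (BARRIER.equals(stream.get(i))) {
                snapshotCount = count;  // save state at the barrier
                snapshotIndex = i;
            } else {
                count++;                // process one item
            }
        }
        return count;
    }

    public static void main(String[] args) {
        List<String> stream = List.of("a", "b", BARRIER, "c", "d");
        // Crash while processing "d": "c" is replayed but counted only once.
        int count = processExactlyOnce(stream, 4);
        System.out.println("items counted: " + count); // prints "items counted: 4"
    }
}
```

The work done after the barrier is replayed, but because the restored state predates it, no item is counted twice: replayed "at-least-once" delivery plus snapshot rollback yields an exactly-once effect on the state.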
Common themes emerge when people describe their reasons for rearchitecting legacy business applications: at a technical level, speed and scalability; at a business level, the need to gain new real-time insights. These legacy applications commonly center on a single datastore such as a relational database, and moving away from that architecture requires a massive migration effort. This talk is a practical introduction to change data capture (CDC), covering an architecture, trade-offs, tooling, and demos.
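At its core, CDC turns the database's own change log into a stream of events that downstream consumers apply to their own state. A minimal sketch of the consuming side, where the event shape (operation, key, value) loosely mirrors what CDC tools emit; the field names here are illustrative, not any particular tool's schema:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of applying a stream of change-data-capture events to an in-memory
// replica. The (op, key, value) shape loosely mirrors what CDC tools emit;
// the names are illustrative, not any tool's actual event schema.
public class CdcApplyDemo {
    record ChangeEvent(String op, String key, String value) {} // op: "c", "u", or "d"

    static void apply(Map<String, String> replica, ChangeEvent e) {
        switch (e.op()) {
            case "c", "u" -> replica.put(e.key(), e.value()); // create/update: upsert
            case "d" -> replica.remove(e.key());              // delete
            default -> throw new IllegalArgumentException("unknown op: " + e.op());
        }
    }

    public static void main(String[] args) {
        Map<String, String> replica = new HashMap<>();
        apply(replica, new ChangeEvent("c", "order:1", "NEW"));
        apply(replica, new ChangeEvent("u", "order:1", "SHIPPED"));
        apply(replica, new ChangeEvent("c", "order:2", "NEW"));
        apply(replica, new ChangeEvent("d", "order:2", null));
        System.out.println(replica); // prints {order:1=SHIPPED}
    }
}
```

Replaying the full event stream in order rebuilds the replica from scratch, which is what lets new consumers be added without touching the source database.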
This case study on how Airbus Defence and Space uses Hazelcast is a great example of how ease-of-use goes a long way in helping engineers build critical systems for large-scale initiatives.