CP Subsystem
The CP Subsystem is a component of a Hazelcast cluster that builds an in-memory strongly consistent layer. It is accessed via HazelcastInstance.getCPSubsystem(). Its data structures are CP with respect to the CAP principle, i.e., they always maintain linearizability and prefer consistency over availability during network partitions.
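As a quick illustration, the sketch below reaches a linearizable IAtomicLong through the CP Subsystem and increments it. It assumes Hazelcast IMDG 3.12 package names (some CP interfaces move under com.hazelcast.cp in 4.x); the structure name "order-id" is purely illustrative, and true CP guarantees require at least three CP members to be configured.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IAtomicLong;
import com.hazelcast.cp.CPSubsystem;

public class CpSubsystemExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // All CP data structures are reached through the CP Subsystem accessor.
        CPSubsystem cp = hz.getCPSubsystem();

        // "order-id" is an illustrative name; the counter is linearizable (CP)
        // when the CP Subsystem is enabled with at least three CP members.
        IAtomicLong counter = cp.getAtomicLong("order-id");
        long next = counter.incrementAndGet();
        System.out.println("Next order id: " + next);

        hz.shutdown();
    }
}
```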
Docs
JSON Support
Hazelcast IMDG recognizes JSON data structures when saved as a value to a Map using the HazelcastJsonValue type. Once saved, all standard operations can be carried out, such as Predicate Queries and Aggregations. Hazelcast support for in-memory JSON storage provides a 400% increase in throughput when compared to popular NoSQL document stores. JSON support is available for all Hazelcast IMDG clients.
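For instance, a JSON document can be stored as a HazelcastJsonValue and queried by attribute with an ordinary predicate. The sketch below assumes Hazelcast IMDG 3.12 package names; the map name "employees" and the "age" attribute are illustrative.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastJsonValue;
import com.hazelcast.core.IMap;
import com.hazelcast.query.Predicates;

import java.util.Collection;

public class JsonSupportExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, HazelcastJsonValue> employees = hz.getMap("employees");

        // Store raw JSON strings; Hazelcast understands their structure.
        employees.put("1", new HazelcastJsonValue("{\"name\": \"Alice\", \"age\": 35}"));
        employees.put("2", new HazelcastJsonValue("{\"name\": \"Bob\", \"age\": 28}"));

        // Standard predicate queries work directly on JSON attributes.
        Collection<HazelcastJsonValue> over30 = employees.values(Predicates.greaterThan("age", 30));
        System.out.println(over30);

        hz.shutdown();
    }
}
```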
Docs
Pipelining
A new convenience API for rapid population of the cluster is now available. The Pipelining API manages multiple async ingests. You can send multiple requests in parallel using a single thread and therefore can increase throughput.
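A minimal sketch of pipelined ingestion, assuming Hazelcast IMDG 3.12; the depth of 16 and the map name "ticks" are arbitrary illustrative choices.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.core.Pipelining;

public class PipeliningExample {
    public static void main(String[] args) throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<Integer, Integer> ticks = hz.getMap("ticks");

        // Up to 16 async invocations are kept in flight from this single thread.
        Pipelining<Void> pipelining = new Pipelining<>(16);
        for (int i = 0; i < 10_000; i++) {
            pipelining.add(ticks.setAsync(i, i * i));
        }
        // Blocks until every queued invocation has completed.
        pipelining.results();

        System.out.println("Entries loaded: " + ticks.size());
        hz.shutdown();
    }
}
```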
Docs
ICountDownLatch
ICountDownLatch, Hazelcast's distributed implementation of java.util.concurrent.CountDownLatch, is a synchronization aid that allows one or more threads, in one or more application instances, to wait until a set of operations being performed in other threads across the cluster completes.
ICountDownLatch is initialized with a given count. The countDown() method is a non-blocking operation that decrements the count. When the count reaches zero, all threads blocking on the await() method are allowed to proceed.
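A minimal sketch, assuming Hazelcast IMDG 3.12 (where the interface still lives in com.hazelcast.core) and an illustrative latch name "startup"; in a real deployment the countDown() calls would typically come from other members.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ICountDownLatch;

import java.util.concurrent.TimeUnit;

public class CountDownLatchExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ICountDownLatch latch = hz.getCPSubsystem().getCountDownLatch("startup");

        // Initialize the latch once; trySetCount fails if a count is already set.
        latch.trySetCount(3);

        // Normally other members call countDown() as they finish their work.
        for (int i = 0; i < 3; i++) {
            latch.countDown();
        }

        // Blocks until the count reaches zero or the timeout elapses.
        boolean completed = latch.await(30, TimeUnit.SECONDS);
        System.out.println("All operations completed: " + completed);

        hz.shutdown();
    }
}
```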
Docs
FencedLock
FencedLock is a linearizable, distributed, and reentrant implementation of java.util.concurrent.locks.Lock, accessed via CPSubsystem.getLock(String). It is CP with respect to the CAP principle: it works on top of the Raft consensus algorithm and offers linearizability during crash-stop failures and network partitions. If a network partition occurs, it remains available on at most one side of the partition. FencedLock works on top of CP sessions; please see the CP Sessions section for more information.
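A minimal sketch of acquiring a FencedLock and reading its fencing token, assuming Hazelcast IMDG 3.12 or later; the lock name "inventory-lock" is illustrative.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.cp.lock.FencedLock;

public class FencedLockExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // The lock lives in the CP Subsystem; "inventory-lock" is an illustrative name.
        FencedLock lock = hz.getCPSubsystem().getLock("inventory-lock");

        // lockAndGetFence() returns a monotonic fencing token that external
        // services can use to reject requests from stale lock holders.
        long fence = lock.lockAndGetFence();
        try {
            System.out.println("Holding lock with fence " + fence);
            // ... critical section ...
        } finally {
            lock.unlock();
        }

        hz.shutdown();
    }
}
```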
Docs
Flake ID Generator
Hazelcast Flake ID Generator is used to generate cluster-wide unique identifiers. Generated identifiers are long primitive values and are k-ordered (roughly ordered). IDs are in the range from 0 to Long.MAX_VALUE.
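A minimal sketch, with "invoice-ids" as an illustrative generator name:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.flakeidgen.FlakeIdGenerator;

public class FlakeIdExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // "invoice-ids" is an illustrative generator name.
        FlakeIdGenerator idGen = hz.getFlakeIdGenerator("invoice-ids");

        // IDs are cluster-wide unique, roughly time-ordered long values.
        for (int i = 0; i < 3; i++) {
            System.out.println("Generated id: " + idGen.newId());
        }

        hz.shutdown();
    }
}
```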
Docs
CRDT PN Counter
The PN Counter is a lightweight Positive-Negative Counter implementation, which is a CRDT (Conflict-free Replicated Data Type). Each cluster member can increment and decrement the counter value and these updates are propagated to all members. Since operations are local (on a Hazelcast member), your application can achieve great performance for high-volume traffic, such as counting likes, page views or connected user count.
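A minimal sketch, with "page-views" as an illustrative counter name:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.crdt.pncounter.PNCounter;

public class PnCounterExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // "page-views" is an illustrative counter name.
        PNCounter pageViews = hz.getPNCounter("page-views");

        // Updates are applied locally and replicated to the other members.
        pageViews.addAndGet(5);
        pageViews.decrementAndGet();

        System.out.println("Approximate page views: " + pageViews.get());
        hz.shutdown();
    }
}
```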
Docs
Cardinality Estimator Service (HyperLogLog)
Hazelcast’s cardinality estimator service is a data structure which implements Flajolet’s HyperLogLog algorithm for estimating cardinalities of unique objects in theoretically huge data sets. The implementation offered by Hazelcast includes improvements from Google’s version of the algorithm, i.e., HyperLogLog++. Some common use cases include:
Calculating unique site visitor metrics in real time (daily, weekly, monthly, yearly, or all-time), based on IP address or user.
Measuring how an advertising campaign performs (impressions, clicks, etc.).
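A minimal sketch of the unique-visitor case, with "daily-visitors" as an illustrative estimator name:

```java
import com.hazelcast.cardinality.CardinalityEstimator;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class UniqueVisitorsExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // "daily-visitors" is an illustrative estimator name.
        CardinalityEstimator visitors = hz.getCardinalityEstimator("daily-visitors");

        // Feed in identifiers (IPs, user IDs, ...); duplicates are absorbed.
        visitors.add("10.0.0.1");
        visitors.add("10.0.0.2");
        visitors.add("10.0.0.1");

        // Returns an estimate, not an exact count: HyperLogLog trades accuracy for memory.
        System.out.println("Estimated unique visitors: " + visitors.estimate());
        hz.shutdown();
    }
}
```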
Docs
IdGenerator
IdGenerator is a distributed ID generator that facilitates creating IDs that are unique across application instances in a cluster.
Docs
List
Hazelcast List is similar to Hazelcast Set, but it allows duplicate elements and preserves the order of elements. Hazelcast List is a non-partitioned data structure: the collection, and each of its backups, is held on a single partition, so a Hazelcast List cannot be scaled beyond the capacity of a single machine. All items are copied to the local member and iteration occurs locally.
List data structure can also be used by Hazelcast Jet for fast batch processing. Hazelcast Jet uses List as a source (reads data from List) and as a sink (writes data to List). Please see the Fast Batch Processing use case for Hazelcast Jet.
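A minimal sketch, assuming Hazelcast IMDG 3.x package names and an illustrative list name "tasks":

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IList;

public class ListExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // "tasks" is an illustrative list name; duplicates and ordering are preserved.
        IList<String> tasks = hz.getList("tasks");
        tasks.add("build");
        tasks.add("test");
        tasks.add("test");   // the duplicate is kept, unlike a Set

        System.out.println("First task: " + tasks.get(0));
        System.out.println("Size: " + tasks.size());
        hz.shutdown();
    }
}
```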
Docs
Lock
Lock is the distributed implementation of java.util.concurrent.locks.Lock. If you lock using an ILock, the critical section that it guards is guaranteed to be executed by only one thread in the entire cluster. Even though locks are great for synchronization, they can lead to problems if not used properly. Also note that Hazelcast Lock does not support fairness.
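A minimal sketch, assuming Hazelcast IMDG 3.x (where ILock is obtained directly from the HazelcastInstance) and an illustrative lock name "balance-lock":

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ILock;

import java.util.concurrent.TimeUnit;

public class LockExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // "balance-lock" is an illustrative name.
        ILock lock = hz.getLock("balance-lock");

        // Prefer tryLock with a timeout so a caller cannot block forever.
        if (lock.tryLock(10, TimeUnit.SECONDS)) {
            try {
                // Only one thread in the whole cluster executes this section at a time.
            } finally {
                lock.unlock();
            }
        }
        hz.shutdown();
    }
}
```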
Docs
Map
Hazelcast Map (IMap) extends the interface java.util.concurrent.ConcurrentMap and hence java.util.Map. It is the distributed implementation of Java Map. You can perform operations like reading and writing from/to a Hazelcast map with the well-known get and put methods. In addition, searches can be run on maps. Finally, maps may be integrated with a database using MapStore.
Map data structures can also be used by Hazelcast Jet for real-time stream processing (by enabling the Event Journal on your map) and fast batch processing. Hazelcast Jet uses Map as a source (reads data from Map) and as a sink (writes data to Map).
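A minimal sketch of basic map operations and a predicate search, assuming Hazelcast IMDG 3.x package names and an illustrative map name "capitals":

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.query.Predicates;

public class MapExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // "capitals" is an illustrative map name.
        IMap<String, String> capitals = hz.getMap("capitals");
        capitals.put("France", "Paris");
        capitals.put("Turkey", "Ankara");

        System.out.println(capitals.get("Turkey"));

        // Searches run in parallel on the members that own the data;
        // "this" refers to the map value itself.
        System.out.println(capitals.values(Predicates.like("this", "P%")));

        hz.shutdown();
    }
}
```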
Docs
MultiMap
Hazelcast MultiMap is a specialized map where you can store multiple values under a single key. Just like any other distributed data structure implementation in Hazelcast, MultiMap is distributed and thread-safe.
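A minimal sketch, with "orders-by-customer" as an illustrative MultiMap name (IMDG 3.x package names):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.MultiMap;

public class MultiMapExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // "orders-by-customer" is an illustrative name.
        MultiMap<String, String> orders = hz.getMultiMap("orders-by-customer");
        orders.put("alice", "order-1");
        orders.put("alice", "order-2");

        // get() returns every value stored under the key.
        System.out.println(orders.get("alice"));
        hz.shutdown();
    }
}
```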
Docs
Queues
Hazelcast distributed Queue is an implementation of java.util.concurrent.BlockingQueue. Being distributed, it enables all cluster members to interact with it. Using Hazelcast distributed queue, you can add an item in one machine and remove it from another one.
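A minimal sketch of producing and consuming through a distributed queue, with "jobs" as an illustrative queue name (IMDG 3.x package names):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;

import java.util.concurrent.TimeUnit;

public class QueueExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // "jobs" is an illustrative queue name.
        IQueue<String> jobs = hz.getQueue("jobs");

        // A producer on any member can offer items ...
        jobs.offer("resize-image-42");

        // ... and a consumer on any member can take them.
        String job = jobs.poll(5, TimeUnit.SECONDS);
        System.out.println("Consumed: " + job);
        hz.shutdown();
    }
}
```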
Docs
ReplicatedMap
Unlike IMap, which is partitioned to balance data across the cluster, ReplicatedMap is fully replicated, so that all members hold the full map in memory. Its replication is weakly consistent, rather than eventually consistent, and is done on a best-effort basis.
ReplicatedMaps have faster read-write characteristics, since all data is present on the local member; writes happen locally and are eventually replicated to the other members. Replication messages are also batched to minimize network operations.
ReplicatedMaps are useful for immutable objects, catalog data, or idempotent calculable data.
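A minimal sketch, with "product-catalog" as an illustrative map name (IMDG 3.x package names):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ReplicatedMap;

public class ReplicatedMapExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // "product-catalog" is an illustrative name; every member keeps a full copy.
        ReplicatedMap<String, String> catalog = hz.getReplicatedMap("product-catalog");
        catalog.put("sku-1", "Espresso machine");

        // Reads are served from local memory, with no remote call.
        System.out.println(catalog.get("sku-1"));
        hz.shutdown();
    }
}
```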
Docs
Ringbuffer
Hazelcast Ringbuffer is a lock-free distributed data structure that stores its data in a ring-like structure. Think of it as a circular array with a given capacity. Each Ringbuffer has a tail, where the items are added, and a head, where the items are overwritten or expired. You can reach each element in a Ringbuffer using a sequence ID, which is mapped to the elements between the head and tail (inclusive) of the Ringbuffer. It supports single and batch operations and is very high-performance.
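A minimal sketch of adding an item and reading it back by sequence, with "events" as an illustrative Ringbuffer name:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.ringbuffer.Ringbuffer;

public class RingbufferExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // "events" is an illustrative name.
        Ringbuffer<String> events = hz.getRingbuffer("events");

        // add() appends at the tail and returns the assigned sequence ID.
        long sequence = events.add("user-logged-in");

        // readOne() reads by sequence; old items are eventually overwritten.
        String event = events.readOne(sequence);
        System.out.println(event + " @ " + sequence);
        hz.shutdown();
    }
}
```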
Docs
Semaphores
Hazelcast ISemaphore is the distributed implementation of java.util.concurrent.Semaphore. Semaphores offer permits to control the number of threads performing a concurrent activity. To execute a concurrent activity, a thread acquires a permit or waits until one becomes available. When the execution is completed, the permit is released.
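A minimal sketch, assuming Hazelcast IMDG 3.x (where ISemaphore is obtained directly from the HazelcastInstance) and an illustrative semaphore name "db-connections":

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ISemaphore;

public class SemaphoreExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // "db-connections" is an illustrative name; init() sets the permit count once.
        ISemaphore semaphore = hz.getSemaphore("db-connections");
        semaphore.init(3);

        // At most three threads across the cluster hold a permit at the same time.
        semaphore.acquire();
        try {
            // ... access the rate-limited resource ...
        } finally {
            semaphore.release();
        }
        hz.shutdown();
    }
}
```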
Docs
Set
A Set is a collection where every element occurs only once and where the order of the elements doesn't matter. Hazelcast's com.hazelcast.core.ISet is a distributed and concurrent implementation of java.util.Set.
In Hazelcast, the ISet (and the IList) is implemented as a collection within MultiMap, where the ID of the set is the key in the MultiMap and the value is the collection.
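A minimal sketch, assuming IMDG 3.x package names and an illustrative set name "active-users":

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ISet;

public class SetExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // "active-users" is an illustrative name.
        ISet<String> activeUsers = hz.getSet("active-users");
        activeUsers.add("alice");
        activeUsers.add("alice");   // the duplicate is ignored

        System.out.println("Size: " + activeUsers.size());
        System.out.println("Contains alice: " + activeUsers.contains("alice"));
        hz.shutdown();
    }
}
```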
Docs
Topic and ReliableTopic
Hazelcast provides a distribution mechanism for publishing messages that are delivered to multiple subscribers. This is also known as a publish/subscribe (pub/sub) messaging model. Publishing and subscribing operations are cluster wide. When a member subscribes to a topic, it is actually registering for messages published by any member in the cluster, including members that join after the listener is added.
ReliableTopic is backed by a Ringbuffer with a backup to avoid message loss and to provide isolation between fast producers and slow consumers.
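A minimal sketch of publish/subscribe on a topic, with "alerts" as an illustrative topic name; swapping hz.getTopic for hz.getReliableTopic gives the Ringbuffer-backed variant (IMDG 3.x package names):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ITopic;

public class TopicExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // "alerts" is an illustrative topic name.
        ITopic<String> alerts = hz.getTopic("alerts");

        // Every registered listener in the cluster receives published messages.
        alerts.addMessageListener(message ->
                System.out.println("Received: " + message.getMessageObject()));

        alerts.publish("disk-almost-full");

        // Delivery is asynchronous; give it a moment before shutting down.
        Thread.sleep(1000);
        hz.shutdown();
    }
}
```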
Docs