Hazelcast IMDG Features

Hazelcast IMDG 3.9 Open Source Architecture

For additional operation and high scale features see the Hazelcast IMDG Operational In-Memory Computing Platform.

Distributed Caching

AtomicLong

IAtomicLong, Hazelcast’s distributed implementation of java.util.concurrent.atomic.AtomicLong, offers most of AtomicLong’s operations, such as get, set, getAndSet, compareAndSet and incrementAndGet. Since IAtomicLong is a distributed implementation, these operations involve remote calls, so their performance differs from AtomicLong’s.

You can send functions to an IAtomicLong. The reason for using a function instead of a simple line of code like atomicLong.set(atomicLong.get() + 2); is that the combination of a read and a write is not atomic. Since IAtomicLong is a distributed implementation, those operations can be remote calls, which may lead to race conditions. By using functions, the data is not pulled into the code; instead, the code is sent to the data. This makes it more scalable.
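
As a sketch (assuming the Hazelcast 3.x Java API; the instance name and the Add2 function are illustrative), a function can be applied atomically on the member that owns the value:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IAtomicLong;
import com.hazelcast.core.IFunction;
import java.io.Serializable;

public class AtomicLongFunctions {
    // The function is sent to the data, so it must be serializable.
    static class Add2 implements IFunction<Long, Long>, Serializable {
        public Long apply(Long input) { return input + 2; }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IAtomicLong counter = hz.getAtomicLong("counter");
        counter.alter(new Add2());          // applied atomically on the owning member
        System.out.println(counter.get());  // 2 on a fresh counter
    }
}
```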

See the Docs for this feature

AtomicReference

IAtomicReference, Hazelcast’s distributed implementation of java.util.concurrent.atomic.AtomicReference, offers compare-and-set and get-and-set operations on object references that are guaranteed atomic across application instances in a cluster.

See the Docs for this feature

CountDownLatch

ICountDownLatch, Hazelcast’s distributed implementation of java.util.concurrent.CountDownLatch, is a synchronization aid that allows one or more threads (in one or more application instances) to wait until a set of operations being performed in other threads across the cluster completes.

ICountDownLatch is initialized with a given count. The countDown() method is a non-blocking operation that decrements the count. When the count reaches zero, all threads blocking on the await() method are allowed to proceed.
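
A minimal sketch of this flow (the latch name and count are illustrative; assumes the Hazelcast 3.x API):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ICountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ICountDownLatch latch = hz.getCountDownLatch("task-latch");
        latch.trySetCount(3);   // succeeds only if the current count is zero

        // Workers anywhere in the cluster signal completion:
        //   hz.getCountDownLatch("task-latch").countDown();

        // The coordinator waits for all three countDown() calls (with a timeout):
        boolean completed = latch.await(60, TimeUnit.SECONDS);
        System.out.println("all tasks finished: " + completed);
    }
}
```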

See the Docs for this feature

Cardinality Estimator Service (HyperLogLog)

Hazelcast’s cardinality estimator service is a data structure which implements Flajolet’s HyperLogLog algorithm for estimating the cardinality of unique objects in theoretically huge data sets. The implementation offered by Hazelcast includes improvements from Google’s version of the algorithm, i.e., HyperLogLog++. Some common use cases include:

  • Calculating unique site visitor metrics (real-time) daily, weekly, monthly, yearly, or all-time, based on IP or user.
  • Measuring how a campaign performs (impressions, clicks etc) in advertising.
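
The visitor-counting use case can be sketched as follows (the estimator name and IP values are illustrative; assumes Hazelcast 3.8+):

```java
import com.hazelcast.cardinality.CardinalityEstimator;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class UniqueVisitors {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        CardinalityEstimator visitors = hz.getCardinalityEstimator("daily-visitors");
        visitors.add("10.34.1.9");    // e.g., a visitor's IP address
        visitors.add("10.34.1.9");    // duplicates do not grow the estimate
        visitors.add("10.34.7.21");
        System.out.println(visitors.estimate());  // approximately 2
    }
}
```
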
See the Docs for this feature

IdGenerator

IdGenerator is a distributed ID generator that facilitates creating IDs that are unique across application instances in a cluster.

See the Docs for this feature

List

Hazelcast List is similar to Hazelcast Set, except that it allows duplicate elements and preserves the order of elements. Hazelcast List is a non-partitioned data structure: the values and each backup are represented by a single partition, so a List cannot be scaled beyond the capacity of a single machine. All items are copied to the local member and iteration occurs locally.

See the Docs for this feature

Lock

ILock is the distributed implementation of java.util.concurrent.locks.Lock. If you lock using an ILock, the critical section that it guards is guaranteed to be executed by only one thread in the entire cluster. Even though locks are great for synchronization, they can lead to problems if not used properly. Also note that Hazelcast Lock does not support fairness.
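
A sketch of proper usage (the lock name is illustrative; assumes the Hazelcast 3.x API), using a bounded wait and a finally block to avoid the common misuse problems mentioned above:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ILock;
import java.util.concurrent.TimeUnit;

public class LockExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ILock lock = hz.getLock("inventory-lock");
        if (lock.tryLock(5, TimeUnit.SECONDS)) {   // bounded wait avoids blocking forever
            try {
                // critical section: at most one thread in the entire cluster runs this
            } finally {
                lock.unlock();                     // always release in a finally block
            }
        }
    }
}
```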

See the Docs for this feature

Map

Hazelcast Map (IMap) extends the interface java.util.concurrent.ConcurrentMap and hence java.util.Map. It is the distributed implementation of the Java map. You can perform operations like reading and writing from/to a Hazelcast map with the well-known get and put methods. In addition, Search and Map/Reduce can be run on maps. Finally, maps may be integrated with a database using MapStore.
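
A minimal sketch (the map name and values are illustrative):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class MapExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Double> prices = hz.getMap("stock-prices");
        prices.put("ACME", 42.0);                // stored on the member that owns the key
        System.out.println(prices.get("ACME"));  // readable from any member or client
        prices.putIfAbsent("OTHER", 7.5);        // ConcurrentMap semantics apply
    }
}
```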

See the Docs for this feature

MultiMap

Hazelcast MultiMap is a specialized map where you can store multiple values under a single key. Just like any other distributed data structure implementation in Hazelcast, MultiMap is distributed and thread-safe.

See the Docs for this feature

Queues

Hazelcast distributed Queue is an implementation of java.util.concurrent.BlockingQueue. Being distributed, it enables all cluster members to interact with it. Using Hazelcast distributed queue, you can add an item in one machine and remove it from another one.
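
The add-on-one-machine, remove-on-another pattern can be sketched as follows (the queue name and item are illustrative):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;

public class QueueExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IQueue<String> jobs = hz.getQueue("jobs");
        jobs.offer("resize-image-17");  // producer, possibly on one machine
        String job = jobs.take();       // consumer, possibly on another; blocks until an item arrives
        System.out.println(job);
    }
}
```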

See the Docs for this feature

Replicated Map

Unlike IMap, which is partitioned to balance data across the cluster, ReplicatedMap is fully replicated, such that all members hold the full map in memory. Its replication is weakly consistent (rather than eventually consistent) and done on a best-effort basis.

ReplicatedMaps have faster read-write characteristics, since all data is present on the local member: writes happen locally and are eventually replicated. Replication messages are also batched to minimize network operations.

ReplicatedMaps are useful for immutable objects, catalog data, or idempotent calculable data.

See the Docs for this feature

Ringbuffer

Hazelcast Ringbuffer is a lock-free distributed data structure that stores its data in a ring-like structure. Think of it as a circular array with a given capacity. Each Ringbuffer has a tail, where the items are added, and a head, where the items are overwritten or expired. You can reach each element in a Ringbuffer using a sequence ID, which is mapped to the elements between the head and tail (inclusive) of the Ringbuffer. It supports single and batch operations and is very high-performance.
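
The tail/sequence mechanics can be sketched as follows (the ringbuffer name and item are illustrative; assumes Hazelcast 3.5+):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.ringbuffer.Ringbuffer;

public class RingbufferExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        Ringbuffer<String> events = hz.getRingbuffer("events");
        long seq = events.add("user-logged-in");  // appended at the tail; returns the sequence ID
        String item = events.readOne(seq);        // read by sequence; blocks if not yet available
        System.out.println(seq + ": " + item);
    }
}
```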

See the Docs for this feature

Semaphores

Hazelcast ISemaphore is the distributed implementation of java.util.concurrent.Semaphore. Semaphores offer permits to control the number of threads performing concurrent activities. To execute a concurrent activity, a thread acquires a permit, or waits until one becomes available. When the execution is completed, the permit is released.
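
The acquire/release cycle can be sketched as follows (the semaphore name and permit count are illustrative):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ISemaphore;

public class SemaphoreExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ISemaphore semaphore = hz.getSemaphore("db-slots");
        semaphore.init(3);       // at most 3 permits cluster-wide
        semaphore.acquire();     // blocks until a permit becomes available
        try {
            // perform the permit-limited activity
        } finally {
            semaphore.release();
        }
    }
}
```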

See the Docs for this feature

Set

A Set is a collection where every element occurs only once and where the order of the elements doesn’t matter. Hazelcast’s com.hazelcast.core.ISet is a distributed and concurrent implementation of java.util.Set.

In Hazelcast, the ISet (and the IList) is implemented as a collection within MultiMap, where the id of the set is the key in the MultiMap and the value is the collection.

See the Docs for this feature

Topic and ReliableTopic

Hazelcast provides a distribution mechanism for publishing messages that are delivered to multiple subscribers. This is also known as a publish/subscribe (pub/sub) messaging model. Publishing and subscribing operations are cluster wide. When a member subscribes to a topic, it is actually registering for messages published by any member in the cluster, including the new members that joined after you add the listener.

ReliableTopic is backed by a Ringbuffer with a backup to avoid message loss and to provide isolation between fast producers and slow consumers.
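
A pub/sub sketch (the topic name and message are illustrative; the same code works for a regular Topic via getTopic):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ITopic;
import com.hazelcast.core.Message;
import com.hazelcast.core.MessageListener;

public class TopicExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ITopic<String> alerts = hz.getReliableTopic("alerts");
        alerts.addMessageListener(new MessageListener<String>() {
            public void onMessage(Message<String> message) {
                System.out.println("received: " + message.getMessageObject());
            }
        });
        alerts.publish("disk usage above 90%");  // delivered to every subscriber in the cluster
    }
}
```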

See the Docs for this feature

Distributed Compute

Entry Processor

An entry processor enables fast in-memory operations on a map without having to worry about locks or concurrency issues. It can be applied to a single map entry or to all map entries, and it supports choosing target entries using predicates. You do not need an explicit lock on the entry: Hazelcast locks the entry, runs the EntryProcessor, and then unlocks the entry.

Hazelcast sends the entry processor to each cluster member and these members apply it to map entries. Therefore, if you add more members, your processing is completed faster.
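
A sketch of both single-entry and all-entries execution (the map name and IncrementStock processor are illustrative; assumes the Hazelcast 3.x API):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.AbstractEntryProcessor;
import java.util.Map;

public class EntryProcessorExample {
    // Runs on the member that owns the entry; no explicit locking required.
    static class IncrementStock extends AbstractEntryProcessor<String, Integer> {
        public Object process(Map.Entry<String, Integer> entry) {
            entry.setValue(entry.getValue() + 1);
            return null;
        }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Integer> stock = hz.getMap("stock");
        stock.put("widget", 10);
        stock.executeOnKey("widget", new IncrementStock());  // single entry
        stock.executeOnEntries(new IncrementStock());        // all entries, in parallel per member
        System.out.println(stock.get("widget"));             // 12
    }
}
```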

See the Docs for this feature

Executor Service

One of the coolest features of Java 1.5 is the Executor framework, which allows you to asynchronously execute your tasks (logical units of work), such as database query, complex calculation, and image rendering.

The default implementation of this framework (ThreadPoolExecutor) is designed to run within a single JVM. In distributed systems, this implementation is not desired since you may want a task submitted in one JVM and processed in another one. Hazelcast offers IExecutorService for you to use in distributed environments: it implements java.util.concurrent.ExecutorService to serve the applications requiring computational and data processing power.

With IExecutorService, you can execute tasks asynchronously and cancel a task if its execution takes longer than expected. In the Java Executor framework, tasks are implemented as java.util.concurrent.Callable and java.util.Runnable: use Callable if your task returns a value, and Runnable otherwise. Tasks should be Serializable since they will be distributed.
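
A sketch of submitting a Callable to the cluster (the executor name and SumTask are illustrative):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IExecutorService;
import java.io.Serializable;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;

public class ExecutorExample {
    // The task is distributed to a member, so it must be serializable.
    static class SumTask implements Callable<Long>, Serializable {
        public Long call() {
            long sum = 0;
            for (int i = 1; i <= 100; i++) sum += i;
            return sum;
        }
    }

    public static void main(String[] args) throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IExecutorService executor = hz.getExecutorService("default");
        Future<Long> result = executor.submit(new SumTask());  // may run on any member
        System.out.println(result.get());                      // 5050
    }
}
```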

See the Docs for this feature

Scheduled Executor

Scheduled Executor is the distributed implementation of Java’s ScheduledExecutorService API. You can schedule tasks to run at a given moment in time, or repeatedly at fixed intervals, across your cluster.
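
Both scheduling modes can be sketched as follows (the scheduler name and CleanupTask are illustrative; assumes Hazelcast 3.8+):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.scheduledexecutor.IScheduledExecutorService;
import java.io.Serializable;
import java.util.concurrent.TimeUnit;

public class ScheduledExample {
    // Scheduled tasks are distributed to members, so they must be serializable.
    static class CleanupTask implements Runnable, Serializable {
        public void run() { System.out.println("cleaning up"); }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IScheduledExecutorService scheduler = hz.getScheduledExecutorService("scheduler");
        scheduler.schedule(new CleanupTask(), 10, TimeUnit.SECONDS);               // run once
        scheduler.scheduleAtFixedRate(new CleanupTask(), 0, 5, TimeUnit.MINUTES);  // run repeatedly
    }
}
```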

See the Docs for this feature

Partition Predicate

Partition Predicate is a control mechanism that tells the query engine to run a query only on the partition a given key belongs to. Once the query is scheduled, the Partition Predicate wrapper is stripped away and the actual predicate is executed on the appropriate partition. This benefits applications that make use of smart partitioning.

User Defined Services

In the case of special/custom needs, Hazelcast IMDG SPI (Service Provider Interface) module allows users to develop their own distributed data structures and services.

The SPI makes it possible to write first-class distributed services/data structures yourself. With the SPI, you can write your own data structures if you are unhappy with the ones provided by Hazelcast. You could also write more complex services, such as an Actor library.

See the Docs for this feature

Fast Batch and Stream Processing

Hazelcast Jet is a new open source, distributed stream processing engine. Jet integrates with the Hazelcast in-memory data grid (IMDG) to process data in parallel across nodes in near real time. Hazelcast Jet uses directed acyclic graphs (DAGs) to model relationships between tasks in the data-processing pipeline. The system is built on a one-record-at-a-time architecture, which allows Jet to process data immediately, rather than in batches.

Hazelcast Jet has an implementation of java.util.stream for Hazelcast IMDG’s IMap and IList. java.util.stream operations are mapped to a DAG and then executed, and the result is returned to the user. The computation runs distributed and parallelized across the cluster.

Get Hazelcast Jet

Distributed Query

Fast Aggregations

Prior to Hazelcast IMDG 3.8, Aggregations were based on our Map engine. The Fast Aggregations functionality is the successor of those Aggregators. Instead of running on the Map engine, they run on the query infrastructure. Their performance is tens to hundreds of times better, since they run in parallel for each partition and are highly optimized for speed and low memory consumption. The result is a much faster and simpler API:

<R> R aggregate(Aggregator<Map.Entry<K, V>, R> aggregator);
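
A usage sketch (the map name and the Person value type with its age field are assumptions; assumes Hazelcast 3.8+):

```java
import com.hazelcast.aggregation.Aggregators;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import java.io.Serializable;

public class AggregationExample {
    public static class Person implements Serializable {
        public int age;
        public Person(int age) { this.age = age; }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Person> people = hz.getMap("people");
        people.put("p1", new Person(30));
        people.put("p2", new Person(40));
        // Both aggregations run per partition, in parallel, on the query infrastructure.
        long total = people.aggregate(Aggregators.count());
        Double avgAge = people.aggregate(Aggregators.integerAvg("age"));
        System.out.println(total + " people, average age " + avgAge);
    }
}
```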

See the Docs for this feature

Continuous Query

ContinuousQueryCache ensures that all update messages are available in local memory for fast access. This is beneficial when queries on distributed IMap data are very frequent and local, in-memory performance is required.

See the Docs for this feature

Listener with Predicate

Listener with Predicate enables you to listen to the modifications performed on specific map entries. It is an entry listener that is registered using a predicate. This makes it possible to listen to the changes made to specific map entries.

See the Docs for this feature

Query

Hazelcast partitions your data and spreads it across a cluster of servers. You can iterate over the Map entries and look for certain entries (specified by predicates) you are interested in. However, this is not very efficient because you will have to bring the entire entry set and iterate locally. Instead, Hazelcast allows you to run distributed queries on your distributed map.

If you add new members to the cluster, the partition count for each member is reduced and hence the time spent by each member on iterating its entries is reduced. Therefore, the Hazelcast querying approach is highly scalable. Another reason it is highly scalable is the pool of partition threads that evaluates the entries concurrently in each member. The network traffic is also reduced since only filtered data is sent to the requester.
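
A distributed query sketch (the map name and Employee type are illustrative; assumes the Hazelcast 3.x query API):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.query.Predicates;
import com.hazelcast.query.SqlPredicate;
import java.io.Serializable;
import java.util.Collection;

public class QueryExample {
    public static class Employee implements Serializable {
        public boolean active;
        public int age;
        public Employee(boolean active, int age) { this.active = active; this.age = age; }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Employee> employees = hz.getMap("employees");
        employees.put("e1", new Employee(true, 45));
        // Evaluated in parallel on each member; only matching entries cross the network.
        Collection<Employee> seniors = employees.values(
                Predicates.and(Predicates.equal("active", true),
                               Predicates.greaterThan("age", 40)));
        // Equivalent SQL-like form:
        Collection<Employee> same = employees.values(new SqlPredicate("active AND age > 40"));
        System.out.println(seniors.size() + " " + same.size());
    }
}
```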

See the Docs for this feature

Integrated Clustering

Hibernate Second Level Cache

Hazelcast provides a distributed second level cache for your Hibernate entities, collections and queries. This cache is associated with the SessionFactory object. It is not restricted to a single session but is shared across sessions, so data is available to the entire application, not just the current user. This can greatly improve application performance, as commonly used data can be held in memory in the application tier. As implied by the name, Hibernate goes to the first level cache first and, if the entity is not there, goes to the second level.

Get this Plugin for Hibernate5
Get this Plugin for Hibernate3 and Hibernate4
See the Docs for this feature

Generic Web Sessions

Filter based Web Session Replication for JEE Web Applications without requiring changes to the application.

Filter Session Replication is a feature where the state of each created HttpSessionObject is kept in a distributed Map. If one of the servers goes down, users will be routed to other servers without their noticing that a server went down. Also, you can scale your web servers under heavy load, and easily add new servers to your cluster. We use delta updates for efficient operation even with the largest session objects.

See the Docs for this feature

Tomcat Clustered Web Sessions

Session Replication is a container specific module that enables session replication for JEE Web Applications without requiring changes to the application.

Tomcat Session Replication is a Hazelcast IMDG module where the state of each created HttpSessionObject is kept in a distributed Map. If one of the servers goes down, users will be routed to other servers without their noticing that a server went down. Also, you can scale your web servers under heavy load, and easily add new servers to your cluster. We use delta updates for efficient operation even with the largest session objects.

Get this Plugin
See the Docs for this feature

Jetty Clustered Web Sessions

Session Replication is a container specific module that enables session replication for JEE Web Applications without requiring changes to the application.

Jetty Session Replication is a Hazelcast IMDG module where the state of each created HttpSessionObject is kept in a distributed Map. If one of the servers goes down, users will be routed to other servers without their noticing that a server went down. Also, you can scale your web servers under heavy load, and easily add new servers to your cluster. We use delta updates for efficient operation even with the largest session objects.

Get this Plugin
See the Docs for this feature

Grails 3

This plugin integrates the Hazelcast data distribution framework into your Grails application. You can access distributed data structures (Map, Queue, List, Topic) by injecting hazelService. You can also cache your domain classes in the Hazelcast distributed cache.

You may replace Ehcache with Hazelcast as your Hibernate second-level cache implementation.

Get this Plugin
See the Docs for this feature

Hazelcast JCA Resource Adapter

The Hazelcast JCA resource adapter is a system-level software driver used by a Java application to connect to a Hazelcast cluster.

Get this Plugin
See the Docs for this feature

Standards

JCache

JCache is the standardized Java caching layer API. The JCache caching API is specified by the Java Community Process (JCP) as Java Specification Request (JSR) 107.

Starting with release 3.3.1, Hazelcast offers a specification compliant JCache implementation. It is not just a simple wrapper around the existing APIs; it implements a caching structure from ground up to optimize the behavior to the needs of JCache. The Hazelcast JCache implementation is 100% TCK (Technology Compatibility Kit) compliant and therefore passes all specification requirements. It has asynchronous versions of almost all operations to give the user extra power.
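
A minimal JCache sketch using only the standard javax.cache API (the cache name and values are illustrative; with Hazelcast on the classpath, Caching.getCachingProvider() resolves to the Hazelcast provider):

```java
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.spi.CachingProvider;

public class JCacheExample {
    public static void main(String[] args) {
        CachingProvider provider = Caching.getCachingProvider();
        CacheManager manager = provider.getCacheManager();
        Cache<String, String> sessions = manager.createCache("sessions",
                new MutableConfiguration<String, String>().setTypes(String.class, String.class));
        sessions.put("user-1", "token-abc");
        System.out.println(sessions.get("user-1"));
    }
}
```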

See the Docs for this feature

Apache jclouds Support

Hazelcast supports the Apache jclouds API, allowing applications to be deployed in multiple different cloud infrastructure ecosystems in an infrastructure-agnostic way.

Get this Plugin
See the Docs for this Plugin

Cloud and Virtualization Support

Amazon Web Services

Hazelcast AWS cloud module helps Hazelcast cluster members discover each other and form the cluster on AWS. It also supports tagging, IAM Role, and connecting clusters from clients outside the cloud.

Get this Plugin
See the Docs for this feature

Azure Cloud Discovery

Azure DiscoveryStrategy provides all Hazelcast instances in a cluster by returning VMs within your Azure resource group that are tagged with a specified value.

Get this Plugin
See the Docs for this feature

Discovery Service Provider Interface (SPI)

The Hazelcast Discovery Service Provider Interface is an extension SPI to attach external cloud discovery mechanisms. Discovery finds other Hazelcast instances based on filters and provides their corresponding IP addresses.

The SPI ships with support for Apache jclouds and Google’s Kubernetes as reference implementations.

See the Docs for this feature

Docker

Docker containers wrap up Hazelcast IMDG in a complete filesystem that contains everything it needs to run – code, runtime, system tools, system libraries – guaranteeing that it will always run the same, regardless of the environment it is running in.

You can deploy your Hazelcast projects using the Docker containers. Hazelcast has the following images on Docker:

  • Hazelcast
  • Hazelcast Enterprise
  • Hazelcast Management Center
  • Hazelcast OpenShift
See the Docs for this feature

Eureka

Eureka is a REST based service that is primarily used in the AWS cloud for locating services for the purpose of load balancing and failover of middle-tier servers. Hazelcast supports Eureka V1 discovery; Hazelcast members within EC2 Virtual Private Cloud can discover each other using this mechanism. This discovery feature is provided as a Hazelcast plugin.

Get this Plugin
See the Docs for this module

Kubernetes

Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user’s declared intentions.

Get this Plugin

Apache jclouds

Hazelcast supports the Apache jclouds API, allowing applications to be deployed in multiple different cloud infrastructure ecosystems in an infrastructure-agnostic way.

Get this Plugin
See the Docs for this module

Zookeeper Discovery

The Hazelcast Zookeeper Discovery plugin provides a service-based discovery strategy, using Apache Curator to communicate with your Zookeeper server, for applications using the Hazelcast 3.6.1+ Discovery SPI.

Get this Plugin
See the Docs for this module

Storage

On-Heap

The on-heap store refers to objects that are present in the Java heap (and thus subject to GC). The Java heap is the space that Java can reserve and use in memory for dynamic memory allocation. All runtime objects created by a Java application are stored in the heap. The default maximum heap size is modest (historically as low as 128 MB), a limit that business applications reach easily. Once the heap is full, new objects cannot be created and the Java application throws OutOfMemoryError.

High Density Memory Store

Hazelcast High-Density Memory Store, the successor to Hazelcast Elastic Memory, is Hazelcast’s new enterprise-grade back-end storage solution. This solution is used with the Hazelcast JCache implementation. By default, Hazelcast offers a production-ready, low garbage collection (GC) pressure storage back end: serialized keys and values are stored in standard Java map data structures on the heap. The data is stored in serialized form for the highest data compaction, but it is still subject to Java garbage collection.

In Hazelcast Enterprise, the High-Density Memory Store is built around a pluggable memory manager which enables multiple memory stores. These memory stores are all accessible using a common access layer that scales up to Terabytes of main memory on a single JVM. At the same time, by further minimizing the GC pressure, High-Density Memory Store enables predictable application scaling and boosts performance and latency while minimizing pauses for Java Garbage Collection.

See the Docs for this module

WAN

WAN Replication

There are cases where you need to synchronize multiple clusters to the same state. Synchronization of clusters, also known as WAN (Wide Area Network) Replication, is mainly used for replicating the state of different clusters over WAN environments like the Internet.

See the Docs for this feature

Cluster Management

JMX API per node

Hazelcast members expose various management beans which include statistics about distributed data structures and the states of Hazelcast node internals. The metrics are local to the nodes, i.e. they do not reflect cluster wide values. The JMX API allows you to access these metrics.

  • Atomic Long (IAtomicLong)
  • Atomic Reference (IAtomicReference)
  • Countdown Latch (ICountDownLatch)
  • Executor Service (IExecutorService)
  • List (IList)
  • Lock (ILock)
  • Map (IMap)
  • MultiMap (MultiMap)
  • Replicated Map (ReplicatedMap)
  • Queue (IQueue)
  • Semaphore (ISemaphore)
  • Set (ISet)
  • Topic (ITopic)
  • Hazelcast Instance (HazelcastInstance)
See the Docs for this feature

Management Center

Hazelcast Management Center enables you to monitor and manage your nodes running Hazelcast. In addition to monitoring overall state of your clusters, you can also analyze and browse your data structures in detail, update Map configurations and take thread dump from nodes. With its scripting and console module, you can run scripts (JavaScript, Groovy, etc.) and commands on your nodes.

See the Docs for Management Center

Statistics API per node

You can gather various statistics from your distributed data structures via Statistics API. Since the data structures are distributed in the cluster, the Statistics API provides statistics for the local portion (1/Number of Nodes) of data on each node. You can gather the following statistics:

  • Map Statistics
  • Multimap Statistics
  • Queue Statistics
  • Topic Statistics
  • Executor Statistics
See the Docs for this feature

Clustered JMX

Clustered JMX via Management Center allows you to monitor clustered statistics of distributed objects from a JMX interface. You can use jconsole or any other JMX client to monitor your Hazelcast Cluster. Use the Clustered JMX interface to integrate Hazelcast Management Center with New Relic and AppDynamics.

See the Docs for this feature

Clustered REST

For Hazelcast Enterprise, the Clustered REST API is exposed from Management Center to allow you to monitor clustered statistics of distributed objects. To enable Clustered REST on your Management Center, you need only pass a system property at startup.

See the Docs for this feature

Client-Server Protocols

Memcached Client

A Memcached client written in any language can talk directly to a Hazelcast cluster. No additional configuration is required. (Hazelcast Memcached Client only supports ASCII protocol. Binary Protocol is not supported.)

See the Docs for this feature

Open Binary Client Protocol and Client Implementation Guide

Hazelcast’s new client-server protocol now supports versioning and easy client implementation. This provides enterprises deployment and upgrade flexibility by allowing clients to be upgraded independently of servers. Caching services may be deployed and upgraded enterprise-wide, without forcing clients across business units to upgrade in lock step.

The accompanying protocol documentation and client implementation guide also allows clients to be easily implemented in any platform. The implementation guide ships with a Python reference implementation.

View the Guide

REST

The Clustered REST API is exposed from Management Center to allow you to monitor clustered statistics of distributed objects.

See the Docs for this feature

Security Suite

Pluggable Socket Interceptor

Hazelcast allows you to intercept socket connections before a node joins to cluster or a client connects to a node. This provides the ability to add custom hooks to join and perform connection procedures (like identity checking using Kerberos, etc.).

See the Docs for this feature

Encryption: Asymmetric and Symmetric

Hazelcast allows you to encrypt the entire socket level communication among all Hazelcast members. Encryption is based on Java Cryptography Architecture. In symmetric encryption, each node uses the same key, so the key is shared.

See the Docs for this feature

Authentication

The authentication mechanism for Hazelcast Client security works the same as cluster member authentication. To implement client authentication, configure a Credential and one or more LoginModules. The client side does not have and does not need a factory object to create Credentials objects like ICredentialsFactory. Credentials must be created at the client side and sent to the connected node during the connection process.

See the Docs for this feature

Authorization

Hazelcast client authorization is configured by a client permission policy. Hazelcast has a default permission policy implementation that uses permission configurations defined in the Hazelcast security configuration. Default policy permission checks are done against instance types (Map, Queue, etc.), instance names (Map, Queue, Name, etc.), instance actions (put, read, remove, add, etc.), client endpoint addresses, and the client principal defined by the Credentials object. Instance and principal names and endpoint addresses can be defined as wildcards (*).

See the Docs for this feature

JAAS Module

Hazelcast has an extensible, JAAS based security feature you can use to authenticate both cluster members and clients, and to perform access control checks on client operations. Access control can be done according to endpoint principal and/or endpoint address.

See the Docs for this module

Security Interceptor

Hazelcast allows you to intercept every remote operation executed by the client. This lets you add very flexible, custom security logic.

See the Docs for this feature

Clients

.NET Client

You can use the native .NET Client to connect to Hazelcast nodes. All you need is to add HazelcastClient3x.dll to your .NET project references. The API is very similar to that of the Java native client.

Get this Client
See the Docs for this feature

Near Cache for .NET Client

Near Cache allows a subset of data to be cached locally in memory on the .NET Client.

Off-heap memory management is enabled in IMDG Enterprise HD via High-Density Memory Store.

C++ Client

You can use the native C++ Client to connect to Hazelcast nodes and perform almost all operations that a node can perform. Clients differ from nodes in that clients do not hold data. The C++ Client knows where the data is and asks the correct node directly. The features of the C++ Client are:

  • Access to distributed data structures (IMap, IQueue, MultiMap, ITopic, etc.)
  • Access to transactional distributed data structures (TransactionalMap, TransactionalQueue, etc.)
  • Ability to add cluster listeners to a cluster and entry/item listeners to distributed data structures
  • Distributed synchronization mechanisms with ILock, ISemaphore and ICountDownLatch
Get this Client
See the Docs for this feature

Near Cache for C++ Client

Near Cache allows a subset of data to be cached locally in memory on the C++ Client.

Off-heap memory management is enabled in IMDG Enterprise HD via High-Density Memory Store.

Java Client

Native Clients (Java, C#, C++) enable you to perform almost all Hazelcast operations without being a member of the cluster. A client either connects to one of the cluster members and delegates all cluster-wide operations to it (dummy client), or connects to all of them and delegates operations smartly (smart client). When the connected cluster member dies, the client transparently switches to another live member.

The Java Client is the most full featured client. The main idea behind the Java client is to provide the same Hazelcast functionality by proxying each operation through a Hazelcast node. It can access and change distributed data, and it can listen to distributed events of an already established Hazelcast cluster from another Java application.

Get this Client
See the Docs for this feature

Near Cache for Java Client

Near Cache allows a subset of data to be cached locally in memory on the Java Client.

Off-heap memory management is enabled in IMDG Enterprise HD via High-Density Memory Store.

Portable Serialization

As an alternative to the existing serialization methods, Hazelcast offers a language/platform independent Portable Serialization that has the following advantages:

  • Supports multi-version of the same object type
  • Fetches individual fields without having to rely on reflection
  • Queries and indexing support without de-serialization and/or reflection

Portable Serialization is totally language independent and is used as the binary protocol between Hazelcast server and clients.
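
A sketch of a Portable class (the Customer type, its fields, and the factory/class IDs are illustrative; the factory ID must match a PortableFactory registered in the Hazelcast configuration):

```java
import com.hazelcast.nio.serialization.Portable;
import com.hazelcast.nio.serialization.PortableReader;
import com.hazelcast.nio.serialization.PortableWriter;
import java.io.IOException;

public class Customer implements Portable {
    public static final int FACTORY_ID = 1;  // must match a registered PortableFactory
    public static final int CLASS_ID = 1;

    private String name;
    private int age;

    public int getFactoryId() { return FACTORY_ID; }
    public int getClassId() { return CLASS_ID; }

    // Fields are written by name, which is what enables per-field access,
    // versioning, and queries without full deserialization.
    public void writePortable(PortableWriter writer) throws IOException {
        writer.writeUTF("name", name);
        writer.writeInt("age", age);
    }

    public void readPortable(PortableReader reader) throws IOException {
        name = reader.readUTF("name");
        age = reader.readInt("age");
    }
}
```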

See the Docs for this feature

Pluggable Serialization

You need to serialize the Java objects that you put into Hazelcast because Hazelcast is a distributed system. The data and its replicas are stored in different partitions on multiple nodes. The data you need may not be present on the local machine, and in that case, Hazelcast retrieves that data from another machine. This requires serialization.

Hazelcast serializes all your objects into an instance of com.hazelcast.nio.serialization.Data. Data is the binary representation of an object. Serialization is used when:

  • Key/value objects are added to a map
  • Items are put in a queue/set/list
  • A runnable is sent using an executor service
  • An entry processing is performed within a map
  • An object is locked
  • A message is sent to a topic
See the Docs for this feature

Python Client

The Python Client is the reference implementation of the new Hazelcast Client Binary Protocol. Hazelcast’s robust in-memory data grid is now available to Python applications.

Get this Client

Near Cache for Python Client

Near Cache allows a subset of data to be cached locally in memory on the Python Client.

Off-heap memory management is enabled in IMDG Enterprise HD via High-Density Memory Store.

Node.js Client

You can use the Hazelcast Node.js Client to connect to Hazelcast nodes. You can install the client via Node Package Manager (npm).

Get this Client

Near Cache for Node.js Client

Near Cache allows a subset of data to be cached locally in memory on the Node.js Client.

Off-heap memory management is enabled in IMDG Enterprise HD via High-Density Memory Store.

Scala Client

The Scala API for Hazelcast is a “soft” API, i.e., it extends the Java API rather than replacing it. The Scala API also adds built-in distributed aggregations and IMap join capability.

Get this API

Near Cache for Scala Client

Near Cache allows a subset of data to be cached locally in memory on the Scala Client.

Off-heap memory management is enabled in IMDG Enterprise HD via High-Density Memory Store.

Big Data

Hazelcast Jet

Hazelcast Jet is a new open source, distributed stream processing engine. Jet integrates with the Hazelcast In-Memory Data Grid (IMDG) to process data in parallel across nodes in near real time. Hazelcast Jet uses directed acyclic graphs to model relationships between tasks in the data-processing pipeline. The system is built on a one-record-at-a-time architecture, which allows Jet to process data immediately, rather than in batches.

Hazelcast Jet embodies all of the principles of the Hazelcast Way – simple, lightweight, and scalable.

Get Hazelcast Jet

Hazelcast Apache Spark Connector

Hazelcast Apache Spark Connector allows Hazelcast Maps and Caches to be used as shared RDD caches by Spark using the Spark RDD API. Both Java and Scala Spark APIs are supported.

Get this Plugin
See the Docs for this feature

Hazelcast Mesos Integration

The Hazelcast Mesos Integration module gives you the ability to deploy Hazelcast on a Mesos cluster. Since it depends on the Hazelcast Zookeeper module for discovery, the version of Hazelcast deployed on the Mesos cluster must be 3.6 or newer.

Get this Plugin
See the Docs for this feature

Hazelcast IMDG