Get Started

These guides demonstrate the operational flexibility and speed of the Hazelcast In-Memory Computing Platform: set up in seconds, data in microseconds, and friendly to both operations and developers.

Hazelcast IMDG

Find out for yourself how to get a Hazelcast IMDG cluster up and running. In this Getting Started guide you’ll learn how to:

  • Create a cluster of three members.
  • Start the Hazelcast Management Center.
  • Add data to the cluster using a sample client in the language of your choice.
  • Add and remove cluster members to demonstrate the automatic rebalancing of data and backups.
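The first steps above can be sketched in a few lines of Java. This is a minimal sketch, not the guide's exact code: it assumes Hazelcast IMDG 4.x (in 3.x, `IMap` lives in `com.hazelcast.core`), and the map name `capitals` is invented for illustration.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class ClusterDemo {
    public static void main(String[] args) {
        // Each call starts one member; members on the same network
        // discover each other and form a cluster automatically.
        HazelcastInstance m1 = Hazelcast.newHazelcastInstance();
        HazelcastInstance m2 = Hazelcast.newHazelcastInstance();
        HazelcastInstance m3 = Hazelcast.newHazelcastInstance();

        // Data written through any member is partitioned across the cluster.
        IMap<String, String> capitals = m1.getMap("capitals");
        capitals.put("France", "Paris");

        // Any other member sees the same distributed map.
        System.out.println(m3.getMap("capitals").get("France")); // prints Paris

        Hazelcast.shutdownAll();
    }
}
```

Shutting down `m2` while the cluster is running would trigger the same automatic rebalancing the guide demonstrates.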

Hazelcast Jet

Learn how to run a distributed data stream processing pipeline in Java. In this Getting Started guide you’ll learn how to:

  • Start a Hazelcast Jet cluster in your JVM.
  • Build the Word Count application.
  • Execute the application with Jet.
  • Push test data to the cluster and explore the results.
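The steps above can be sketched in a single class. This is an illustrative sketch assuming the Jet 4.x pipeline API (`readFrom`/`writeTo`; earlier versions use `drawFrom`/`drainTo`), with a built-in test source standing in for real input:

```java
import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.Traversers;
import com.hazelcast.jet.aggregate.AggregateOperations;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;

public class WordCount {
    public static void main(String[] args) {
        // Word Count: split lines into words, group by word, count each group.
        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.items("hello world", "hello jet"))
         .flatMap(line -> Traversers.traverseArray(line.toLowerCase().split("\\W+")))
         .groupingKey(word -> word)
         .aggregate(AggregateOperations.counting())
         .writeTo(Sinks.logger());           // logs (word, count) pairs

        JetInstance jet = Jet.newJetInstance(); // embedded Jet member in this JVM
        try {
            jet.newJob(p).join();               // batch job: completes when the source is exhausted
        } finally {
            Jet.shutdownAll();
        }
    }
}
```

Starting the same class in a second JVM would form a two-member cluster and share the computation.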

As a next step, you will be able to explore the code samples and extend your setup with more features and connectors.

Hazelcast Jet

Open-Source Distributed Stream Processing

Jet allows you to build fault-tolerant data pipelines that run on a cluster of machines with just a few lines of Java code. It does all the heavy lifting of getting the data flowing through the cluster, so your code can focus purely on data transformations such as mapping, filtering, aggregation, and joins. Jet works with both bounded (batch) and unbounded (streaming) data, and a single pipeline can contain multiple data sources and sinks. You can seamlessly add or remove nodes from the cluster, and Jet keeps processing with exactly-once guarantees.

Simple

Hazelcast Jet is simple to set up. The nodes automatically discover each other to form a cluster. You can do the same locally, even on the same machine (your laptop, for example). This is great for quick testing, fast deployment, and easier ongoing maintenance.

Runs everywhere

Hazelcast Jet is delivered as a single, dependency-free JAR that requires only Java 8 for full functionality. It's lightweight enough to run on small devices, and it's cloud-native, with Docker images and Kubernetes support. It can be embedded into an application for simpler packaging or deployed as a standalone cluster.

Consistent low latency

Hazelcast Jet uses a combination of a directed acyclic graph (DAG) computation model, in-memory processing, data locality, partition-mapping affinity, single-producer/single-consumer (SP/SC) queues, and green threads to achieve high throughput with predictable latency.

In-memory speed

Hazelcast Jet provides a highly available, distributed in-memory data store. You can cache your reference data and enrich the event stream with it, store the results of a computation, or even store the input data you're about to process with Jet.
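Enriching a stream from the in-memory store can be sketched with Jet 4.x's `mapUsingIMap` stage. This is a sketch under assumptions: the map name `countries` and the sample country codes are invented for illustration.

```java
import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;
import java.util.Map;

public class Enrich {
    public static void main(String[] args) {
        JetInstance jet = Jet.newJetInstance();

        // Reference data cached in Jet's built-in distributed map.
        Map<String, String> countries = jet.getMap("countries");
        countries.put("FR", "France");
        countries.put("DE", "Germany");

        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.items("FR", "DE", "FR"))       // stream of country codes
         .mapUsingIMap("countries",
                 code -> code,                                // key to look up per event
                 (code, name) -> code + " -> " + name)        // enrich with the cached value
         .writeTo(Sinks.logger());

        try {
            jet.newJob(p).join();
        } finally {
            Jet.shutdownAll();
        }
    }
}
```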

Resilient, elastic data processing

With Hazelcast Jet, it's easy to build fault-tolerant and elastic data processing pipelines. Jet keeps processing data without loss even when a node fails, and you can add more nodes that immediately start sharing the computation load.

Rich set of connectors

Integrate Hazelcast Jet with a variety of systems including Apache Kafka, Hadoop, S3, Kinesis, Hazelcast IMDG, sockets or files.
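As one example, the Kafka connector (from the `hazelcast-jet-kafka` module) plugs into a pipeline as a streaming source. This is a hedged sketch: the broker address and the topic name `events` are placeholders.

```java
import com.hazelcast.jet.kafka.KafkaSources;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import java.util.Map;
import java.util.Properties;

public class KafkaIngest {
    public static void main(String[] args) {
        // Standard Kafka consumer properties (placeholder broker address).
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        Pipeline p = Pipeline.create();
        p.readFrom(KafkaSources.<String, String>kafka(props, "events")) // unbounded stream of entries
         .withoutTimestamps()           // skip event-time handling in this sketch
         .map(Map.Entry::getValue)      // keep only the record value
         .writeTo(Sinks.logger());
        // Submit with jet.newJob(p) against a running Jet cluster and broker.
    }
}
```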

Use Cases

Data stream processing

A data stream is a series of isolated records. Make it queryable with Hazelcast Jet. Cache recent values, correlate simple events with complex events or aggregate multiple values to build and maintain a queryable view of the streaming data. This reduces data access time for consumers and allows event-driven behavior.

Distributed compute

Use Hazelcast Jet to speed up your MapReduce, Spark, or custom Java data processing jobs. Load data sets to a cluster cache and perform fast compute jobs on top of the cached data. You get significant performance gains by combining an in-memory approach and co-location of jobs and data with parallel execution.

Continuous data integration

Integrate data in real time using continuous pipelines. Hazelcast Jet can connect to various systems (messaging, databases, caches, file systems, RPC services) and continuously move data from place to place while maintaining exactly-once processing guarantees, even during failures.

Application-level data processing

Package the Hazelcast Jet library with your application into a self-contained container for simple deployment with Docker or a JAR. Scale up the running cluster by starting another container.

Free Hazelcast Online Training Center

Whether you're interested in learning the basics of in-memory systems, or you're looking for advanced, real-world production examples and best practices, we've got you covered.
