
Amazon ElastiCache for Redis Update – Sharded Clusters, Engine Improvements, and More


Many AWS customers use Amazon ElastiCache to implement a fast, in-memory data store for their applications.

We launched Amazon ElastiCache for Redis in 2013 and have added snapshot exports to S3, a refreshed engine, scale-up capabilities, tagging, and support for Multi-AZ operation with automatic failover over the past year or so.

Today we are adding a healthy collection of new features and capabilities to ElastiCache for Redis. Here’s an overview:

Sharded Cluster Support – You can now create sharded clusters that can hold more than 3.5 TiB of in-memory data.

Improved Console – Creation and maintenance of clusters is now more straightforward and requires far fewer clicks.

Engine Update – You now have access to the features of the Redis 3.2 engine.

Geospatial Data – You can now store and process geospatial data.

Let’s dive in!

Sharded Cluster Support / New Console
Until now, ElastiCache for Redis allowed you to create a cluster containing a single primary node and up to 5 read replicas. This model limited the size of the in-memory data store to 237 GiB per cluster.

You can now create clusters with up to 15 shards, expanding the overall in-memory data store to more than 3.5 TiB. Each shard can have up to 5 read replicas, giving you the ability to handle 20 million reads and 4.5 million writes per second.

The sharded model, in conjunction with the read replicas, improves overall performance and availability. Data is spread across multiple nodes and the read replicas support rapid, automatic failover in the event that a primary node has an issue.

In order to take advantage of the sharded model, you must use a Redis client that is cluster-aware. The client will treat the cluster as a hash table with 16,384 slots spread equally across the shards, and will then map the incoming keys to the proper shard.
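As an illustration of what a cluster-aware client does, here is a minimal sketch using the open source redis-py library (version 4 or later, which ships a cluster-aware client). The endpoint name is a placeholder for your cluster's configuration endpoint:

```python
# Minimal sketch: connecting to a cluster-mode-enabled (sharded) cluster
# with redis-py's cluster-aware client. The hostname below is a placeholder
# for your ElastiCache configuration endpoint.
from redis.cluster import RedisCluster

rc = RedisCluster(
    host="my-cluster.xxxxxx.clustercfg.use1.cache.amazonaws.com",  # placeholder
    port=6379,
)

# The client hashes each key to one of 16,384 slots and routes the
# command to the shard that owns that slot.
rc.set("user:42:name", "Alice")
print(rc.get("user:42:name"))
```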

ElastiCache for Redis treats the entire cluster as a unit for backup and restore purposes; you don’t have to think about or manage backups for the individual shards.

The Console has been improved and I can create my first Scale Out cluster with ease (note that I checked Cluster Mode enabled (Scale Out) after I chose Redis as my Cluster engine).

The Console helps me to choose a suitable node type with a handy new menu.

You can also create sharded clusters using the AWS Command Line Interface (CLI), the AWS Tools for Windows PowerShell, the ElastiCache API, or via an AWS CloudFormation template.
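As a sketch of the API route, the following uses boto3's ElastiCache client to request a sharded (cluster-mode-enabled) replication group; the identifiers, node type, and shard/replica counts are illustrative placeholders, not recommendations:

```python
# Sketch: creating a sharded replication group with boto3. All identifiers,
# counts, and the node type here are illustrative placeholders.
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

response = elasticache.create_replication_group(
    ReplicationGroupId="my-sharded-redis",           # placeholder name
    ReplicationGroupDescription="Sharded Redis 3.2 cluster",
    Engine="redis",
    EngineVersion="3.2.4",
    CacheNodeType="cache.r3.large",                   # choose per workload
    NumNodeGroups=3,                                  # number of shards
    ReplicasPerNodeGroup=2,                           # read replicas per shard
    AutomaticFailoverEnabled=True,
)
print(response["ReplicationGroup"]["Status"])
```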

Engine Update
Amazon ElastiCache for Redis is compatible with version 3.2 of the Redis engine. The engine includes three new features that may be of interest to you:

Enforced Write Consistency – The new WAIT command blocks the caller until all previous write commands have been acknowledged by the primary node and a specified number of read replicas. This does not turn Redis into a strongly consistent data store, but it does improve the odds that a freshly promoted read replica will include the most recent writes to the previous primary.
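For example, with redis-py you can issue WAIT right after a write; the hostname, key name, replica count, and timeout below are illustrative:

```python
# Sketch: using WAIT to block until at least one replica has acknowledged
# the preceding writes (key name and counts are illustrative).
import redis

r = redis.Redis(host="my-primary.xxxxxx.cache.amazonaws.com", port=6379)  # placeholder
r.set("order:1001:status", "confirmed")

# Block until 1 replica acknowledges, or until 500 ms elapse.
# WAIT returns the number of replicas that acknowledged the writes.
acked = r.execute_command("WAIT", 1, 500)
print(f"{acked} replica(s) acknowledged the write")
```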

SPOP with COUNT – The SPOP command removes and then returns a random element from a set. You can now request more than one element at a time.
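A quick sketch with redis-py (the set name, members, and count are illustrative):

```python
# Sketch: SPOP with a COUNT argument removes and returns several random
# members at once (set name and count are illustrative).
import redis

r = redis.Redis(host="localhost", port=6379)
r.sadd("raffle:entries", "alice", "bob", "carol", "dave", "erin")

# Remove and return 3 random entries in a single call.
winners = r.spop("raffle:entries", 3)
print(winners)
```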

Bitfields – Bitfields are a memory-efficient way to store a collection of many small integers as a bitmap, stored as a Redis string. Using the BITFIELD command, you can address (GET) and manipulate (SET, increment, or decrement) fields of varying widths without having to think about alignment to byte or word boundaries.
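Here is an illustrative sketch that packs two unsigned 8-bit counters into a single Redis string via BITFIELD (the key and values are made up for the example):

```python
# Sketch: BITFIELD packs small integers into one Redis string. Here two
# unsigned 8-bit fields share the key "counters" (names/values illustrative).
import redis

r = redis.Redis(host="localhost", port=6379)

# SET the first u8 field (bit offset 0) to 7, increment the second u8 field
# (bit offset 8) by 5, then GET both fields back in one round trip.
results = r.execute_command(
    "BITFIELD", "counters",
    "SET", "u8", 0, 7,
    "INCRBY", "u8", 8, 5,
    "GET", "u8", 0,
    "GET", "u8", 8,
)
print(results)  # e.g. [0, 5, 7, 5]
```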

Our implementation of Redis includes a snapshot mechanism that does not need to fork the server process into parent and child processes. Under heavy load, the standard, fork-based snapshot mechanism can lead to degraded performance due to swapping. Our alternative implementation comes into play when memory utilization is above 50% and neatly sidesteps the issue. It is a bit slower, so we use it only when necessary.

We have improved the performance of the syncing mechanism that brings a fresh read replica into sync with its primary node. We made a similar improvement to the mechanism that brings the remaining read replicas back into sync with the newly promoted primary node.

As I noted earlier, our engine is compatible with the comparable open source version and your applications do not require any changes.

Geospatial Data
You can now store and query geospatial data (a longitude and a latitude per item). Here are the commands: GEOADD, GEOHASH, GEOPOS, GEODIST, GEORADIUS, and GEORADIUSBYMEMBER.
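A short sketch with redis-py (the place names, coordinates, and radius are illustrative):

```python
# Sketch: storing and querying geospatial data (place names and coordinates
# are illustrative).
import redis

r = redis.Redis(host="localhost", port=6379)

# GEOADD takes longitude, latitude, member.
r.execute_command("GEOADD", "cities", -122.3321, 47.6062, "Seattle")
r.execute_command("GEOADD", "cities", -122.6765, 45.5231, "Portland")

# Distance between two members, in kilometers.
print(r.execute_command("GEODIST", "cities", "Seattle", "Portland", "km"))

# Members within 300 km of a given point.
print(r.execute_command("GEORADIUS", "cities", -122.5, 46.5, 300, "km"))
```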

Available Now
Sharded cluster creation and all of the features that I mentioned are available now and you can start using them today in all AWS regions.

Jeff;
