21.2.0 Release Notes
We are pleased to announce the official release of EventStoreDB 21.2.0!
It is available for the following operating systems:
- Ubuntu 16.04
- Ubuntu 18.04
- Ubuntu 20.04
- CentOS 7 (Commercial version)
- Amazon Linux 2 (Commercial version)
- Oracle Linux 7 (Commercial version)
Where Can I Get the Packages?
Downloads are available on our website.
The packages can be installed using the following instructions.
Ubuntu 16.04/18.04/20.04 (via packagecloud)
curl -s https://packagecloud.io/install/repositories/EventStore/EventStore-OSS/script.deb.sh | sudo bash
sudo apt-get install eventstore-oss=21.2.0
Windows (via Chocolatey)
choco install eventstore-oss --version 21.2.0
Upgrading From 20.10.0
To upgrade a cluster from 20.10.0, the usual rolling upgrade can be done:
- Pick a node (start with follower nodes first, choosing the leader last).
- Stop the node, upgrade it, and start it again.
- Wait for the node to rejoin the cluster and catch up before moving on to the next one.
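On Ubuntu, the per-node step above might look like the following sketch. It assumes the node is managed by systemd under the service name eventstore; check how your installation runs the server.

```shell
# Stop the node before upgrading it (service name "eventstore" is an assumption)
sudo systemctl stop eventstore

# Install the 21.2.0 package over the existing one
sudo apt-get update
sudo apt-get install eventstore-oss=21.2.0

# Start the node again; wait for it to rejoin the cluster before the next node
sudo systemctl start eventstore
```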
If you are upgrading a cluster from version 5.x and below, please read the upgrade procedure in the 20.6.0 release notes.
Gossip on Single Node
With the latest release, all nodes have gossip enabled by default. You can connect using gossip seeds regardless of whether you have a cluster or not.
Please note that the GossipOnSingleNode option has been deprecated in this version and will be removed in version 21.10.0.
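As an illustration, you can inspect a node's gossip information over HTTP. The example assumes the default HTTP port of 2113:

```shell
# Query the gossip endpoint of a single node (default port assumed)
curl -s http://localhost:2113/gossip
# A single node now reports itself as the sole member, so clients
# configured with gossip seeds can discover it the same way they
# would discover a cluster.
```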
Heartbeat Timeout Improvements
In a scenario where one side of a connection to EventStoreDB is sending a lot of data and the other side is idle, a false-positive heartbeat timeout can occur for the following reasons:
- The heartbeat request may need to wait behind a lot of other data on the send queue on the sender’s side or on the receive queue on the receiver’s side before it can be processed.
- The receiver does not schedule any heartbeat request to the sender as it assumes that the connection is alive.
- The sender’s heartbeat request can eventually take more time than the heartbeat timeout to reach the receiver and be processed, causing a false-positive heartbeat timeout.
In this release, we have extended the heartbeat logic by proactively scheduling a heartbeat request from the receiver to the sender to prevent the heartbeat timeout. This should lower the number of incorrect heartbeat timeouts that occur on busy clusters.
Please see the documentation for more information about heartbeats and how they work.
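If false-positive timeouts persist on a very busy cluster, the heartbeat intervals and timeouts themselves can also be tuned. A sketch using the TCP heartbeat options, with the binary name and flag spellings assumed from the server's usual conventions and the millisecond values purely illustrative:

```shell
# Illustrative values only; the defaults are usually fine.
# Flag names follow the server's kebab-case option convention (assumption).
eventstored \
  --int-tcp-heartbeat-interval=700 \
  --int-tcp-heartbeat-timeout=1000 \
  --ext-tcp-heartbeat-interval=2000 \
  --ext-tcp-heartbeat-timeout=2500
```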
KeepAlives for gRPC
The server now supports gRPC KeepAlives, and has been configured to send a KeepAlive message over gRPC connections every 10 seconds by default. This means that gRPC clients will be able to discover if their connection has been dropped.
The interval and timeout for KeepAlives on the server can be configured with the KeepAliveInterval and KeepAliveTimeout settings. Please note that these may need to be configured on your gRPC client as well; check your client’s release notes for more information.
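As a sketch, the server-side settings could be passed as command-line flags. The flag spellings below are assumed from the server's convention of kebab-casing option names, and the values (in milliseconds) simply restate the 10-second default:

```shell
# 10 s KeepAlive interval and timeout (flag names assumed from convention;
# values are the stated defaults, shown here only for illustration)
eventstored --keep-alive-interval=10000 --keep-alive-timeout=10000
```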
Parked Message Count for Persistent Subscriptions
Persistent subscriptions will get some focus in the coming releases. As a start, we’ve made the parked message count available in the stats of a persistent subscription.
This allows you to check the number of parked messages without having to read the parked stream itself.
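For example, the stats can be fetched over the HTTP API. The stream and group names below are placeholders, the default port and credentials are assumptions, and the exact field name for the parked count should be checked in the returned payload:

```shell
# Fetch stats for a persistent subscription group over HTTP
# (stream/group names are placeholders; default port and credentials assumed)
curl -s -u admin:changeit \
  "http://localhost:2113/subscriptions/my-stream/my-group/info"
# The response JSON now includes the parked message count among the
# subscription stats (inspect the payload for the exact field name).
```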
Content Type Validation for Projections
We want to make sure that projections are predictable.
To support coming changes, we have added content type validation for projections. This means the following:
- If the event is a json event, then it must have valid non-null json data.
- If the event is not a json event, then it may have null data.
- Null metadata is accepted in any scenario.
Events that don’t meet these requirements will be filtered out without erroring the projection.
This change only takes effect either when a projection is created on v21.2.0, or if a projection is stopped and started again. Projections that were created before the upgrade will not enforce content validation.
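As an illustration of the first rule, a JSON event written over the HTTP API must carry valid, non-null JSON data for a projection to process it. Stream and event names here are placeholders:

```shell
# Write a valid json event; a 21.2.0 projection will process it.
# An event declared as json but carrying null or invalid data
# would instead be filtered out without erroring the projection.
curl -s -X POST "http://localhost:2113/streams/my-stream" \
  -H "Content-Type: application/json" \
  -H "ES-EventType: SomethingHappened" \
  -H "ES-EventId: 00000000-0000-0000-0000-000000000001" \
  -d '{"foo": "bar"}'
```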
Read Index Cache Capacity
We’ve introduced a new option to allow customizing the Read Index cache capacity.
EventStoreDB caches the metadata for streams that have been read recently to improve read and write performance.
While the default capacity of 100,000 should be enough for most situations, there are cases where it can be beneficial to increase the cache capacity, for example if you’re going to be performing a lot of reads and writes to the same 200,000 streams for a period of time.
We’ve added the “StreamInfoCacheCapacity” option to allow tuning this cache. Please be aware that increasing this number will cause EventStoreDB to use more memory.
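The option can be set like any other server option. The flag and environment-variable spellings below are assumed from the server's usual naming conventions (kebab-case flags, EVENTSTORE_-prefixed variables), and 200,000 is just the example figure from above:

```shell
# As a command-line flag (spelling assumed from the kebab-case convention)
eventstored --stream-info-cache-capacity=200000

# Or as an environment variable (EVENTSTORE_ prefix convention assumed)
EVENTSTORE_STREAM_INFO_CACHE_CAPACITY=200000 eventstored
```

Remember that raising this value trades memory for cache hit rate, so increase it only as far as your workload needs.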
If you encounter any issues, please don’t hesitate to open an issue on GitHub if there isn’t one already.
Additionally, there is a fairly active Discuss forum, and an #eventstore channel on the DDD-CQRS-ES Slack community.
You can read more about the changes in this post by Oskar Dudycz.