We are pleased to announce the official release of EventStoreDB OSS & Commercial version 21.10.5 long-term support (LTS). This LTS release will be supported until October 2023. Read more about our versioning strategy here.
EventStoreDB 21.10.5 is available for the following operating systems:
- Ubuntu 18.04
- Ubuntu 20.04
- CentOS 7 (Commercial version)
- Amazon Linux 2 (Commercial version)
- Oracle Linux 7 (Commercial version)
Additionally, an experimental build for ARM64 processors can be found as a Docker image here.
Note: Ubuntu 16.04 is no longer supported as of the 21.10.0 release. More information about Ubuntu's release and support policy can be found here.
This release includes the fixes from 21.10.3 and 21.10.4, which were specific to EventStore Cloud.
Where Can I Get the Packages?
Downloads are available on our website.
The packages can also be installed using the following instructions.
Ubuntu 18.04/20.04 (via packagecloud)
curl -s https://packagecloud.io/install/repositories/EventStore/EventStore-OSS/script.deb.sh | sudo bash
sudo apt-get install eventstore-oss=21.10.5
Windows (via Chocolatey)
choco install eventstore-oss --version 21.10.5
Docker (via docker hub)
docker pull eventstore/eventstore:21.10.5-focal
docker pull eventstore/eventstore:21.10.5-buster-slim
Highlights for this release:
Better support for Certificates when using DNS discovery
We've added support for wildcard certificates when using DNS discovery in a cluster.
This means that you don't have to include the IP address in the node's certificate when using DiscoverViaDns: true.
We will be publishing a blog post shortly after this release with more detailed instructions for configuring EventStoreDB in this way.
See PR EventStore#3460 for more information.
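With this change, a cluster using DNS discovery can run with a wildcard certificate matching the nodes' DNS names, with no IP addresses in the certificate. A minimal configuration sketch (the DNS name and file paths below are placeholders for illustration, not defaults):

```yaml
# eventstore.conf — illustrative values only
ClusterSize: 3
DiscoverViaDns: true
ClusterDns: cluster.example.com              # DNS name the nodes discover each other through
CertificateFile: /etc/eventstore/node.crt    # can now be a wildcard cert, e.g. CN=*.example.com
CertificatePrivateKeyFile: /etc/eventstore/node.key
TrustedRootCertificatesPath: /etc/eventstore/ca
```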
Cluster Stability Fixes
We have fixed the following issues that could cause cluster instability:
Ensure no pending writes can be incorrectly acknowledged or published when going offline for truncation.
This could cause writes to be incorrectly acknowledged to clients even though they were not written successfully. Such writes would also be published to subscriptions.
The conditions for this to take place are:
- A Leader in the cluster is deposed, a new Leader is elected, and then communication is restored between the deposed Leader and the other nodes.
- The deposed Leader had pending writes that had not yet been replicated to the other nodes, and these writes did not time out while the deposed Leader could not communicate with the other nodes.
- The new Leader has written to and replicated enough data that its replication checkpoint has moved beyond the position of the deposed Leader's pending writes.
- The deposed Leader attempts to resubscribe to the cluster, and is subsequently taken offline for truncation.
If all of these are true, then there is a possibility of the node running into a race condition between the replication checkpoint of the deposed Leader being updated (which acknowledges the pending writes) and the deposed Leader going offline for truncation (which truncates those same pending writes).
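The race above can be pictured with log positions in a toy model (the numbers and function names are made up for illustration, not EventStoreDB internals): a pending write is only safe to acknowledge if it also survives truncation, and the bug was effectively checking the replication checkpoint without re-checking that the write would still exist after truncation.

```javascript
// Toy model: a deposed leader has a pending write at position 150.
// Its log is about to be truncated back to the new leader's log, which
// ends at position 120, while the replication checkpoint (driven by the
// new leader's own writes) has already advanced to 200.
function shouldAcknowledgeBuggy(write, replicationCheckpoint, truncatePoint) {
  // Racy check: only compares against the replication checkpoint,
  // so a write that is about to be truncated can still be acknowledged.
  return write.position <= replicationCheckpoint;
}

function shouldAcknowledgeFixed(write, replicationCheckpoint, truncatePoint) {
  // A write that falls beyond the truncate point must never be acknowledged.
  return write.position <= replicationCheckpoint && write.position <= truncatePoint;
}
```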
Retry establishing a TCP connection to the leader
We have fixed an issue that could cause a Follower node to get stuck in a state where it shows as alive in the gossip, but it is unable to replicate data from the Leader node. This issue was initially discovered due to a DNS lookup timeout on a Follower node, but it could have other causes.
If a Follower ran into an error after establishing a TCP connection to the Leader node but before subscribing to the Leader, then it was possible for the Follower to get stuck in a state where it could not replicate data.
You can read more about this in PR EventStore#3458.
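The shape of the fix is a familiar pattern: if any step between establishing the connection and completing the subscription fails, tear the connection down and retry, rather than remaining half-connected. A generic sketch of that pattern (the function names are illustrative, not EventStoreDB APIs):

```javascript
// Retry connect-then-subscribe as a single unit: a failure at either step
// closes the connection and starts over, so the node can never get stuck
// connected but unsubscribed.
async function connectAndSubscribe(connect, subscribe, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    let conn;
    try {
      conn = await connect();   // e.g. TCP connection to the leader
      await subscribe(conn);    // subscribe for replication
      return conn;              // fully subscribed: success
    } catch (err) {
      if (conn) conn.close();   // never leave a half-set-up connection open
      if (attempt === maxAttempts) throw err;
    }
  }
}
```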
We have improved state serialization speed and the way that projections handle null metadata values in this release.
However, they introduce the following changes to the way projection state behaves:
- Adding functions directly to state objects is not supported and will cause an error.
- Objects on shared state must be initialized before they can be used; otherwise the projection will fault with an error. Previously, using an uninitialized object had no effect.
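In a projection definition, this means every object you use on state should be set up in the `$init` handler before event handlers touch it. A sketch (it only runs inside EventStoreDB's projection engine; the stream, event, and field names are made up):

```javascript
fromStream('orders')                 // hypothetical stream name
  .when({
    $init: function () {
      // Initialize every object on state up front.
      return { totals: {} };
    },
    OrderPlaced: function (state, event) {
      // OK: state.totals was initialized in $init.
      state.totals[event.data.sku] = (state.totals[event.data.sku] || 0) + 1;
      // Not supported: attaching a function to state, e.g.
      // state.helper = function () { ... };  // would now fault the projection
    }
  });
```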
We have also made a few other fixes to bloom filters and the UI.
You can find a list of all of the fixes in this release in the changelog.
To upgrade a cluster from 20.10.x or 21.10.x, follow the usual rolling upgrade procedure:
- Pick a node (upgrade the follower nodes first and the leader last).
- Stop the node, upgrade it and start it.
There is no way to perform a rolling upgrade between version 5.x and version 21.10.x due to changes in the replication protocol and the way nodes gossip and hold elections.
As such, the upgrade process from 5.x is as follows:
- Take down the cluster
- Perform an in-place upgrade of the nodes, ensuring that the relevant configuration and certificates are set up
- Bring the nodes back online and wait for them to stabilize
Documentation and previous release notes
If you encounter any issues, please don’t hesitate to open an issue on GitHub if there isn’t one already.
If you have any questions that aren't covered in these release notes or the docs, please feel free to reach out.