News

NodeJS gRPC client version 2.0.0 is out!

Oskar Dudycz  |  04 August 2021

Recently, we have been focused on making our NodeJS gRPC client even more robust and stable. To achieve this, we had to introduce a few breaking changes. Read on to learn about the enhancements and necessary migration steps.


Reading events now returns a NodeJS stream instead of an array

The recommended way of dealing with streams in Event Sourcing is to keep them short-lived. You can find examples of such life cycles in many domains: completing the books in accounting, a cashier shift change, the end of the day in the hospitality industry, etc. However, depending on your case, streams can get long. In previous versions, we materialised the whole stream into an array while reading the events. That was sufficient for short streams, but could result in timeouts or an out-of-memory error in edge cases.

To enable the performant way of reading long streams, we're now returning a readable stream from the readStream and readAll methods rather than a promise of an array. This is a breaking change.

In version 1.x, you had to await the complete read of the stream and then act on the constructed array:

type OrderEvent = JSONEventType<"noodles-ordered", { noodlesCount: number }>;

const order = {
    totalNoodles: 0,
};

const events = await client.readStream<OrderEvent>('order-123');

for (const { event } of events) {
  if (event?.type !== 'noodles-ordered') continue;
  order.totalNoodles += event.data.noodlesCount;
}

With version 2.x, you can act directly on the events as they are read, allowing you to more easily and efficiently build entity state with a reduced memory footprint.

const order = {
    totalNoodles: 0,
};

const eventStream = client.readStream<OrderEvent>('order-123');

for await (const { event } of eventStream) {
  if (event?.type !== 'noodles-ordered') continue;
  order.totalNoodles += event.data.noodlesCount;
}

You can also use native NodeJS stream events:

const getOrder = (orderStream: string) =>
  new Promise((resolve, reject) => {
    const order = {
      totalNoodles: 0,
    };

    client
      .readStream(orderStream)
      .on("data", ({ event }) => {
        if (event?.type !== "noodles-ordered") return;
        order.totalNoodles += event.data.noodlesCount;
      })
      .on("error", (err) => {
        reject(err);
      })
      .on("end", () => {
        resolve(order);
      });
  });

or create custom transformers and pipe them:

client
  .readStream(orderStream)
  .pipe(new NoodleTransformer())
  .pipe(new NoodleMaker());

For those who prefer a functional style, there are also useful external NPM packages that help with stream transformations, e.g. iter-tools:

import { asyncTakeLast } from 'iter-tools'; 

const eventStream = client.readStream('order-123');

const lastEvent = await asyncTakeLast(eventStream);

or RxJS:

import { from } from "rxjs";
import { map, scan, filter } from "rxjs/operators";

const eventStream = client.readStream('order-123');

from(eventStream)
  .pipe(
    filter(({ event }) => event?.type === "noodles-ordered"),
    map(({ event }) => event.data.noodlesCount),
    scan((totalNoodles, noodlesCount) => totalNoodles + noodlesCount, 0)
  )
  .subscribe((totalNoodles) => console.log(totalNoodles));


There is also an open proposal to add Iterator Helpers to the ECMAScript standard. Once it's accepted, iterating a stream should become even more ergonomic. You can track its implementation progress in Node.js.

Added built-in reconnections on node loss

We added a built-in reconnection mechanism to improve the developer experience and make transient error handling more straightforward. Previously, you had to implement such logic on your own. Thanks to the changes introduced in v2.0.0, the following are now handled internally by the gRPC client:

  • leader changeover,
  • node connection loss.

The client will go through the proper discovery process based on the cluster node preferences.

We still recommend defining a retry policy tuned to your use case. A sample failover scenario:

  1. The gRPC client is connected to the leader of the cluster.
  2. The leader node goes offline, and another node is the elected leader.
  3. The client issues an append, which fails because the connection was closed.
  4. The application code retries the append operation.
  5. The client triggers cluster discovery again and connects to the new leader of the cluster.
  6. The append operation succeeds.
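The retry in step 4 doesn't need to be elaborate. Below is a minimal sketch of a generic retry helper; `withRetry` and its parameters are hypothetical illustrations, not part of the client's API:

```typescript
// Hypothetical generic retry helper (not part of @eventstore/db-client).
// Retries an async operation, waiting between attempts so the client can
// rediscover the new leader before the next try.
async function withRetry<T>(
  operation: () => Promise<T>,
  maxAttempts = 3,
  delayMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts)
        // Give the cluster time to elect and expose the new leader.
        await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```

You could then wrap an append like `await withRetry(() => client.appendToStream(streamName, events))`, letting the client's internal discovery kick in between attempts.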

Node discovery enhancements

We added support for preferring a read-only replica when discovering the cluster node to connect to. We also enhanced and standardised the discovery process.

Node preference can be specified using the nodePreference connection string parameter. You can use the following options:

  • leader Connect to a node in the leader state. If there is no leader node, select the first from the list of allowed nodes.
  • follower Connect to a node in the follower state. If there is no follower node, then try to select a leader. Otherwise, select the first allowed node.
  • read_only_replica Connect to a node in one of the ReadOnlyReplica states (listed below). Otherwise, try to connect to the leader, followed by the first allowed node.
  • random Connect to a random allowed node.
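For example, preferring a follower node could look like this (the cluster hostnames below are hypothetical):

```typescript
// Connection string targeting a three-node cluster (hypothetical hostnames);
// nodePreference tells the client which node role to connect to first.
const connectionString =
  "esdb://node1:2113,node2:2113,node3:2113?nodePreference=follower";
```

You would then pass this string to `EventStoreDBClient.connectionString`, as shown in the connection example later in this post.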

We recommend using the leader preference. The other options can be used, for example, to offload traffic from the leader node. Until you observe performance issues, the default should work correctly.

Minor performance improvements

V2 also brings an optimisation to the JSON event encoding, which improves the encoding speed by up to 4x. This results in, at best, an extra ten events written per second.

Removed deprecated ConnectionTypeOptions

In a previous version, we exported the individual connection types (DNSClusterOptions, GossipClusterOptions, SingleNodeOptions) to allow functions wrapping the client constructor to be correctly typed. At that time, we also deprecated ConnectionTypeOptions. Version 2.0.0 removes this obsolete type; use the specific options instead.

Installation

To use the gRPC client package, you need to install it, either with NPM:

npm install --save @eventstore/db-client@2.x.x

or with Yarn:

yarn add @eventstore/db-client@2.x.x

Connecting to the DB server

You also need to have EventStoreDB running. The easiest way is to run it via Docker:

docker run --name esdb-node -it -p 2113:2113 -p 1113:1113 \
    eventstore/eventstore:latest --insecure --run-projections=All

Note that we're using insecure mode here to speed up the setup. EventStoreDB is secure-by-default. For detailed instructions, check the installation guide and security recommendations.

Having EventStoreDB running, you can connect:

import { EventStoreDBClient } from "@eventstore/db-client";

const client = EventStoreDBClient.connectionString("esdb://localhost:2113?tls=false");

NodeJS Version support

We officially support the Active LTS version. At the time of writing this post, it's v14. The client should also work with at least v12, but we recommend that you always use the Active LTS.

Source code and documentation

The NodeJS gRPC client is open source and available under the Apache 2.0 License in the GitHub repository. You can find detailed documentation and samples in our documentation. We value the open-source community. Feel free to send us pull requests, issues or other forms of contribution.

If you have more questions, we're available and happy to help on our Discuss forum.



Oskar is a Developer Advocate at Event Store. His focus is on helping to create applications closer to business needs. He believes that Event Sourcing, CQRS and Event-Driven Design are enablers to achieve that. Oskar is an open-source contributor, co-maintainer of the Marten library, and always focuses on a practical, hands-on development experience. You can check Oskar's blog at https://event-driven.io/ and follow him on Twitter at @oskar_at_net.



