Complete a million+ requests a minute with gRPCs and Node.js
The goal of this article is that you, the reader, get a chance to build a gRPC channel from scratch. We will cover:
- Remote Procedure Calls
- Node.js EventEmitters
- Google’s Protocol Buffers
- How to write your own gRPC channel using these pieces
- Load the protobuf into JavaScript
- Write a chattyMicroservice client (to invoke RPCs)
- Write a server (to execute RPCs)
- Run the files!
And before we begin, note that the file structure we will be using in this article looks like the following:
- your_workspace_folder/
- proto/
- exampleAPI.proto
- package.js
- clients/
- chattyMicroservice.js
- servers/
- server.js
REMOTE PROCEDURE CALL
Imagine a computer wants to invoke a function, and then execute that function on a different computer! This is a Remote Procedure Call, or RPC - one computer invokes or calls a function, and another computer executes that function.
How in the world can one computer remotely call a function on another computer? Generally, a channel is set up between the invoker and the executor. This channel relays a message alerting the executor that a function should be run; the executor runs the function and returns the result to the invoker as a message.
To review: for an RPC to complete, one computer invokes the RPC, the remote computer executes it, and a message carrying the result returns to the invoker.
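To make that loop concrete, here is a toy, in-process sketch in plain Node.js - no networking involved, and every name in it is made up for illustration. The "channel" is just a pair of EventEmitters (more on those in the next section):
// rpc-toy.js - a toy model of the invoke -> execute -> return loop
const EventEmitter = require('events');
const toExecutor = new EventEmitter(); // carries invocation messages
const toInvoker = new EventEmitter(); // carries result messages
// the "executor" listens for invocations, runs the function, and returns the result
toExecutor.on('invoke', ({ a, b }) => {
  toInvoker.emit('result', a + b);
});
// the "invoker" listens for the result of its remote call
toInvoker.on('result', (result) => console.log('result:', result));
toExecutor.emit('invoke', { a: 1, b: 2 }); // logs: result: 3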
NODE.JS EVENTEMITTER
Node.js features a non-blocking, event-driven model. A key aspect of this model is the frequent use of EventEmitters. Simply put, an EventEmitter is an object that binds together the ability to write, or emit, events as well as the ability to run functionality upon receiving a subscribed-to event. Essentially, each EventEmitter can both listen and speak, or rather -- read and write.
The two methods we care most about are `.on` and `.write`. The `.on` method registers an event listener: an event named with a `string`, mapped to a queue of functions - callbacks - to run upon hearing that named event. The `.write` method (provided by Node's streams, which are themselves EventEmitters) emits a `'data'` event, which an EventEmitter listening for `'data'` will hear, running its callbacks.
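A minimal sketch of this listen/speak pair, using Node's built-in PassThrough stream as a stand-in (our choice for illustration, not part of gRPC):
// on-and-write.js - a minimal demo of the .on / .write pair
const { PassThrough } = require('stream');
// objectMode lets us write plain Objects instead of Buffers
const emitter = new PassThrough({ objectMode: true });
// .on subscribes a callback to the named 'data' event
emitter.on('data', (message) => console.log('heard:', message));
// .write emits a 'data' event that the listener above will hear
emitter.write({ requests: 1, responses: 0 }); // logs: heard: { requests: 1, responses: 0 }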
PROTOCOL BUFFERS
In Google’s own words, "Protocol buffers are a flexible, efficient, automated mechanism for serializing structured data – think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use special generated source code to easily read and write your structured data to and from a variety of data streams."
A Protocol Buffer, or protobuf, is a way to define all the data that your entire application or distributed system will be sending and receiving. A protobuf acts as a schema for all of your services. In our proto3 protobuf below, we define the name of a package, the services within that package, the RPCs in each service, and the messages used by each RPC.
// proto/exampleAPI.proto
syntax = "proto3";
// define the name of the package
package exampleAPI;
// define the name of the service(s)
service ChattyMicroservice {
// define the rpc method and what it returns:
// the client streams Benchmark messages in (invocation),
// and the server streams Benchmark messages back (execution)
rpc BidiMath (stream Benchmark) returns (stream Benchmark);
}
// define the name of the message(s)
message Benchmark {
// define the type and the index of the field
// note that protobuf message fields are not 0-indexed, but start at 1!
double requests = 1;
double responses = 2;
}
/*
In our example, the RPC method BidiMath is fully bidirectional.
The Benchmark message will be an Object with the properties `requests` and `responses`.
The values of `requests` and `responses` will be `double`s, a numeric type that can hold potentially very large values.
*/
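For orientation, once this protobuf is loaded into JavaScript (step 1 of the tutorial below), a Benchmark message travels as a plain Object; a hypothetical example:
// a decoded Benchmark message is just a plain JavaScript Object:
// each proto field becomes a property, and the `double`s become Numbers
const benchmark = { requests: 1, responses: 0 };
console.log(benchmark.requests); // 1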
TUTORIAL: HOW TO WRITE A GRPC CHANNEL IN NODE.JS
1) Load the protobuf into JavaScript
Now that we have our protobuf written, we have to do the work of loading it into our desired target language -- in this case, JavaScript. The .proto file must be compiled and loaded into JavaScript before we can interact with it meaningfully.
But before we do that, we need to install our two dependencies. Open up a new workspace in your favorite IDE with Node.js installed, and let’s run the terminal commands:
npm init
npm install grpc @grpc/proto-loader
Now that we have the two npm libraries installed and saved in our package.json, let's write the JavaScript to synchronously load our protobuf, load that package definition into a descriptor Object, and extract from that descriptor Object the one package we wrote, named exampleAPI!
// proto/package.js
const { loadSync } = require('@grpc/proto-loader');
const { loadPackageDefinition } = require( 'grpc' );
const PROTO_PATH = __dirname + '/exampleAPI.proto';
const CONFIG_OBJECT = {
  longs: Number,
  /* decodes any 64-bit integer (long) fields as Numbers rather than the
  default Strings; our Benchmark's `double` fields already decode to Numbers */
};
// synchronously compiles and loads the .proto file into a definition
const definition = loadSync(PROTO_PATH, CONFIG_OBJECT);
// generates a descriptor Object from the loaded API definition
const descriptor = loadPackageDefinition(definition);
// the descriptor Object contains a lot of data; all we need is the package
const package = descriptor.exampleAPI;
// export the package we named in the .proto file
module.exports = package;
And now that we exported that package, the rest of our local codebase can require it in, or import it, as necessary.
2) Create a chattyMicroservice client (to invoke RPCs)
Now we can begin implementing our chatty microservice, which will ping-pong our eventual server with RPCs. The secret to using RPCs in Node.js - as you may have guessed - is EventEmitters!
In our chattyMicroservice.js file below, we first import two dependencies: the credentials Object from the 'grpc' module, and the ChattyMicroservice service from the package we just exported in 'package.js'. Next, we construct a gRPC channel Stub by invoking a new ChattyMicroservice, passing this channel Stub the server address to bind and a credentials security level. Finally, we define our RPC EventEmitter and write out the logic for it to 'ping pong': we want this bidiClientEventEmitter to volley a Benchmark message back and forth with the Server, incrementing the requests and responses on each return.
// /clients/chattyMicroservice.js
const { credentials } = require( 'grpc' );
const { ChattyMicroservice } = require( '../proto/package.js' )
// the Stub is constructed from the package.ServiceName()
// the Stub has on it every RPC method
// the Stub is one half of a gRPC channel
const Stub = new ChattyMicroservice(
// binds it to the Server address
'localhost:3000',
// defines the security level
credentials.createInsecure(),
);
// RPC invocations
/* the Stub has every RPC method, each of which, when invoked, returns an EventEmitter
with the ability to write messages to the server at the bound address - in this case,
'localhost:3000' - and listen for returned messages from that server. Also in this case,
it is a bidirectional EventEmitter, able to both listen and write continuously. */
const bidiClientEventEmitter = Stub.BidiMath();
// Let’s initialize some mutable variables
let start;
let current;
let perResponse;
let perSecond;
// Client must write the first message to the server
bidiClientEventEmitter.write({requests: 1, responses: 0});
// adds a listener for metadata - metadata is sent only once at the beginning of a channel
bidiClientEventEmitter.on( 'metadata', metadata => {
// highly accurate Node.process nanosecond timer converted to an integer with Number()
start = Number(process.hrtime.bigint());
// returns the special metadata object as an Object
console.log(metadata.getMap());
})
// adds a listener for errors
bidiClientEventEmitter.on( 'error', (err) => console.error(err))
/* adds listener for message data, the benchmark message received is passed to the callback,
and the callback is run on every message received */
bidiClientEventEmitter.on( 'data', benchmark => {
// writes a message to Server
bidiClientEventEmitter.write(
// properties match the message fields for benchmark
{
requests: benchmark.requests + 1,
responses: benchmark.responses
}
)
// console logs every 100,000 invocations
if (benchmark.responses % 100000 === 0) {
// highly accurate Node.process nanosecond timer converted to an integer with Number()
current = Number(process.hrtime.bigint());
// nanoseconds to milliseconds averaging total responses
perResponse = ((current - start) / 1000000) / benchmark.responses;
// inverting milliseconds per response to responses per second
perSecond = 1 / (perResponse / 1000);
// adds new-lines with \n
console.log(
'\nRPC Invocations:',
'\nserver address:', bidiClientEventEmitter.getPeer(),
'\ntotal number of responses:', benchmark.responses,
'\navg millisecond speed per response:', perResponse,
'\nresponses per second:', perSecond,
)
}
});
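The volley above runs indefinitely. If you ever want the client to hang up cleanly, the object returned by Stub.BidiMath() is a standard Node duplex stream, so it also carries `.end()` to half-close the call. A hedged sketch you could append to chattyMicroservice.js (the 10-second cutoff is made up):
// optional: half-close the call after 10 seconds - .end() is the standard
// Node writable-stream half-close; the server-side EventEmitter will then
// receive an 'end' event and stop hearing new requests
setTimeout(() => {
  bidiClientEventEmitter.end();
  console.log('client half-closed the BidiMath call');
}, 10000);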
3) Create a server (to execute RPCs)
And now that we have created the client, we need a server to listen at localhost:3000. We can break our server-side code into two main pieces. The first is our BidiMathExecution function, which will run as soon as the client invokes the RPC for the first time. The second is our Server Object imported from 'grpc': we will add to it the ChattyMicroservice service from our package, bind the server to listen on our designated socket, and start the server!
// /servers/server.js
const { Server, ServerCredentials } = require( 'grpc' );
const { ChattyMicroservice } = require( '../proto/package.js' );
// RPC executions, is passed an RPC-specific EventEmitter automatically
function BidiMathExecution(bidiServerEventEmitter) {
// highly accurate Node.process nanosecond timer converted to an integer with Number()
let start = Number(process.hrtime.bigint());
let current;
let perRequest;
let perSecond;
/* adds listener for message data, the benchmark message received is passed to the callback,
and the callback is run on every message received from Client */
bidiServerEventEmitter.on('data', benchmark => {
// writes a message back to Client
bidiServerEventEmitter.write(
// properties match the message fields for benchmark
{
requests: benchmark.requests,
responses: benchmark.responses + 1
}
);
// console logs every 100,000 executions
if (benchmark.requests % 100000 === 0) {
// highly accurate Node.process nanosecond timer converting to an integer with Number()
current = Number(process.hrtime.bigint());
// nanoseconds to milliseconds averaging total requests
perRequest = ((current - start) / 1000000) / benchmark.requests;
// inverting milliseconds per request to requests per second
perSecond = 1 / (perRequest / 1000);
// adds new-lines with \n
console.log(
'\nRPC Executions:',
'\nclient address:', bidiServerEventEmitter.getPeer(),
'\nnumber of requests:', benchmark.requests,
'\navg millisecond speed per request:', perRequest,
'\nrequests per second:', perSecond,
);
}
})
}
// creates a new instance of the Server Object
const server = new Server();
// adds a service as defined in the .proto, takes two Objects as arguments
server.addService(
// the service Object is the package.ServiceName.service
ChattyMicroservice.service,
/* the rpc method and its attached function for execution - effectively this Object
is how we handle server routing; each property is like an endpoint */
{ BidiMath: BidiMathExecution }
);
// binds the server to a socket with a security level
server.bind('0.0.0.0:3000', ServerCredentials.createInsecure());
// starts the server listening on the designated socket(s)
server.start();
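One optional refinement: the 'metadata' listener we wrote in the client fires with the channel's initial headers, and the server can append its own entries to those headers by calling sendMetadata before its first write. Here is a hedged sketch of a helper you could call at the top of BidiMathExecution (the 'greeting' entry is a made-up example):
// sketch: attach a custom header to the channel's initial metadata
const { Metadata } = require( 'grpc' );
function sendGreeting(bidiServerEventEmitter) {
  const metadata = new Metadata();
  metadata.add('greeting', 'hello from ChattyMicroservice'); // hypothetical entry
  // must be called before the first .write on this EventEmitter
  bidiServerEventEmitter.sendMetadata(metadata);
}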
4) Run the files!
And now the moment of truth! Let's write some npm scripts to make it easier to run our server and client. In your package.json, scroll down to where your scripts are defined, and add the following:
"scripts": {
"start": "node servers/server.js",
"client": "node clients/chattyMicroservice.js"
},
Now, open up a terminal inside your workspace folder and enter the npm command npm start; your gRPC server will start listening on localhost:3000. Next, open up another terminal inside your workspace folder and enter the npm command npm run client; your chattyMicroservice's BidiMath EventEmitter will immediately start ping-ponging back and forth with the server. After about 10-20 seconds, you should see console.logs pop up in the client terminal resembling this message:
RPC Invocations:
server address: localhost:3000
total number of responses: 100000
avg millisecond speed per response: 0.05811289428
responses per second: 17207.884969242663
And on the server terminal, you should see console.logs resembling this message:
RPC Executions:
client address: ipv6:[::1]:53055
number of requests: 100000
avg millisecond speed per request: 0.05670221896
requests per second: 17635.99411701048
Congratulations! You've now written a fully functional gRPC channel that benchmarks itself on both ends! Bask in the glory of sending thousands of messages per second: depending on your computer, you can send 6,000-18,000 messages per second with RPCs. At the roughly 17,600 requests per second logged above, that works out to over a million requests per minute (17,600 x 60 ≈ 1,056,000), living up to this article's title. Just imagine how much faster it might be on an even more powerful computer - a cloud server within a distributed system, let's say - and you quickly start to see the incredible power of gRPCs and Node.js!
As you may have noticed, this demo app you wrote has folders named clients and servers. This is your invitation to add more clients and more servers! You can run multiple clients and servers on any number of sockets, serve multiple concurrent services and RPCs over a shared socket, and use any number of message types. The HTTP/2 multiplex limit is the limit! For more information on gRPC in Node.js, you can refer to Google's own Node.js tutorials and the full API reference, written by Google's amazing engineers, on the official gRPC site (grpc.io).