Jan 10, 2022

Communicating between services with gRPC

Setting up the communication layer between services can be a cumbersome task, even if they’re written in the same language, built on top of the same framework, and running on the same infrastructure. Every minute spent debugging weird connection issues, malformed payloads, and other parts that occasionally break could have been spent on building the actual product.

There are numerous technologies that attempt to solve one or more of the problems outlined above, and one of the most widely used is gRPC. Initially developed by Google, gRPC is a remote procedure call (RPC) framework that runs in almost any environment and is used heavily in the cloud-native ecosystem.

Intro to gRPC

gRPC is built in a modular way that allows you to swap out individual parts when you need to, but over the course of this guide, I’ll focus on the defaults. By this, I mean that gRPC transmits data over HTTP/2 and serializes it to binary data using Protobuf. Okay, that’s admittedly a lot, so let’s dig a bit deeper.

The first part is pretty simple: out of the possible choices of network layers, the default protocol gRPC uses for managing connections is HTTP/2. There are other options, such as gRPC-Web for browser clients, which cannot use the native gRPC protocol over the HTTP/2 layer.

That covers establishing and managing connections, so let’s get to the part where we send data over the wire. Once again, the default choice for serializing and deserializing data is Protocol Buffers (Protobuf for short). With Protobuf, you define your data model once, then generate code for the language and framework you want to use.

Coupled with gRPC, this means that you start by defining your schema, then use a compiler plugin to generate the correct code for your language with gRPC-related types and functionality.

Let’s start by installing everything we’ll need.

Getting started

In this guide, we’ll be connecting a Go client to a Node.js server. For this, we’ll need both languages/runtimes installed on our system. Once this is set up, we’ll need to install the Protobuf compiler (protoc).
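How you install protoc depends on your platform. As a sketch, assuming macOS with Homebrew (the Protobuf documentation lists options for other systems):

$ brew install protobuf
$ protoc --version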

For Go, we’ll need to install the Protobuf compiler plugins via go install

$ go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.26
$ go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.1

We’ll set up a Go modules project with

go mod init grpc_example

For Node.js, we’ll use the protoc-gen-ts package.

npm i protoc-gen-ts @grpc/grpc-js google-protobuf @types/google-protobuf

We’ll create a protobuf directory close to our source code (in our case the relative directory path is src/protobuf), and add the api.proto file to define our API schema. To generate type-safe code, we’ll have to run the following command

protoc \
  --go_out=. --go_opt=paths=source_relative \
  --go-grpc_out=. --go-grpc_opt=paths=source_relative \
  --ts_out=. --ts_opt=unary_rpc_promise=true --plugin=protoc-gen-ts=./node_modules/.bin/protoc-gen-ts \
  src/protobuf/api.proto

Let’s continue by defining our service and adding some procedures!

Defining messages

There are two major versions of Protobuf in use today: proto2 and the newer proto3. The latter has some important language changes, so it’s not completely backward-compatible.

To understand a bit more about how we can structure our data model, I’ll start off with the full definition, which we’ll then explore step by step.

syntax = "proto3";

// Assuming we used grpc_example as the root module; this path matches
// where the generated code will live (src/protobuf)
option go_package = "grpc_example/src/protobuf";

service APIService {
  rpc CreateOrder(CreateOrderRequest) returns (CreateOrderResponse);
}

message CreateOrderRequest {
  string itemId = 1;
  int32 quantity = 2;
}

message CreateOrderResponse {
  Order createdOrder = 1;
}

message Order {
  string id = 1;
  string itemId = 2;
  int32 quantity = 3;
}

First, we define our RPC service. The Protobuf compiler will detect the service and use a provided plugin to generate the necessary code for it, in our case a gRPC client and server implementation.

Then, we define a CreateOrder procedure in our service, which receives a message named CreateOrderRequest and returns a message called CreateOrderResponse.

Messages can have multiple fields, which require a data type, name, and field number. This numeric value must always stay constant, but the name of the field can be changed later on. Protobuf will use only the number for serialization, so a change in the number also represents a change in the data you’re dealing with.
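To make this concrete, here’s a hypothetical later revision of the Order message from above: renaming a field is safe because the number stays the same, while a removed field’s number should be reserved instead of being reused.

message Order {
  string id = 1;
  string item_id = 2; // renamed from itemId: safe, the field number is unchanged
  reserved 3;         // quantity was removed: reserve its number instead of reusing it
}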

There are different scalar value types you can use, including doubles, ints, strings, booleans, and bytes. You can also use other (nested) messages as data types.

Another important concept is that of default values. If you’ve used Go before, you’ll know zero values, which are quite similar. Essentially, whenever values are missing, Protobuf will choose a suitable default: for strings, an empty string; for booleans, false; and for numeric types, 0.
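We can see this in action with the Go code we’ll generate in a moment. Here’s a minimal sketch, assuming the generated grpc_example/src/protobuf package from this guide:

package main

import (
	"fmt"

	"grpc_example/src/protobuf"
)

func main() {
	// An empty message: every field reports its Protobuf default value.
	req := &protobuf.CreateOrderRequest{}
	fmt.Printf("%q\n", req.GetItemId()) // "" — the default for strings
	fmt.Println(req.GetQuantity())      // 0 — the default for numeric types
}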

I haven’t talked about packages, importing other definitions, and other features yet, as those would exceed the scope of this introductory post, but I might include them in future guides!

With our definitions set up, we can implement our server! After running

protoc \
  --go_out=. --go_opt=paths=source_relative \
  --go-grpc_out=. --go-grpc_opt=paths=source_relative \
  --ts_out=. --ts_opt=unary_rpc_promise=true --plugin=protoc-gen-ts=./node_modules/.bin/protoc-gen-ts \
  src/protobuf/api.proto

you should see a couple of new files added to the protobuf directory, including api.ts, api.pb.go, and api_grpc.pb.go.

Let’s get to implementing our CreateOrder procedure!

Listening for procedures on the server

import { ServerUnaryCall, sendUnaryData } from '@grpc/grpc-js';
import {
  CreateOrderRequest,
  CreateOrderResponse,
  UnimplementedAPIServiceService,
  Order
} from './protobuf/api';
import * as grpc from '@grpc/grpc-js';

After importing everything we’ll need, let’s implement our procedure by extending the UnimplementedAPIServiceService abstract class generated for us. For this, we use the generated Protobuf classes, which take care of serializing our data.

class Server extends UnimplementedAPIServiceService {
  CreateOrder(
    call: ServerUnaryCall<CreateOrderRequest, CreateOrderResponse>,
    callback: sendUnaryData<CreateOrderResponse>
  ): void {
    // Create an order from the request, using a hardcoded ID for this demo.
    const order = new Order({
      id: '1',
      itemId: call.request.itemId,
      quantity: call.request.quantity
    });
    callback(null, new CreateOrderResponse({ createdOrder: order }));
  }
}

Now that we have our procedure implemented, we can start up the server itself.

const server = new grpc.Server();

server.addService(UnimplementedAPIServiceService.definition, new Server());

server.bindAsync(
  '0.0.0.0:4884',
  grpc.ServerCredentials.createInsecure(),
  (err, port) => {
    if (err) {
      console.error(err);
      process.exit(1);
    }
    server.start();
    console.log(`gRPC server listening on port ${port}`);
  }
);
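We can then run the server; here’s one way to do it, assuming the code above lives in src/server.ts and ts-node is available:

$ npx ts-node src/server.ts
gRPC server listening on port 4884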

Now that our server is started, let’s send a message from the Go client!

Calling a procedure from the client

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"grpc_example/src/protobuf"
)

func main() {
	conn, err := grpc.Dial("localhost:4884", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := protobuf.NewAPIServiceClient(conn)

	ctx := context.Background()

	resp, err := client.CreateOrder(ctx, &protobuf.CreateOrderRequest{
		ItemId:   "gopher-plushie",
		Quantity: 4,
	})
	if err != nil {
		panic(err)
	}

	fmt.Println(resp.CreatedOrder.Id)
}
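With the server from the previous section still running, we can execute the client (assuming the code above lives in a main.go at the module root):

$ go run .
1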

As shown above, the client logs 1, the ID we set on the server. That’s pretty much it! For production use cases, you’d use TLS to encrypt your data in transit and make use of advanced features, some of which are outlined below!
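On the Go side, switching to TLS is mostly a matter of swapping the credentials. A minimal sketch, where the certificate path is a placeholder for your real server certificate:

// import "google.golang.org/grpc/credentials"
// "server-cert.pem" is a placeholder path for the server's certificate.
creds, err := credentials.NewClientTLSFromFile("server-cert.pem", "")
if err != nil {
	panic(err)
}

conn, err := grpc.Dial("localhost:4884", grpc.WithTransportCredentials(creds))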

Advanced features

In addition to core features like the unary procedure calls we implemented, there are more ways to communicate between services, including server streaming, client streaming, and bidirectional streaming RPCs. To have more control over running procedures, measuring performance, and debugging, there are additional features like deadlines/timeouts, procedure cancellation, and metadata.
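Deadlines, for example, build directly on Go’s context package. Here’s a minimal sketch that reuses the client from above (and additionally imports time), cancelling the call if no response arrives within two seconds:

// The deadline propagates to the server, which can stop working early.
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()

// If the deadline is missed, err carries the DeadlineExceeded status code.
resp, err := client.CreateOrder(ctx, &protobuf.CreateOrderRequest{
	ItemId:   "gopher-plushie",
	Quantity: 4,
})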

If you’re interested, we’ll cover some of those topics in future guides! Just let me know.