gRPC

Quality: Platinum Tier.

Introduction

🐙 userver provides a gRPC driver as the userver-grpc library. It uses the ugrpc::client and ugrpc::server namespaces.

The driver wraps grpcpp in the userver asynchronous interface.

Capabilities

  • Creating asynchronous gRPC clients and services;
  • Forwarding gRPC Core logs to userver logs;
  • Caching and reusing connections;
  • Timeouts;
  • Collection of metrics on driver usage;
  • Cancellation support;
  • Automatic authentication using middlewares;
  • Deadline propagation.

Installation

Generate a library from your .proto schemas and link to it in your CMakeLists.txt using userver_add_grpc_library; a complete CMakeLists.txt can be seen in the Generic API example below.

userver_add_grpc_library will link userver-grpc transitively and will generate the usual .pb.h + .pb.cc files. For service definitions, it will additionally generate asynchronous interfaces foo_client.usrv.pb.hpp and foo_service.usrv.pb.hpp.

To create gRPC clients in your microservice, register the provided ugrpc::client::ClientFactoryComponent and add the corresponding component section to the static config.

To create gRPC services in your microservice, register the provided ugrpc::server::ServerComponent and add the corresponding component section to the static config.
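
For illustration, a minimal sketch of registering both components in the component list (the include paths are assumed to follow the usual userver layout; check them against your userver version, and append your own service components the same way):

#include <userver/components/minimal_server_component_list.hpp>
#include <userver/ugrpc/client/client_factory_component.hpp>
#include <userver/ugrpc/server/server_component.hpp>
#include <userver/utils/daemon_run.hpp>

int main(int argc, char* argv[]) {
    const auto component_list =
        components::MinimalServerComponentList()
            // Enables creating gRPC clients via ugrpc::client::ClientFactory.
            .Append<ugrpc::client::ClientFactoryComponent>()
            // Hosts gRPC services; append your service components as well.
            .Append<ugrpc::server::ServerComponent>();
    return utils::DaemonMain(argc, argv, component_list);
}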

gRPC clients

Client creation

In a component constructor, find ugrpc::client::ClientFactoryComponent and store a reference to its ugrpc::client::ClientFactory. Using it, you can create gRPC clients of code-generated YourServiceClient types.

Client creation is an expensive operation! Either create clients once at server boot time or cache them.
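
A sketch of this pattern, assuming a hypothetical code-generated samples::api::GreeterServiceClient and the ClientFactory::MakeClient<Client>(client_name, endpoint) signature of recent userver versions; the component, header, and config key names are illustrative:

#include <string_view>

#include <userver/components/component.hpp>
#include <userver/components/loggable_component_base.hpp>
#include <userver/ugrpc/client/client_factory_component.hpp>

// Hypothetical code-generated asynchronous client header.
#include <samples/greeter_client.usrv.pb.hpp>

class GreeterClientComponent final : public components::LoggableComponentBase {
public:
    static constexpr std::string_view kName = "greeter-client";

    GreeterClientComponent(const components::ComponentConfig& config,
                           const components::ComponentContext& context)
        : components::LoggableComponentBase(config, context),
          client_factory_(
              context.FindComponent<ugrpc::client::ClientFactoryComponent>()
                  .GetFactory()),
          // The client is created once, at component construction time.
          client_(client_factory_.MakeClient<samples::api::GreeterServiceClient>(
              "greeter", config["endpoint"].As<std::string>())) {}

private:
    ugrpc::client::ClientFactory& client_factory_;
    samples::api::GreeterServiceClient client_;
};

Storing the factory reference is only needed if the component creates more clients later; here the endpoint is read from the component's own static config section.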

Client usage

Typical steps include:

  • Filling a std::unique_ptr<grpc::ClientContext> with request settings
    • gRPC documentation recommends using set_deadline for each RPC
    • Fill the authentication metadata as necessary
  • Stream creation by calling a client method
  • Operations on the stream
  • Depending on the RPC kind, it is necessary to call Finish or Read until it returns false (otherwise the connection will close abruptly)

Read the documentation on gRPC streams.

On errors, exceptions from userver/ugrpc/client/exceptions.hpp are thrown. It is recommended to catch them outside the entire stream interaction. You can catch exceptions for specific gRPC error codes or all at once.
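
To make the steps above concrete, here is a rough sketch of a unary RPC using the same hypothetical samples::api::GreeterServiceClient with a SayHello method:

#include <chrono>
#include <memory>
#include <string>
#include <utility>

#include <grpcpp/client_context.h>

#include <userver/logging/log.hpp>
#include <userver/ugrpc/client/exceptions.hpp>

// Hypothetical code-generated asynchronous client header.
#include <samples/greeter_client.usrv.pb.hpp>

samples::api::GreetingResponse SayHello(
    samples::api::GreeterServiceClient& client, std::string name) {
    samples::api::GreetingRequest request;
    request.set_name(std::move(name));

    // Step 1: fill grpc::ClientContext with per-RPC settings, e.g. a deadline.
    auto context = std::make_unique<grpc::ClientContext>();
    context->set_deadline(std::chrono::system_clock::now() +
                          std::chrono::seconds{20});

    try {
        // Step 2: start the RPC. For a unary call, a single Finish sends the
        // request and waits for the single response (steps 3-4).
        auto stream = client.SayHello(request, std::move(context));
        return stream.Finish();
    } catch (const ugrpc::client::RpcError& ex) {
        // Catch driver exceptions outside the whole stream interaction.
        LOG_ERROR() << "SayHello RPC failed: " << ex;
        throw;
    }
}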

TLS / SSL

TLS may be enabled via the static config:

# yaml
components_manager:
    components:
        grpc-client-factory:
            auth-type: ssl

Available values are:

  • insecure (default)
  • ssl

SSL has to be disabled in tests, because it requires the server to have a public domain name, which it does not have in tests. In testsuite, SSL in gRPC clients is disabled automatically.

gRPC services

Service creation

A service implementation is a class derived from a code-generated YourServiceBase interface class. Each service method from the schema corresponds to a method of the interface class. If you don't override some of the methods, the UNIMPLEMENTED error code will be reported for them.

To register your service, append its component to the component list and add a static config section for it.
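
A rough sketch, assuming a GreeterService proto with a unary SayHello method; the generated names (api::GreeterServiceBase, SayHelloCall, api::GreetingRequest, api::GreetingResponse) stand in for whatever your schema produces:

#include <string_view>

#include <userver/components/component.hpp>

// Hypothetical code-generated asynchronous service header.
#include <samples/greeter_service.usrv.pb.hpp>

namespace samples {

class GreeterServiceComponent final : public api::GreeterServiceBase::Component {
public:
    static constexpr std::string_view kName = "greeter-service";

    GreeterServiceComponent(const components::ComponentConfig& config,
                            const components::ComponentContext& context)
        : api::GreeterServiceBase::Component(config, context) {}

    // Unary RPC: a single request in, a single response out.
    void SayHello(SayHelloCall& call, api::GreetingRequest&& request) override {
        api::GreetingResponse response;
        response.set_greeting("Hello, " + request.name() + "!");
        // Every RPC must be finished with a response or an error status.
        call.Finish(response);
    }
};

}  // namespace samples

// In main(): component_list.Append<samples::GreeterServiceComponent>();
// The component also needs its own section in the static config.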

Service method handling

Each method receives:

  • A stream controller object, used to respond to the RPC
    • Also provides access to grpc::ServerContext from the grpcpp library
  • A request (for single-request RPCs only)

When using a server stream, always call Finish or FinishWithError. Otherwise the client will receive UNKNOWN error, which signifies an internal server error.
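
For a response-streaming method of the same hypothetical service, the stream controller exposes Write for individual messages, and the handler must still end with Finish or FinishWithError. A fragment that would be another override in the service class sketched above:

// Server-streaming RPC: one request in, several responses out.
void SayHelloStream(SayHelloStreamCall& call,
                    api::GreetingRequest&& request) override {
    api::GreetingResponse response;
    for (int i = 0; i < 3; ++i) {
        response.set_greeting("Hello #" + std::to_string(i) + ", " + request.name());
        call.Write(response);  // sends one message of the stream
    }
    // Without this Finish (or FinishWithError on failure) the client
    // would receive an UNKNOWN error.
    call.Finish();
}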

Read the documentation on gRPC streams.

On connection errors, exceptions from userver/ugrpc/server/exceptions.hpp are thrown. It is recommended not to catch them, letting the RPC be interrupted. You can catch exceptions for specific gRPC error codes or all at once.

Custom server credentials

By default, the gRPC server uses grpc::InsecureServerCredentials. To pass custom credentials (a sketch follows the steps below):

  1. Do not pass grpc-server.port in the static config
  2. Create a custom component, e.g. GrpcServerConfigurator
  3. In its constructor, obtain the server via context.FindComponent<ugrpc::server::ServerComponent>().GetServer()
  4. Call the WithServerBuilder method on the returned server
  5. Inside the callback, call grpc::ServerBuilder::AddListeningPort, passing it your custom credentials
    • Look into grpc++ documentation and into <grpcpp/security/server_credentials.h> for available credentials
    • SSL credentials are grpc::SslServerCredentials
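
A sketch of such a configurator component, assuming that WithServerBuilder invokes its callback with a grpc::ServerBuilder&; the component name and the listen-address config key are made up for illustration:

#include <string>
#include <string_view>

#include <grpcpp/security/server_credentials.h>
#include <grpcpp/server_builder.h>

#include <userver/components/component.hpp>
#include <userver/components/loggable_component_base.hpp>
#include <userver/ugrpc/server/server_component.hpp>

// Hypothetical component that attaches an SSL listening port to the gRPC
// server instead of configuring grpc-server.port in the static config.
class GrpcServerConfigurator final : public components::LoggableComponentBase {
public:
    static constexpr std::string_view kName = "grpc-server-configurator";

    GrpcServerConfigurator(const components::ComponentConfig& config,
                           const components::ComponentContext& context)
        : components::LoggableComponentBase(config, context) {
        auto& server =
            context.FindComponent<ugrpc::server::ServerComponent>().GetServer();
        // e.g. "0.0.0.0:8091"; the config key is illustrative.
        const auto listen_address = config["listen-address"].As<std::string>();
        server.WithServerBuilder([&](grpc::ServerBuilder& builder) {
            grpc::SslServerCredentialsOptions ssl_options;
            // Fill ssl_options.pem_root_certs / pem_key_cert_pairs as needed.
            builder.AddListeningPort(listen_address,
                                     grpc::SslServerCredentials(ssl_options));
        });
    }
};

With this approach, leave grpc-server.port unset so that the only listening port is the one added by the configurator.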

Middlewares

The gRPC server can be extended by middlewares. A middleware is called on each incoming (for services) or outgoing (for clients) RPC request. Middlewares handle the call in the configured order. A middleware may decide to reject the call or call the next middleware in the stack. Middlewares may implement almost any enhancement to the gRPC server including authorization and authentication, rate limiting, logging, tracing, audit, etc.

Middlewares to use are listed in the middlewares section of the static config of a ugrpc::server::ServiceComponentBase descendant component. The default middleware list for services can be specified in the grpc-server.service-defaults.middlewares config section.

Example configuration:

components_manager:
    components:
        some-service-client:
            middlewares:
                - grpc-client-logging
                - grpc-client-deadline-propagation
                - grpc-client-baggage
        grpc-server:
            service-defaults:
                middlewares:
                    - grpc-server-logging
                    - grpc-server-deadline-propagation
                    - grpc-server-congestion-control
                    - grpc-server-baggage
        some-service:
            middlewares:
                # Completely overwrite the default list
                - grpc-server-logging

Use ugrpc::server::MiddlewareBase and ugrpc::client::MiddlewareBase to implement new middlewares.

Generic API

The gRPC generic API allows calling and accepting RPCs with dynamic service and method names. The other side sees this as a normal RPC and does not need to use the generic API.

Intended mainly for use in proxies. Metadata can be used to proxy the request without parsing it.

See ugrpc::client::GenericClient and ugrpc::server::GenericServiceBase for details.

Full example showing the usage of both:

  • #pragma once

    // For testing purposes only, in your services write out userver:: namespace
    // instead.

    namespace samples {

    class ProxyService final : public ugrpc::server::GenericServiceBase::Component {
    public:
        static constexpr std::string_view kName = "proxy-service";

        ProxyService(const components::ComponentConfig& config,
                     const components::ComponentContext& context);

        void Handle(Call& call) override;

    private:
        ugrpc::client::GenericClient& client_;
    };

    }  // namespace samples
  • #include <proxy_service.hpp>

    #include <grpcpp/client_context.h>
    #include <grpcpp/server_context.h>
    #include <grpcpp/support/byte_buffer.h>

    namespace samples {

    namespace {

    grpc::string ToGrpcString(grpc::string_ref str) {
        return {str.data(), str.size()};
    }

    void ProxyRequestMetadata(const grpc::ServerContext& server_context,
                              grpc::ClientContext& client_context) {
        // Proxy all client (request) metadata,
        // add some custom metadata as well.
        for (const auto& [key, value] : server_context.client_metadata()) {
            client_context.AddMetadata(ToGrpcString(key), ToGrpcString(value));
        }
        client_context.AddMetadata("proxy-name", "grpc-generic-proxy");
    }

    void ProxyTrailingResponseMetadata(const grpc::ClientContext& client_context,
                                       grpc::ServerContext& server_context) {
        // Proxy all server (response) trailing metadata,
        // add some custom metadata as well.
        for (const auto& [key, value] : client_context.GetServerTrailingMetadata()) {
            server_context.AddTrailingMetadata(ToGrpcString(key), ToGrpcString(value));
        }
        server_context.AddTrailingMetadata("proxy-name", "grpc-generic-proxy");
    }

    }  // namespace

    ProxyService::ProxyService(const components::ComponentConfig& config,
                               const components::ComponentContext& context)
        : ugrpc::server::GenericServiceBase::Component(config, context),
          client_(context
                      .FindComponent<ugrpc::client::SimpleClientComponent<
                          ugrpc::client::GenericClient>>("generic-client")
                      .GetClient()) {}

    void ProxyService::Handle(Call& call) {
        // In this example we proxy any unary RPC to client_, adding some metadata.
        grpc::ByteBuffer request_bytes;

        // Read might throw on a broken RPC, just rethrow then.
        if (!call.Read(request_bytes)) {
            // The client has already called WritesDone.
            // We expect exactly 1 request, so that's an error for us.
            call.FinishWithError(grpc::Status{grpc::StatusCode::INVALID_ARGUMENT,
                                              "Expected exactly 1 request, given: 0"});
            return;
        }

        grpc::ByteBuffer ignored_request_bytes;
        // Wait until the client calls WritesDone before proceeding so that we know
        // that no misuse will occur later. For unary RPCs, clients will essentially
        // call WritesDone implicitly.
        if (call.Read(ignored_request_bytes)) {
            call.FinishWithError(
                grpc::Status{grpc::StatusCode::INVALID_ARGUMENT,
                             "Expected exactly 1 request, given: at least 2"});
            return;
        }

        auto client_context = std::make_unique<grpc::ClientContext>();
        ProxyRequestMetadata(call.GetContext(), *client_context);

        // Deadline propagation will work, as we've registered the DP middleware
        // in the config of grpc-server component.
        // Optionally, we can set an additional timeout using GenericOptions::qos.
        auto client_rpc = client_.UnaryCall(call.GetCallName(), request_bytes,
                                            std::move(client_context));

        grpc::ByteBuffer response_bytes;
        try {
            response_bytes = client_rpc.Finish();
        } catch (const ugrpc::client::ErrorWithStatus& ex) {
            // Proxy the error returned from client.
            ProxyTrailingResponseMetadata(client_rpc.GetContext(), call.GetContext());
            call.FinishWithError(ex.GetStatus());
            return;
        } catch (const ugrpc::client::RpcError& ex) {
            // Either the upstream client has cancelled our server RPC, or a network
            // failure has occurred, or the deadline has expired. See:
            // * ugrpc::client::RpcInterruptedError
            // * ugrpc::client::RpcCancelledError
            LOG_WARNING() << "Client RPC has failed: " << ex;
            call.FinishWithError(grpc::Status{grpc::StatusCode::UNAVAILABLE,
                                              "Failed to proxy the request"});
            return;
        }

        ProxyTrailingResponseMetadata(client_rpc.GetContext(), call.GetContext());

        // WriteAndFinish might throw on a broken RPC, just rethrow then.
        call.WriteAndFinish(response_bytes);
    }

    }  // namespace samples
  • // For testing purposes only, in your services write out userver:: namespace
    // instead.
    #include <userver/components/minimal_server_component_list.hpp>
    #include <userver/ugrpc/client/middlewares/log/component.hpp>
    #include <userver/ugrpc/server/middlewares/log/component.hpp>
    #include <userver/utils/daemon_run.hpp>

    #include <proxy_service.hpp>

    int main(int argc, char* argv[]) {
        const auto component_list =
            components::MinimalServerComponentList()
                // Base userver components
                .Append<congestion_control::Component>()
                .Append<components::TestsuiteSupport>()
                // HTTP client and server are (sadly) needed for testsuite support
                .Append<components::HttpClient>()
                .Append<clients::dns::Component>()
                .Append<server::handlers::TestsControl>()
                // gRPC client setup
                .Append<ugrpc::client::ClientFactoryComponent>()
                .Append<ugrpc::client::middlewares::log::Component>()
                .Append<ugrpc::client::middlewares::deadline_propagation::Component>()
                .Append<ugrpc::client::SimpleClientComponent<
                    ugrpc::client::GenericClient>>("generic-client")
                // gRPC server setup
                .Append<ugrpc::server::ServerComponent>()
                .Append<ugrpc::server::middlewares::log::Component>()
                .Append<ugrpc::server::middlewares::deadline_propagation::Component>()
                .Append<ugrpc::server::middlewares::congestion_control::Component>()
                .Append<samples::ProxyService>();
        return utils::DaemonMain(argc, argv, component_list);
    }
  • # yaml
    components_manager:
        components:
            # Base userver components
            logging:
                fs-task-processor: fs-task-processor
                loggers:
                    default:
                        file_path: '@stderr'
                        level: debug
                        overflow_behavior: discard
            testsuite-support:
            congestion-control:
            # HTTP client and server are (sadly) needed for testsuite support
            server:
                load-enabled: $testsuite-enabled
                listener:
                    port: $server-port
                    task_processor: main-task-processor
                listener-monitor:
                    port: $monitor-port
                    task_processor: monitor-task-processor
            http-client:
                load-enabled: $testsuite-enabled
                fs-task-processor: fs-task-processor
            dns-client:
                load-enabled: $testsuite-enabled
                fs-task-processor: fs-task-processor
            tests-control:
                load-enabled: $testsuite-enabled
                path: /tests/{action}
                method: POST
                task_processor: main-task-processor
                testpoint-timeout: 10s
                testpoint-url: mockserver/testpoint
                throttling_enabled: false
            # gRPC client setup (ClientFactoryComponent and SimpleClientComponent)
            grpc-client-factory:
                task-processor: grpc-blocking-task-processor
                middlewares:
                    - grpc-client-logging
                    - grpc-client-deadline-propagation
            grpc-client-logging:
            grpc-client-deadline-propagation:
            generic-client:
                endpoint: $grpc-generic-endpoint
            # gRPC server setup (ServerComponent and ProxyService)
            grpc-server:
                port: $grpc-server-port
                service-defaults:
                    task-processor: main-task-processor
                    middlewares:
                        - grpc-server-logging
                        - grpc-server-deadline-propagation
                        - grpc-server-congestion-control
            grpc-server-logging:
            grpc-server-deadline-propagation:
            grpc-server-congestion-control:
            proxy-service:
        default_task_processor: main-task-processor
        task_processors:
            main-task-processor:
                worker_threads: 4
            monitor-task-processor:
                worker_threads: 1
                thread_name: mon-worker
            fs-task-processor:
                worker_threads: 2
            grpc-blocking-task-processor:
                worker_threads: 2
                thread_name: grpc-worker
  • testsuite-enabled: true
    server-port: 8080
    monitor-port: 8081
    grpc-server-port: 8090
    # "Magical" config_vars value that will cause testsuite to override it with
    # real grpc mockserver endpoint
    grpc-generic-endpoint: $grpc_mockserver
  • cmake_minimum_required(VERSION 3.14)
    project(userver-samples-grpc-generic-proxy CXX)
    find_package(userver COMPONENTS grpc REQUIRED)
    add_executable(${PROJECT_NAME}
    main.cpp
    src/proxy_service.cpp
    )
    target_include_directories(${PROJECT_NAME} PRIVATE src)
    target_link_libraries(${PROJECT_NAME} PRIVATE userver::grpc)
    # Actually unused in the service, only needed for testsuite tests.
    # We could generate just the Python bindings, but this approach is currently
    # not implemented in userver_add_grpc_library.
    userver_add_grpc_library(${PROJECT_NAME}-proto PROTOS samples/greeter.proto)
    add_dependencies(${PROJECT_NAME} ${PROJECT_NAME}-proto)
    userver_testsuite_add_simple()

Based on the grpc-generic-proxy sample from the userver repository.

Metrics

  • Client metrics are put inside grpc.client.by-destination {grpc_destination=FULL_SERVICE_NAME/METHOD_NAME}
  • Server metrics are put inside grpc.server.by-destination {grpc_destination=FULL_SERVICE_NAME/METHOD_NAME}

These are the metrics provided for each gRPC method:

  • timings.1min — time from RPC start to finish (utils::statistics::Percentile)
  • status with label grpc_code=STATUS_CODE_NAME — RPCs that finished with specified status codes, one metric per gRPC status
  • Metrics for RPCs that finished abruptly without a status:
  • abandoned-error — RPCs that we forgot to Finish (always a bug in ugrpc usage). Such RPCs also separately report the status or network error that occurred during the automatic request termination
  • deadline-propagated — RPCs for which a deadline was specified (see also userver deadline propagation)
  • rps — requests per second:
    sum(status) + network-error + cancelled + cancelled-by-deadline-propagation
  • eps — server errors per second:
    sum(status if is_error(status))
    The status codes considered to be server errors are chosen according to OpenTelemetry recommendations:
    • UNKNOWN
    • DATA_LOSS
    • UNIMPLEMENTED
    • INTERNAL
    • UNAVAILABLE
    • Note: network-error is not accounted in eps, because either the client is responsible for the server dropping the request (TryCancel, deadline), or it is truly a network error, in which case it is typically more helpful for troubleshooting to point at infrastructure issues rather than at the service process itself
  • active — The number of currently active RPCs (created and not finished)

Unit tests and benchmarks