This page describes typical setups of a service that uses the userver framework. The purpose of the page is not to provide an ideal solution with perfectly matching 3rd-party tools, but rather to show different approaches and give a starting point for designing and configuring an environment.
A simple setup that is usually used during development. The request comes directly into the service, is processed, and logs are written. During processing, the service may make direct requests to other services.
Pros:
Cons:
This configuration is quite useful for testing. However, there's no need to configure it manually for tests, because testsuite does that automatically (starts databases, fills them with data, tunes logging, mocks other services). See Functional service tests (testsuite) for more info.
For non-testing purposes, the configuration could also be quite useful for services that should have a single instance and put a small load on the database. Internal services that require no reliability guarantees and chat bots are examples of such services. For a starting point on configuration see Production configs and best practices.
Note that for long-running deployments the logs of the service should be cleaned up at some point. Configure logrotate-like software to move/remove the old logs and notify the 🐙 Service via the SIGUSR1 signal:
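A minimal logrotate sketch, assuming the service writes to /var/log/company/yourservice/service.log and the process is named yourservice (the path and the name are hypothetical); the postrotate step sends SIGUSR1 so that the service reopens the log file after rotation:

```
/var/log/company/yourservice/service.log {
    daily
    rotate 7
    missingok
    notifempty
    compress
    delaycompress
    postrotate
        # tell the service to reopen the rotated log file
        /usr/bin/killall -USR1 yourservice
    endscript
}
```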
A good setup for production. The request comes into a service balancer, which routes it to one of the instances of the setup. Within the instance the request goes to Nginx (or some other reverse proxy, or directly to the 🐙 Service). Nginx could serve static requests, terminate TLS, do some header rewrites and forward the request to the 🐙 Service.
The service uses the dynamic configs service and writes logs.
Some Metrics Uploader script is periodically called; it retrieves metrics from the 🐙 Service and sends the results to the Metrics Aggregator (could be Prometheus, Graphite, VictoriaMetrics). Metrics could be viewed in a web interface, for example via Grafana.
A Logs Collector (for example logstash, fluentd, vector) processes the log files and uploads the results to a logs aggregator (for example Elasticsearch). Logs could be viewed in a web interface, for example via Kibana.
Pros:
Cons:
The Metrics Uploader script could be implemented in bash or Python to request the service monitor. Note that for small containers the script could eat up quite a lot of resources if it converts metrics from one format to another. Prefer using an already supported metrics format and feel free to add new formats via Pull Requests to the userver github.
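Below is a minimal Python sketch of such an uploader. All hosts, ports and paths are assumptions: it presumes that a server::handlers::ServerMonitor handler is reachable at localhost:8085/service/monitor and supports the Prometheus text format via the format query parameter, and that the aggregator accepts pushed metrics over HTTP (for example a Prometheus Pushgateway); adjust everything to your actual configs.

```python
#!/usr/bin/env python3
# Hypothetical Metrics Uploader sketch: fetches metrics from the service
# monitor handler and forwards them to a push-style aggregator as-is.
import urllib.request

# Monitor endpoint of the service (hypothetical host, port and path).
MONITOR_URL = 'http://localhost:8085/service/monitor?format=prometheus'

# Push endpoint of the Metrics Aggregator (hypothetical address).
AGGREGATOR_URL = 'http://metrics-aggregator.example.com:9091/metrics/job/yourservice'


def upload_metrics() -> None:
    # Request the metrics already in the aggregator's format to avoid
    # costly conversions inside a small container.
    with urllib.request.urlopen(MONITOR_URL, timeout=5) as response:
        payload = response.read()

    # Forward the payload without any processing.
    request = urllib.request.Request(AGGREGATOR_URL, data=payload, method='POST')
    with urllib.request.urlopen(request, timeout=5) as response:
        response.read()


if __name__ == '__main__':
    upload_metrics()
```

Such a script could be invoked periodically, for example from cron or a systemd timer.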
To avoid TCP/IP overhead it is useful to configure the 🐙 Service to interact with Nginx via pipes. For HTTP this could be done by configuring components::Server to use the unix-socket static configuration option rather than port.
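A sketch of the relevant part of the static config, assuming a hypothetical socket path:

```yaml
components_manager:
    components:
        server:
            listener:
                # `unix-socket` replaces the usual `port` option
                unix-socket: /var/run/yourservice/http.sock
                task_processor: main-task-processor
```

On the Nginx side the same socket could then be used as an upstream, for example:

```nginx
upstream yourservice {
    server unix:/var/run/yourservice/http.sock;
}
```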
Beware of the Logs Collector software. Some of those applications are not very reliable and could eat up a lot of dynamic memory. In general, it is a good idea to limit their memory consumption via a cgroups-like mechanism or to adjust the OOM-killer priorities so that the Logs Collector is killed rather than the 🐙 Service itself. Also, do not forget to configure logrotate, as was shown in the first recipe.
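As an illustration of the memory-limiting advice above, a hypothetical systemd drop-in that caps the memory of a vector-based Logs Collector (the unit name and the limits are assumptions; MemoryMax requires the unified cgroup hierarchy):

```ini
# /etc/systemd/system/vector.service.d/limits.conf (hypothetical unit name)
[Service]
# Hard memory cap enforced via cgroups
MemoryMax=512M
# Make the OOM killer prefer the collector over the service
OOMScoreAdjust=500
```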
See Production configs and best practices for more configuration tips and tricks.
A good production setup that tries to solve the issues with balancers from the previous setup.
The request comes into a Sidecar Proxy (like Envoy) directly. It could do some routing work and header rewrites, and forwards the request to the 🐙 Service.

The Sidecar Proxy is configured via xDS (service discovery service) and knows about all the instances of all the services. If the 🐙 Service has to do a network request, it does it via the Sidecar Proxy, which routes the request directly to one of the instances of the target service. See USERVER_HTTP_PROXY for info on how to route all the HTTP client requests to the Sidecar Proxy.
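For illustration, a sketch of the dynamic config value that would route the HTTP client requests through a sidecar listening on a hypothetical local port (check the USERVER_HTTP_PROXY documentation for the exact value format):

```json
{
  "USERVER_HTTP_PROXY": "localhost:10080"
}
```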
All other parts of the setup remain the same as in the previous approach.
The service uses the dynamic configs service and writes logs.
Some Metrics Uploader script is periodically called; it retrieves metrics from the 🐙 Service and sends the results to the Metrics Aggregator. Metrics could be viewed in a web interface, for example via Grafana.
A Logs Collector (for example logstash, fluentd, vector) processes the log files and uploads the results to a logs aggregator (for example Elasticsearch). Logs could be viewed in a web interface, for example via Kibana.
Pros:
Cons:
xDS becomes a single point of failure if the Sidecar Proxy cannot start without it after a container restart.

The setup is very close to the setup from the previous recipe (except for USERVER_HTTP_PROXY); refer to it for more tips.
This setup is useful for implementing some supplementary functionality for the main service. Usually the same sidecar is reused by multiple different services with different logic.

Alternatively, the sidecar could be used as a first step of decomposing the main service into smaller parts.
Pros:
In cases where the sidecar is used to decompose the main service into smaller parts, the tips from the previous recipes apply.
In the case of supplementary logic, the sidecar usually does not do heavy work and should be configured for minimal resource consumption. Remove the unused components from the code and/or adjust the static config file:
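For example, a sketch of a trimmed-down components_manager section (the numbers are illustrative and should be tuned for the actual load):

```yaml
components_manager:
    coro_pool:
        initial_size: 100   # do not preallocate many coroutines at start
        max_size: 200       # and do not keep too many of them around
    task_processors:
        main-task-processor:
            worker_threads: 1   # a lightweight sidecar rarely needs more threads
    default_task_processor: main-task-processor
```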
Also, there's usually no need for very high responsiveness of the sidecar, so use the same main-task-processor for all the task processors.