Functional service tests (testsuite)

Getting started

🐙 userver has built-in support for functional service tests using Yandex.Taxi Testsuite. Testsuite is based on pytest and allows developers to test their services in an isolated environment. It starts the service binary with a minimal database and all external services mocked, then allows the developer to call service handlers and test the results.

Supported features:

  • Database startup (Mongo, PostgreSQL, ClickHouse, ...)
  • Per-test database state
  • Service startup
  • Mockserver to mock external service handlers
  • Mock service time, utils::datetime::Now()
  • Testpoint
  • Cache invalidation
  • Logs capture
  • Service runner

CMake integration

CMake integration via userver_testsuite_add()

With the userver_testsuite_add() function you can easily add testsuite support to your project. Its main purposes are:

  • Set up a Python venv environment or use an existing one.
  • Create a runner script that sets up PYTHONPATH and passes extra arguments to pytest.
  • Register a ctest target.
  • Add a start-* target that starts the service and databases with testsuite configs and waits for a keyboard interrupt to stop the service.

The cmake/UserverTestsuite.cmake library is automatically added to the CMake path after the userver environment setup. Add the following line to use it:

# cmake
include(UserverTestsuite)

Then create the testsuite target:

userver_testsuite_add(
    SERVICE_TARGET ${PROJECT_NAME}
    WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/tests
    PYTHON_BINARY ${TESTSUITE_PYTHON_BINARY}
    PYTEST_ARGS
    --service-config=${CMAKE_CURRENT_SOURCE_DIR}/static_config.yaml
    --service-binary=${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}
    --service-config-vars=${CMAKE_CURRENT_SOURCE_DIR}/config_vars.yaml
)

Arguments

  • SERVICE_TARGET, required. CMake name of the target service to test. Used as a suffix for the testsuite- and start- CMake target names.
  • WORKING_DIRECTORY, pytest working directory. Defaults to ${CMAKE_CURRENT_SOURCE_DIR}.
  • PYTEST_ARGS, list of extra arguments passed to pytest.
  • PYTHONPATH, list of directories to be prepended to PYTHONPATH.
  • REQUIREMENTS, list of requirements.txt files used to populate the venv.
  • PYTHON_BINARY, path to an existing Python binary.
  • PRETTY_LOGS, set to OFF to disable pretty printing.

Some of the most useful arguments for PYTEST_ARGS:

  • -v: Increase verbosity.
  • -s: Do not intercept stdout and stderr.
  • -x: Stop on the first error.
  • -k EXPRESSION: Filter tests by expression.
  • --service-logs-pretty: Enable logs coloring.
  • --service-logs-pretty-disable: Disable logs coloring.
  • --service-log-level=LEVEL: Set the log level for the service. Possible values: trace, debug, info, warning, error, critical.
  • --service-wait: Wait for the service to be started by the user, for example under gdb. Testsuite outputs a hint on how to start the service.
  • -rf: Show a summary of failed tests.

CMake integration via userver_testsuite_add_simple()

userver_testsuite_add_simple() is a version of userver_testsuite_add() that makes some assumptions about the project structure. It should be invoked from the service's CMakeLists.txt as follows:

userver_testsuite_add_simple(
    PYTHON_BINARY "${TESTSUITE_PYTHON_BINARY}"
)

It supports the following file structure (and a few others):

  • configs/config.yaml
  • configs/config_vars.[testsuite|tests].yaml [optional]
  • configs/dynamic_config_fallback.json [optional]
  • configs/[secdist|secure_data].json [optional]
  • [testsuite|tests]/conftest.py

Python environment

You may want to create a new virtual environment with its own set of packages, or reuse an existing one. It works this way:

  • If PYTHON_BINARY is specified, it is used.
  • Otherwise, a new test venv is created. The passed REQUIREMENTS, if any, are installed. Requirements of userver itself (based on the selected USERVER_FEATURE_* flags) are installed as well.

A basic requirements.txt file may look like this:

yandex-taxi-testsuite[mongodb]
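
For illustration only, a minimal sketch of wiring such a file into the build via the REQUIREMENTS argument described above; the tests/requirements.txt location is an assumption:

userver_testsuite_add(
    SERVICE_TARGET ${PROJECT_NAME}
    WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/tests
    # Hypothetical location of the requirements.txt shown above
    REQUIREMENTS ${CMAKE_CURRENT_SOURCE_DIR}/tests/requirements.txt
)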

Creating a per-testsuite virtual environment is the recommended way to go. It creates a Python venv in the current binary directory:

${CMAKE_CURRENT_BINARY_DIR}/venv-testsuite-${SERVICE_TARGET}

Run with ctest

userver_testsuite_add() registers a ctest target named testsuite-${SERVICE_TARGET}. Run all project tests with the ctest command, or use filters to run specific tests:

ctest -V -R testsuite-my-project # SERVICE_TARGET argument is used

Direct run

To run pytest directly, userver_testsuite_add() creates a testsuite runner script that can be found in the corresponding binary directory. This may be useful to run a single testcase, to start the testsuite under gdb, or to start the testsuite with extra pytest arguments:

${CMAKE_CURRENT_BINARY_DIR}/runtests-testsuite-${SERVICE_TARGET}

You can use it to manually start the testsuite with extra pytest arguments, e.g.:

./build/tests/runtests-testsuite-my-project -vvx ./tests -k test_foo

Please refer to the testsuite and pytest documentation for available options. Run it with the --help argument to see a short description of the options:

./build/tests/runtests-testsuite-my-project ./tests --help

Debug

To debug a functional test, you can start the testsuite with extra pytest arguments, e.g.:

./build/tests/runtests-testsuite-my-project --service-wait ./tests -k test_foo

At the beginning of the execution the console will display the command to start the service, e.g.:

gdb --args /.../my-project/build/functional-tests --config /.../config.yaml

Now you can open a new terminal window and run this command there, or, if you use an IDE, find the corresponding CMake target and add the --config /.../config.yaml argument. After that it will be possible to set breakpoints and start the target under the debugger.

pytest_userver

By default, the pytest_userver plugin is included in the Python path. It provides basic testsuite support for a userver service. To use it, add it to pytest_plugins in the root conftest.py:

# Adding a plugin from userver/testsuite/pytest_plugins/
pytest_plugins = ['pytest_userver.plugins.core']

It requires extra PYTEST_ARGS to be passed:

userver_testsuite_add(
    SERVICE_TARGET ${PROJECT_NAME}
    WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/tests
    PYTHON_BINARY ${TESTSUITE_PYTHON_BINARY}
    PYTEST_ARGS
    --service-config=${CMAKE_CURRENT_SOURCE_DIR}/static_config.yaml
    --service-binary=${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}
    --service-config-vars=${CMAKE_CURRENT_SOURCE_DIR}/config_vars.yaml
)

The plugins match the userver CMake targets. For example, if the service links with userver-core, its tests should use the pytest_userver.plugins.core plugin.

The CMake target and the matching testsuite plugin:

  • userver-core: pytest_userver.plugins.core
  • userver-grpc: pytest_userver.plugins.grpc
  • userver-postgresql: pytest_userver.plugins.postgresql
  • userver-clickhouse: pytest_userver.plugins.clickhouse
  • userver-redis: pytest_userver.plugins.redis
  • userver-mongo: pytest_userver.plugins.mongo
  • userver-rabbitmq: pytest_userver.plugins.rabbitmq
  • userver-mysql: pytest_userver.plugins.mysql
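
For instance, a root conftest.py for a hypothetical PostgreSQL-backed service could select the plugin as follows (a sketch, assuming the database-specific plugin provides the core functionality as well):

# conftest.py of a hypothetical service that links with userver-postgresql
pytest_plugins = ['pytest_userver.plugins.postgresql']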
See also
userver_libraries

Userver testsuite support

Userver has built-in support for testsuite.

In order to use it, you need to register the corresponding components:

Headers:
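
The exact include list is not shown here; a minimal sketch, assuming the standard userver header layout:

// Header paths are assumptions based on the usual userver layout
#include <userver/testsuite/testsuite_support.hpp>
#include <userver/dynamic_config/client/component.hpp>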

Add the components to the component list:

.Append<components::TestsuiteSupport>()
.Append<components::DynamicConfigClient>()
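
In context, this could look like the following sketch of a service entry point; components::MinimalServerComponentList and utils::DaemonMain are standard userver helpers, and the rest of the setup is elided:

#include <userver/components/minimal_server_component_list.hpp>
#include <userver/utils/daemon_run.hpp>

int main(int argc, char* argv[]) {
    // Register the testsuite components on top of the minimal component list
    const auto component_list = components::MinimalServerComponentList()
                                    .Append<components::TestsuiteSupport>()
                                    .Append<components::DynamicConfigClient>();
    return utils::DaemonMain(argc, argv, component_list);
}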

Add testsuite components to config.yaml:

tests-control:
    load-enabled: $testsuite-enabled
    path: /tests/{action}
    method: POST
    task_processor: main-task-processor
    testpoint-timeout: 10s
    testpoint-url: mockserver/testpoint
    throttling_enabled: false
testsuite-support:
Warning
Please note that the testsuite-enabled variable must be disabled in the production environment. Testsuite sets testsuite-enabled to true when it runs the service. In the example above this variable controls whether the tests-control component is loaded; this component must be disabled in production.
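
For example, a production config_vars.yaml would pin the variable off (a minimal sketch following the config above):

# Production config_vars.yaml: tests-control is not loaded
testsuite-enabled: false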

Features

The essential parts of the testsuite are the pytest_userver.plugins.service_client.service_client and pytest_userver.plugins.service_client.monitor_client fixtures that give you access to pytest_userver.client.Client and pytest_userver.client.ClientMonitor respectively. Those types allow you to interact with a running service.

The testsuite functions reference can be found at Testsuite Python support.

Service config generation

pytest_userver modifies the static configs config.yaml and config_vars.yaml passed to pytest before starting the userver-based service.

To apply additional modifications to the static config files, declare the USERVER_CONFIG_HOOKS variable in your pytest plugin with a list of functions or pytest fixtures that should be run before the config is written to disk. USERVER_CONFIG_HOOKS values are collected across different files, and all the collected functions and fixtures are applied.

Example usage:

USERVER_CONFIG_HOOKS = ['prepare_service_config']


@pytest.fixture(scope='session')
def prepare_service_config(grpc_mockserver_endpoint):
    def patch_config(config, config_vars):
        components = config['components_manager']['components']
        components['greeter-client']['endpoint'] = grpc_mockserver_endpoint

    return patch_config

Service client

The service_client fixture is used to access the service being tested:

async def test_ping(service_client):
    response = await service_client.get('/ping')
    assert response.status == 200

When the tests-control component is enabled, service_client is a pytest_userver.client.Client instance, which supports special testsuite-related methods.

On the first call to service_client, the service state is implicitly updated, e.g. caches, mocked time, etc.
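
The state can also be refreshed explicitly. A minimal sketch using the cache invalidation feature, assuming the tests-control component is enabled (invalidate_caches() is a method of pytest_userver.client.Client):

async def test_with_fresh_caches(service_client):
    # Force a full cache refresh before issuing the request
    await service_client.invalidate_caches()

    response = await service_client.get('/ping')
    assert response.status == 200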

Service environment variables

Use the service_env fixture to provide extra environment variables for your service:

@pytest.fixture(scope='session')
def service_env(redis_sentinels):
    secdist_config = {
        'redis_settings': {
            'taxi-tmp': {
                'password': '',
                'sentinels': redis_sentinels,
                'shards': [{'name': 'test_master0'}],
            },
        },
    }
    return {'SECDIST_CONFIG': json.dumps(secdist_config)}

Extra client dependencies

Use the extra_client_deps fixture to provide extra fixtures that your service depends on:

@pytest.fixture
def extra_client_deps(some_fixture_that_required_by_service, some_other_fixture):
    pass

Note that the auto_client_deps fixture already knows about the userver-supported databases and clients, so usually you do not need to register any dependencies manually.

Mockserver

Mockserver allows mocking external HTTP handlers. It starts its own HTTP server that receives HTTP traffic from the service being tested and allows installing custom HTTP handlers within testsuite. In order to use it, all HTTP clients must be pointed at the mockserver address.

Mockserver usage example:

@pytest.fixture(autouse=True)
def mock_translations(mockserver, translations, mocked_time):
    @mockserver.json_handler('/v1/translations')
    def mock(request):
        return {
            'content': translations,
            'update_time': utils.timestring(mocked_time.now()),
        }

    return mock

To connect your HTTP client to the mockserver, make the HTTP client use a base URL of the form http://{mockserver}/{service_name}/.

This can be achieved by patching the static config as described in config hooks and providing the mockserver address using mockserver_info.url(path):

pytest_plugins = ['pytest_userver.plugins.core']

USERVER_CONFIG_HOOKS = ['userver_config_translations']


@pytest.fixture(scope='session')
def userver_config_translations(mockserver_info):
    def do_patch(config_yaml, config_vars):
        components = config_yaml['components_manager']['components']
        components['cache-http-translations'][
            'translations-url'
        ] = mockserver_info.url('v1/translations')

    return do_patch

Mock time

Userver provides a way to mock the internal datetime value. It only works for datetime retrieved with utils::datetime::Now(); see the Mocked time section for details.

From testsuite you can control it with the mocked_time plugin.

Example usage:

@pytest.mark.now('2019-12-31T11:22:33Z')
async def test_now(service_client, mocked_time):
    response = await service_client.get('/now')
    assert response.status == 200
    assert response.json() == {'now': '2019-12-31T11:22:33+00:00'}

    # Change mocked time and sync state
    mocked_time.sleep(671)
    await service_client.update_server_state()

    response = await service_client.get('/now')
    assert response.status == 200
    assert response.json() == {'now': '2019-12-31T11:33:44+00:00'}

Examples are available here:

Testpoint

Testpoints are used to send messages from the service to the testcase and back. Typical use cases are:

  • Retrieve intermediate state of the service and test it
  • Inject errors into the service
  • Synchronize service and testcase execution

First of all, you should include the testpoint header:
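
The include itself is not shown here; a minimal sketch, assuming the standard userver header layout:

// The header path is an assumption based on the usual userver layout
#include <userver/testsuite/testpoint.hpp>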

It provides TESTPOINT() and a family of TESTPOINT_CALLBACK() macros that do nothing in the production environment and only work when run under testsuite. Under testsuite they only make sense when the corresponding testsuite handler is installed.

All testpoints have their own name that is used to call the named testsuite handler. The second argument is a formats::json::Value() instance that is only evaluated under testsuite.

TESTPOINT() usage sample:

TESTPOINT("simple-testpoint", [] {
builder["payload"] = "Hello, world!";
return builder.ExtractValue();
}());

Then you can use the testpoint from the testcase:

async def test_basic(service_client, testpoint):
    @testpoint('simple-testpoint')
    def simple_testpoint(data):
        assert data == {'payload': 'Hello, world!'}

    response = await service_client.get('/testpoint')
    assert response.status == 200
    assert simple_testpoint.times_called == 1

In order to eliminate unnecessary testpoint requests, userver keeps track of testpoints that have testsuite handlers installed. Usually testpoint handlers are declared before the first call to service_client, which implicitly updates userver's list of testpoints. Sometimes it might be required to update the server state manually. This can be achieved using the service_client.update_server_state() method, e.g.:

@testpoint('injection-point')
def injection_point(data):
    return {'value': 'injected'}


await service_client.update_server_state()
assert injection_point.times_called == 0

Accessing a testpoint that userver is not aware of will raise an exception:

# The exact exception type is not shown in this snippet;
# Exception below is only a placeholder
with pytest.raises(Exception):
    assert injection_point.times_called == 0

Logs capture

Testsuite can be used to test logs written by the service. To achieve this, the testsuite starts a simple logs-capture TCP server and tells the service to replicate logs to it on a per-test basis.

Example usage:

async def test_select(service_client):
    async with service_client.capture_logs(log_level='INFO') as capture:
        response = await service_client.get('/logcapture')
        assert response.status == 200

    records = capture.select(
        text='Message to capture', link=response.headers['x-yarequestid'],
    )
    assert len(records) == 1, capture.select()

An example of logs capture usage can be found here:

Testsuite tasks

The testsuite tasks facility allows registering a custom function and calling it by name from testsuite. It is useful for testing components that perform a periodic job not related to their own HTTP handlers.

You can use testsuite::TestsuiteTasks to register your own task:

auto& testsuite_tasks = testsuite::GetTestsuiteTasks(context);
// Only register task for testsuite environment
if (testsuite_tasks.IsEnabled()) {
testsuite_tasks.RegisterTask("sample-task", [] {
TESTPOINT("sample-task/action", [] {
return builder.ExtractValue();
}());
});
} else {
// Proudction code goes here
}

After that you can call your task from testsuite code:

await service_client.run_task('sample-task')
assert task_action.times_called == 1

Or spawn the task asynchronously using a context manager:

async with service_client.spawn_task('sample-task'):
    await task_action.wait_call()

An example of testsuite tasks can be found here:

Metrics

Testsuite provides access to userver metrics written by utils::statistics::Writer and utils::statistics::MetricTag via monitor_client; see the tutorial on configuration.

Example usage:

For a metric tag that is defined as:

utils::statistics::MetricTag<std::atomic<int>> kFooMetric{"sample-metrics.foo"};

and used like:

std::atomic<int>& foo_metric = metrics_->GetMetric(kFooMetric);
++foo_metric; // safe to increment conceurrently

the metrics could be retrieved and reset as follows:

async def test_reset(service_client, monitor_client):
    # Reset service metrics
    await service_client.reset_metrics()

    # Retrieve metrics
    metric = await monitor_client.single_metric('sample-metrics.foo')
    assert metric.value == 0
    assert not metric.labels

Metrics with labels can be retrieved in the following way:

async def test_engine_metrics(service_client, monitor_client):
    metric = await monitor_client.single_metric(
        'engine.task-processors.tasks.finished',
        labels={'task_processor': 'main-task-processor'},
    )
    assert metric.value > 0
    assert metric.labels == {'task_processor': 'main-task-processor'}

    metrics_dict = await monitor_client.metrics(
        prefix='http.', labels={'http_path': '/ping'},
    )
    assert metrics_dict
    assert 'http.handler.cancelled-by-deadline' in metrics_dict
    assert (
        metrics_dict.value_at(
            'http.handler.in-flight',
            labels={
                'http_path': '/ping',
                'http_handler': 'handler-ping',
                'version': '2',
            },
        )
        == 0
    )

The Metric Python type is hashable and comparable:

# Checking for a particular metric
assert metrics.Metric({}, value=3) in values['sample']

# Comparing with a set of Metric
assert values['sample'] == {
    metrics.Metric(labels={}, value=3),
    metrics.Metric(labels={'label': 'b'}, value=2),
    metrics.Metric(labels={'label': 'a'}, value=1),
}

Metrics Portability

Different monitoring systems and time series databases have different limitations. To make sure that the metrics of your service can be used on most of the popular systems, there is a special action in server::handlers::TestsControl.

To use it, you could just write the following test:

async def test_metrics_portability(service_client):
    warnings = await service_client.metrics_portability()
    assert not warnings

Note that the warnings are grouped by type, so you could check only for particular warnings or skip some of them. For example:

async def test_partial_metrics_portability(service_client):
    warnings = await service_client.metrics_portability()
    warnings.pop('label_name_mismatch', None)
    assert not warnings, warnings

Service runner

Testsuite provides a way to start a standalone service with all mocks and databases started. This can be done by adding the --service-runner-mode flag to pytest, e.g.:

./build/tests/runtests-testsuite-my-project ./tests -s --service-runner-mode

Please note that the -s flag is required to disable console output capture.

pytest_userver provides a default service runner testcase. In order to override it, add your own testcase with @pytest.mark.servicetest:

@pytest.mark.servicetest
def test_service(service_client):
    ...