This is the page for the Quality Assurance team. It includes links to QA services, documentation, tests, and other information related to the different QA processes in the project.

Weekly test reports

The images are tested weekly using both manual and automated tests as documented on the test case site. Reports of the testing performed are available on the test report site.

Services

Tools

Tools used for QA tasks and infrastructure.

Robot Framework

Robot Framework is a generic test automation framework for acceptance testing and acceptance test-driven development (ATDD). It has an easy-to-use tabular test data syntax and follows the keyword-driven testing approach. Its testing capabilities can be extended with test libraries implemented in either Python or Java, and users can create new higher-level keywords from existing ones using the same syntax that is used for creating test cases. It is open source software released under the Apache License 2.0. [Read More]
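
To illustrate the test library mechanism, here is a minimal sketch of a Python test library: Robot Framework exposes the public methods of a class as keywords that test cases can call. The class, keyword, and process names below are only examples and are not part of the Apertis test suite.

    # ExampleLibrary.py - a minimal, illustrative Robot Framework test library.
    # Robot Framework turns the public methods of this class into keywords,
    # e.g. "Process Should Be Running" becomes available to test cases.
    import subprocess


    class ExampleLibrary:
        """Example keywords for checking the state of the system under test."""

        def process_should_be_running(self, name):
            """Fail if no process with the given name is found."""
            result = subprocess.run(["pidof", name], capture_output=True)
            if result.returncode != 0:
                raise AssertionError("Process '%s' is not running" % name)

Higher-level keywords can then be composed from keywords like this one using the same tabular syntax used for test cases.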

Test Data Reporting

Testing is a fundamental part of the project, but it is of limited use without an accurate and convenient way to report the results. The QA Test Report is an application developed to save and report the test results for the Apertis images. It supports both automated test results executed by LAVA and manual test results submitted by testers. [Read More]
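
Purely as a hypothetical illustration of the data the application handles (the field names below are illustrative, not its actual schema), a single stored result could be sketched like this:

    # Hypothetical shape of a stored test result; the field names are
    # illustrative and do not reflect the QA Test Report application's schema.
    manual_result = {
        "image": "sample-image-v1",   # the Apertis image under test
        "test_case": "example-sanity-check",
        "origin": "manual",           # "manual" (tester) or "lava" (automated)
        "result": "pass",             # pass or fail
        "notes": "Submitted by a tester after following the test case steps.",
    }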

Test Definitions

The test cases, both manual and automated, are written in the LAVA test definition file format, which stores the instructions to run the automated tests in YAML files. Git is used as the data storage backend for all the test cases. The current Apertis tests can be found in the Apertis Test Cases repository. The test cases are versioned using Git branches so that functionality can change without breaking the tests for older releases. [Read More]
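
As a rough sketch of the format (this is not one of the actual Apertis test cases), the snippet below embeds a minimal definition in the LAVA test definition format and parses it with PyYAML; real definitions live as standalone YAML files in the repository.

    # Illustrative only: a minimal test definition in the LAVA test definition
    # format, embedded as a string and parsed with PyYAML. Real Apertis test
    # cases are standalone YAML files stored in Git.
    import yaml

    EXAMPLE_DEFINITION = """
    metadata:
      format: Lava-Test Test Definition 1.0
      name: example-sanity-check
      description: Run a trivial command and report its result
    run:
      steps:
        - lava-test-case uname --shell uname -a
    """

    definition = yaml.safe_load(EXAMPLE_DEFINITION)
    print(definition["metadata"]["name"], definition["run"]["steps"])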

Immutable Rootfs Tests

When testing on an immutable rootfs, tests should be self-contained and not require changes to the rootfs: any change to the rootfs causes the tested system to diverge from the one used in production, reducing the value of the testing. Changing the rootfs is also impossible or very cumbersome with some deployment systems, such as dm-verity or OSTree. Other systems may simply not ship a package management system like apt/dpkg due to size constraints, making package dependencies unviable. [Read More]

LQA

LQA is both a tool and an API for LAVA quality assurance tasks. It stands for LAVA Quality Assurance, and its features range from submitting LAVA jobs and collecting test results to querying most of the LAVA metadata. It is developed by Collabora and is the standard way to submit test jobs for Apertis images. To install it, fetch the latest code from the Git repository and install it using pip, which will automatically handle the required dependencies. [Read More]
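
A minimal sketch of that installation step, driven from Python and assuming a placeholder repository URL rather than the real LQA location:

    # Illustrative only: fetch the LQA code and install it with pip, which
    # resolves the required dependencies automatically. The repository URL
    # below is a placeholder, not the real LQA repository.
    import subprocess
    import sys

    REPO = "https://example.com/lqa.git"  # placeholder URL

    subprocess.run(["git", "clone", REPO, "lqa"], check=True)
    subprocess.run([sys.executable, "-m", "pip", "install", "./lqa"], check=True)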

Test Case Guidelines

The following are guidelines for creating and editing test cases; please follow them unless there is a compelling reason to make an exception. The workflow before developing a new test case is: make sure the test case does not already exist by checking the Test Cases list; make sure nobody is already working on it by checking the Phabricator tasks; and determine the main feature to focus the test on, setting the test case identifier accordingly. [Read More]

Personal LAVA Tests

This tutorial explains how to submit a personal LAVA job for an Apertis test, which is very useful during development, either to debug a test or to check that it works as expected before its final integration. Running a personal test basically consists of adding the LAVA test definition file to a personal repository and submitting a LAVA job that fetches this file from there to execute the test. [Read More]
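
As a hedged sketch of the mechanism (the repository URL and file names are placeholders, not real Apertis locations), the test action of such a job points at the definition in the personal Git repository, and LAVA clones that repository to run it:

    # Illustrative only: the "test" action of a personal LAVA job, referencing
    # a test definition kept in a personal Git repository. The URL and file
    # names are placeholders.
    import yaml

    test_action = {
        "test": {
            "timeout": {"minutes": 15},
            "definitions": [
                {
                    "repository": "https://example.com/my-user/my-tests.git",
                    "from": "git",
                    "path": "my-test.yaml",      # the LAVA test definition file
                    "name": "my-personal-test",
                }
            ],
        }
    }

    # This fragment would go in the job's "actions" list before the job is
    # submitted to LAVA (for example with lqa).
    print(yaml.safe_dump([test_action], sort_keys=False))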