At ELCE, Samsung’s Paweł Wieczorek explained how he automated Tizen testing on popular Linux hacker boards by integrating Jenkins, OBS, and other tools.
Demand is increasing for embedded software projects to support a variety of Linux hacker boards — and that requires time-consuming hardware testing to prove that your software works reliably. Fortunately, you can integrate test automation tools into your software development process to streamline the task, as explained by release engineer Paweł Wieczorek at last October’s Embedded Linux Conference Europe.
In the talk, Wieczorek described how he and his colleagues at the Samsung R&D Institute Poland developed an Automated Testing Laboratory to streamline testing of Tizen Common on community-backed SBCs. Their test lab automates and integrates processes performed primarily in the open source Jenkins automation server and Open Build Service (OBS) binary package build and distribution service. The solution is applicable to many other Linux software platforms beyond Tizen.
At ELCE, Paweł Wieczorek shows a device node on Samsung’s Tizen Common test farm. Each node consists of six multiplexer boards (top and bottom) and dual USB hubs in the middle surrounded by pairs of Linux SBCs.
“For most developers, once a patch is merged to the Git repository, the work is done, but from the release engineer’s point of view, the journey has just begun,” said Wieczorek. “In our process, once the change is merged, the integrator must create a submit request — a simple tag linked to an object in Git. The tag is then read from the Gerrit event stream and passed to Jenkins, which orders a rebuild of a corresponding package in OBS. Then OBS contributes the package and all of its dependencies, and a new image is created so it can be tested, and then accepted or rejected in the next release.”
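The hand-off described above — reading a submit tag from the Gerrit event stream and ordering a rebuild in OBS via Jenkins — could be sketched roughly as follows. This is a minimal illustration, not Tizen's actual implementation: the event fields follow Gerrit's documented `ref-updated` stream events, but the `SUBMIT_TAG_RE` pattern and the tag layout are assumptions.

```python
import json
import re

# Assumed, illustrative layout of a Tizen-style submit tag; the real
# naming convention may differ.
SUBMIT_TAG_RE = re.compile(r"^submit/(?P<branch>.+)/(?P<stamp>\d{8}\.\d+)$")

def handle_gerrit_event(raw_line):
    """Parse one line from `gerrit stream-events` and return the rebuild
    request (project, branch) if it is a submit-tag update, else None."""
    event = json.loads(raw_line)
    if event.get("type") != "ref-updated":
        return None
    ref = event["refUpdate"]["refName"]
    tag = ref[len("refs/tags/"):] if ref.startswith("refs/tags/") else ref
    match = SUBMIT_TAG_RE.match(tag)
    if match is None:
        return None
    # In the real lab this would trigger a parameterized Jenkins job,
    # which in turn orders an OBS rebuild of the corresponding package.
    return (event["refUpdate"]["project"], match.group("branch"))

sample = json.dumps({
    "type": "ref-updated",
    "refUpdate": {"project": "platform/core/appfw/aul-1",
                  "refName": "refs/tags/submit/tizen/20161011.123456"},
})
print(handle_gerrit_event(sample))
```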
The test lab runs Tizen on multiple instances of the Linux-driven, community-backed MinnowBoard Turbot, Odroid-U3+, and 96Boards compatible HiKey SBCs. Although the process was partially automated, certain steps still required daily human interaction by the release engineer.
“When build failures occur, release engineers need to investigate the possible causes,” said Wieczorek. “They need to check whether new images introduce any regressions, such as undocumented changes in the API, or if there are changes in connectivity. These tasks are time-consuming and monotonous, and yet require precision and focus.”
The process was especially tedious because Jenkins and OBS are not designed to interoperate easily. The release engineer was required to download multiple images for the target devices from the main Tizen server, and then flash all the targets and run the tests. To avoid this repetitive process, “we considered testing less frequently, or maybe only for major releases, or maybe run simpler tests,” said Wieczorek. “But we decided those steps would violate our principles.”
The only solution was to further automate the system, and that required modifications to the software, communications infrastructure, and hardware. In software, the key problem was that “OBS lacks an event mechanism in its default installation, and enabling one requires considerable configuration,” said Wieczorek. “Also, the naming conventions are designed to be easily readable by humans, so these needed to be parsed.” For scheduling and queueing of tasks, “we experimented with some lighter alternatives like Task Spooler or Buildbot, but decided to stick with what we knew: Jenkins.”
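Parsing those human-readable names might look like the following sketch. The image-name layout shown here (`tizen-common-<profile>-<arch>-<date>.<build>`) is an assumed example for illustration, not the lab's actual convention:

```python
import re

# Assumed layout: tizen-common-<profile>-<arch>-<date>.<build>
IMAGE_NAME_RE = re.compile(
    r"^tizen-common-(?P<profile>[a-z0-9]+)-(?P<arch>[a-z0-9_]+)"
    r"-(?P<date>\d{8})\.(?P<build>\d+)$"
)

def parse_image_name(name):
    """Turn a human-readable image name into fields a script can act on."""
    match = IMAGE_NAME_RE.match(name)
    if match is None:
        raise ValueError(f"unrecognized image name: {name}")
    return match.groupdict()

print(parse_image_name("tizen-common-wayland-x86_64-20161011.3"))
```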
The second challenge was to establish reliable automated communications with all devices in the testing farm. Wieczorek considered OpenSSH and serial console, but found they both had drawbacks. “OpenSSH depends on network services, and we would like to detect network connectivity before we try to communicate with an actual device,” said Wieczorek. “Serial console is much less flexible, and offers a lower rate of data transfer.”
Instead, the team turned to the Tizen SDK’s Smart Development Bridge (SDB) device management tool, which “combines the best of both worlds,” said Wieczorek. “It depends on a single service and it’s flexible like SSH, and provides us with decent file transfer rates.”
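A host-side helper for driving many boards over SDB might be sketched like this. The serial number and test paths are made up, and the command shapes mirror SDB's adb-like CLI (`sdb -s <serial> push`/`shell`); the actual invocations are commented out so the sketch stays runnable without hardware:

```python
import subprocess

def sdb_cmd(serial, *args):
    """Build an sdb invocation targeting one specific device on the farm."""
    return ["sdb", "-s", serial] + list(args)

def run_test_on_device(serial, local_script, remote_path="/tmp/run-tests.sh"):
    """Push a test script to the target over SDB and execute it there."""
    push = sdb_cmd(serial, "push", local_script, remote_path)
    execute = sdb_cmd(serial, "shell", "sh", remote_path)
    # On a real farm host these would actually run:
    # subprocess.run(push, check=True)
    # subprocess.run(execute, check=True)
    return push, execute

push, execute = run_test_on_device("target-26101", "./run-tests.sh")
print(push)
print(execute)
```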
To automate the creation and maintenance of the test servers, the test team found that their simple Testlab-handbook for newcomers, based on the Python Sphinx tool, was not enough. “The pace of the changes was too high for maintaining a separate handbook, so we decided to maintain a Git repository with configuration for our Testlab-host,” said Wieczorek.
To accomplish this, they chose the Ansible Python configuration management tool. The team also implemented a system to share and publish test results on Tizen.org wikis, based on MediaWiki. Here, they used MediaWiki’s Pywikibot tool, which automates editing and information gathering.
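As an illustration of the publishing step, a script might first render results as MediaWiki table markup before a bot such as Pywikibot uploads the page. The result fields and device names below are invented for the example:

```python
def results_to_wikitext(results):
    """Render test results as a MediaWiki table that a wiki bot can publish.
    `results` is a list of (test_name, device, status) tuples."""
    lines = ['{| class="wikitable"', "! Test !! Device !! Status"]
    for name, device, status in results:
        lines.append("|-")
        lines.append(f"| {name} || {device} || {status}")
    lines.append("|}")
    return "\n".join(lines)

table = results_to_wikitext([
    ("connectivity", "odroid-u3", "PASS"),       # example data only
    ("api-regression", "minnowboard", "FAIL"),   # example data only
])
print(table)
```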
Developing a custom microSD demultiplexer
The biggest challenge was on the hardware side: automating the delivery of Tizen images onto target devices. “Most of the boards required different procedures, most of which were architecture specific,” said Wieczorek. “They were designed for a single device per host, not a build farm, and there were often conflicts if too many devices were connected.”
Wieczorek shows off the design for the custom microSD demultiplexer
The solution was to exploit the one common denominator on all the SBCs: bootable microSD cards. Wieczorek’s team custom-designed a microSD card demultiplexer board that provides access between the testing host and the device. The board includes a power switch, as well as “ports for board control and connections for controlling the target device and the corresponding slots on the host.” The test farm comprises multiple device nodes, each of which consists of six multiplexer boards and several USB hubs.
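Put together, an automated flash cycle through such a demultiplexer would follow a sequence along these lines. The step names below are hypothetical, a plausible reconstruction of the flow rather than the lab's actual control code:

```python
def flash_cycle_plan(node, image_path):
    """Ordered steps for delivering an image to one target through the
    microSD demultiplexer; all control-step names are hypothetical."""
    return [
        ("power_off", node),                # cut power to the target board
        ("mux_to_host", node),              # expose the microSD card to the host
        ("write_image", node, image_path),  # e.g. dd the image onto the card
        ("mux_to_target", node),            # hand the card back to the board
        ("power_on", node),                 # boot the freshly flashed image
        ("wait_for_sdb", node),             # ready once SDB answers
    ]

plan = flash_cycle_plan("node-03", "tizen-common.img")
print([step[0] for step in plan])
```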
The Tizen team published the schematics for the boards and connectors, posting them with sources on the Tizen.org Git repository. Meanwhile, there are plans to monitor changes between tested images in a more detailed way and to enable the retrieval of partial information from failed test runs. Other plans call for improved resource management and a distributed setup scheme so testing won’t be bound to a single location.
In summing up, Wieczorek offered a few basic recommendations. “First, there’s no need to reinvent the wheel,” he said. “All the building blocks are already there — they just need configuration. Second, consider designing custom hardware to simplify tasks. Finally, remember that automation pays off in the long term.”
Watch the complete presentation below:
This article is copyright © 2017 Linux.com and was originally published here. It has been reproduced by this site with the permission of its owner. Please visit Linux.com for up-to-date news and articles about Linux and open source.