
KEcoLab

From KDE Community Wiki


Introduction

The Remote Eco Lab project provides a streamlined process for measuring software energy consumption remotely using a CI/CD pipeline. By automating the measurement process and integrating with the OSCAR tool, developers can make informed decisions to improve code efficiency and obtain software eco-certification with the Blue Angel. For further details on measuring the energy consumption of software, use the links below to navigate to each section:

  • Getting Started: Learn what KEcoLab is, why it matters, and the differences between remote and manual testing processes.
  • User Documentation: Detailed instructions for users on creating usage scenario scripts, submitting merge requests, and accessing test results.
  • Developer Documentation: Information on project structure, CI/CD integration, and contribution guidelines.
  • About Us: Background information on the project, contributors, contact details, and meeting notes.


Getting Started

What is KEcoLab?

KEcoLab is KDE’s dedicated platform for measuring the energy consumption of software remotely. Every piece of software we use operates on physical hardware that consumes energy—not only the energy used to charge our device but also the often-overlooked energy expenditures occurring in the background. KEcoLab’s mission is to expose these hidden energy costs and provide users and developers with accurate and reliable data so they can optimize their software for better energy efficiency.

The Hidden Cost of Digital Technology

In our digital age, technology’s material footprint is often hidden by the compactness of devices and the “invisible” nature of the infrastructure (like data centers and underwater cables). Users underestimate the material footprint of digital technology for this very reason. Every software application contributes to energy consumption because it runs on hardware that requires power, power that is generated using resources and often contributes to carbon emissions. As digital services become more ubiquitous, the cumulative energy consumption, and with it the environmental impact, grows as well, making sustainability a critical concern in software development.

Measuring the software

To support sustainable software development, a test lab was set up in Berlin to measure the energy consumption of software. Initially, developers had to send their software for on-site testing, which was slow and inefficient. To improve this, an automated system was introduced. Developers could now submit scripts through GitLab, which would run energy tests through an automated pipeline without needing to be on-site and analyze the results for accuracy and consistency. This streamlined process made energy measurements faster and more accessible, helping promote energy-efficient development practices and supporting sustainability goals.

User Documentation

Measuring an application's energy consumption isn’t as simple as checking how much power your computer is using. Multiple processes can run simultaneously, each consuming energy in one way or another. So, to accurately measure the energy consumption of a particular application, it’s essential to recreate real-world user interactions with just that application in a controlled environment. This is done using Usage Scenario Scripts, a structured set of instructions that simulate different application states and user behaviours to get a clear picture of its power consumption. These scripts are executed multiple times to ensure that every test is performed consistently, eliminating variability that could affect energy measurement results.

But what are these instructions?

Before energy measurement begins, the exact instructions to be executed must be defined. These instructions are categorized into three key scripts:

1. Baseline Measurement Script - What’s the system doing when nothing is running?

Before we can measure how much power an application uses, we need a reference point—a baseline. This is just the system running without the application open. This script is executed first and provides a crucial comparison for later stages. Establishing this baseline isolates the energy consumption that is specifically due to the application, rather than the operating system or other processes running in the background.

2. Idle State Measurement Script - What happens when the app is open but you’re not using it?

The idle script measures the energy usage when the application is running but not actively being used. While this might seem like an unimportant scenario, applications often consume power even when sitting idle due to background tasks, UI rendering, and memory usage. This test helps identify whether an application is inefficient when idle. Ideally, applications should minimize energy consumption when not in use.
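As a rough illustration, an idle measurement script can be as simple as launching the application, leaving it untouched for a fixed period, and closing it again. The sketch below is hypothetical: it uses org.kde.kate as a placeholder application ID and an arbitrary idle period, and it assumes the application is installed as a Flatpak.

```shell
#!/bin/sh
# Hypothetical log_idle.sh sketch: start the app, let it sit idle while the
# power meter records, then shut it down. App ID and timing are placeholders.
run_idle() {
  flatpak run org.kde.kate &   # launch the application under test
  sleep 300                    # idle period while readings are collected
  flatpak kill org.kde.kate    # terminate the application
}

# Execute only when a display and flatpak are actually available.
if [ -n "$DISPLAY" ] && command -v flatpak >/dev/null 2>&1; then
  run_idle
fi
```

The guard at the bottom lets the same file be sourced or syntax-checked on machines without a graphical session.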

3. Standard Usage Scenario Script - Simulating how a real person uses the app

The standard usage script is the most comprehensive, simulating real-world interactions with the application. This script mimics how an actual user would navigate through and interact with the software, providing the most accurate picture of its energy consumption under typical use.

Throughout the entire session, the power meter records energy usage, generating detailed data on power consumption during different operations.

Generating these Scripts

To create these scripts, you'll need an automation tool that can simulate user interactions without manual input. This ensures each test is performed consistently, providing accurate measurements. Some tools that can help with this are:

xdotool
A command-line tool for X11 that simulates keyboard input and mouse activity. It's powerful and widely used for automating tasks in Linux environments.
Actiona
A cross-platform automation tool that allows you to execute various actions on your computer, such as emulating mouse clicks, key presses, and more. It offers a simple editor and supports scripting for advanced customization.
kecotest
An automation tool designed for testing and automating tasks in KDE environments. It integrates well with KDE applications and provides a user-friendly interface for creating automation scripts.

These tools can help you automate the process of creating and running the scripts. After that, it's important to organize the scripts properly before submitting them for testing through a merge request on our GitLab project.
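For example, a standard usage scenario for a text editor might open the application, type some text, and close it again. The following is only a sketch using xdotool: org.kde.kate, its window title, and all timings are placeholders that a real script would tune to the application under test.

```shell
#!/bin/sh
# Hypothetical log_sus.sh sketch for org.kde.kate (names and timings are placeholders).
run_scenario() {
  flatpak run org.kde.kate &                          # start the application
  sleep 10                                            # allow startup to settle
  xdotool search --sync --name "Kate" windowactivate  # focus the editor window
  xdotool type --delay 120 "Simulated user input for the energy test"
  sleep 5
  xdotool key ctrl+w                                  # close the document
  flatpak kill org.kde.kate                           # terminate the application
}

# Execute only when the required tools and a display are present.
if command -v xdotool >/dev/null 2>&1 && command -v flatpak >/dev/null 2>&1 \
   && [ -n "$DISPLAY" ]; then
  run_scenario
fi
```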

Creating a Merge Request

Fork or clone the Remote Eco Lab repository to your GitLab account. Create a new branch in your fork/clone and add the usage scenario scripts under the path
scripts/test_scripts/package_name/ (for example, scripts/test_scripts/org.kde.kate/log_sus.sh).
Push the changes and open a merge request, using the application package name as the title (for example, org.kde.kate).
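Sketched as shell commands, the submission steps might look like the following. The local directory name, commit identity, and placeholder script files are assumptions for illustration; replace the commented push with your actual fork's remote.

```shell
#!/bin/sh
# Sketch of preparing a merge request branch with the layout the pipeline expects.
set -e
git init -q kecolab-fork && cd kecolab-fork
git checkout -q -b org.kde.kate               # branch for the application
mkdir -p scripts/test_scripts/org.kde.kate    # path expected by the pipeline
# Placeholder files; in practice these are your real usage scenario scripts.
touch scripts/test_scripts/org.kde.kate/log_baseline.sh \
      scripts/test_scripts/org.kde.kate/log_idle.sh \
      scripts/test_scripts/org.kde.kate/log_sus.sh
git add scripts
git -c user.name=dev -c user.email=dev@example.org \
    commit -q -m "Add usage scenario scripts for org.kde.kate"
# git push -u origin org.kde.kate   # then open a merge request titled "org.kde.kate"
```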

Review and Approval

Sit back and relax while your proposed application is reviewed for any potential security risks.

Accessing the Results

The final energy measurement report is available in the job artifacts of the Result stage. Use the report to analyze the energy consumption of your software.


Developer Documentation

The test lab in Berlin consists of:

  • A test PC (System Under Test) to run emulation scripts,
  • A power meter to measure the energy consumption of the test PC, and
  • Another PC (Data Aggregator & Evaluator) to collect and analyze the output data.

Initially, to access this facility, developers relied on individuals in Berlin to test software for them on-site. Now it can be accessed through a CI/CD pipeline that is triggered automatically once a merge request is approved in our repository.

The merge request must contain three scripts—one for baseline measurement, one for idle mode, and one for standard usage scenarios—along with a configuration file. The energy usage results can be analyzed and summarized using OSCAR (Open Source Software Consumption Analysis in R). This also ensures that each test runs in the exact same environment, leading to more reliable and reproducible results.

The CI/CD pipeline is divided into three stages. The first stage installs the application's Flatpak on the lab computer; the energy measurement stage then measures energy consumption using a predefined process; and in the final stage, the results are made available as artifacts.
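These three stages can be pictured as a top-level skeleton of the pipeline's YAML configuration (a sketch; the stage names follow the description above, and the full job definitions appear later on this page):

```yaml
# Sketch: top-level stage layout of the energy measurement pipeline
stages:
  - build
  - energy_measurement
  - result
```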

About the CI/CD Pipeline

This project leverages GitLab’s CI/CD pipelines to automatically measure and analyze the software’s energy consumption through a defined set of processes. The pipeline not only verifies the application's functionality but also provides valuable insights into its energy footprint under various usage scenarios.

Pipeline Triggers and Rules

The execution of a GitLab CI/CD pipeline is typically triggered by specific events. This configuration is written in a YAML file (.gitlab-ci.yml). For this project, the rules keyword is used in each stage (build, energy_measurement, and result) with the following condition:

 rules:
   - if: $CI_PIPELINE_SOURCE == 'merge_request_event'

This rule specifies that the stage (and all jobs within it) should only be executed when the pipeline is triggered by a merge request event.

Setup & Configuration

To execute the pipeline, some configuration is needed.

Environment Variables

Environment variables are key-value pairs that are configured and used by the pipeline at runtime. Two environment variables need to be set.

  1. LABPC_IP: the IP address of the machine where the application will be tested. It is set to "192.168.170.23".
  2. PM_IP: the IP address of the power meter used for energy measurements. It is set to "192.168.170.22".
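As a sketch, the same two variables could equivalently be declared in a variables block of the pipeline configuration (an assumption; in the actual project they may instead be configured through GitLab's CI/CD settings):

```yaml
# Sketch: LABPC and power meter addresses as pipeline variables
variables:
  LABPC_IP: "192.168.170.23"   # system under test (LABPC)
  PM_IP: "192.168.170.22"      # power meter
```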

Tags

Tags are keywords that specify which runner (a physical or virtual machine) should execute the pipeline. For this pipeline, the EcoLabWorker runner is used.

Pipeline Stages

The pipeline is divided into stages so that the process of measuring and analysing energy consumption runs smoothly and error-free.
For this pipeline, we define three stages: Build, Measurement and Result.

Build stage
The first stage of the pipeline is Build. In this stage, the application to be tested is installed on the LABPC.

  1. Docker image - The build stage utilizes the alpine Docker image. Alpine Linux is a lightweight and security-focused distribution, making it an ideal choice for CI/CD environments where minimizing image size and maximizing efficiency are crucial.
  2. The before_script section contains commands that are executed before the main script section. Here, echo $CI_MERGE_REQUEST_TITLE prints the title of the merge request.
  3. Script - The core of the build stage lies within the script section, which defines the commands that perform the actual application installation. Once connected to the LABPC, the script uses the flatpak remote-add command to add the Flathub remote repository, allowing Flatpak to download and install applications from Flathub. The flatpak install command then installs the application specified by the merge request title.
  4. rules - The rules section defines when this stage should be executed. In this case, it ensures that the build stage only runs when the pipeline is triggered by a merge request event.
 # Build stage
 build:
   stage: build
   image: alpine
   tags:
     - EcoLabWorker
   before_script:
     - echo $CI_MERGE_REQUEST_TITLE
   script:
     # Flatpak command for installing the test application from Flathub, based on the merge request title
     - ssh -o StrictHostKeyChecking=no -i ~/.ssh/kecolab kecolab@$LABPC_IP "
       flatpak remote-add --user --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo &&
       flatpak install --user $CI_MERGE_REQUEST_TITLE -y "
   rules:
     - if: $CI_PIPELINE_SOURCE == 'merge_request_event'


Energy measurement stage
Once the application is installed, the energy measurement stage will commence. Its purpose is to quantify the energy consumption of the application under various usage scenarios.

  1. timeout - Energy measurements can sometimes take a long time, especially for complex applications or extended test scenarios. This timeout prevents the pipeline from getting stuck if a test runs longer than expected.
  2. Before the actual measurements begin, the before_script section prepares the LABPC for the tests by copying the test scripts from the GitLab runner to the /tmp directory on the LABPC. If present, the configuration.sh file performs application-specific configuration on the LABPC before the actual test scenarios are executed.
  3. The script executes the three test scenarios: baseline, idle, and standard usage. For each scenario it starts power meter readings and hardware readings, executes the scenario, terminates the processes, and exports the data.
  4. Artifacts section - The artifacts section defines which files generated during this stage should be passed on to the next stage (the result stage).
Energy measurement stage (pipeline code):

 energy_measurement:
   stage: energy_measurement
   image: alpine
   timeout: 12h
   tags:
     - EcoLabWorker
   before_script:
     # Copy usage scenario scripts from the test_scripts dir to the LABPC
     - scp -o StrictHostKeyChecking=no -r -i ~/.ssh/kecolab scripts/test_scripts/$CI_MERGE_REQUEST_TITLE/* kecolab@$LABPC_IP:/tmp/
     # Check for a configuration script for the application under test
     - ssh -o StrictHostKeyChecking=no -i ~/.ssh/kecolab kecolab@$LABPC_IP 'export DISPLAY=:0 && export TERM=xterm && cd /tmp/ && if [ -f "configuration.sh" ]; then chmod +x configuration.sh; fi; exit'
   script:
     - export CURRENT_DATE=$(date +%Y%m%d)
     # Start taking PM readings (script 1)
     - cd /home/gitlab-runner/GUDEPowerMeter && nohup python3 check_gude_modified.py -i 1 -x 192.168.170.22 >> ~/testreadings1.csv 2>/dev/null &
     # Start taking hardware readings using collectl (for script 1)
. . . Check full code [here]()

Result stage

The result stage is the final stage in the energy measurement pipeline. Its primary function is to process the raw data collected in the energy_measurement stage and generate meaningful reports that summarize the application's energy consumption characteristics. The script section defines the steps involved in analyzing the data and creating the reports:

  • Data extraction: The raw data from the measurement stage (.csv files) is extracted using gunzip.
  • Data processing: The data is preprocessed using an R script, ~/Preprocessing.R, which performs the necessary data cleaning, transformation, and aggregation.
  • Report generation: A set of R scripts is executed to generate a report for each scenario, such as ~/sus_analysis_script.R for the standard usage scenario and ~/idle_analysis_script.R for the idle scenario.
  • Artifacts: The artifacts section specifies which files generated in this stage should be made available for download after the pipeline completes. This includes all the generated reports (SUS_Report.pdf, Idle_Report.pdf), LaTeX files, graphics directories, and supporting files. By defining these files as artifacts, they can easily be downloaded from the GitLab CI/CD interface, allowing developers to review the energy analysis results.

Result stage (pipeline code, generating the energy measurement report):

 result:
   stage: result
   image: invent-registry.kde.org/sysadmin/ci-images/kecolab-analysis:latest
   dependencies:
     # Use artifacts from the previous stage
     - energy_measurement
   script:
     - export CURRENT_DATE=$(date +%Y%m%d)
     - gunzip test1.csv-kecolab-$CURRENT_DATE.tab.gz
     - gunzip test2.csv-kecolab-$CURRENT_DATE.tab.gz
     - gunzip test3.csv-kecolab-$CURRENT_DATE.tab.gz
     # Preprocess raw data for the OSCAR script
     - Rscript ~/Preprocessing.R test1.csv-kecolab-$CURRENT_DATE.tab test2.csv-kecolab-$CURRENT_DATE.tab test3.csv-kecolab-$CURRENT_DATE.tab $CI_PROJECT_DIR
     # Run the OSCAR analysis script to generate the SUS report
     - Rscript ~/sus_analysis_script.R
     - cp -r ~/SUS_Report.pdf ~/SUS_Report.tex ~/sus_graphics ~/SUS_Report_files $CI_PROJECT_DIR/
     # Run the OSCAR analysis script to generate the idle mode report
     - Rscript ~/idle_analysis_script.R
     - cp -r ~/Idle_Report.pdf ~/Idle_Report.tex ~/idle_graphics ~/Idle_Report_files $CI_PROJECT_DIR/
   artifacts:
     paths:
       - SUS_Report.pdf
       - SUS_Report.tex
       - SUS_Report_files
       - sus_graphics
       - Idle_Report.pdf
       - Idle_Report.tex
       - Idle_Report_files
       - idle_graphics
   rules:
     - if: $CI_PIPELINE_SOURCE == 'merge_request_event'