GSoC/2023/StatusReports/KaranjotSingh

MEASURING ENERGY CONSUMPTION USING REMOTE LAB

The Remote Eco Lab project aims to provide a streamlined process for measuring software energy consumption remotely using a CI/CD pipeline. By automating the measurement process and integrating with the OSCAR tool, developers can make informed decisions to improve code efficiency and work towards software eco-certification.


== Mentors ==


Volker Krause

Benson Muite

Nicolas Fella
== Merge Request ==


The official repository for the project can be found at: https://invent.kde.org/teams/eco/remote-eco-lab


Migration from Remote-Test Repository to Remote-eco-lab:
https://invent.kde.org/teams/eco/remote-eco-lab/-/merge_requests/13


Due to the extensive need for Continuous Integration (CI) testing and CI-related modifications, the majority of the work has been carried out in this test repository: https://invent.kde.org/drquark/kecolabtestci


Most of the changes related to CI testing are detailed in: https://invent.kde.org/drquark/kecolabtestci/-/merge_requests/9
 
To access the most recent artifacts and the final report, please visit: https://invent.kde.org/teams/eco/remote-eco-lab/-/jobs/1157464/artifacts/browse


== Blog Posts ==
Google Summer of Code : KEcolab

== Timeline ==


=== Week 1 and Week 2 ===
During the community bonding phase, we established a communication channel (accessible at ''https://matrix.to/#/#kde-eco-dev:kde.org'') to encourage transparent collaboration and maintain a public record of our discussions. This channel also served as a means to engage additional individuals in the project.
 
Furthermore, we organized regular internal meetings, occurring weekly on Thursdays, to discuss project updates within our team. Additionally, we presented project progress to the wider community during our monthly meetups.
 
In the first and second weeks, with guidance from our mentors, we successfully set up the project infrastructure, which involved tasks such as configuring SSH keys, setting up the Raspberry Pi 1, and preparing the lab test PC. My specific contribution included configuring the X server on the lab test PC to enable the execution of graphical user interface (GUI) applications. During this period, I also started implementing the first stage of the Continuous Integration (CI) pipeline, i.e. the installation of Flatpak.
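
As a rough illustration of what this stage does, a minimal shell sketch follows; the application ID and remote name are examples, not the pipeline's actual configuration:

<syntaxhighlight lang="bash">
#!/bin/bash
# Minimal sketch of a Flatpak installation step (illustrative only).
set -euo pipefail

APP_ID="${1:?usage: $0 <flatpak-app-id>}"   # e.g. org.kde.okular

# Ensure the Flathub remote is configured, then install the application
# non-interactively so the CI job never blocks on a prompt.
flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
flatpak install -y flathub "$APP_ID"
</syntaxhighlight>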
 
I also authored a blog post to showcase our project to the broader KDE community.


=== Week 3 and Week 4 ===
I successfully finished the implementation of the Flatpak installation phase. Over the course of these two weeks, I also began tackling the energy measurement stage of the CI Pipeline. However, I encountered some challenges related to copying files (via scp) from the repository to the lab test PC and configuring the necessary environment variables. With help from the mentors, I was able to resolve these issues; the solution primarily involved executing all the commands within a single SSH session, which simplified the process.
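
A minimal sketch of this single-session approach (hostname, file paths, and timings are illustrative assumptions, not the lab's actual configuration):

<syntaxhighlight lang="bash">
# Copy the needed files to the lab test PC first.
scp scripts/usage_scenario.sh user@lab-test-pc:/tmp/

# Run all commands in one SSH session so that exported variables
# (e.g. DISPLAY for the X server) remain visible to every command.
ssh user@lab-test-pc '
  export DISPLAY=:0                    # GUI applications need the X server
  collectl -i1 -f /tmp/measurements &  # start recording hardware readings
  COLLECTL_PID=$!
  flatpak run org.kde.okular &         # launch the application under test
  APP_PID=$!
  sleep 300                            # let the usage scenario run
  kill "$APP_PID" "$COLLECTL_PID"      # stop the app and the recorder
'
</syntaxhighlight>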
 
As we reached the end of the fourth week, I had achieved the ability to execute and complete the energy measurement stage in the pipeline. This stage allowed us to generate artifacts, including power meter readings and hardware data, which are subsequently passed to the next stage for analysis and report generation.


=== Week 5 and Week 6 ===
 
I began my work on the final phase of the measurement pipeline: the result stage. This phase required me to acquire proficiency in the R programming language, so during my fifth week I dedicated time to learning R and subsequently focused on creating preprocessing scripts tailored for the OSCAR tool (''Open-source Software Consumption Analysis in R'', developed by students from Umwelt-Campus Birkenfeld). OSCAR presents significant challenges, particularly in the area of data formatting; to address these complexities, it became imperative to preprocess the data before feeding it into OSCAR.
 
Within the preprocessing script, my primary tasks revolved around handling user input as script arguments and performing conversions from epoch time, all while ensuring that date and time were correctly formatted for OSCAR.
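
The preprocessing itself is written in R; purely to illustrate the two transformations (reading a user-supplied argument and converting epoch time into a formatted date), the shell equivalent looks roughly like this:

<syntaxhighlight lang="bash">
#!/bin/bash
# Illustrative sketch only; the real preprocessing script is written in R.
set -euo pipefail

INPUT="${1:?usage: $0 <measurement-csv>}"   # filename supplied by the user
head -n 2 "$INPUT"                          # peek at the raw measurement data

# Convert an epoch timestamp into a human-readable date/time,
# e.g. 1692871200 -> "2023-08-24 10:00:00" (in UTC).
epoch=1692871200
date -u -d "@$epoch" '+%Y-%m-%d %H:%M:%S'
</syntaxhighlight>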
 
Additionally, I attempted to install OSCAR and R on our Raspberry Pi. However, this proved impractical due to the substantial resource requirements, including issues related to dependencies and available RAM. Consequently, after talking with mentors, I made the decision to utilize the GitLab CI pipeline for both the installation and execution of OSCAR.


=== Week 7 and Week 8 ===
During these weeks, I made changes to the OSCAR script so that it could be used in the GitLab CI Pipeline. This mainly involved fixing problems with how data was formatted. For example, there was an issue with the way dates were parsed: the format ''%y-%m-%d'' (two-digit year) is different from ''%Y-%m-%d'' (four-digit year), and this mismatch was causing errors in the final results. To fix this, I examined the code line by line and collaborated with Achim, who developed OSCAR, to resolve these issues. These issues mattered because they caused the report to contain NA (not available) values in various columns, making it difficult to render a proper report file.
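
The same distinction exists in GNU date's format strings, which makes the mismatch easy to reproduce outside of R:

<syntaxhighlight lang="bash">
date '+%y-%m-%d'   # two-digit year, e.g. 23-08-29
date '+%Y-%m-%d'   # four-digit year, e.g. 2023-08-29
# Parsing a four-digit date with "%y" (or vice versa) yields a wrong or
# missing value, which in R surfaces as NA in the affected columns.
</syntaxhighlight>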
 
I also tackled a problem where OSCAR couldn't convert the date and time into the POSIX time format, resulting in null timestamps. An important thing to note is that if any value becomes null, it causes all columns and rows to have a value of 0. This problem was similar to the one above related to date formatting.
 
After dealing with rendering and debugging errors, I successfully managed to generate the report by the end of the eighth week. This report was initially created for the already tested tool Okular, but I adjusted the script to generate reports for any application, and I successfully generated one for Kate as well.
 
=== Week 9 and Week 10 ===
After testing the OSCAR R script locally on my computer, I set up the result stage for the GitLab CI pipeline and tested it using a tidyverse image that already had most of the necessary software installed and was lightweight. The script uses an updated OSCAR tool from the official repository. It takes the artifacts from the previous stage and runs the OSCAR tool with these updated artifacts to generate a report.
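
Sketched as shell commands, the stage looks roughly like this (the script names and the exact OSCAR invocation are assumptions, not the repository's actual file names):

<syntaxhighlight lang="bash">
# Runs inside a rocker/tidyverse container image, which already ships R
# and most required packages. Artifacts from the measurement stage are
# available in the working directory via the job's dependencies.
Rscript preprocess.R test1.csv   # reformat the raw readings for OSCAR
Rscript oscar.R test1.csv        # run OSCAR and render the final report
</syntaxhighlight>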
 
I faced a challenge related to automatically including the date in the filename for the report so that it could be saved as an artifact. The collectl output was saving the file as 'test1.csv-dd-mm-yy.tab', so to make it downloadable, I needed to dynamically know and replace the current date and time in the filename. I had to handle this for both the testing lab system and the GitLab CI pipeline.
 
To solve this, I defined variables for both situations. In the testing lab system, I copied the file with a generic name 'test1.csv', and in the GitLab pipeline, I used a variable to export it as an artifact with a name that included the date and time, like 'test1.csv-dd-mm-yy' (this specific format was needed because the preprocessing scripts used the filename to extract the date and time).
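
A minimal sketch of that renaming logic (the date format follows the collectl naming described above; variable names are illustrative):

<syntaxhighlight lang="bash">
# collectl writes e.g. "test1.csv-29-08-23.tab"; compute today's date in
# the same format so the file can be located dynamically.
TODAY="$(date '+%d-%m-%y')"

# On the lab test PC: copy the file under a generic name for local use.
cp "test1.csv-${TODAY}.tab" test1.csv

# In the GitLab pipeline: keep the dated name, because the preprocessing
# scripts extract the date and time from the filename itself.
ARTIFACT="test1.csv-${TODAY}"
</syntaxhighlight>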
 
I also experimented with regex to export the artifact in a more complex format like "test1.csv-joseph-esprimop957-*.tab.gz". Another challenge was forwarding the artifacts from one job to another, which I solved by adding a dependencies field in the next job and by copying the artifacts to the required directory (since changing directories in a job could cause the loss of the default directory where the artifacts are placed).
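
The directory handling can be sketched as follows (the target path is illustrative; GitLab only collects artifacts from paths relative to the project directory, which the ''dependencies'' field then exposes to the downstream job):

<syntaxhighlight lang="bash">
# GitLab collects artifacts relative to $CI_PROJECT_DIR, so after any
# "cd" the output files must be copied back under that directory.
mkdir -p "$CI_PROJECT_DIR/results"
cp test1.csv-*.tab.gz "$CI_PROJECT_DIR/results/"
</syntaxhighlight>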
 
With the guidance of mentors, I successfully resolved these issues, and all three jobs ran smoothly, allowing me to generate an energy measurement report in the end.
 
=== Week 11 and Week 12 ===
 
Right now, I'm moving the code from the test repository to the official Remote Eco Lab repository. I'm also in the process of setting up GitLab custom runners so that these long-running pipelines can be executed, and I am working with the KDE sysadmin team on this.
 
Additionally, I'm working on documentation and a final report explaining the work I've done, and writing a clear guide (README) for people who will test or contribute to the official repository in the future.
 
== Contact ==
 
* <b>Matrix</b>: Karanjot Singh (@drquark:kde.org)
 
* <b>KDE identity</b>: [https://invent.kde.org/drquark drquark]
 
* <b>Email</b>: [email protected]
