Spruce

Spruce helps estimate the environmental impact of your cloud usage. By leveraging open source models and data, it enriches the usage reports generated by cloud providers and lets you build reports and visualisations. Having GreenOps and FinOps data in the same place makes it easier to expose your costs and impacts side by side.

Spruce uses Apache Spark to read and write the usage reports (typically in Parquet format) in a scalable way and, thanks to its modular approach, splits the enrichment of the data into configurable stages.

A typical sequence of stages would be (an illustrative calculation follows the list):

  • estimation of embedded emissions from resources used
  • estimation of energy used
  • application of PUE and other overheads
  • application of carbon intensity factors
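To make the sequence concrete, here is a back-of-the-envelope calculation with made-up numbers; the actual figures depend on the models and factors configured for each stage:

energy used by the resources:           10 kWh
after applying a PUE of 1.2:            10 kWh × 1.2 = 12 kWh
at a carbon intensity of 400 g/kWh:     12 kWh × 400 gCO2eq/kWh = 4800 gCO2eq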

Please note that Spruce is currently a prototype and handles only CUR reports from AWS; not all AWS services are covered yet.

One of the benefits of using Apache Spark is that you can use EMR on AWS to enrich the CURs at scale without having to export or expose any of your data.

Prerequisites

You will need CUR reports as input. These are generated via AWS Data Exports and stored on S3 as Parquet files.

You need Docker installed on your machine to run the tests.

If you rely on the default configuration, the BoaviztAPI enrichment module requires a running instance of the BoaviztAPI; by default it connects to localhost:5000. The easiest way of launching one is with Docker:

docker run -p 5000:5000 --name boaviztapi ghcr.io/boavizta/boaviztapi:latest
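To check that the API is up before running Spruce, you can query it from the host. BoaviztAPI is built on FastAPI, so (assuming the default port mapping above) its interactive documentation should be reachable at:

curl http://localhost:5000/docs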

Run Spruce

With Spark installed

You can copy the jar from the latest release or, alternatively, build it from source, which requires Apache Maven and Java 17 or above:

mvn clean package

To run Spruce locally, you need Apache Spark installed and added to the $PATH (and the BoaviztAPI on localhost:5000):

spark-submit --class com.digitalpebble.spruce.SparkJob --driver-memory 8g ./target/spruce-*.jar -i ./curs -o ./output

If you downloaded a released jar, make sure the path matches the location of the file.

The -i parameter specifies the location of the directory containing the CUR reports in Parquet format. The -o parameter specifies the location where the enriched Parquet files are written.

The -c option allows you to specify a JSON configuration file to override the default settings.
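For example, assuming your overrides live in a file named config.json in the working directory:

spark-submit --class com.digitalpebble.spruce.SparkJob --driver-memory 8g ./target/spruce-*.jar -i ./curs -o ./output -c ./config.json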

With Docker

Pull the latest Docker image with

docker pull ghcr.io/digitalpebble/spruce

This retrieves a Docker image containing Apache Spark as well as the Spruce jar.

The command below processes the data locally by mounting the directories containing the CURs and the output as volumes (--network host lets the container reach a BoaviztAPI instance listening on localhost:5000):

docker run -it -v ./curs:/curs -v ./output:/output --rm --name spruce --network host \
ghcr.io/digitalpebble/spruce \
/opt/spark/bin/spark-submit  \
--class com.digitalpebble.spruce.SparkJob \
--driver-memory 4g \
--master 'local[*]' \
/usr/local/lib/spruce.jar \
-i /curs -o /output/enriched

Explore the output

The enriched output can be explored with DuckDB locally or Athena on AWS:

create table enriched_curs as select * from 'output/**/*.parquet';
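The enrichment stages add columns such as energy_usage_kwh and operational_emissions_co2eq_g alongside the original CUR columns; to see exactly what is available, inspect the schema:

describe enriched_curs;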

select line_item_product_code, product_servicecode,
       round(sum(operational_emissions_co2eq_g) / 1000, 2) as co2_usage_kg,
       round(sum(energy_usage_kwh), 2) as energy_usage_kwh
from enriched_curs
where operational_emissions_co2eq_g > 0.01
group by line_item_product_code, product_servicecode
order by co2_usage_kg desc;

should give an output similar to:

line_item_product_code   product_servicecode   co2_usage_kg   energy_usage_kwh
AmazonEC2                AmazonEC2                  5411.49           17501.57
AWSELB                   AWSDataTransfer               1.82               5.67
AmazonS3                 AWSDataTransfer               1.42               4.6
AmazonEC2                AWSDataTransfer               0.7                2.36
AmazonECR                AWSDataTransfer               0.07               0.28

To measure the proportion of the costs for which emissions were calculated:

select
  round(covered * 100 / "total costs", 2) as percentage_costs_covered
from (
  select
    sum(line_item_unblended_cost) as "total costs",
    sum(line_item_unblended_cost) filter (where operational_emissions_co2eq_g is not null) as covered
  from
    enriched_curs
  where
    line_item_line_item_type like '%Usage'
);
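Along the same lines, emissions can be tracked over time. A minimal sketch, assuming the standard CUR column line_item_usage_start_date is present in your export:

select date_trunc('day', line_item_usage_start_date) as day,
       round(sum(operational_emissions_co2eq_g) / 1000, 2) as co2_usage_kg
from enriched_curs
group by day
order by day;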

License

Licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0

Contributing

We welcome contributions to the project, see CONTRIBUTING.md for instructions on how to do so. Contributions are not only about code: by testing the project on your data, talking about it or asking questions, you will be contributing too!
