
Polyglot SeBS #235

@mcopik

Description

In SeBS, we initially focused on Python and Node.js, and even there not every benchmark has a Node.js implementation. Support for C++ (#145) and Java (#223) is pending. Beyond that, we want to have at least microbenchmarks in many other languages:

  • Finish Java support (Java benchmarks support #223)
  • Missing Node.js benchmarks
  • Add a Node.js CRUD benchmark with NoSQL storage.
  • Missing C++ benchmarks (image processing, image recognition) + C++ benchmarks #94
  • .NET
  • Rust
  • Go
  • New runtimes for Java, e.g., GraalVM
  • New runtimes for Node.js, like Bun and LLRT
  • New runtimes for Python, like PyPy or CPython 3.13 with the experimental JIT flag (needs manual build)

We should be able to benchmark the impact of choosing alternative runtimes.
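As a sketch of what one of these microbenchmarks could look like, here is a minimal Go handler for AWS Lambda using the `aws-lambda-go` library. The request/response shapes and the square-sum workload are illustrative assumptions, not an existing SeBS benchmark; the point is that the same tiny workload can be ported across runtimes and timed from inside the function.

```go
package main

import (
	"context"
	"time"

	"github.com/aws/aws-lambda-go/lambda"
)

// Request/Response are illustrative; a real SeBS benchmark defines its own I/O schema.
type Request struct {
	Size int `json:"size"`
}

type Response struct {
	ComputeTimeUs int64 `json:"compute_time_us"`
	Result        int   `json:"result"`
}

// handler runs a trivial CPU loop and reports its own timing, so the same
// workload can be compared across runtimes and languages.
func handler(ctx context.Context, req Request) (Response, error) {
	start := time.Now()
	sum := 0
	for i := 0; i < req.Size; i++ {
		sum += i * i
	}
	return Response{
		ComputeTimeUs: time.Since(start).Microseconds(),
		Result:        sum,
	}, nil
}

func main() {
	lambda.Start(handler)
}
```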

How to support new and custom runtimes?

  • AWS: supported by default.
  • Azure: we can implement custom handlers via HTTP (see the sketch after this list).
  • GCP: Google Cloud Functions 2nd gen is built on Cloud Run, so we could run custom images (Docker only) on Cloud Run and then create the triggers ourselves.
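
To illustrate the Azure option: a custom handler is just an HTTP server that the Functions host launches and forwards invocations to, with the port passed via `FUNCTIONS_CUSTOMHANDLER_PORT`. The sketch below is a minimal Go version; the function name `benchmark` is hypothetical, and it assumes `enableForwardingHttpRequest` is enabled in `host.json` so HTTP triggers arrive as plain HTTP requests.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"os"
)

func main() {
	// The Functions host tells the custom handler which port to listen on.
	port, ok := os.LookupEnv("FUNCTIONS_CUSTOMHANDLER_PORT")
	if !ok {
		port = "8080" // fallback for local testing
	}

	// With enableForwardingHttpRequest, an HTTP-triggered function named
	// "benchmark" (hypothetical) is forwarded to /api/benchmark.
	http.HandleFunc("/api/benchmark", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
	})

	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```

The same binary could then be packaged like any other Functions app, which is what would let us plug arbitrary runtimes (Rust, Go, .NET, custom Python builds) into Azure without native language support.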
