
Frequently Asked Questions

Do I need a supercomputer?

If your code takes too long to run or you're hitting resource limits, you can benefit from supercomputing resources.

Common signs you need more compute power:

  • Execution times are longer than acceptable
  • Out of memory (OOM) errors
  • Insufficient disk space, CPU, or GPU resources
  • Waiting prevents you from iterating quickly

Data pipelines, ML model training, inference workloads, and heavy computational tasks all benefit from distributed compute. With Eugo, you get this power without managing infrastructure.

How is Eugo fast?

Eugo combines distributed parallel computing, optimized software, and specialized hardware to maximize performance.

Your code runs on GPUs, SIMD-enabled CPUs, and AI accelerators. Eugo automatically parallelizes operations and distributes work across nodes. Libraries are compiled from source and optimized for the underlying hardware.
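To make this concrete, here's a minimal sketch of distributing work with Ray, one of the frameworks Eugo supports (see below). It assumes a standard Ray setup in your workspace; the function and values are illustrative only:

import ray

ray.init()  # assumption: connects to the cluster backing your workspace

@ray.remote
def square(x: int) -> int:
    return x * x

# Ray schedules these tasks in parallel, potentially across many nodes.
results = ray.get([square.remote(i) for i in range(1000)])
print(sum(results))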

How does Eugo compare to Google Colab, AWS SageMaker, or Azure ML?

Eugo provides more compute power with less infrastructure management.

Scale — Eugo handles workloads that other tools can't, like nuclear fusion simulations or processing petabytes of data. You're not limited by instance sizes.

Zero configuration — You don't pick instance types, configure clusters, or manage scaling. Eugo allocates resources automatically.

No memory limits — Eugo scales memory based on your workload. No OOM errors from fixed instance sizes.

No dependency management — Libraries are pre-installed and optimized. You don't maintain environments or resolve version conflicts.

You write code. Eugo handles everything else.

Does Eugo work reliably?

Yes. Eugo is production-ready and runs real workloads at scale.

We built Eugo over several years and use it internally to develop Eugo itself. The platform handles everything from quick prototypes to large-scale production workloads.

Will Eugo work for my use case?

Eugo supports most data processing, scientific computing, and ML workloads.

The platform includes hundreds of pre-installed libraries in C, C++, Rust, and Python. This covers common data science, ML, and computational frameworks.

If you need a library that's not included, you can bring your own dependencies, or contact us and we can add it to the platform (fees may apply).

Who should use Eugo?

Anyone working with large datasets or computationally intensive workloads.

If you spend time waiting for code to run, managing infrastructure, or scaling clusters, Eugo eliminates those bottlenecks. Data scientists, ML engineers, researchers, and analysts all benefit from instant access to distributed compute.

Is Eugo just a wrapper around Ray or Spark?

No. Eugo uses Ray, Spark, OpenMP, and CUDA as components of the platform, but provides much more.

You get automatic resource allocation, optimized library compilation, infrastructure management, and intelligent workload distribution. You can also use multiple frameworks (Ray, Spark, PyTorch) together in the same workspace, which isn't possible with standard installations.
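As an illustration of mixing frameworks, here's a hedged sketch that runs PyTorch inference inside Ray tasks. It assumes both libraries are importable in the same workspace as described above; the model and data are placeholders:

import ray
import torch

ray.init()  # assumption: connects to the workspace's cluster

@ray.remote
def infer(batch):
    # PyTorch inside a Ray task: two frameworks, one workspace.
    model = torch.nn.Linear(4, 1)  # placeholder model
    with torch.no_grad():
        return model(torch.tensor(batch, dtype=torch.float32)).tolist()

batches = [[[1.0, 2.0, 3.0, 4.0]], [[5.0, 6.0, 7.0, 8.0]]]
print(ray.get([infer.remote(b) for b in batches]))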

Do you offer professional services?

Yes. We provide services ranging from initial setup to custom development.

Basic setup assistance is free. We charge for complex services like:

  • Custom code development
  • Migrating existing workloads to Eugo
  • Performance optimization consulting
  • Custom library integration

Contact us to discuss your needs.

Does Eugo work on cloud providers besides AWS?

Eugo currently runs on AWS. Support for other cloud providers requires custom integration.

We're working on broader cloud provider support. If you need Eugo on GCP, Azure, or another provider, contact us to discuss options.

Is Eugo more energy efficient than traditional supercomputers?

Yes. Eugo uses less energy than traditional on-premises supercomputers.

Serverless architecture — Clusters only exist when you're using them. You're not running idle hardware.

Arm-based infrastructure — Eugo runs entirely on Arm (aarch64) architecture, which provides better performance per watt than x86.

You only pay for compute you actually use, and that compute runs on efficient hardware.

Why does my notebook take time to start?

Cluster creation is fast (under 5 seconds), but the cluster needs time to become healthy before accepting workloads.

The startup time you see is the cluster initializing — allocating resources, starting services, and running health checks. This typically takes 1-2 minutes and varies based on cluster size and underlying infrastructure.

Once your cluster is healthy, subsequent notebook cells execute immediately.

How do I download my notebooks?

You can download notebooks from the Eugo platform or from within the EugoIDE.

From the platform:

  1. Navigate to the Workspaces tab
  2. Click Download Notebooks next to Open EugoIDE

From the IDE:

  1. Click File in the top-left menu
  2. Select Download as
  3. Choose your preferred format

Both methods give you a local copy of your notebooks.

How does Eugo handle network bandwidth compared to traditional supercomputers?

On-premises supercomputers have low latency because all nodes are colocated. Eugo clusters are distributed across data centers, but we mitigate network overhead through several optimizations:

Placement groups — You can run Ray clusters within a placement group, keeping all nodes in the same physical rack for minimum latency.

GPUDirect and NCCL — NVIDIA GPUs communicate directly with each other across machines, bypassing CPU overhead. This works like EFA (Elastic Fabric Adapter) for GPU-to-GPU communication.

Jumbo frames — We enable jumbo frames by default when possible, increasing payload size per packet from 1500 to 9000 bytes. This reduces packet count and network protocol overhead.

Dedicated network adapters — Each node gets a non-virtualized ENA (Elastic Network Adapter) for higher throughput and lower latency.

EFA support — We're adding Elastic Fabric Adapter (EFA) and libfabric support for MPI communication. EFA bypasses the CPU and Linux kernel for inter-node communication, reducing round trips.

For most workloads, these optimizations make network performance comparable to colocated infrastructure.
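For a sense of what the GPUDirect/NCCL path looks like from user code, here's a minimal torch.distributed sketch. It assumes a standard PyTorch setup launched with torchrun (launcher details on Eugo may differ); NCCL moves the tensors directly between GPUs:

import torch
import torch.distributed as dist

def main():
    # NCCL backend: GPU-to-GPU communication, bypassing the CPU path.
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank % torch.cuda.device_count())
    t = torch.ones(1, device="cuda") * rank
    dist.all_reduce(t, op=dist.ReduceOp.SUM)  # all ranks sum their tensors
    print(f"rank {rank}: sum = {t.item()}")

if __name__ == "__main__":
    main()

Launched with, for example, torchrun --nproc_per_node=2 script.py.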

What is EPM (Eugo Package Manager)?

EPM is Eugo's package manager for installing custom HPC-optimized libraries.

HPC libraries need to be built from source for optimal performance. EPM handles this build process and dependency management. It works like apt-get, dnf, or pip, but for optimized HPC packages.

Use EPM to install C/C++ packages (like libtool or onnxruntime), Rust packages (like hyper), or Python packages (like thriftpy2) that aren't in the default runtime.

Installing a Package with EPM

Here's how to install a custom package using aws-cdk-lib as an example:

1. Create the dependencies directory

Create a dependencies folder in your workspace root.

2. Create a library directory

Inside dependencies, create a directory for your library:

dependencies/aws_cdk_lib/

Naming rules:

  • Replace dashes (-) with underscores (_): aws-cdk-lib becomes aws_cdk_lib
  • Keep dots (.) as-is: jaraco.classes stays jaraco.classes
  • Match the name exactly as it appears in the meta.json file
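A tiny helper capturing these naming rules (illustrative only; the function name is ours, not part of EPM):

def epm_dir_name(pypi_name: str) -> str:
    # Dashes become underscores; dots are kept as-is.
    return pypi_name.replace("-", "_")

assert epm_dir_name("aws-cdk-lib") == "aws_cdk_lib"
assert epm_dir_name("jaraco.classes") == "jaraco.classes"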

3. Create meta.json

Create meta.json inside your library directory with the package configuration:

{
  "kind": "python",
  "name": "aws_cdk_lib",
  "version": {
    "kind": "pypi",
    "should_auto_update": true,
    "value": "2.185.0"
  },
  "dependencies": {
    "runtime": {
      "standalone": [
        "python/aws_cdk.asset_awscli_v1",
        "python/aws_cdk.asset_kubectl_v20",
        "python/aws_cdk.asset_node_proxy_agent_v6",
        "python/aws_cdk.cloud_assembly_schema",
        "python/cattrs",
        "python/constructs",
        "python/jsii",
        "python/publication",
        "python/typeguard"
      ]
    }
  },
  "#comments": {
    "requirements": "Requirements.txt",
    "description": "This file is used to manage dependencies for aws_cdk_lib."
  }
}

Your directory structure should look like this:

dependencies/
└── aws_cdk_lib/
    └── meta.json
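Steps 1-3 can also be scripted. Here's an illustrative Python sketch (the scaffold helper is ours, not part of EPM, and the schema mirrors the example above rather than a full specification):

import json
from pathlib import Path

def scaffold(name: str, version: str, runtime_deps: list[str]) -> None:
    dir_name = name.replace("-", "_")           # naming rule from step 2
    lib_dir = Path("dependencies") / dir_name   # steps 1 and 2
    lib_dir.mkdir(parents=True, exist_ok=True)
    meta = {                                    # step 3, mirroring the example
        "kind": "python",
        "name": dir_name,
        "version": {"kind": "pypi", "should_auto_update": True, "value": version},
        "dependencies": {"runtime": {"standalone": runtime_deps}},
    }
    (lib_dir / "meta.json").write_text(json.dumps(meta, indent=2))

scaffold("aws-cdk-lib", "2.185.0", ["python/cattrs", "python/constructs"])

Repeating the same scaffold for each listed dependency covers step 4 below.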

4. Add recursive dependencies

For each dependency listed in your meta.json, repeat steps 2 and 3.

Create a directory for each dependency (like aws_cdk.asset_awscli_v1) with its own meta.json file. EPM resolves the full dependency tree when you start your cluster.

Note: We're adding more documentation for advanced use cases (C/C++ libraries, Rust packages, etc.) soon.