Eugo - The Future of Supercomputing¶
Welcome to Eugo. Here you will find all the information on how to use Eugo, its features, and how to get started.
Overview¶
Eugo is building the future of supercomputing. We're on a mission to eliminate millions of wasted hours for data engineers, data scientists, and business analysts by significantly reducing run times. Faster run times mean quicker feedback and shorter development cycles, resulting in cost savings and increased productivity. By streamlining code execution times, teams can focus more on building and less on waiting.
Eugo's HPC clusters provide thousands of optimized libraries right out of the box. Import popular Python libraries including Pandas, OpenAI, and Pydantic, so you can use the tools you know and love. We also support hundreds of C/C++/Rust libraries, like OpenMP. Finally, Eugo supports custom libraries, so you can bring your own code and seamlessly integrate it with Eugo’s optimized runtime.
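As a minimal sketch of what "out of the box" means here, the snippet below imports Pandas, one of the preinstalled libraries, with no extra setup. The data is illustrative only, and no Eugo-specific API is shown.

```python
# Pandas is preinstalled, so a plain import just works -- no pip install,
# no environment setup. The sensor data below is purely illustrative.
import pandas as pd

df = pd.DataFrame({"sensor": ["a", "a", "b"], "value": [1.5, 2.5, 4.0]})

# Group readings by sensor and compute the mean value per sensor.
means = df.groupby("sensor")["value"].mean()
print(means.to_dict())  # {'a': 2.0, 'b': 4.0}
```

The same pattern applies to the other supported libraries: import them as you would locally, and run against the cluster's optimized runtime.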
Who is Eugo For?¶
Eugo is for anyone who wants to run data-intensive or computationally heavy tasks with minimal setup, or who is tired of building things that don’t "make your beer taste better." Whether you’re a data scientist, software engineer, or business analyst, Eugo’s cloud-based HPC platform can help you run your code faster and more efficiently.
The use cases for Eugo are vast and varied, but here are the ones we hear about most from customers:

- Protein folding and novel drug discovery
- Genomic analysis
- Raster and vector geospatial data analysis and processing
- Large Language Models (LLMs)
- Computational fluid dynamics
- Large-scale financial modeling and high-frequency trading
- Fraud analysis
- Particle physics and astrophysics simulations
- Nuclear fission and fusion simulations
- Image processing
- National security and defense intelligence analysis
- Logistics optimization
- General AI/ML model training and data processing
From data processing and analysis to machine learning and AI model training, Eugo accelerates the workloads teams run every day.
The following is just a toy example, but it hopefully makes clear the potential value of instrumenting a more complex application.
Example — Using the EugoIDE
```python
import numpy as np
from numpy.typing import NDArray
from scipy.linalg import norm
from aws_lambda_powertools.logging import Logger

logger = Logger(service="eugo_example")


def process_matrices(
    *,
    matrix1: NDArray[np.float64],
    matrix2: NDArray[np.float64],
    scalar: int | float,
) -> NDArray[np.complex128]:
    """
    Processes two matrices by performing a series of operations:
    addition, scaling, Fast Fourier Transform (FFT), matrix norm calculation,
    and returning the conjugate transpose.

    Parameters
    ----------
    matrix1 : numpy.ndarray
        First matrix of shape (M, N) with float64 elements.
    matrix2 : numpy.ndarray
        Second matrix of shape (M, N) with float64 elements.
    scalar : int or float
        Scalar value to multiply the summed result by.

    Returns
    -------
    numpy.ndarray
        Conjugate transpose of the FFT result (complex128 array).

    Raises
    ------
    ValueError
        If the input matrices do not have the same shape.

    Notes
    -----
    This function combines several steps:

    1. Adds `matrix1` and `matrix2`.
    2. Scales the result by `scalar`.
    3. Computes a 2D FFT on the scaled result.
    4. Computes the Frobenius norm of the FFT result.
    5. Returns the conjugate transpose of the FFT result.

    Example
    -------
    >>> matrix1 = np.array([[1.0, 2.0], [3.0, 4.0]])
    >>> matrix2 = np.array([[5.0, 6.0], [7.0, 8.0]])
    >>> result = process_matrices(matrix1=matrix1, matrix2=matrix2, scalar=2)
    >>> logger.info(f"Result:\\n{result}")
    """
    # Ensure both matrices have the same shape
    if matrix1.shape != matrix2.shape:
        raise ValueError("Matrices must have the same shape.")
    # Add the matrices
    result = matrix1 + matrix2
    # Scale the result by the scalar
    scaled_result = result * scalar
    # Perform a 2D Fast Fourier Transform
    fft_result = np.fft.fft2(scaled_result)
    # Compute the Frobenius norm of the FFT result
    frobenius_norm = norm(fft_result, ord="fro")
    logger.info(f"Frobenius norm of FFT result: {frobenius_norm}")
    # Return the conjugate transpose of the FFT result
    return np.conjugate(fft_result.T)


# Example usage:
matrix1 = np.array([[1.0, 2.0], [3.0, 4.0]])
matrix2 = np.array([[5.0, 6.0], [7.0, 8.0]])
scalar = 2
conjugate_transpose_result = process_matrices(matrix1=matrix1, matrix2=matrix2, scalar=scalar)
logger.info(f"Conjugate Transpose of FFT Result:\n{conjugate_transpose_result}")
```
Learn more
See using the EugoIDE.
Platform¶
Eugo’s cloud-based supercomputing platform squeezes every ounce of performance out of state-of-the-art hardware and software, enabling best-in-class price-performance. By providing a managed runtime that utilizes a mix of distributed parallel compute, optimized software, and cutting-edge hardware (GPU, SIMD, and AI accelerators), our cloud-based superclusters allow users to run heavy computational tasks with minimal setup. By automating resource provisioning, runtime management, and performance optimization, our platform empowers both engineers and business professionals to focus on building models, running analysis, and extracting actionable insights.
Key Features¶
- Optimized for Data- and Compute-Intensive Workloads: Scale clusters dynamically to thousands of vCPUs and terabytes of RAM.
- High-Performance Computing Optimizations: Eugo provides more than a thousand highly optimized, off-the-shelf libraries for data analytics, AI/ML, and business intelligence, so users can keep the tools they know and love. Users can also bring their own libraries and seamlessly integrate them with Eugo’s optimized runtime.
- Cost Efficiency: Uses energy-efficient, price-performant hardware to streamline resource usage and reduce operational expenses.
- Managed Infrastructure: Reduces the burden on Dev and MLOps teams by automating environment creation, freeing engineering resources from infrastructure-related tasks.
- Dynamic Scaling: Allocates resources based on workload demands, ensuring optimal performance without overprovisioning.
- Use Your Existing Frameworks: Eugo equips users with multiple distributed computing frameworks, such as Ray, Spark, or PyTorch, within a single cluster, ensuring flexibility and optimal performance for diverse workloads.
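To illustrate the parallel-map pattern that frameworks like Ray and Spark scale out across a cluster, here is a local sketch using the Python standard library's `concurrent.futures` as a stand-in, so the snippet runs anywhere. Eugo-specific submission APIs are not shown; `heavy_task` is a hypothetical placeholder for a compute-intensive kernel.

```python
# A local stand-in for the distributed parallel-map pattern: frameworks like
# Ray or Spark apply the same shape of computation across a whole cluster.
from concurrent.futures import ProcessPoolExecutor


def heavy_task(n: int) -> int:
    # Hypothetical placeholder for a compute-intensive kernel:
    # sums the squares 0^2 + 1^2 + ... + (n-1)^2.
    return sum(i * i for i in range(n))


if __name__ == "__main__":
    # Fan the work out across local processes; on a cluster, a framework
    # such as Ray would fan it out across many machines instead.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(heavy_task, [10, 100, 1000]))
    print(results)
```

The design point is that the per-item function stays the same whether it runs on one machine or many; only the executor changes.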
Sign Up¶
See Sign Up for more details on getting started.