EugoIDE

EugoIDE is your browser-based development environment for running Python code on Eugo's distributed infrastructure.

Write code in a familiar notebook interface. Eugo automatically handles parallelization, resource allocation, and execution across its compute cluster. You don't configure infrastructure or manage dependencies — just write Python and run it.

Example - Using EugoIDE
from pathlib import Path
from uuid import uuid4

from aws_lambda_powertools.logging import Logger  # (1)
from fsspec import filesystem

svc_name = "eugo_example"
logger = Logger(svc_name)


class S3Data:
    def __init__(
        self, *, source_bucket: str, source_key_prefix: str, destination_bucket: str
    ) -> None:
        self.source_bucket = source_bucket  # (2)
        self.source_key_prefix = source_key_prefix
        self.destination_bucket = destination_bucket
        self.s3fs = filesystem("s3")  # (3)
        self.source_path_remote_prefix = (
            f"{self.source_bucket}/{self.source_key_prefix}"
        )
        self.source_local_path = f"/tmp/eugo/{self.source_path_remote_prefix}"

    def get_data(self):
        num_bytes = 1024

        source_size_in_gigabytes = round(  # (4)
            float(self.s3fs.du(self.source_path_remote_prefix))
            / num_bytes
            / num_bytes
            / num_bytes,
            2,
        )
        logger.info(
            f"Using a folder w/ multiple datasets at '{self.source_path_remote_prefix}' w/ the size of {source_size_in_gigabytes}GB."
        )

        Path(self.source_local_path).parent.mkdir(parents=True, exist_ok=True)
        self.s3fs.get(  # (5)
            self.source_path_remote_prefix, self.source_local_path, recursive=True
        )

    def put_data(self):
        destination_key_prefix = f"interactive_session_tests/{uuid4()}"
        logger.info(destination_key_prefix)
        self.s3fs.put(  # (6)
            self.source_local_path,
            f"{self.destination_bucket}/{destination_key_prefix}",
            recursive=True,
        )


s3_data = S3Data(
    source_bucket="example_read_remote_bucket",
    source_key_prefix="example_key_prefix",
    destination_bucket="example_read_remote_bucket",
)

s3_data.get_data()
s3_data.put_data()
  1. Import any of the thousands of libraries available in EugoIDE. In this case, we use aws_lambda_powertools.logging.Logger to log messages.
  2. Define the source bucket.
  3. Initialize the s3fs object to interact with an S3 bucket.
  4. Calculate the size of the source data and log it.
  5. Get the data from the source bucket and save it locally.
  6. Write the data to the destination bucket.

Code Examples

EugoIDE includes hundreds of demo notebooks to help you get started. Here are some common patterns:

Data Processing with Pandas and AWS Wrangler

Read and write data from S3 using AWS Wrangler and pandas:

import awswrangler as wr
import boto3
import pandas as pd
from datetime import date

# All AWS Wrangler calls below use this default boto3 session
boto3.setup_default_session(region_name="us-east-2")

bucket = "eugo-example-data"
path = f"s3://{bucket}/test/"

df = pd.DataFrame({
    "id": [1, 2],
    "value": ["foo", "boo"],
    "date": [date(2020, 1, 1), date(2020, 1, 2)],
})

# Write the DataFrame to S3 as a Parquet dataset, replacing any existing data
wr.s3.to_parquet(
    df=df,
    path=path,
    dataset=True,
    mode="overwrite",
)

# Read the dataset back into a DataFrame
data = wr.s3.read_parquet(path, dataset=True)
print(data)

Data Validation with Pydantic

Validate data structures using Pydantic's type system:

from datetime import datetime
from pydantic import ValidationError, BaseModel, PositiveInt

class User(BaseModel):
    id: int
    name: str = 'John Doe'
    signup_ts: datetime | None
    tastes: dict[str, PositiveInt]

external_data = {'id': 'not an int', 'tastes': {}}

try:
    User(**external_data)
except ValidationError as error:
    print(error.errors())

This raises a ValidationError with detailed information about what failed:

[
    {
        "type": "int_parsing",
        "loc": ["id"],
        "msg": "Input should be a valid integer, unable to parse string as an integer",
        "input": "not an int",
        "url": "https://errors.pydantic.dev/2/v/int_parsing"
    },
    {
        "type": "missing",
        "loc": ["signup_ts"],
        "msg": "Field required",
        "input": {"id": "not an int", "tastes": {}},
        "url": "https://errors.pydantic.dev/2/v/missing"
    }
]

Python Runtime

EugoIDE runs Python 3.12.6. We upgrade to newer Python versions once all pre-installed libraries are compatible.

All workspaces use the same Python version to ensure consistency across your organization.
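
If a notebook depends on interpreter behavior from a specific release, you can confirm the runtime version from any cell. A minimal sketch using only the standard library:

import sys

# Print the interpreter version the workspace is running, e.g. "3.12.6"
print(".".join(map(str, sys.version_info[:3])))

# Guard code that relies on Python 3.12+ behavior
assert sys.version_info >= (3, 12), "This notebook expects Python 3.12 or newer."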

IDE Features

Code completion — Context-aware autocomplete for Python code, libraries, and your own functions.

Syntax highlighting — Clear visual formatting for Python syntax to improve readability.

Type hints — Inline type information and validation as you write code. Catch type errors before running your notebook.

Amazon Q integration — Optional AI assistant integration for code suggestions and explanations. Enable this from workspace settings.
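
The type-hint checks described above flag mismatches before a cell ever runs. A minimal illustrative snippet (the gigabytes function is made up for this example):

def gigabytes(num_bytes: int) -> float:
    # Inline type information shows num_bytes as int and the return type as float
    return round(num_bytes / 1024 ** 3, 2)

gigabytes("1048576")  # Flagged in the editor: a str is not assignable to int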

Eugo Umbrella

Eugo Umbrella is a hyper-optimized computation engine designed to handle intense data processing and optimization tasks. It is the compute engine that executes your code across Eugo's distributed infrastructure.

When you run a notebook cell, Umbrella analyzes your code and distributes it across available compute resources. It handles parallelization, data movement, and resource allocation automatically.

You don't interact with Umbrella directly — it works behind the scenes to optimize execution. Write standard Python code, and Umbrella determines the best way to run it.
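
As an illustration, the cell below is ordinary Python with no Umbrella-specific API; the file paths and column names are made up for the example. When it runs in EugoIDE, Umbrella decides how to schedule the work across the cluster.

import pandas as pd

def summarize(path: str) -> pd.DataFrame:
    # Plain pandas: read one Parquet file and aggregate it
    df = pd.read_parquet(path)
    return df.groupby("id", as_index=False).agg(total=("value", "sum"))

# A normal Python loop over inputs. Nothing here references Umbrella;
# parallelization and resource allocation happen behind the scenes.
paths = [
    "s3://eugo-example-data/part-0.parquet",
    "s3://eugo-example-data/part-1.parquet",
]
results = [summarize(p) for p in paths]
combined = pd.concat(results, ignore_index=True)
print(combined)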

Getting Started

Access EugoIDE from any deployed workspace:

  1. Navigate to Workspaces
  2. Select your workspace
  3. Click Open IDE

The IDE opens in a new browser tab with your workspace environment ready.

Next Steps