Free & Open Source (MIT)

GPU Computing. ML Notebooks. One IDE.

A desktop IDE with a multi-cloud GPU marketplace, integrated JupyterHub, HPC cluster management, and a full Terminal Computer. Stop switching between five tools to run one experiment.

MIT License
macOS, Windows, Linux
9 Cloud GPU Providers
JupyterHub Built In
Forge workflow editor in browser showing EE-Design-MAPO-Unified workflow with Tournament Judge, multi-agent scoring, and GraphRAG Store integration
GPU Compute

Compare GPU Pricing Across 9 Cloud Providers

Provision GPU compute from Hyperbolic, Google Cloud, Azure, Lambda Labs, CoreWeave, and more. See per-GPU pricing, configure clusters, and scale — all from one interface.

One Dashboard. Nine Providers.

Per-GPU pricing from Hyperbolic, Google Cloud, Thunder Compute, Azure, Hyperstack, Lambda Labs, CoreWeave, DataCrunch, and AWS. Select a provider, configure your cluster, and launch — without leaving the IDE.

Hyperbolic · Google Cloud · Azure · Lambda Labs · CoreWeave · AWS · DataCrunch · Hyperstack · Thunder Compute
Compute Options showing 9 cloud GPU providers — Hyperbolic, Google Cloud, Thunder Compute, Azure, Hyperstack, Lambda Labs, CoreWeave, DataCrunch, AWS — with per-GPU pricing
HPC

HPC Cluster Orchestration

Manage Slurm clusters, submit jobs, monitor workloads. Connect AWS ParallelCluster, on-premises clusters, or your local GPU — one control plane for all compute.
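Jobs submitted through the dashboard are standard Slurm jobs underneath, so anything you can express in a batch script works. A minimal GPU job script, as a sketch — the partition name and training entry point are placeholders for your own site:

```shell
#!/bin/bash
#SBATCH --job-name=train-demo      # job name shown in the queue
#SBATCH --partition=gpu            # placeholder: your cluster's GPU partition
#SBATCH --gres=gpu:1               # request one GPU
#SBATCH --cpus-per-task=4
#SBATCH --mem=32G
#SBATCH --time=02:00:00            # wall-clock limit
#SBATCH --output=%x-%j.out         # stdout log: <job-name>-<job-id>.out

srun python train.py               # placeholder training entry point
```

Submit with `sbatch train.sh` and check its state with `squeue -u $USER`; the same job then shows up in the dashboard's job queue.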

HPC Dashboard showing Connected Clusters, Running Jobs, Pending Jobs metrics with Local Compute connection, Manage Clusters, View Jobs, and recent job queue

Cluster Dashboard

Connected Clusters, Running Jobs, Pending Jobs — live metrics from your HPC infrastructure. Local Compute connection status, cluster management, job queue with per-job status tracking.

Local GPU in Three Commands

Install nexus-cli, start the compute agent, auto-detect hardware. Configure memory allocation limits and idle timeout preferences. Your local GPU joins the compute fabric.
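In shell form the three steps read roughly like this. The subcommand names below are hypothetical placeholders for illustration only — check `nexus-cli --help` for the documented commands:

```shell
# NOTE: subcommand names are hypothetical placeholders, not the documented CLI.
nexus-cli install          # 1. install and set up the compute agent
nexus-cli agent start      # 2. start the agent; it auto-detects your GPU hardware
nexus-cli agent status     # 3. confirm the machine has joined the compute fabric
```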

Local Compute settings page with nexus-cli Quick Setup instructions, Compute Preferences for memory allocation and idle timeout configuration
Research Computing

JupyterHub with GPU Acceleration

GPU-accelerated ML notebooks integrated with the Nexus terminal, file browser, and HPC compute fabric. Persistent sessions, real-time output, full terminal alongside your notebooks.
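A useful first cell in any GPU notebook is a sanity check that the session actually sees a GPU. This is a generic sketch using only the standard library and `nvidia-smi` (present on NVIDIA-backed sessions), not a Nexus API; it degrades gracefully on CPU-only machines:

```python
import subprocess

def list_gpus():
    """Return GPU names visible to this session, or [] when none are available."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, timeout=10,
        )
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return []  # no NVIDIA driver or tool on this machine
    if out.returncode != 0:
        return []
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]

gpus = list_gpus()
print(f"{len(gpus)} GPU(s) visible:", gpus or "none -- running on CPU")
```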

GPU Notebooks + Terminal Side-by-Side

JupyterHub running GPU-accelerated notebooks alongside the Nexus Terminal Computer. GitHub file browser on the left, GPU compute output in the terminal, notebook in the center.

JupyterHub with EE Design MAPO GPU Accelerated Demo notebook alongside Nexus Terminal Computer with file browser and GPU compute output
JupyterHub notebook showing EE Design MAPO GPU Accelerated Demo with Hyperbolic Dedicated GPU, DeepSeek-R1 vLLM integration, Red Queen Co-Evolution, and MAP-Elites algorithms

Research-Grade GPU Compute

Hyperbolic Dedicated GPU, DeepSeek-R1 via vLLM, MAPO Tournament optimization with specialized experts, MAP-Elites Quality-Diversity Archive — production ML research workflows in JupyterHub.

Terminal Computer

Full Terminal with GitHub Integration

Integrated terminal with GitHub file browser, branch switching, and live workflow execution. Browse configs, notebooks, plugins, projects, and repos from the sidebar.

Forge IDE workflow editor alongside Nexus Terminal Computer with GitHub file browser, branch switching, and Python workflow testing

Terminal + GitHub File Browser

n8n workflow editor alongside the Nexus Terminal Computer with GitHub file browser — config, notebooks, plugins, projects, repos, and workflows all in the sidebar. Switch branches, execute code, manage files.

Live JSON Inspection

Real-time JSON output from workflow node execution directly in the terminal. Debug data pipelines, inspect API responses, and validate outputs without switching tools.
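The same kind of inspection works from any shell with Python's standard library — for example, parsing a node's raw output, pretty-printing it, and validating a field. The payload below is a made-up example shaped after the Tournament Judge node, not a real Forge node schema:

```python
import json

# Hypothetical raw output from a workflow node (illustrative only)
raw = '{"node":"Tournament Judge","items":[{"score":0.91},{"score":0.84}]}'

data = json.loads(raw)                              # parse the node's raw output
print(json.dumps(data, indent=2, sort_keys=True))   # human-readable view

# pull a specific field out for validation
scores = [item["score"] for item in data["items"]]
assert all(0.0 <= s <= 1.0 for s in scores), "scores out of range"
```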

Forge IDE workflow editor alongside Nexus Terminal Computer displaying JSON node output from workflow execution
Nexus Dashboard Settings page showing AI Connections, AI Language, AI Provider routing, Map Provider, Local Compute GPU configuration, IDE Plugins, and BYOK key management

Configure Your Entire Stack

AI Connections (Claude, Cursor, Windsurf), provider routing via OpenRouter, local GPU compute for ML training, IDE plugin management, and bring-your-own-key support — all from one settings page.

Explore Every Capability

Real screenshots from production. Click any image to expand.

Compute Options: Multi-cloud GPU marketplace

HPC Dashboard: Cluster orchestration and job management

HPC Cluster Setup: AWS ParallelCluster connection with SSH, Slurm, and Singularity container runtime configuration

Local Compute: Connect your own GPU with three commands

Start Building with GPU Compute

Download Nexus Forge — free, open source, MIT licensed. GPU marketplace, JupyterHub, HPC clusters, and terminal included.