Policy-based NVMe-oF storage with validated sub-millisecond latency. Choose from 5 performance tiers optimized for your workload. Deploy in minutes with Terraform.
Choose the right tier for your workload. All tiers deliver sub-millisecond latency with Active-Active HA.
Sub-millisecond latency using local NVMe from N2, N2D, and Z3 instances. Pool local SSDs for shared, high-performance block storage.
Both nodes serve I/O simultaneously. Sub-second failover with no downtime. Synchronous replication across zones.
No expensive SAN infrastructure required. Pool local NVMe from GCE instances. Starts at $10K vs. $200K+ for traditional SAN solutions.
Runs on N2, N2D, and Z3 instances with local NVMe SSDs. Pool local SSD storage for persistent shared volumes.
Leverages GCP's high-bandwidth networking (50-100 Gbps) for maximum NVMe-oF throughput with minimal latency overhead.
Move databases from Cloud SQL to GCE for 10x better performance. Compatible with PostgreSQL, MySQL, and SQL Server.
High-performance persistent volumes for stateful applications. CSI driver for dynamic NVMe-oF volume provisioning.
Fast storage for AI/ML training workloads. Low-latency data loading for GPU and TPU instances.
Full integration with Cloud Monitoring, Logging, and alerting for performance tracking and troubleshooting.
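As an example of what that integration enables, the short sketch below pulls recent latency samples from the Cloud Monitoring API. The project ID and the metric type are illustrative placeholders rather than MayaScale's actual metric schema, and a DOUBLE gauge metric is assumed.

```python
# Minimal sketch: read recent time-series samples from Cloud Monitoring.
# PROJECT_ID and METRIC_TYPE are placeholders; substitute whatever metrics
# your MayaScale deployment actually exports.
import time
from google.cloud import monitoring_v3

PROJECT_ID = "my-gcp-project"                                    # placeholder
METRIC_TYPE = "custom.googleapis.com/mayascale/read_latency_us"  # assumed name

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    end_time={"seconds": now},
    start_time={"seconds": now - 3600},  # last hour
)

series_list = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": f'metric.type = "{METRIC_TYPE}"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in series_list:
    for point in series.points:
        # Assumes a DOUBLE gauge metric; adjust for INT64 metrics as needed.
        print(point.interval.end_time, point.value.double_value)
```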
MayaScale transforms local SSD storage into persistent shared volumes using NVMe-over-Fabrics:
Dual NIC architecture with local NVMe SSDs for sub-millisecond latency. Both nodes actively serve I/O with synchronous RAID-1 replication across zones.
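To make the data path concrete, here is a minimal sketch of how a client VM could attach an exported namespace from both nodes using stock nvme-cli over NVMe/TCP. The addresses, port, and subsystem NQN are placeholders, and MayaScale's own tooling may wrap or automate these steps.

```python
# Minimal sketch: attach a shared namespace from both MayaScale nodes using
# stock nvme-cli over NVMe/TCP. Addresses, port, and NQN are placeholders.
import subprocess

NODE_ADDRS = ["10.0.1.10", "10.0.2.10"]               # example: one node per zone
SUBSYSTEM_NQN = "nqn.2025-10.example.mayascale:vol0"  # hypothetical subsystem NQN

def connect(addr: str) -> None:
    """Open one NVMe/TCP path so I/O can be served by either node."""
    subprocess.run(
        ["nvme", "connect",
         "-t", "tcp",           # transport
         "-a", addr,            # target address
         "-s", "4420",          # standard NVMe-oF service port
         "-n", SUBSYSTEM_NQN],  # subsystem to attach
        check=True,
    )

for addr in NODE_ADDRS:
    connect(addr)

# The shared volume now appears as a /dev/nvmeXnY block device.
subprocess.run(["nvme", "list"], check=True)
```

With both paths attached, native Linux NVMe multipathing can typically keep I/O flowing if one path drops, which is the behavior the Active-Active failover description above refers to.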
Move from Cloud SQL to GCE for 10x better performance and cost savings:
Run Cassandra, MongoDB, and ScyllaDB with ultra-low latency, plus BigQuery staging:
High-performance storage for Vertex AI training and GPU workloads:
Persistent volumes for Kubernetes StatefulSets with high performance:
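For instance, here is a minimal sketch of dynamic provisioning through a CSI driver using the official Kubernetes Python client; the StorageClass name mayascale-nvmeof and the volume size are assumed placeholders, not the driver's documented defaults.

```python
# Minimal sketch: request a dynamically provisioned volume through an assumed
# "mayascale-nvmeof" StorageClass via the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a pod
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="postgres-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="mayascale-nvmeof",  # placeholder class name
        resources=client.V1ResourceRequirements(
            requests={"storage": "500Gi"},       # placeholder size
        ),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```

A StatefulSet's volumeClaimTemplates would typically request the same class per replica instead of creating claims by hand.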
One-click deployment with Deployment Manager.
Terraform-based Infrastructure-as-Code for automation.
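A minimal automation sketch, assuming a local copy of the Terraform configuration and a hypothetical tier variable; the actual module layout and variable names may differ.

```python
# Minimal sketch: drive a Terraform deployment from Python. The working
# directory and the "tier" variable are placeholders for whatever the
# MayaScale Terraform configuration actually exposes.
import subprocess

WORKDIR = "./mayascale-terraform"  # placeholder: local copy of the config

def terraform(*args: str) -> None:
    """Run a Terraform subcommand against the configuration directory."""
    subprocess.run(["terraform", f"-chdir={WORKDIR}", *args], check=True)

terraform("init")
terraform("apply", "-auto-approve", "-var", "tier=high")  # hypothetical variable
```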
All tiers tested and validated on GCP with SNIA-compliant FIO benchmarks (October 2025).
| Tier | Instance | vCPUs | Local SSDs | Capacity | Read IOPS | Write IOPS | Latency | Use Case |
|---|---|---|---|---|---|---|---|---|
| Ultra | n2-highcpu-64 | 64 | 16x 375GB | 6.0 TB | 2.3M | 866K | 173µs | Maximum performance databases, ML training, analytics |
| High | n2-highcpu-32 | 32 | 8x 375GB | 3.0 TB | 900K | 350K | 830µs | High-performance databases, enterprise applications |
| Medium | n2-highcpu-16 | 16 | 4x 375GB | 1.5 TB | 700K | 200K | 822µs | Production databases, analytics workloads |
| Standard | n2-highcpu-8 | 8 | 2x 375GB | 750 GB | 380K | 130K | 938µs | General purpose databases, applications |
| Basic | n2-highcpu-4 | 4 | 1x 375GB | 375 GB | 100K | 75K | 630µs | Development, testing, budget deployments |
Note: All latency figures are sub-millisecond (<1 ms). IOPS figures are the promised values backed by validated test results; measured performance typically exceeds them by 4-16%. See the performance graphs above for detailed curves.
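To sanity-check a tier against these figures, a single 4K random-read fio pass like the sketch below is a reasonable starting point; the device path, queue depth, and runtime are illustrative and much simpler than a full SNIA-style sweep.

```python
# Minimal sketch: one 4K random-read fio pass for sanity-checking a tier's
# IOPS/latency figures. Device path, queue depth, and runtime are illustrative.
import json
import subprocess

DEVICE = "/dev/nvme0n1"  # placeholder: the attached MayaScale volume

result = subprocess.run(
    ["fio",
     "--name=randread-4k",
     f"--filename={DEVICE}",
     "--rw=randread",
     "--bs=4k",
     "--ioengine=libaio",
     "--direct=1",
     "--iodepth=32",
     "--numjobs=8",
     "--group_reporting",
     "--time_based",
     "--runtime=60",
     "--output-format=json"],
    capture_output=True, text=True, check=True,
)

job = json.loads(result.stdout)["jobs"][0]
print(f"read IOPS: {job['read']['iops']:.0f}")
print(f"mean read latency (us): {job['read']['clat_ns']['mean'] / 1000:.0f}")
```

Comparing the printed IOPS and mean latency against the tier row above gives a quick pass/fail signal before deeper testing.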
Choose your performance tier and deploy with Terraform in minutes. Get validated sub-millisecond latency and up to 2.3M IOPS with Active-Active HA.
MayaScale delivers 5-10x lower latency than Persistent Disk (sub-1 ms vs. 5-10 ms) using local NVMe storage with NVMe-oF. Plus, you get Active-Active HA, which Persistent Disk doesn't provide.
MayaScale uses synchronous replication across nodes (typically in different zones). If one node is terminated, the other node continues serving I/O with sub-second failover. No data loss.
Yes. Many customers migrate from Cloud SQL to self-managed databases on GCE with MayaScale storage for 10x better performance and lower costs.
MayaScale can be deployed in any GCP region that supports instances with local SSD storage (N2, N2D, Z3 families). Contact sales for region-specific availability.
MayaScale starts at $10K vs. $200K+ for traditional SAN solutions, a cost reduction of more than 90%. You only pay for the GCE instances and local SSD storage you use.
Yes. Contact our sales team to arrange a proof-of-concept deployment on your GCP project. We offer technical support during evaluation.