A Guide to EC2 Instance Types

Helping you through the minefield of the many different types to choose from.

Published 2025-02-23 • Updated 2025-02-24

Introduction

Amazon EC2 (Elastic Compute Cloud) instance types are virtual server configurations optimized for specific workloads, offering varying combinations of compute, memory, storage, and networking resources. These instances are not only the fully independent VMs you control yourself; they also underpin managed services like RDS, ElastiCache, Lambda, and SageMaker, where understanding the instance types still matters for picking the right one for your workload and avoiding unnecessary costs.

EC2 instance types are grouped into families (e.g., t, c, r), with generations (e.g., the 7 in c7g) and sizes (e.g., micro, xlarge) refining their capabilities. This guide covers the instance families and explains when and how you'll need to choose them within AWS services, both when you manage instances directly and when they're abstracted behind managed services. There are far more instance types and variations than are listed here; starting from the full list would be overwhelming, and the more specialized your workload, the more research you will need to do. This guide will, however, help you understand the families and generations, cover the most common use cases, and give you enough background for the Associate-level certifications.

Understanding Instance Naming

EC2 instance names follow this structure: <family>.<size>

  • Family, consisting of:
    • Series type: (e.g., t for burstable general-purpose, c for compute-optimized)
    • Generation: (e.g., 7 for the 7th generation of the series)
    • Options: Suffixes like g (Graviton processor), n (enhanced networking), or d (NVMe storage) add specificity.
  • Size: From nano (smallest) to metal (bare-metal, largest).

Examples:

  • t3.micro - Burstable general-purpose, 3rd generation, Intel, micro size
  • c7g.2xlarge - Compute-optimized, 7th generation, Graviton, 2xlarge size
  • r6i.xlarge - Memory-optimized, 6th generation, Intel, xlarge size
  • m5.metal - General-purpose, 5th generation, Intel, metal size
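
The naming scheme above is regular enough to parse mechanically. The sketch below is a minimal Python example, not an official parser; it handles the common shapes shown in the examples but rejects a few exotic names (e.g., u-6tb1.metal):

```python
import re

def parse_instance_type(name):
    """Split an EC2 instance type like 'c7g.2xlarge' into its parts.

    Simplified model: series letters, a generation digit, optional
    option suffixes, a dot, then the size. Some exotic names
    (e.g. u-6tb1.metal) do not fit this shape and are rejected.
    """
    m = re.fullmatch(r"([a-z]+?)(\d+)([a-z-]*)\.([a-z0-9]+)", name)
    if not m:
        raise ValueError(f"unrecognised instance type: {name}")
    series, generation, options, size = m.groups()
    return {"series": series, "generation": int(generation),
            "options": options, "size": size}

print(parse_instance_type("c7g.2xlarge"))
# {'series': 'c', 'generation': 7, 'options': 'g', 'size': '2xlarge'}
```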

When You Need to Choose EC2 Instance Types in AWS

As mentioned above, choosing an EC2 instance type isn't limited to launching standalone VMs. AWS integrates instance type selection into various services, either directly (where you manage the instances) or indirectly (where AWS manages them but prompts you to pick a type). Below are key scenarios where this decision arises.

1. Launching Virtual Machines (VMs) Directly

  • When: You create an EC2 instance via the AWS Management Console, CLI, or SDK to host custom applications.
  • Context: Full control over the instance, including OS, software, and networking.
  • Examples:
    • Web Server: Launch a t3.medium (2 vCPUs, 4 GiB RAM) to host an Apache or Nginx site with moderate traffic.
    • Batch Processing: Use a c6i.4xlarge (16 vCPUs, 32 GiB RAM) for CPU-intensive data crunching.
    • In-Memory Cache: Deploy an r6i.xlarge (4 vCPUs, 32 GiB RAM) for a large Redis cluster.
  • Process: Select instance type during the "Choose an Instance Type" step in the EC2 launch wizard.
  • Considerations: Match the type to your workload (e.g., CPU, memory, I/O needs) and scale via Auto Scaling if demand fluctuates.
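
As a rough sketch of this step done programmatically, the snippet below assembles the arguments you would pass to the EC2 RunInstances API (for example via boto3's run_instances). The AMI ID is a placeholder, not a real image:

```python
def build_launch_params(instance_type="t3.medium",
                        ami_id="ami-0123456789abcdef0"):
    """Assemble keyword arguments for EC2 RunInstances (e.g. boto3's
    ec2.run_instances(**params)). The AMI ID is a placeholder --
    look up a current Amazon Linux AMI for your region first.
    """
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,  # the "Choose an Instance Type" step
        "MinCount": 1,
        "MaxCount": 1,
    }

params = build_launch_params()
print(params["InstanceType"])  # t3.medium
```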

2. Load Balancing with Elastic Load Balancer (ELB)

  • When: You configure an Auto Scaling Group (ASG) behind an Application Load Balancer (ALB) or Network Load Balancer (NLB).
  • Context: Instances are managed by the ASG, and the load balancer distributes traffic. You don't pick instances for the balancer itself, but for the target VMs.
  • Examples:
    • Web App Cluster: Use t4g.small (2 vCPUs, 2 GiB RAM) instances in an ASG for a cost-effective, burstable web tier.
    • High-Traffic API: Deploy c5n.2xlarge (8 vCPUs, 21 GiB RAM) instances for low-latency, compute-heavy endpoints.
  • Process: Define the instance type in the ASG launch template or configuration.
  • Considerations: Optimize for cost (Spot Instances) or performance (e.g., n-suffix for enhanced networking).
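
To weigh the Spot option, a quick back-of-the-envelope comparison helps. The hourly rates below are hypothetical, not current AWS prices:

```python
def monthly_cost(hourly_rate, count, hours=730):
    """Approximate monthly cost (USD) for a fleet of identical
    instances, assuming ~730 billable hours per month."""
    return hourly_rate * count * hours

# Hypothetical hourly rates -- check the AWS pricing pages for
# real, region-specific figures before deciding.
on_demand = monthly_cost(0.0832, count=4)
spot = monthly_cost(0.0250, count=4)   # Spot prices fluctuate
print(f"on-demand ${on_demand:.2f}/mo vs spot ${spot:.2f}/mo")
```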

3. Running Containers with ECS or EKS

  • When: You use Amazon Elastic Container Service (ECS) with EC2 launch type or Amazon Elastic Kubernetes Service (EKS) with self-managed nodes.
  • Context: Containers run on EC2 instances you provision, not fully managed by AWS (unlike Fargate).
  • Examples:
    • ECS Microservices: Launch m5.large (2 vCPUs, 8 GiB RAM) instances to host multiple lightweight containers.
    • EKS ML Workloads: Use g5.4xlarge (16 vCPUs, 64 GiB RAM, 1 NVIDIA GPU) for GPU-accelerated Kubernetes pods.
  • Process: Specify the instance type in the ECS cluster capacity provider or EKS node group configuration.
  • Considerations: Ensure enough vCPUs and memory for container density; Graviton (g-suffix) instances can reduce costs.
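
A quick way to sanity-check container density is to divide instance capacity by per-task requirements. This sketch ignores the overhead the ECS agent or kubelet reserves, so treat the result as an upper bound:

```python
def max_containers(instance_vcpus, instance_mem_gib,
                   task_vcpus, task_mem_gib):
    """How many identical tasks fit on one instance, ignoring the
    small share the ECS agent or kubelet reserves for itself."""
    return min(int(instance_vcpus // task_vcpus),
               int(instance_mem_gib // task_mem_gib))

# m5.large (2 vCPUs, 8 GiB) hosting 0.25 vCPU / 0.5 GiB tasks:
print(max_containers(2, 8, 0.25, 0.5))  # 8
```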

4. Managed Services with Abstracted Instances

In these services, you don't SSH into or manage the instances directly—AWS handles that—but you still choose an instance type or class that impacts performance and cost.

Amazon RDS (Relational Database Service)

  • When: You create or modify an RDS database instance (e.g., MySQL, PostgreSQL).
  • Context: RDS runs on EC2 under the hood, and instance type affects DB performance (e.g., queries per second, I/O).
  • Examples:
    • Small App DB: Pick db.t4g.medium (2 vCPUs, 4 GiB RAM) for a low-traffic PostgreSQL instance.
    • Analytics DB: Use db.r6i.4xlarge (16 vCPUs, 128 GiB RAM) for memory-intensive MySQL workloads.
  • Process: Select during RDS instance creation under "DB instance class" (e.g., db.m5, db.r6g).
  • Considerations: Burstable t-series for variable loads; r-series for memory-heavy queries.

Amazon ElastiCache

  • When: You deploy an in-memory cache (e.g., Redis, Memcached).
  • Context: Instance type determines cache capacity and throughput, abstracted from direct management.
  • Examples:
    • Session Store: Choose cache.t3.micro (2 vCPUs, 0.5 GiB RAM) for a small Redis cache.
    • Large Cache: Use cache.r7g.2xlarge (8 vCPUs, 64 GiB RAM) for high-throughput Memcached.
  • Process: Set during node type selection in the ElastiCache console.
  • Considerations: Prioritize memory-optimized (r-series) types for larger datasets.

AWS Lambda with Provisioned Concurrency

  • When: You enable Provisioned Concurrency for Lambda to reduce cold starts.
  • Context: Underneath, AWS uses EC2-like resources, and you indirectly influence this via memory allocation (not direct instance type selection).
  • Examples:
    • API Backend: Allocate 1024 MB (roughly 0.6 vCPU equivalent) for a simple Node.js function.
    • Compute-Heavy: Set 3072 MB (roughly 1.7 vCPUs) for image processing, aligning with compute-optimized resources.
  • Process: Adjust memory in the Lambda configuration; AWS maps it to abstracted instance types.
  • Considerations: Higher memory increases vCPU share, mimicking c-series behavior.
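
The memory-to-CPU mapping can be estimated from the documented ratio that roughly 1,769 MB of Lambda memory corresponds to one vCPU:

```python
def approx_vcpus(memory_mb):
    """Rough vCPU share for a Lambda function. AWS documents that
    1,769 MB of memory corresponds to one full vCPU, scaling
    linearly, so this is a proportional estimate."""
    return memory_mb / 1769

print(round(approx_vcpus(1024), 2))  # 0.58
print(round(approx_vcpus(3072), 2))  # 1.74
```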

Amazon SageMaker

  • When: You train or host ML models with SageMaker instances.
  • Context: Instances power training jobs or inference endpoints, chosen based on workload (CPU, GPU, etc.).
  • Examples:
    • Training: Use ml.p4d.24xlarge (8 NVIDIA A100 GPUs) for large-scale deep learning.
    • Inference: Select ml.g5.xlarge (1 NVIDIA A10G GPU) for real-time predictions.
  • Process: Pick instance type in the SageMaker training job or endpoint configuration.
  • Considerations: GPU instances (p, g) for ML; CPU (c, m) for lighter tasks.

Main Instance Families

1. General-Purpose Instances

  • Purpose: Balanced resources for diverse workloads.
  • Technical Details:
    • T-Series: Burstable with CPU credits (e.g., Intel Xeon Platinum, Graviton2/3).
    • M-Series: Sustained performance (e.g., Intel Ice Lake, Graviton3).
  • Use Cases: Web servers, dev environments, small DBs.
  • Examples:
    • t4g.nano (2 vCPUs, 0.5 GiB, Graviton2)
    • t3.large (2 vCPUs, 8 GiB, Intel)
    • m7g.4xlarge (16 vCPUs, 64 GiB, Graviton3)
    • m5.metal (96 vCPUs, 384 GiB, Intel)
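
To get a feel for how T-series CPU credits behave, the sketch below estimates how long a credit balance lasts under load. The t3.micro figures (12 credits earned per hour, 288 maximum balance) are AWS's published values, and credits earned during the burst are ignored for simplicity:

```python
def burst_minutes(balance_credits, vcpus, utilization=1.0):
    """Minutes a T-series instance can sustain the given utilization
    on its current credit balance. One CPU credit = one vCPU at 100%
    for one minute; credits earned while bursting are ignored here."""
    return balance_credits / (vcpus * utilization)

# t3.micro: 2 vCPUs, earns 12 credits/hour, maximum balance 288.
print(burst_minutes(288, vcpus=2))  # 144.0
```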

2. Compute-Optimized Instances

  • Purpose: High-performance CPUs for compute-heavy tasks.
  • Technical Details: Intel Ice Lake, Graviton3, up to 100 Gbps networking (c5n).
  • Use Cases: Batch processing, gaming servers, simulations.
  • Examples:
    • c7g.2xlarge (8 vCPUs, 16 GiB, Graviton3)
    • c6i.12xlarge (48 vCPUs, 96 GiB, Intel)
    • c5n.18xlarge (72 vCPUs, 192 GiB, Intel)

3. Memory-Optimized Instances

  • Purpose: Large memory for data-intensive tasks.
  • Technical Details: High RAM:vCPU ratios, NVMe SSDs, Graviton3 or Intel Sapphire Rapids.
  • Use Cases: In-memory DBs, big data analytics.
  • Examples:
    • r7g.8xlarge (32 vCPUs, 256 GiB, Graviton3)
    • r6i.24xlarge (96 vCPUs, 768 GiB, Intel)
    • x2gd.16xlarge (64 vCPUs, 1024 GiB, Graviton2)

4. Storage-Optimized Instances

  • Purpose: High I/O for storage-heavy workloads.
  • Technical Details: NVMe SSDs (up to 3.5M IOPS) or HDDs, Intel Xeon or Graviton.
  • Use Cases: NoSQL DBs, data warehousing.
  • Examples:
    • i4i.16xlarge (64 vCPUs, 512 GiB, 7.5 TB NVMe)
    • i3en.3xlarge (12 vCPUs, 96 GiB, 7.5 TB NVMe)
    • d3.8xlarge (32 vCPUs, 256 GiB, 24 TB HDD)

5. Accelerated Computing Instances

  • Purpose: GPUs/FPGAs for specialized tasks.
  • Technical Details: NVIDIA GPUs (A100, V100), Xilinx FPGAs, up to 400 Gbps networking.
  • Use Cases: ML training, video rendering, custom hardware acceleration (FPGAs).
  • Examples:
    • g5.12xlarge (48 vCPUs, 192 GiB, 4 A10G GPUs)
    • p4d.24xlarge (96 vCPUs, 1152 GiB, 8 A100 GPUs)
    • f1.16xlarge (64 vCPUs, 976 GiB, 4 FPGAs)

Choosing the Right Instance

  1. Workload Analysis: Match CPU, memory, I/O, or GPU needs to the family.
  2. Start Small: Test with t3.micro or t4g.nano, then scale.
  3. Cost Optimization: Use Spot Instances or the AWS Pricing Calculator.
  4. Region Check: Verify availability (e.g., c7g not in all regions).
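
The checklist above can be sketched as a toy decision function; it is a starting heuristic, not a substitute for benchmarking:

```python
def suggest_family(vcpu_heavy=False, memory_heavy=False,
                   io_heavy=False, gpu=False):
    """Toy heuristic mirroring the checklist above: map dominant
    workload traits to a family. Real sizing needs benchmarking."""
    if gpu:
        return "g/p (accelerated)"
    if io_heavy:
        return "i/d (storage-optimized)"
    if memory_heavy:
        return "r/x (memory-optimized)"
    if vcpu_heavy:
        return "c (compute-optimized)"
    return "t/m (general-purpose)"

print(suggest_family(memory_heavy=True))  # r/x (memory-optimized)
```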

Eager to get Started?

To experiment with EC2 instances with minimal cost:

  1. Use the Free Tier: AWS offers a free tier that includes 750 hours per month of Linux and Windows t2.micro or t3.micro instances for your first 12 months. Make sure you pick the specific instance type covered by the free tier, which can vary by region.

  2. Quick Test Setup:

    • Launch a t3.micro instance (or t4g.micro for ARM)
    • Choose Amazon Linux 2023 (free-tier eligible)
    • Use default VPC and security group
    • Create or use an existing key pair
    • Start with 8 GB gp3 storage (the free tier includes 30 GB)

Remember to terminate your instance when done to avoid unexpected charges!

Latest Trends (February 2025)

  • Graviton: ARM-based (t4g, c7g) for cost/performance gains.
  • Sustainability: Graviton aligns with AWS's carbon-neutral push.
  • New Generations: 7-series (Graviton3) and Intel Sapphire Rapids in 6i.

Detailed Quick Reference Table

Family | Focus | Use Case | Sample Instances | Key Specs (vCPUs, RAM, Extra)
T, M | General | Web apps, small DBs | t3.micro, t4g.nano, m5.large, m7g.4xlarge | t3.micro: 2 vCPUs, 1 GiB; m5.large: 2 vCPUs, 8 GiB
C | Compute | Batch jobs, simulations | c6i.large, c7g.2xlarge, c5n.18xlarge | c7g.2xlarge: 8 vCPUs, 16 GiB
R, X | Memory | In-memory DBs, analytics | r6i.xlarge, r7g.8xlarge, x2gd.4xlarge | r6i.xlarge: 4 vCPUs, 32 GiB
I, D | Storage | NoSQL, data warehouses | i4i.large, i3.4xlarge, d3.8xlarge | i4i.large: 2 vCPUs, 16 GiB, 937 GB NVMe
G, P, F | Accelerated | ML, rendering, FPGAs | g5.4xlarge, p4d.24xlarge, f1.16xlarge | p4d.24xlarge: 96 vCPUs, 1152 GiB, 8 A100 GPUs

© 2025 Goldnode. All rights reserved.