Partitioning & Sharding Fundamentals (System Design Deep Dive)
Jun 08, 2025
Why Partition?
Partitioning (including sharding) is a key scalability technique. By splitting a large dataset into independent segments, you can scale writes and reads horizontally beyond the limits of a single machine. Each partition lives on its own server or node, so the system can handle more load by adding partitions instead of relying on one beefy server. Partitioning also improves performance by reducing the working set per node – queries and indexes operate on a smaller volume of data, which boosts efficiency and cache locality.
Another benefit is failure isolation and availability. If one partition/node goes down, only that subset of data is affected, not your entire application. The rest of the partitions continue operating, avoiding a single point of failure. Partitioning is also useful for multi-tenant data isolation. For example, a software-as-a-service platform might partition data by customer tenant to prevent a heavy “noisy neighbor” tenant from degrading others’ performance. In summary, we partition to achieve higher scale, keep hot data sets manageable, contain failures, and isolate workloads or tenants as needed.
Horizontal vs. Vertical Partitioning
Partitioning can be done in different dimensions. Horizontal partitioning (sharding) breaks data by rows: each partition (shard) has the same schema but holds a subset of the rows. For instance, users with IDs 1-1,000,000 on one shard and 1,000,001-2,000,000 on another. This is great for scaling volume – each shard handles fewer rows, reducing load and improving query throughput by distributing data across nodes. It’s ideal when you have large datasets and need to spread out read/write traffic (e.g. partition users by region or customer segment). It also naturally improves fault tolerance: if a shard is offline, only its portion of rows is unavailable.
Vertical partitioning, on the other hand, breaks a dataset by columns or features. You might split a wide table into two tables with fewer columns – e.g. core profile info vs. verbose metadata – possibly even into different storage systems. This shines when different fields have different usage patterns. Frequently-used columns can live in a partition optimized for fast access, while seldom-used or bulky fields (like a blob or JSON) live separately, so queries don’t always pull in that extra payload. Vertical partitioning (and the related concept of functional partitioning by service/domain) is common if you’re breaking a monolith: for example, user accounts data vs. logging data handled by separate services and databases. In practice, both strategies can be combined – e.g. you first shard by user ID, then vertically split each shard’s tables by hot vs. cold fields, to get the benefits of both.
When to use which? Horizontal sharding is best for scaling out huge row counts or transaction volumes – it adds capacity by distributing load. Vertical partitioning is best for wide schemas or distinct access patterns, ensuring each query touches only the data it really needs. If a table has dozens of columns but different queries only need certain subsets, column-wise splits improve I/O and caching. Many systems evolve to use a mix: horizontal shards for scale, and vertical splits or microservices for modularity.
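To make the two splits concrete, here is a minimal Python sketch (the user fields and the hot/cold grouping are illustrative assumptions, not a prescribed schema): vertical partitioning carves one wide record into hot and cold column groups, while horizontal partitioning assigns whole rows to shards by ID range.

```python
# Hypothetical wide "users" row: some fields are hot, some are rarely read.
user_row = {
    "user_id": 1_250_000,
    "email": "a@example.com",     # hot: read on every login
    "display_name": "Alice",      # hot
    "preferences_json": "{...}",  # cold: large, rarely read
    "signup_survey": "{...}",     # cold
}

HOT_FIELDS = {"user_id", "email", "display_name"}

def vertical_split(row):
    """Split one row into a hot-column record and a cold-column record."""
    hot = {k: v for k, v in row.items() if k in HOT_FIELDS}
    cold = {k: v for k, v in row.items() if k not in HOT_FIELDS}
    cold["user_id"] = row["user_id"]  # keep the key so the halves can be re-joined
    return hot, cold

def horizontal_shard(row, rows_per_shard=1_000_000):
    """Assign the whole row to a shard by contiguous user_id range."""
    return (row["user_id"] - 1) // rows_per_shard

hot, cold = vertical_split(user_row)
print(horizontal_shard(user_row))  # -> 1, i.e. the shard holding IDs 1,000,001-2,000,000
```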
Sharding Strategies (Hash vs. Range vs. Geo vs. Directory)
Once you choose to shard (horizontal partition), the next question is how to distribute data. Range-based sharding is a simple strategy: each shard handles a contiguous range of values (e.g. users A–G on shard 1, H–Z on shard 2). It’s intuitive and keeps sorted data together (great for range queries or locality). However, it’s prone to hotspots if the data or access pattern is skewed – e.g. with time-series data sharded by date range, the newest range receives nearly all the writes. A well-known anti-pattern is using an auto-incrementing ID or timestamp as the shard key for range sharding – the “last” shard becomes a write bottleneck. Mitigations include periodically splitting growing ranges or using staggered ranges, but range sharding works best when the value domain is naturally evenly spread.
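As a rough illustration of range routing, here is a small Python sketch (the boundary letters and shard names are made up): keep the shards' sorted upper bounds and binary-search them to find the owning shard.

```python
import bisect

# Illustrative boundaries: shard i owns keys <= UPPER_BOUNDS[i]; the last shard is open-ended.
UPPER_BOUNDS = ["G", "N", "T"]          # shard 0: A-G, shard 1: H-N, shard 2: O-T, shard 3: U-Z
SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]

def route_by_range(key: str) -> str:
    """Binary-search the sorted boundaries to find which shard owns this key."""
    idx = bisect.bisect_left(UPPER_BOUNDS, key[0].upper())
    return SHARDS[idx]

print(route_by_range("alice"))  # shard-0
print(route_by_range("zoe"))    # shard-3
```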
Hash-based sharding assigns data to shards by applying a hash function to some key (for example, shard = hash(user_id) mod N for N shards). This tends to evenly distribute records across shards, avoiding the hotspot problem by randomizing placement. Hash sharding is great for uniform load, but it sacrifices locality – related records might go to different shards. For instance, hashing user_id spreads users evenly, but if you frequently need all data for one user, you might still be hitting multiple shards unless the user ID itself is the shard key. Also, range queries on the shard key become hard because the key order is lost in hashing. Hash sharding is a solid default choice for write-heavy workloads that need even distribution and don’t often fetch large contiguous ranges.
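A minimal hash-routing sketch in Python, assuming string user IDs and an illustrative shard count; it uses MD5 rather than Python's built-in hash(), which is randomized per process for strings and therefore unsuitable for routing:

```python
import hashlib

N_SHARDS = 8  # illustrative shard count

def route_by_hash(user_id: str, n_shards: int = N_SHARDS) -> int:
    """Deterministic hash -> shard index (same answer on every machine)."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_shards

print(route_by_hash("user-42"))
print(route_by_hash("user-43"))  # usually lands on a different shard
```

Note that plain mod-N routing remaps most keys whenever N changes, which is one reason consistent hashing (discussed below) is often preferred once the shard count needs to grow.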
Geographic (zone) sharding partitions data based on location or region. For example, you might shard customers by country/region, and even host each shard’s node in that region (EU customers’ data on EU servers, US customers on US servers). The big advantage is lower latency and data locality – users interact with a database shard nearby, and you can comply with data residency laws by keeping data in-region. Large platforms often use geo-sharding to serve global users faster. The downside is uneven load or size: some regions may have far more users than others, causing shards of very different sizes. If one geography accounts for 70% of traffic, its shard will be “hotter” or larger than the rest. You also have to deal with users that span regions (e.g. what if a user travels or if data needs to be aggregated globally). Geo-sharding is excellent for use cases like content delivery, gaming, or multi-region apps where latency and data sovereignty are top priorities, but you must plan for balancing uneven partitions or splitting further by sub-region.
Directory-based sharding (also called lookup-based) uses an external index or service to map each data item to its shard. Instead of a simple rule, you maintain a table that says “Customer X is on shard 3” and so on. This extra indirection offers maximum flexibility: you can allocate shards arbitrarily (even move one customer to a different shard to balance load) and handle cases that hash or range can’t easily accommodate. It’s useful if your data has a few heavy hitters – e.g. 1% of users are 100× larger than others – because you could give each a dedicated shard via the directory, while hashing or ranging the rest. The trade-off is the complexity and overhead: the directory service becomes a critical component that must be fast and fault-tolerant. Clients have to query the directory to find where to route a request (introducing a lookup on each request unless cached). If the directory goes down or lags, the whole system suffers, so you’d replicate it and cache aggressively. In practice, some NoSQL systems (like older Dynamo-style systems or meta-databases) use this approach for its flexibility. But it’s usually used in combination with other strategies (e.g. directory maps a range of hash values to shards). Anti-patterns to avoid here include having a single static config file (which can become outdated or inconsistent) or not replicating the directory – those would undermine the benefits of sharding by adding a single point of failure.
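A sketch of what directory-based routing might look like, assuming an in-memory map for brevity (in practice the directory would live in a replicated, highly available store and be cached by clients): heavy tenants get explicit shard assignments, everyone else falls back to hash placement.

```python
import hashlib

class ShardDirectory:
    """Sketch of lookup-based routing: explicit overrides for heavy tenants,
    hash placement for everyone else."""

    def __init__(self, n_default_shards: int, overrides: dict[str, str]):
        self.n_default_shards = n_default_shards
        self.overrides = overrides          # tenant_id -> dedicated shard name
        self._cache: dict[str, str] = {}    # client-side cache to avoid repeated lookups

    def shard_for(self, tenant_id: str) -> str:
        if tenant_id in self._cache:
            return self._cache[tenant_id]
        shard = self.overrides.get(tenant_id)
        if shard is None:  # no explicit entry: fall back to hash placement
            h = int(hashlib.md5(tenant_id.encode()).hexdigest(), 16)
            shard = f"shard-{h % self.n_default_shards}"
        self._cache[tenant_id] = shard
        return shard

directory = ShardDirectory(8, overrides={"megacorp": "shard-dedicated-1"})
print(directory.shard_for("megacorp"))    # dedicated shard for the heavy tenant
print(directory.shard_for("small-shop"))  # hashed onto one of the 8 default shards
```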
In summary, there’s no one-size-fits-all sharding strategy. Range sharding preserves key order but must be designed to avoid skew. Hash sharding gives a uniform spread but loses locality. Geo sharding optimizes for latency by region, while directory-based sharding maximizes flexibility at the cost of an extra lookup service. Many systems combine methods; for example, you might hash shard most data but use a directory for a few special large customers or use range sharding within geographic regions. The key is to choose a strategy that aligns with your data access patterns and growth so you don’t paint yourself into a corner.
Shard Key Selection
Picking a good shard key is arguably the most critical decision in a sharded design. A shard key is the field (or fields) used to decide how data is partitioned. A well-chosen key will distribute data evenly and support your query patterns; a poor choice can create hotspots or make your application logic much harder.
High cardinality and even distribution are top priorities. The key should have a lot of possible values and spread records roughly uniformly. If too many records share the same key value (or fall into one range), that shard will overload. For example, sharding by “country” might be too low-cardinality for a global app (one shard holds all users from one huge country) – whereas sharding by user ID or some hash of it yields a huge number of distinct values, distributing load better. Aim to avoid any obvious skew: if one key value will correspond to 50% of data (like a default or null), it’s a bad shard key.
Avoid monotonically increasing keys for sharding, especially if using range partitioning. Auto-incrementing IDs or timestamps tend to funnel all new writes into the “last” partition, creating a hotspot. If your natural primary key is sequential, you might instead shard by a hashed version of it, or include a random component (we’ll discuss hot key mitigation later). Some databases hash such keys for you – Cassandra’s default partitioner hashes the partition key, and MongoDB offers hashed shard keys – precisely to prevent this kind of hotspotting.
Consider composite keys if a single field doesn’t meet all needs. Sometimes you combine fields into the shard key to achieve both even distribution and locality. For instance, you might use a composite of (UserRegion, UserID). This could ensure that users are grouped by region (data locality for regional queries) while still distributing within each region by user ID to avoid one region’s largest user dominating a shard. Composite shard keys can also help keep related data together. If two entities (say customers and their orders) are frequently joined, using the customer ID as part of the shard key for orders ensures all of a customer’s orders live on the same shard as the customer, enabling local joins.
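A hedged sketch of composite-key routing, assuming a per-region pool of shards and string user IDs (all names are illustrative): the region component provides locality, and hashing the user ID spreads load within the region; an orders table that reuses the customer's ID in its shard key lands on the same shard as the customer.

```python
import hashlib

# Illustrative layout: each region owns its own small pool of shards.
REGION_SHARDS = {"eu": ["eu-0", "eu-1"], "us": ["us-0", "us-1", "us-2"]}

def shard_for(region: str, user_id: str) -> str:
    """Composite key (region, user_id): region gives locality,
    the hashed user_id spreads load within that region."""
    pool = REGION_SHARDS[region]
    h = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    return pool[h % len(pool)]

# A user's orders reuse the user's ID in the shard key, so they co-locate
# with the user row and can be joined locally on that shard.
print(shard_for("eu", "user-42"))
print(shard_for("eu", "user-42"))  # same shard again for that user's orders
```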
Also, decide between natural vs. surrogate keys for sharding. A natural key is a domain field (like username, email, country) whereas a surrogate is an artificial identifier (like a UUID or auto-id). Natural keys can be convenient if they inherently partition the data (e.g. region or tenant id segments data by nature). But be cautious: if a natural key is too clustered (many records share it) or can change over time, it’s problematic. Surrogate keys (like UUIDs) often have nice distribution (UUIDs are basically random), but they might not align with any query usage and can be larger in size. A common approach is to use a surrogate that’s designed for distribution (like hashing a natural attribute or using GUIDs) when natural keys are unsuitable. For example, Twitter is said to hash tweet IDs to assign them to storage partitions. Immutability is important too: the shard key value ideally shouldn’t change for a given record, because changing it means moving the record to a different shard (an expensive operation). Fields like timestamps or statuses that update frequently are poor shard keys.
Finally, always evaluate your query patterns against the candidate shard key. The majority of queries should be able to use the shard key to target the correct shard; otherwise you end up needing to do scatter-gather queries across all shards. For example, if your application often looks up users by email, sharding by user ID won’t help those queries – you’d have to check every shard unless you have a mapping from email to user ID first. In such a case, either use email (if it’s high-cardinality) or ensure an auxiliary index exists. A rule of thumb: choose a shard key that you will always (or almost always) have in your query filters (for key-value lookups), and that splits load evenly (no “hot” values). It’s very hard to change a sharding scheme later, so spend time upfront to get the key right.
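To see why this matters, here is a small sketch contrasting a targeted lookup with a scatter-gather one; query_shard is a hypothetical stand-in for a per-shard query call.

```python
import hashlib

N_SHARDS = 8

def shard_for(user_id: str) -> int:
    return int(hashlib.md5(user_id.encode()).hexdigest(), 16) % N_SHARDS

def query_shard(shard: int, filters: dict) -> list[dict]:
    """Stand-in for a real per-shard query; returns matching rows."""
    return []  # hypothetical

def get_user_by_id(user_id: str) -> list[dict]:
    # Shard key present in the filter: exactly one shard is touched.
    return query_shard(shard_for(user_id), {"user_id": user_id})

def get_user_by_email(email: str) -> list[dict]:
    # No shard key in the filter: scatter-gather across every shard, then merge.
    results = []
    for shard in range(N_SHARDS):
        results.extend(query_shard(shard, {"email": email}))
    return results
```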
Shard Discovery & Routing
Once data is partitioned, how do we route client requests to the correct shard? There are a few common patterns for shard discovery and routing:
- Client-side routing via a shard map or library: In this approach, the application (client) is aware of the sharding scheme. It might use a library or lookup table to determine which shard a given key belongs to. For example, if using range shards, the client might have a map of key ranges to shard hostnames. If using hash, the client might do the hash and mod calculation itself. This can be very efficient (no extra network hop for routing), but it requires distributing and updating the shard map logic to all clients. It works well in environments where you control the app servers and can push config changes when shards are added or removed.
- Query routers or proxy layer: This is an intermediary between the application and the shards. The application just sends a normal request (maybe with the key included), and a router service figures out the right shard and forwards the request. Systems like Vitess for MySQL act as proxy routers. The benefit is that clients don’t need to know about the sharding at all; the routing logic is centralized. You can change the sharding scheme or move data without updating every application node – just update the router. The downside is an extra network hop and potential bottleneck at the router layer, so it must be scaled and made highly available. Many cloud database services use routers transparently (for example, MongoDB’s mongos router or Azure’s Cosmos DB gateway).
- Service registry and discovery: Rather than hard-coding shard addresses, systems often use a service registry (like ZooKeeper, etcd, or a cloud service registry). Each shard registers itself (with its shard ID or key range) in the registry, and clients or routers consult this registry to find the current location of a shard. This allows dynamic membership – if you add a new shard or change a shard’s location, you update the registry and clients discover it. It’s especially useful in containerized or cloud environments where IPs may change. The registry approach can be combined with client-side or router-based routing: e.g. a router looks up shards in the registry, or clients cache the registry info and connect directly.
- Consistent hashing rings: A specific technique often used in distributed caches and some databases is consistent hashing. Instead of a directory or fixed map, the system assigns each shard a position on a hash ring. Keys are hashed to that same space and mapped to the nearest shard on the ring. This provides automatic distribution and makes it easy to add/remove shards with minimal redistribution of data (only keys adjacent to the removed/added node move). Clients can perform the hash and find the shard, or a router can. This approach was popularized by Dynamo-style databases and is used in systems like Cassandra and Riak for routing. The big advantage is that when you add a new shard, you don’t have to re-map all keys – only a portion moves (known as “minimal rebalance”). One caution is that consistent hashing alone doesn’t guarantee even load if the data keys aren’t uniform, so virtual nodes or other techniques are often used to balance it.
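Below is a minimal consistent-hash ring sketch in Python with virtual nodes (shard names and the vnode count are illustrative); it is meant to show the mechanics, not to stand in for a production implementation.

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring with virtual nodes.
    Adding or removing a shard only remaps the keys adjacent to its ring positions."""

    def __init__(self, shards: list[str], vnodes: int = 100):
        self._ring: list[tuple[int, str]] = []
        for shard in shards:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{shard}#{i}"), shard))
        self._ring.sort()
        self._points = [point for point, _ in self._ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def shard_for(self, key: str) -> str:
        # Find the first ring position at or after the key's hash, wrapping around.
        idx = bisect.bisect(self._points, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["shard-a", "shard-b", "shard-c"])
print(ring.shard_for("user-42"))  # stable assignment for this key
```

With three shards and 100 virtual nodes each, adding a fourth shard moves only roughly a quarter of the keys, and the virtual nodes keep per-shard load reasonably even.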
In practice, many architectures use a mix of these strategies. For example, a router may use consistent hashing under the hood. Or a client library might periodically pull the shard map from a service registry so it can route requests itself without extra hops. A best practice is to avoid hardcoding shard topology in application code – use an abstraction layer or service discovery so that you can change the mapping transparently. Also, replicate and make the routing metadata highly available – if using a directory service or registry, it should be fault-tolerant (no single point of truth that can bring the whole cluster down). Proper routing design ensures your sharded system remains extensible and resilient as you add more shards.
Rebalancing and Resharding Mechanics
Over time, you may need to rebalance shards – whether due to growth (one shard getting too large), load skew, or adding/removing nodes. Rebalancing refers to splitting, merging, or migrating partitions to redistribute data more evenly.
- Splitting shards: If a single shard grows too large (in data size or traffic), you can split it into two or more shards. For example, a shard holding user IDs 1–1,000,000 might be split into 1–500,000 and 500,001–1,000,000. Some databases (like MongoDB, HBase) do this by identifying large ranges (chunks) and auto-splitting them. Splitting requires moving some data to a new shard. Ideally this is done online: the system begins copying part of the data to a new shard while still serving from the old one. Once copied, it starts directing new requests for that range to the new shard (this is the two-phase switch-over – first replicate, then cut over). During the cut-over, there may be a brief freeze or dual-writing to ensure no data is lost.
- Merging shards: The opposite can happen too – if some shards are mostly empty or underutilized (perhaps after some tenants left or data was archived), you might merge them to save resources. This is essentially migrating all data from one shard into another and then decommissioning the empty shard. Merging is less common in practice (many systems just leave shards lightly used rather than incur the complexity of merging), but it’s useful for tidy-ups. It similarly requires moving data and updating routing metadata.
- Chunk migration / rebalancing: In systems with lots of shards or finer partition units (like tablets or chunks), rebalancing might mean moving some subset of data from one node to another to even things out. For instance, Cassandra and Dynamo use consistent hashing but will move “token ranges” between nodes when you add a node, to balance load. MongoDB’s balancer process moves chunks of a sharded collection from overstuffed shards to emptier ones. The mechanics typically involve a background process copying the chunk (while tracking changes with a changelog or oplog), then flipping a metadata bit to say “this chunk is now on shard B instead of A”. During the move, the chunk might be temporarily unavailable or in a read-only state to ensure consistency. A well-designed system tries to do this without downtime; for example, by using distributed transactions or locks to switch ownership atomically, or by accepting some temporary dual-read scenarios.
- Dual-writing and ghost records: A common pattern for rebalancing with minimal downtime is dual-writes during transitions. That is, for a period of time, updates are written to both the old shard and the new shard (the “ghost” location) until the cut-over is complete. This ensures the new shard is fully caught up. Similarly, reads might be served from the old shard until a cut-over moment, or even served from both (whichever responds first or using a consistent checkpoint). “Ghost rows” or ghost tables refer to temporary duplicate data maintained during migration – once the new shard is authoritative, the ghosts on the old shard are cleaned up. This approach is used by tools like gh-ost for MySQL schema changes, and in sharding it’s a way to migrate data live. It does require careful orchestration to avoid confusion (for example, toggling a feature flag for reads from the new location once dual-writing is done).
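As a rough sketch of the dual-write phase described above (old_store and new_store are hypothetical per-shard clients with get/put methods; a real migration also needs an initial backfill and change tracking):

```python
class DualWriteRouter:
    """Sketch of the dual-write phase of a live shard migration."""

    def __init__(self, old_store, new_store):
        self.old = old_store
        self.new = new_store
        self.cutover_done = False  # flipped once the new shard is fully caught up

    def put(self, key, value):
        # During migration, every write lands on both the old and the new shard,
        # so the new copy never falls behind after the initial backfill.
        self.old.put(key, value)
        self.new.put(key, value)

    def get(self, key):
        # Reads stay on the old (authoritative) shard until cut-over, then flip
        # to the new shard; the "ghost" copies on the old shard are cleaned up later.
        return self.new.get(key) if self.cutover_done else self.old.get(key)
```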
Rebalancing is one of the hardest parts of sharding in production. It’s best if your shard key and initial design minimize how often you need to re-shard. But in long-lived systems, rebalancing is inevitable as data grows or usage patterns shift. Modern systems try to automate this – for example, auto-splitting hot shards and moving data in the background. However, automation doesn’t eliminate the need to understand it. Rebalancing can be resource-intensive (lots of data copying) and risky if not done carefully (you don’t want to lose or duplicate data). Strategies like toggling replicas can help – e.g. if shards are replicated, you can split a replica while the primary still serves traffic, then promote the new configuration. Some systems will put a shard in read-only mode during part of a split to avoid inconsistency, which might slow things temporarily. The key is to monitor the process and, if possible, perform rebalances during low-traffic periods or gradually.
In summary, rebalancing involves split, merge, migrate operations with careful orchestration. Often a resharding operation will be two-phase (copy then switch) and may involve dual-writes. Operationally, it’s complex – which is why turnkey solutions and cloud services put a lot of emphasis on simplifying resharding (as we’ll touch on with Aurora). A well-designed system and shard key can delay the need for rebalancing, but you should plan for it from the start (build tooling to move data, etc.).
Hot Keys & Skew Mitigation
No matter how well you choose a shard key, real-world data often has skews – cases where one key or one shard gets disproportionate traffic. These hot keys or hot partitions can negate the benefits of sharding (you end up with a mini bottleneck inside your distributed system). Fortunately, there are strategies to mitigate hotspots:
- Salting / Key Suffixes: One simple but effective trick is to add a random (or calculated) suffix to the shard key for entities that could be hot. This is sometimes called write sharding in DynamoDB’s context. For example, say all users post to a popular forum thread ID 123. Instead of storing all posts under key thread_123 on one shard, you could use keys like thread_123_1, thread_123_2, ..., thread_123_10 with a random suffix 1–10. These would map to different shards (since the key is slightly different), spreading the write load. To read the whole thread, you then query all 10 keys and merge results – a bit less efficient for reading, but it prevents any single shard from overloading. The number of suffixes (sometimes called “buckets”) can be tuned – e.g. choose 10 or 100 buckets based on expected concurrency. This technique trades off some read complexity to achieve much higher write throughput in hot-key scenarios. It’s commonly used in time-series databases and DynamoDB designs (where you might suffix a timestamp or ID with a random number). If using calculated suffixes (like a hash of some secondary attribute), you can sometimes make reads easier by deriving which suffix to read for a particular item. A minimal sketch of this pattern follows this list.
- Adaptive Capacity / Auto-Rebalancing: Some database services have built-in magic to handle hotspots. For example, Amazon DynamoDB’s adaptive capacity can detect a hot partition and transparently split it into two behind the scenes (“split for heat”). Essentially, if one shard’s throughput is maxed out by a single hot key, the system can isolate that key in its own smaller partition so that the rest of the keys are not affected. Adaptive capacity also includes borrowing unused throughput from other partitions to serve the hot one, smoothing out spikes. The takeaway is that if you’re on a managed service that offers this, it can save you from some manual re-sharding in an emergency. However, as a designer you shouldn’t rely solely on it – you still want a well-distributed key design.
- Caching and Replication: Many hot key issues can be relieved by putting a cache in front of the database (like Redis or a CDN for blobs). If one item is extremely popular (read-heavy), a cache can serve most requests so the shard isn’t bombarded. For write-heavy hot keys, having read replicas can at least offload reads from the primary shard. For example, if one shard owns a celebrity user’s timeline that everyone is reading, adding replicas of that shard or caching their data can handle the read volume. This doesn’t solve write contention on the single shard, but it helps read-heavy skews.
- Rate limiting / Queueing: In some scenarios, you might control the firehose by rate-limiting requests or queueing them. This is more of an application-level mitigation: e.g. if a single user or API client is hot, you throttle their requests to protect the backend. Or if a spike is overwhelming a shard, you buffer writes in a queue to smooth it out. This doesn’t “fix” the hot shard, but it prevents total meltdown by shedding load.
- Request Hedging: Hot partitions can also manifest as unpredictable latency (if a shard is overloaded, requests to it get slow). Request hedging is a technique to reduce tail latency by sending duplicate requests: if one attempt is slow, you send another to a different replica (or after a short delay) and take whichever returns first. In the context of sharding, if you have replicas of a shard, the client could hedge a read request by querying two replicas in parallel, so even if one is slow due to load, the faster one responds. Hedging is generally more applicable to read scenarios (where it’s safe to do redundant reads). For writes, hedging would need careful idempotency or sequencing (the DynamoDB example suggests using timestamps to avoid out-of-order updates when hedging writes). Hedging essentially trades extra work for lower latency, and it’s used at companies like Google to cut long-tail latencies. It can be useful if your shards sometimes get uneven performance – but be mindful of the extra load it generates.
- Dynamic Scaling / Overprovisioning: A pragmatic angle is simply to recognize a hot key and allocate more resources to it. For example, if one tenant is extremely hot, you might isolate them to their own shard (we discussed that in multi-tenant partitioning) or move them to a bigger machine. Cloud environments with auto-scaling groups might detect high CPU on one shard node and spin up a larger instance or additional helper instances. Similarly, some distributed databases allow temporary rebalancing – e.g. spinning up additional shard replicas to handle a flash crowd (a bit like adaptive capacity). Overprovisioning means you have spare capacity so that if one shard gets hot, it can burst without immediate throttling.
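Here is the minimal salting sketch referenced in the first bullet, assuming ten buckets and a thread_123-style base key as in the example above; the write path picks one suffixed key at random, and the read path fans out over all suffixes and merges.

```python
import random

N_BUCKETS = 10  # tune to the expected write concurrency on the hottest key

def salted_key(base_key: str) -> str:
    """Pick one of N suffixed keys for each write so a hot key's writes
    spread across multiple partitions instead of one."""
    return f"{base_key}_{random.randint(1, N_BUCKETS)}"

def all_salted_keys(base_key: str) -> list[str]:
    """Reads fan out to every suffix and merge the results."""
    return [f"{base_key}_{i}" for i in range(1, N_BUCKETS + 1)]

# Writes: each post lands under one of thread_123_1 ... thread_123_10.
print(salted_key("thread_123"))
# Reads: query all ten keys and merge, e.g. sort the combined posts by timestamp.
print(all_salted_keys("thread_123"))
```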
Mitigating hot keys often involves a combination of better key design (to avoid putting too much on one shard) and runtime strategies (caching, splitting, hedging) for the cases you didn’t foresee. It’s wise to build monitoring (next section) to detect skew early – e.g. if one shard is doing 10× the traffic of others, that’s a red flag to act on (maybe implement a salting for that key or add capacity). With good design and adaptive techniques, you can keep performance consistent even under uneven loads.
Observability in a Sharded System
Operating a sharded database requires careful observability. Because data is distributed, you need to monitor each shard’s health and usage to catch imbalances or issues. Some key signals and metrics include:
- Per-shard throughput (QPS) – Track the queries per second or operations per second handled by each shard. In a perfectly balanced world, these would be roughly equal. In reality, you want to watch for one shard suddenly getting a much higher QPS than others, which could indicate a hot partition or a bad distribution of load. Sudden drops on a shard could indicate a failure or that it’s stuck on something. By monitoring per-shard QPS, you can proactively re-shard or optimize before a hotspot becomes an outage (a simple skew-check sketch follows this list).
- Latency percentiles per shard – It’s important to measure not just average latency, but tail latencies (p95, p99) for each shard. If one shard has p99 latencies an order of magnitude worse than others, it might be overloaded or have more data to scan. For example, you might find shard 7 has a p99 of 200ms while others are 50ms, pointing to an imbalance. Per-shard latency tracking helps pinpoint issues that only affect part of the cluster. It also helps in capacity planning – if all shards’ latencies are rising, maybe the whole cluster needs scaling, whereas if one is spiking, you focus your attention there.
- Resource utilization and skew – Monitor disk usage per shard, CPU, memory, and IOPS. Disk or storage skew is a silent killer: if one shard’s dataset is much larger, that shard might eventually hit storage limits or have more pages to sift through for queries. IOPS (reads/writes per second to disk) is related to throughput, but if one shard’s disk is thrashing (perhaps due to less memory or a bigger dataset), it can impact performance. Comparing resource metrics across shards lets you see if data is evenly distributed or if one shard needs attention (maybe it’s time to split it).
- Rebalance progress and lag – If you are performing a rebalancing operation (chunk migration, adding a new shard), track the progress. Many systems provide metrics or logs for how much data has been moved, or how far a new shard is in catching up. For instance, when splitting a shard in Aurora Limitless, you can list the shard split jobs and see their status. In MongoDB, you can check the balancer status and whether any chunk migrations are queued or stuck. It’s crucial to monitor that rebalancing is making progress and not impacting live traffic too much. Rebalance lag could refer to how far behind a secondary copy is during migration – e.g. if you’re dual-writing, measure the delay between the old and new copies. If that lag grows, you might be falling behind on replication during the re-shard. Essentially, treat a re-shard like a long-running batch job: instrument it so you know when it will finish or if it’s hung. This helps in deciding whether to fail over or roll back if needed.
- Error rates and inconsistency indicators – Keep an eye on any errors that could indicate cross-shard issues, like “key not found” errors that might actually be due to looking in the wrong shard (could hint at a bad router map). If you have multi-shard transactions, monitor abort/commit rates. Also, if using a directory service for shard lookup, monitor its latency and failures – a slowdown there can affect every request. Observability isn’t just metrics; logs and traces are valuable too. Distributed tracing can show you if requests to a certain shard are slower. Some teams even tag metrics with shard identifiers so they can easily filter dashboards per shard.
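The skew check mentioned in the first bullet could be as simple as the following sketch (the 3x-of-median threshold is an arbitrary illustration to tune for your workload):

```python
def find_hot_shards(qps_by_shard: dict[str, float], threshold: float = 3.0) -> list[str]:
    """Flag shards whose QPS exceeds `threshold` times the median QPS,
    a crude but useful skew alarm for dashboards and alerts."""
    values = sorted(qps_by_shard.values())
    median = values[len(values) // 2]
    return [s for s, qps in qps_by_shard.items() if median > 0 and qps > threshold * median]

samples = {"shard-0": 950, "shard-1": 1010, "shard-2": 980, "shard-3": 9800}
print(find_hot_shards(samples))  # ['shard-3'] -> candidate for salting or a split
```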
In summary, treat each shard like a micro-service to be monitored: you want per-shard metrics for throughput, latency, and resource use. Compare them to catch any one shard deviating from the pack. Also monitor the control plane – the routing layer or metadata – since if that misroutes or slows down, it affects everything. Good observability will inform your capacity decisions (when to add shards), alert you to hotspots (so you can salt or re-shard), and ensure that when you do maintenance like rebalancing, you know how it’s progressing. Without these insights, a sharded system can be a black box of surprises; with them, you can achieve smooth operations at scale.
AWS Case Studies and Reference Points
To ground these concepts, let’s look at a few examples from AWS’s database offerings (not as endorsements, but to illustrate real implementations):
- DynamoDB’s partition keys and limits: Amazon DynamoDB is a fully managed NoSQL database that shards your data under the hood. When you create a table, you specify a partition key (and optional sort key). DynamoDB hashes the partition key to determine which physical partition your item lives on. It’s effectively doing hash-based sharding automatically. Each partition is limited to ~10 GB of data, and if that is exceeded, DynamoDB will split the partition transparently. This is why a good partition key in DynamoDB needs high cardinality – to spread items across many partitions. If you choose something low-cardinality (say “status” with only a few values), you’ll get a hot partition. DynamoDB enforces a max of 3,000 read capacity units or 1,000 write capacity units per partition, which prevents any single partition from taking on unbounded load. The combination of partition key + sort key serves as a composite primary key; all items with the same partition key are stored together (sorted by the sort key) on the same partition. This is great for locality of related items, but it also means you must be careful — e.g., if one partition key (one group of items) gets too large (over 10 GB or too hot), DynamoDB will split it into two physical partitions behind the scenes. Dynamo’s design highlights the importance of a well-chosen shard key: you want to maximize distribution while still enabling your access patterns. AWS provides guidance like using “write sharding” (adding random suffixes) if you anticipate hot keys (a small sketch of this appears after this list), and indeed DynamoDB’s adaptive capacity will kick in to help with uneven workloads by moving data around as needed.
- Adaptive capacity in DynamoDB: As mentioned, DynamoDB has an Adaptive Capacity feature. In practice, if one partition (identified by a set of partition key values) becomes very hot (e.g., a single key getting a ton of traffic), Dynamo will transparently split that partition (this is sometimes called “split for heat”). The hot key might end up isolated on its own partition, allowing other keys to use the original partition’s capacity. Furthermore, Dynamo can borrow capacity from less-used partitions to serve the hot one. This means short spikes can be absorbed without manual intervention. It’s a form of auto-rebalancing that operates under the hood. The takeaway for a system designer is that DynamoDB’s internal sharding alleviates some pain (you won’t immediately fall over due to one hot key), but it doesn’t mean you can ignore good partition key design. You still design for uniform workload, but the service adds resilience. This feature is a nice real-world example of the hot-spot mitigation strategies discussed earlier (automated splitting and load redistribution).
- Aurora’s approach to sharding (Aurora Limitless): Amazon Aurora (a managed relational database) has traditionally been a single-writer system (with read replicas), not horizontally sharded. However, AWS recently introduced Aurora Limitless for PostgreSQL, which brings sharding to Aurora. In Aurora’s architecture, you have a concept of a shard group and a pool of “routers” that route queries to the correct shard. Aurora automates a lot of the pain points of sharding. For example, re-sharding (splitting a shard) is handled by the system with minimal fuss: if you add a new shard node to the cluster, Aurora can rebalance the table slices automatically across shards. As one AWS Hero described it, “Re-sharding to scale out can be complex, but Aurora Limitless simplifies this. Since tables are already divided into slices, re-sharding re-balances these slices when adding a new shard node.” In practice, you issue a SQL command to split a shard, and Aurora takes care of redistributing the data in the background, while your application continues to use the cluster endpoint. This is a powerful illustration of the two-phase resharding we discussed: the database does the data copy and then switches over, all under the hood. Aurora’s solution also includes a transaction router tier to handle distributed SQL queries and ensure consistency across shards (so it’s not a simple key-value shard; it keeps ACID properties). Another touchpoint is global distribution: Aurora Global Database isn’t sharding but replicates a database across regions for reads. Combined with sharding, you could imagine a globally sharded Aurora where each region has shards for local data plus read access to others – though that gets complex; Aurora Limitless itself focuses on keeping transactions consistent across the shards within a single cluster.
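For the DynamoDB write-sharding guidance mentioned above, a hedged boto3-flavored sketch might look like the following; the posts table, its pk/created_at key schema, and the bucket count are assumptions for illustration, and result pagination is omitted for brevity.

```python
import random

import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table: partition key "pk", sort key "created_at".
table = boto3.resource("dynamodb").Table("posts")
N_BUCKETS = 10

def write_post(thread_id: str, created_at: str, body: str) -> None:
    # Write sharding: append a random suffix so one hot thread's writes
    # spread across several DynamoDB partitions instead of one.
    pk = f"{thread_id}_{random.randint(1, N_BUCKETS)}"
    table.put_item(Item={"pk": pk, "created_at": created_at, "body": body})

def read_thread(thread_id: str) -> list[dict]:
    # Reads fan out over every suffix and merge by the sort key.
    items = []
    for i in range(1, N_BUCKETS + 1):
        resp = table.query(KeyConditionExpression=Key("pk").eq(f"{thread_id}_{i}"))
        items.extend(resp["Items"])
    return sorted(items, key=lambda it: it["created_at"])
```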
In summary, AWS’s offerings show how partitioning concepts manifest in real services: DynamoDB hashes your keys and auto-splits hot partitions (but you still need to design keys carefully), and Aurora’s new sharded mode provides a near turn-key sharding with robust rebalancing and routing built-in. The principles we discussed – like choosing the right shard key, routing, and resharding – are exactly what these services implement under the hood, albeit with a lot of automation. When preparing for system design interviews or architecting your own system, it’s useful to remember these real examples. They demonstrate the trade-offs (e.g., Dynamo chooses hash sharding and needs workarounds for hot keys; Aurora chooses to automate re-sharding at the cost of a more complex system with routers). Partitioning and sharding are fundamental to scaling, and mastering these fundamentals will let you design systems that can grow while remaining performant and reliable.