MongoDB Limits and Thresholds
This document provides a collection of hard and soft limitations of the MongoDB system. The limitations on this page apply to deployments hosted in all of the following environments unless specified otherwise:
MongoDB Atlas: The fully managed service for MongoDB deployments in the cloud
MongoDB Enterprise: The subscription-based, self-managed version of MongoDB
MongoDB Community: The source-available, free-to-use, and self-managed version of MongoDB
MongoDB Atlas Limitations
The following limitations apply only to deployments hosted in MongoDB Atlas. If any of these limits present a problem for your organization, contact Atlas support.
MongoDB Atlas Cluster Limits
Component | Limit |
---|---|
Shards in Cross-region network permissions for a multi-region cluster | 40. Additionally, if the clusters in a project span more than 40 regions, you can't create a multi-region cluster in that project. |
Electable nodes per replica set or shard | 7 |
Cluster tier for the Config server (minimum and maximum) | |
MongoDB Atlas Connection Limits and Cluster Tier
MongoDB Atlas limits concurrent incoming connections based on the cluster tier and class. MongoDB Atlas connection limits apply per node. For sharded clusters, MongoDB Atlas connection limits apply per mongos router. The number of mongos routers is equal to the number of replica set nodes across all shards.
Your read preference also contributes to the total number of connections that MongoDB Atlas can allocate for a given query.
MongoDB Atlas has the following connection limits for the specified cluster tiers:
MongoDB Atlas Cluster Tier | Maximum Connections Per Node |
---|---|
| 500 |
| 500 |
| 500 |
| 1500 |
| 3000 |
| 3000 |
| 6000 |
| 16000 |
| 32000 |
| 96000 |
| 96000 |
| 128000 |
| 128000 |
MongoDB Atlas Cluster Tier | Maximum Connections Per Node |
---|---|
| 4000 |
| 16000 |
| 32000 |
| 64000 |
| 96000 |
| 128000 |
| 128000 |
| 128000 |
| 128000 |
MongoDB Atlas Cluster Tier | Maximum Connections Per Node |
---|---|
| 500 |
| 500 |
| 500 |
| 1500 |
| 3000 |
| 3000 |
| 6000 |
| 16000 |
| 32000 |
| 64000 |
| 96000 |
| 128000 |
| 128000 |
Note
MongoDB Atlas reserves a small number of connections to each cluster for supporting MongoDB Atlas services.
MongoDB Atlas Multi-Cloud Connection Limitation
If you're connecting to a multi-cloud MongoDB Atlas deployment through a private connection, you can access only the nodes in the same cloud provider that you're connecting from. This cloud provider might not have the primary node in its region. When this happens, you must specify the secondary read preference mode in the connection string to access the deployment.
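As a sketch, the read preference can be set directly in the connection string; the hostname and credentials below are placeholders, not real values:

```
mongodb+srv://user:password@cluster0.example.mongodb.net/?readPreference=secondary
```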
If you need access to all nodes for your multi-cloud MongoDB Atlas deployment from your current provider through a private connection, you must perform one of the following actions:
Configure a VPN in the current provider to each of the remaining providers.
Configure a private endpoint to MongoDB Atlas for each of the remaining providers.
MongoDB Atlas Collection and Index Limits
While there is no hard limit on the number of collections in a single MongoDB Atlas cluster, the performance of a cluster might degrade if it serves a large number of collections and indexes. Larger collections have a greater impact on performance.
The recommended maximum combined number of collections and indexes by MongoDB Atlas cluster tier are as follows:
MongoDB Atlas Cluster Tier | Recommended Maximum |
---|---|
| 5,000 collections and indexes |
| 10,000 collections and indexes |
| 100,000 collections and indexes |
MongoDB Atlas Organization and Project Limits
MongoDB Atlas deployments have the following organization and project limits:
Component | Limit |
---|---|
Database users per MongoDB Atlas project | 100 |
Atlas users per MongoDB Atlas project | 500 |
Atlas users per MongoDB Atlas organization | 500 |
API Keys per MongoDB Atlas organization | 500 |
Access list entries per MongoDB Atlas Project | 200 |
Users per MongoDB Atlas team | 250 |
Teams per MongoDB Atlas project | 100 |
Teams per MongoDB Atlas organization | 250 |
Teams per MongoDB Atlas user | 100 |
Organizations per MongoDB Atlas user | 250 |
Linked organizations per cross-organization configuration | 250 |
Clusters per MongoDB Atlas project | 25 |
Projects per MongoDB Atlas organization | 250 |
Custom MongoDB roles per MongoDB Atlas project | 100 |
Assigned roles per database user | 100 |
Hourly billing per MongoDB Atlas organization | $50 |
Federated database instances per MongoDB Atlas project | 25 |
Total Network Peering Connections per MongoDB Atlas project | 50. Additionally, MongoDB Atlas limits the number of nodes per Network Peering connection based on the CIDR block and the region selected for the project. |
Pending network peering connections per MongoDB Atlas project | 25 |
AWS Private Link addressable targets per region | 50 |
Azure PrivateLink addressable targets per region | 150 |
Unique shard keys per MongoDB Atlas-managed Global Cluster project | 40. This applies only to Global Clusters with Atlas-Managed Sharding. There are no limits on the number of unique shard keys per project for Global Clusters with Self-Managed Sharding. |
| 1 |
MongoDB Atlas Service Account Limits
MongoDB Atlas service accounts have the following organization and project limits:
Component | Limit |
---|---|
Active tokens per MongoDB Atlas service account | 100 |
MongoDB Atlas Label Limits
MongoDB Atlas limits the length and enforces RegEx requirements for the following component labels:
Component | Character Limit | RegEx Pattern |
---|---|---|
Cluster Name | 64 [1] | |
Project Name | 64 | |
Organization Name | 64 | |
API Key Description | 250 | |
[1] | If you have peering-only mode enabled, the cluster name character limit is 23. |
[2] | MongoDB Atlas uses the first 23 characters of a cluster's name. These characters must be unique within the cluster's project. Cluster names with fewer than 23 characters can't end with a hyphen (`-`). Cluster names with more than 23 characters can't have a hyphen as the 23rd character. |
[3] | Organization and project names can include any Unicode letter or number plus the following punctuation: `-_.(),:&@+'`. |
Serverless Instance, Free Cluster, M2 and M5 Cluster, and Flex Cluster Limitations
Additional limitations apply to MongoDB Atlas serverless instances, free clusters, M2, M5, and Flex clusters. To learn more, see the following resources:
MongoDB Atlas Command Limitations
Some MongoDB commands are unsupported in MongoDB Atlas. Additionally, some commands are supported only in MongoDB Atlas free clusters. To learn more, see the following resources:
BSON Documents
- BSON Document Size
The maximum BSON document size is 16 mebibytes.
The maximum document size helps ensure that a single document cannot use an excessive amount of RAM or, during transmission, an excessive amount of bandwidth. To store documents larger than the maximum size, MongoDB provides the GridFS API. For more information about GridFS, see `mongofiles` and the documentation for your driver.
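The size limit and the GridFS workaround can be sketched in plain Python; the 255 KiB chunk size mirrors the GridFS default, and `plan_storage` is an illustrative helper, not a driver API:

```python
# Sketch: a single BSON document may not exceed 16 MiB, so larger payloads
# are stored as many smaller "chunk" documents, which is what GridFS does.

MAX_BSON_SIZE = 16 * 1024 * 1024   # 16 MiB hard limit per document
CHUNK_SIZE = 255 * 1024            # GridFS default chunk size

def plan_storage(payload: bytes) -> list[bytes]:
    """Return the payload as one document, or as GridFS-style chunks."""
    if len(payload) <= MAX_BSON_SIZE:
        return [payload]  # small enough to store as a single document
    # Split into CHUNK_SIZE pieces, as a GridFS driver would.
    return [payload[i:i + CHUNK_SIZE] for i in range(0, len(payload), CHUNK_SIZE)]

blob = b"x" * (20 * 1024 * 1024)   # 20 MiB, over the single-document limit
chunks = plan_storage(blob)
print(len(chunks))                 # 81 chunk documents
```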
- Nested Depth for BSON Documents
MongoDB supports no more than 100 levels of nesting for BSON documents. Each object or array adds a level.
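A minimal sketch of checking the nesting rule client-side; the function name is illustrative:

```python
# Each object or array adds one level; MongoDB rejects documents nested
# more than 100 levels deep.

MAX_DEPTH = 100

def nesting_depth(value) -> int:
    """Depth of dicts/arrays, counting each container as one level."""
    if isinstance(value, dict):
        return 1 + max((nesting_depth(v) for v in value.values()), default=0)
    if isinstance(value, list):
        return 1 + max((nesting_depth(v) for v in value), default=0)
    return 0  # scalars add no depth

doc = {"a": {"b": [{"c": 1}]}}
print(nesting_depth(doc))             # 4: dict, dict, list, dict
print(nesting_depth(doc) <= MAX_DEPTH)  # True
```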
Naming Restrictions
- Use of Case in Database Names
Do not rely on case to distinguish between databases. For example, you cannot use two databases with names like `salesData` and `SalesData`.

After you create a database in MongoDB, you must use consistent capitalization when you refer to it. For example, if you create the `salesData` database, do not refer to it using alternate capitalization such as `salesdata` or `SalesData`.
- Restrictions on Database Names for Windows
For MongoDB deployments running on Windows, database names cannot contain any of the following characters:
`/\. "$*<>:|?` Also, database names cannot contain the null character.
- Restrictions on Database Names for Unix and Linux Systems
For MongoDB deployments running on Unix and Linux systems, database names cannot contain any of the following characters:
`/\. "$` Also, database names cannot contain the null character.
- Restriction on Collection Names
Collection names should begin with an underscore or a letter character, and cannot:
  - contain the `$` character.
  - be an empty string (e.g. `""`).
  - contain the null character.
  - begin with the `system.` prefix. (Reserved for internal use.)
  - contain `.system.`.
If your collection name includes special characters, such as the underscore character, or begins with numbers, then to access the collection use the `db.getCollection()` method in `mongosh` or a similar method for your driver.

Namespace Length: The namespace length limit for unsharded collections and views is 255 bytes, and 235 bytes for sharded collections. For a collection or a view, the namespace includes the database name, the dot (`.`) separator, and the collection/view name (e.g. `<database>.<collection>`).
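The naming and namespace rules above can be checked client-side; a minimal sketch, where `valid_collection_name` is an illustrative helper (not a driver API) and the leading-character check enforces the *recommended* rule:

```python
import re

def valid_collection_name(name: str, db: str = "test") -> bool:
    # Hard restrictions from the rules above.
    if name == "" or "\x00" in name or "$" in name:
        return False
    if name.startswith("system.") or ".system." in name:
        return False
    # Recommended: begin with an underscore or a letter.
    if not re.match(r"[A-Za-z_]", name):
        return False
    # Namespace limit: 255 bytes for unsharded collections and views.
    namespace = f"{db}.{name}"
    return len(namespace.encode("utf-8")) <= 255

print(valid_collection_name("orders"))       # True
print(valid_collection_name("system.junk"))  # False: reserved prefix
print(valid_collection_name("a$b"))          # False: contains '$'
```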
- Restrictions on Field Names
Field names cannot contain the `null` character.

The server permits storage of field names that contain dots (`.`) and dollar signs (`$`). MongoDB 5.0 adds improved support for the use of (`$`) and (`.`) in field names, with some restrictions. See Field Name Considerations for more details.

Each field name must be unique within the document. You must not store documents with duplicate fields because MongoDB CRUD operations might behave unexpectedly if a document has duplicate fields.
Naming Warnings
Warning
Use caution: the issues discussed in this section could lead to data loss or corruption.
MongoDB does not support duplicate field names
The MongoDB Query Language doesn't support documents with duplicate field names:
Although some BSON builders may support creating a BSON document with duplicate field names, inserting these documents into MongoDB isn't supported even if the insert succeeds, or appears to succeed.
For example, inserting a BSON document with duplicate field names through a MongoDB driver may result in the driver silently dropping the duplicate values prior to insertion, or may result in an invalid document being inserted that contains duplicate fields. Querying those documents leads to inconsistent results.
Updating documents with duplicate field names isn't supported, even if the update succeeds or appears to succeed.
Starting in MongoDB 6.1, to see if a document has duplicate field names, use the `validate` command with the `full` field set to `true`. In any MongoDB version, use the `$objectToArray` aggregation operator to see if a document has duplicate field names.
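Duplicates can also be caught client-side before a document ever reaches a driver. A minimal sketch using the standard-library `json` module, whose `object_pairs_hook` exposes raw key/value pairs before they collapse into one dict entry (`find_duplicate_keys` is an illustrative helper):

```python
import json

def find_duplicate_keys(raw: str) -> list[str]:
    """Collect keys that appear more than once in any object of a JSON text."""
    dupes: list[str] = []

    def check(pairs):
        seen = set()
        for key, _ in pairs:
            if key in seen:
                dupes.append(key)
            seen.add(key)
        return dict(pairs)

    json.loads(raw, object_pairs_hook=check)
    return dupes

print(find_duplicate_keys('{"a": 1, "a": 2, "b": 3}'))  # ['a']
print(find_duplicate_keys('{"a": 1, "b": {"b": 2}}'))   # [] -- nesting is fine
```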
Avoid Ambiguous Field Names
Do not use a field name that is the same as the dot notation for an embedded field. If you have a document with an embedded field `{ "a" : { "b": ... } }`, other documents in that collection should not have a top-level field `"a.b"`.

If you can reference an embedded field and a top-level field in the same way, indexing and sharding operations happen on the embedded field. For example, if your collection contains documents with both an embedded field `{ "a" : { "b": ... } }` and a top-level field `"a.b"`, it is not possible to index or shard on the top-level field `"a.b"`.
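The ambiguity check above can be sketched client-side; `ambiguous_fields` is an illustrative helper, not a driver API:

```python
def ambiguous_fields(doc: dict) -> set[str]:
    """Top-level keys like "a.b" that also resolve as embedded dot-notation paths."""
    clashes = set()
    for key in doc:
        if "." not in key:
            continue
        # Walk the dotted path through embedded documents.
        node, ok = doc, True
        for part in key.split("."):
            if isinstance(node, dict) and part in node:
                node = node[part]
            else:
                ok = False
                break
        if ok:
            clashes.add(key)
    return clashes

doc = {"a": {"b": 1}, "a.b": 2, "c.d": 3}
print(ambiguous_fields(doc))  # {'a.b'}: "c.d" has no embedded counterpart
```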
Import and Export Concerns With Dollar Signs (`$`) and Periods (`.`)

Starting in MongoDB 5.0, document field names can be dollar (`$`) prefixed and can contain periods (`.`). However, `mongoexport` may not work as expected in some situations with field names that make use of these characters.
MongoDB Extended JSON v2 cannot differentiate between type wrappers and fields that happen to have the same name as type wrappers. Do not use Extended JSON formats in contexts where the corresponding BSON representations might include dollar (`$`) prefixed keys. The DBRef mechanism is an exception to this general rule.
There are also restrictions on using `mongoimport` and `mongoexport` with periods (`.`) in field names. Since CSV files use the period (`.`) to represent data hierarchies, a period (`.`) in a field name will be misinterpreted as a level of nesting.
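The CSV ambiguity is easy to demonstrate: once field names are flattened with `.` as the separator (as CSV import/export tooling does), a literal `"a.b"` field is indistinguishable from an embedded one. A minimal sketch with an illustrative `flatten` helper:

```python
def flatten(doc: dict, prefix: str = "") -> dict:
    """Flatten nested documents into dotted column names, CSV-style."""
    out = {}
    for key, value in doc.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, path))
        else:
            out[path] = value
    return out

embedded = {"a": {"b": 1}}   # nested document
literal = {"a.b": 1}         # top-level field with a period in its name

print(flatten(embedded))  # {'a.b': 1}
print(flatten(literal))   # {'a.b': 1} -- identical: the distinction is lost
```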
Possible Data Loss With Dollar Signs (`$`) and Periods (`.`)

There is a small chance of data loss when using dollar (`$`) prefixed field names or field names that contain periods (`.`) if these field names are used in conjunction with unacknowledged writes (write concern `w=0`) on servers that are older than MongoDB 5.0.
When running `insert`, `update`, and `findAndModify` commands, drivers that are 5.0 compatible remove restrictions on using documents with field names that are dollar (`$`) prefixed or that contain periods (`.`). These field names generated a client-side error in earlier driver versions.

The restrictions are removed regardless of the server version the driver is connected to. If a 5.0 driver sends a document to an older server, the document will be rejected without sending an error.
Namespaces
Indexes
- Number of Indexed Fields in a Compound Index
There can be no more than 32 fields in a compound index.
- Queries cannot use both text and Geospatial Indexes
You cannot combine the `$text` query, which requires a special text index, with a query operator that requires a different type of special index. For example, you cannot combine a `$text` query with the `$near` operator.
- Fields with 2dsphere Indexes can only hold Geometries
Fields with 2dsphere indexes must hold geometry data in the form of coordinate pairs or GeoJSON data. If you attempt to insert a document with non-geometry data in a `2dsphere` indexed field, or build a `2dsphere` index on a collection where the indexed field has non-geometry data, the operation will fail.

See also: The unique indexes limit in Sharding Operational Restrictions.
- Limited Number of 2dsphere index keys
To generate keys for a 2dsphere index, `mongod` maps GeoJSON shapes to an internal representation. The resulting internal representation may be a large array of values.

When `mongod` generates index keys on a field that holds an array, `mongod` generates an index key for each array element. For compound indexes, `mongod` calculates the cartesian product of the sets of keys that are generated for each field. If both sets are large, then calculating the cartesian product could cause the operation to exceed memory limits.

The `indexMaxNumGeneratedKeysPerDocument` parameter limits the maximum number of keys generated for a single document to prevent out-of-memory errors. The default is 100000 index keys per document. It is possible to raise the limit, but if an operation requires more keys than the `indexMaxNumGeneratedKeysPerDocument` parameter specifies, the operation will fail.
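The cartesian-product growth is simple arithmetic; a sketch with illustrative per-field counts (the 400 and 300 below are made-up examples, not measured values):

```python
import math

# Default value of the indexMaxNumGeneratedKeysPerDocument server parameter.
DEFAULT_KEY_LIMIT = 100_000

def generated_keys(keys_per_field: list[int]) -> int:
    """Keys a compound index generates for one document: the product of
    the number of keys each indexed field contributes."""
    return math.prod(keys_per_field)

# A 2dsphere field mapped to 400 internal values, combined with an
# array field of 300 elements in the same compound index:
needed = generated_keys([400, 300])
print(needed)                      # 120000
print(needed > DEFAULT_KEY_LIMIT)  # True -- this operation would fail
```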
- NaN values returned from Covered Queries by the WiredTiger Storage Engine are always of type double
If the value of a field returned from a query that is covered by an index is `NaN`, the type of that `NaN` value is always `double`.
- Multikey Index
Multikey indexes cannot cover queries over array fields.
- Geospatial Index
Geospatial indexes can't cover a query.
- Memory Usage in Index Builds
`createIndexes` supports building one or more indexes on a collection. `createIndexes` uses a combination of memory and temporary files on disk to complete index builds. The default limit on memory usage for `createIndexes` is 200 megabytes, shared between all indexes built using a single `createIndexes` command. Once the memory limit is reached, `createIndexes` uses temporary disk files in a subdirectory named `_tmp` within the `--dbpath` directory to complete the build.

You can override the memory limit by setting the `maxIndexBuildMemoryUsageMegabytes` server parameter. Setting a higher memory limit may result in faster completion of index builds. However, setting this limit too high relative to the unused RAM on your system can result in memory exhaustion and server shutdown.

For feature compatibility version (fcv) `"4.2"` and later, the index build memory limit applies to all index builds. Index builds may be initiated either by a user command such as `createIndexes` or by an administrative process such as an initial sync. Both are subject to the limit set by `maxIndexBuildMemoryUsageMegabytes`.

An initial sync populates only one collection at a time and has no risk of exceeding the memory limit. However, it is possible for a user to start index builds on multiple collections in multiple databases simultaneously and potentially consume an amount of memory greater than the limit set by `maxIndexBuildMemoryUsageMegabytes`.

Tip
To minimize the impact of building an index on replica sets and sharded clusters with replica set shards, use a rolling index build procedure as described on Rolling Index Builds on Replica Sets.
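The server parameter can be set at startup; a minimal `mongod.conf` sketch, where the 500 MB value is illustrative and should be sized against your unused RAM:

```yaml
setParameter:
  maxIndexBuildMemoryUsageMegabytes: 500   # default is 200
```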
- Collation and Index Types
The following index types only support simple binary comparison and do not support collation:
  - `text` indexes
  - `2d` indexes

Tip
To create a `text` or `2d` index on a collection that has a non-simple collation, you must explicitly specify `{ collation: { locale: "simple" } }` when creating the index.
- Hidden Indexes
You cannot hide the `_id` index.
Sorts
Data
- Maximum Number of Documents in a Capped Collection
If you specify the maximum number of documents in a capped collection with `create`'s `max` parameter, the value must be less than 2^31 documents.

If you do not specify a maximum number of documents when creating a capped collection, there is no limit on the number of documents.
Replica Sets
- Number of Voting Members of a Replica Set
Replica sets can have up to 7 voting members. For replica sets with more than 7 total members, see Non-Voting Members.
- Maximum Size of Auto-Created Oplog
If you do not explicitly specify an oplog size (i.e. with `oplogSizeMB` or `--oplogSize`), MongoDB will create an oplog that is no larger than 50 gigabytes. [4]

[4] The oplog can grow past its configured size limit to avoid deleting the majority commit point.
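To set the oplog size explicitly at startup, a minimal `mongod.conf` sketch (the 51200 MB value is illustrative):

```yaml
replication:
  oplogSizeMB: 51200   # 51200 MB = 50 GB cap on the oplog
```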
Sharded Clusters
Sharded clusters have the restrictions and thresholds described here.
Sharding Operational Restrictions
- Operations Unavailable in Sharded Environments
`$where` does not permit references to the `db` object from the `$where` function. This is uncommon in un-sharded collections.

The `geoSearch` command is not supported in sharded environments.

In MongoDB 5.0 and earlier, you cannot specify sharded collections in the `from` parameter of `$lookup` stages.
- Covered Queries in Sharded Clusters
When run on `mongos`, indexes can only cover queries on sharded collections if the index contains the shard key.
- Single Document Modification Operations in Sharded Collections
To use `update` and `remove()` operations for a sharded collection that specify the `justOne` or `multi: false` option:
  - If you only target one shard, you can use a partial shard key in the query specification, or
  - You can provide the shard key or the `_id` field in the query specification.
- Unique Indexes in Sharded Collections
MongoDB does not support unique indexes across shards, except when the unique index contains the full shard key as a prefix of the index. In these situations MongoDB will enforce uniqueness across the full key, not a single field.
- Maximum Number of Documents Per Range to Migrate
By default, MongoDB cannot move a range if the number of documents in the range is greater than 2 times the result of dividing the configured range size by the average document size. If MongoDB can move a sub-range of a chunk and reduce the size to less than that, the balancer does so by migrating a range. `db.collection.stats()` includes the `avgObjSize` field, which represents the average document size in the collection.

For chunks that are too large to migrate:
  - The balancer setting `attemptToBalanceJumboChunks` allows the balancer to migrate chunks too large to move as long as the chunks are not labeled jumbo. See Balance Ranges that Exceed Size Limit for details.
  - When issuing `moveRange` and `moveChunk` commands, it's possible to specify the `forceJumbo` option to allow for the migration of ranges that are too large to move. The ranges may or may not be labeled jumbo.
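The migration threshold is straightforward arithmetic; a sketch with illustrative numbers (the 128 MB range size is the usual default, and `avgObjSize` would come from `db.collection.stats()`):

```python
def max_documents_per_range(range_size_bytes: int, avg_obj_size_bytes: int) -> int:
    """A range can move only if its document count is at most
    2 * (configured range size / average document size)."""
    return 2 * (range_size_bytes // avg_obj_size_bytes)

range_size = 128 * 1024 * 1024   # 128 MB configured range size
avg_obj_size = 2 * 1024          # 2 KiB average document (from avgObjSize)

limit = max_documents_per_range(range_size, avg_obj_size)
print(limit)            # 131072 documents
print(200_000 > limit)  # True -- such a range is too large to migrate
```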