I have 10 million folders. Each folder contains 13 files.

I would like to put all of these folders in one main folder (root).

Is there any limitation in Windows Server for that?


As far as the theoretical capacities of NTFS are concerned, there is no problem.

The Microsoft article on Maximum Sizes on an NTFS Volume specifies that the maximum number of files per volume is 4,294,967,295, and that should also be the maximum per folder. However, you would need an extremely fast computer with lots of RAM to even view such a folder in Explorer.

From my own experience, on a good computer from several years ago, opening a folder with thousands of sub-folders took tens of seconds just to display it. I have no idea what would happen with 10 million sub-folders, but you would surely need a lot of patience, even if the computer could eventually handle it.

I really suggest rethinking your folder architecture.


This may be an X/Y problem. Perhaps what you are doing is better suited to a database than to a filesystem. With a database, you can store and access many millions of records quickly and efficiently. The accepted answer is correct in saying that NTFS can theoretically store this many entries, but it won't be fast. That is true for essentially all filesystems (NTFS, exFAT, ext4, HFS, ...): they simply aren't designed to scale to what you're trying to do.

One of the main reasons is that filesystem APIs are built around enumerating a directory's entries. Even where the API accepts a name pattern, the whole directory still has to be walked, and there is no way to ask the filesystem for only the entries that match some other attribute, such as size or creation/modification time; you have to retrieve them all and filter the (massive) listing yourself. A database with suitable indexes does not have this limitation.
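
As an illustration, here is a minimal sketch contrasting the two approaches (the SQLite table name, columns and paths are hypothetical examples, not anything from the question): the filesystem version must touch every directory entry, while the database version can answer the same question through an index.

```python
import os
import sqlite3

def folders_modified_after(root, timestamp):
    """Filesystem: every one of the ~10M entries has to be read and checked."""
    matches = []
    with os.scandir(root) as entries:
        for entry in entries:  # linear enumeration, no filtering by the filesystem
            if entry.is_dir() and entry.stat().st_mtime > timestamp:
                matches.append(entry.name)
    return matches

def records_modified_after(db_path, timestamp):
    """Database: an index on the column lets the engine skip most rows."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS documents "
                "(name TEXT PRIMARY KEY, mtime REAL, data BLOB)")
    con.execute("CREATE INDEX IF NOT EXISTS idx_mtime ON documents(mtime)")
    rows = con.execute("SELECT name FROM documents WHERE mtime > ?",
                       (timestamp,))
    return [row[0] for row in rows]
```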


The number of files inside a folder has nothing to do with the OS; it is a property of the file system, although the OS you use may in turn impose lower limits. Some file systems limit the number of files in a folder, some only limit the total number of files in a volume, and some have no limit at all. See the comparison of file systems' limits. Note that a directory is basically just a file whose content is a list of other files.

If you use exFAT, the maximum is 2,796,202 files per folder. In NTFS the limit is 2³² − 1 files per volume. And if you use FAT, the limit depends on the FAT version:

  • FAT12: 4,068 for 8 KiB clusters
  • FAT16: 65,460 for 32 KiB clusters
  • FAT32: 268,173,300 for 32 KiB clusters

Windows also natively supports a few other file systems such as ReFS, and you can install drivers for non-native file systems; these may in turn have different limits.

But in any case, having a huge number of files in a folder is a very bad idea. Listing and operating speed depends on how the file system stores its metadata: in FAT a directory is a linear list, so it is very slow, and even with an efficient structure like the B+tree NTFS uses for its directory index it is still slow. In general I avoid having more than 2,000 files in a folder.

A better solution in your case would be some kind of database. However, if you really have to store the files directly on a drive, then you need to distribute them evenly across multiple smaller folders. The common way is to hash the file name or content and split the hash into path components. For example, if the hash is 0xabcdef12 (32 bits), then store the file in ab/cd/ef/12, ab/cde/f12 or 2af/0de/f12 (the path components represent 8/8/8/8, 8/12/12 and 10/10/12 bits of the original value, respectively). This way no folder ends up with too many or too few files; a minimal sketch of this scheme follows the links below. See

  • How to spread/hash multiple files on disk without storing more than 1000 per directory?
  • Storing a million images in the filesystem
  • Accessing thousands of files in hash of directories

This method is commonly used by Git and Docker, for example.
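
Here is a minimal sketch of that scheme, splitting two bytes of the hash into two folder levels (the choice of SHA-256 over the file name and the two-level default are just assumptions for illustration; any stable hash of the name or content works):

```python
import hashlib
from pathlib import Path

def sharded_path(root, filename, levels=2):
    """Map a file name to root/ab/cd/filename using the first hash bytes."""
    digest = hashlib.sha256(filename.encode("utf-8")).hexdigest()
    parts = [digest[2 * i:2 * i + 2] for i in range(levels)]  # e.g. ['ab', 'cd']
    return Path(root, *parts, filename)

def store(root, filename, data):
    target = sharded_path(root, filename)
    target.parent.mkdir(parents=True, exist_ok=True)  # create shard folders lazily
    target.write_bytes(data)

# Two levels give 256 * 256 = 65,536 leaf folders, so 10 million files
# average out to roughly 150 per folder, well under the ~2,000 suggested above.
```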

See also

  • Is it bad if millions of files are stored in one NTFS folder?
  • Performance implications of storing 600,000+ images in the same folder (NTFS)
  • Can file system performance decrease if there is a very large number of files in a single directory (NTFS)?
  • Having 1 million folder or have 1 million files in one folder?
  • How many files can you put in a Windows folder without a noticable performance degradation?
  • NTFS performance and large volumes of files and directories
  • How do you deal with lots of small files?
  • Millions of small graphics files and how to overcome slow file system access on XP
  • How many files in a directory is too many (on Windows and Linux)?
  • Millions of (small) text files in a folder
  • Performance associated with storing millions of files on NTFS