Is there any limitation on having many files in a directory in Mac OS X?
According to this Stack Overflow answer and specific details on Apple’s site, an individual folder can contain up to 2.1 billion items.
That said, just because a folder can hold up to 2.1 billion items doesn't mean it can maintain performance at that scale. According to Wikipedia (emphasis mine):
The Catalog File, which stores all the file and directory records in a single data structure, results in performance problems when the system allows multitasking, as only one program can write to this structure at a time, meaning that many programs may be waiting in queue due to one program "hogging" the system. It is also a serious reliability concern, as damage to this file can destroy the entire file system.
So performance naturally degrades because the catalog file can only be written by one program at a time. And as the directory grows, that risk and that degradation only escalate: more files mean more catalog operations, and more chances for programs to end up waiting on that single structure. Further confirmation of that idea comes from here (again, emphasis mine):
The catalog file is a complicated structure. Because it keeps all file and directory information, it forces serialization of the file system—not an ideal situation when there are a large number of threads wanting to perform file I/O. In HFS, any operation that creates a file or modifies a file in any way has to lock the catalog file, which prevents other threads from even read-only access to the catalog file. Access to the catalog file must be single-writer/multireader.
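If you want to see that serialization from a script's point of view, a rough experiment is to create the same number of files with one writer and then with several concurrent writers and compare the timings. The sketch below is a minimal, illustrative version of that idea (Python is assumed, and the file and thread counts are arbitrary); it only measures wall-clock time and does not inspect HFS+ internals, so treat any difference it shows as a hint rather than proof.

```python
# Illustrative sketch: time creating the same number of files with one writer
# vs. several concurrent writers in a single directory. The workload sizes are
# arbitrary and the result only reflects this machine.
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

TOTAL_FILES = 4_000        # illustrative workload size
THREAD_COUNTS = (1, 4)     # compare one writer with several concurrent writers

def create_files(directory, prefix, count):
    """Create `count` tiny files in `directory`; each create updates the catalog."""
    for i in range(count):
        with open(os.path.join(directory, f"{prefix}-{i}.txt"), "w") as fh:
            fh.write("x")

for workers in THREAD_COUNTS:
    per_worker = TOTAL_FILES // workers
    with tempfile.TemporaryDirectory() as tmp:
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(create_files, tmp, f"t{w}", per_worker)
                       for w in range(workers)]
            for future in futures:
                future.result()  # wait for completion and surface any errors
        elapsed = time.perf_counter() - start
        print(f"{workers} writer(s): {TOTAL_FILES} files in {elapsed:.2f}s")
```

On a file system that serializes catalog updates, adding writers should not help much; where metadata updates can proceed in parallel, you would expect the multi-writer run to come out noticeably faster.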
Short Answer: Well, if you're reading 100,000 files, I would expect the script to be slow.
Long Answer: To answer this question more thoroughly, you have to look at the file system on a Mac. Macs use HFS+ (Hierarchical File System Plus), a modern file system whose limitations only show up in extreme situations.
From my experience, it's a lot like a Linux ext journaling file system: it supports mounting directories, UNIX-like permissions, and so on. It addresses files in a 32-bit format, which makes the maximum number of files that can be stored in a volume 4,294,967,295 (2^32 - 1), according to this source.
The file system only starts to break down with files bigger than 8 EB on modern systems, or with more than roughly 2.1 billion files and folders in one location, as outlined here.
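To put a number on the "reading 100,000 files will be slow" intuition, you can simply time the enumeration and read of whatever directory you actually care about. The sketch below (Python assumed; the directory argument and the read-everything loop are just illustrative) uses os.scandir, which reuses the metadata returned by the directory listing instead of issuing a separate stat call per file.

```python
# Illustrative sketch: time how long it takes to enumerate one directory and
# read every regular file in it. Pass the directory as the first argument.
import os
import sys
import time

target = sys.argv[1] if len(sys.argv) > 1 else "."  # directory to scan

start = time.perf_counter()
count = total_bytes = 0
with os.scandir(target) as entries:
    for entry in entries:
        if entry.is_file(follow_symlinks=False):
            with open(entry.path, "rb") as fh:
                total_bytes += len(fh.read())
            count += 1
elapsed = time.perf_counter() - start
print(f"Read {count} files ({total_bytes} bytes) in {elapsed:.2f}s")
```

Run it with the directory as the first argument; the result tells you whether the slowness you see comes from the sheer volume of I/O or from something else in the script.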
Given the way HFS+ (or really any file system, for that matter) is set up, having a lot of files in a folder should not do anything 'weird'.
Honestly, I don't think you would see a performance improvement from distributing the files across a more complex folder hierarchy. If anything, that approach might be less efficient, because your script would have to make extra calls to change directories mid-process; the sketch below is one way to test that on your own data.
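If you want to check whether a sharded layout actually costs or saves anything on your machine, a rough comparison is to build the same number of files both ways and time a full traversal of each. The sketch below is a minimal version of that comparison (Python assumed; the file count and the two-level shard layout are arbitrary choices, not something from the answer above).

```python
# Illustrative sketch: build the same number of empty files in one flat
# directory and in a sharded two-level layout, then time a full os.walk of each.
import os
import tempfile
import time

NUM_FILES = 5_000   # illustrative; raise this to stress the file system harder
SHARDS = 50         # number of subdirectories in the sharded layout

def build_flat(root):
    for i in range(NUM_FILES):
        open(os.path.join(root, f"{i}.txt"), "w").close()

def build_sharded(root):
    for shard in range(SHARDS):
        os.mkdir(os.path.join(root, f"{shard:02d}"))
    for i in range(NUM_FILES):
        shard = f"{i % SHARDS:02d}"
        open(os.path.join(root, shard, f"{i}.txt"), "w").close()

def time_walk(root):
    start = time.perf_counter()
    seen = sum(len(files) for _, _, files in os.walk(root))
    assert seen == NUM_FILES  # sanity check that both layouts hold the same files
    return time.perf_counter() - start

for name, builder in (("flat", build_flat), ("sharded", build_sharded)):
    with tempfile.TemporaryDirectory() as root:
        builder(root)
        print(f"{name}: walked {NUM_FILES} files in {time_walk(root):.3f}s")
```

The extra directory opens in the sharded layout are the cost referred to above; whether they matter in practice depends on how many files you have and how your script accesses them.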