Most modern UNIX filesystems do not limit the number of files you can store in a single directory. However, metadata operations such as listing a directory, resolving a path, or checking whether a file exists take longer as the number of entries grows. On some systems I have seen performance degrade with as few as 1,000 entries in a directory, but there is no magic number: filesystem performance depends on many factors, including the hardware it runs on. Based on that experience, I tend to play it safe and limit the number of files in a single directory to 1,000, especially when developing open-source or public software where the deployment environment is unknown.
So this brings us to a common problem: how do you store a large number of files while keeping access fast? One solution is file name hashing.
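To make the idea concrete, here is a minimal sketch in Python. The function names (`hashed_path`, `store`), the choice of MD5, and the two-level layout of two hex characters per level are illustrative assumptions, not a fixed standard; any stable hash and nesting depth that keeps each directory small will do.

```python
import hashlib
import os


def hashed_path(base_dir: str, filename: str, levels: int = 2) -> str:
    """Map a filename to a nested path derived from its hash.

    With two levels of two hex characters each, files spread across
    256 * 256 = 65,536 buckets, so each directory stays small even
    with millions of files overall.
    """
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    # Take successive 2-character slices of the hex digest as directory names.
    parts = [digest[i * 2:(i + 1) * 2] for i in range(levels)]
    return os.path.join(base_dir, *parts, filename)


def store(base_dir: str, filename: str, data: bytes) -> str:
    """Write data under its hashed path, creating directories as needed."""
    path = hashed_path(base_dir, filename)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)
    return path
```

Calling `hashed_path("/var/data", "avatar.png")` yields a path of the form `/var/data/xx/yy/avatar.png`, where `xx` and `yy` are the first two byte pairs of the filename's hash. Because the path is recomputed from the filename on every access, lookups never need to scan a large directory.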