  • You did notice that the NTFS performance article you quote from applies to NT 3.5, released 1994, right? Commented Nov 1, 2015 at 16:03
  • @AvnerShahar-Kashtan yep. Git was released in 2005. I know; I was using NTFS v1.2-based file systems in a corporate environment well into the early 2000s (at a tech company, no less). There is certainly overlap between the requirements of git and the file systems on commonly available systems at the time. Commented Nov 1, 2015 at 16:11
  • Perhaps it would be clearer if you stated that this might be a historical artifact of the state of technology when git was introduced, because as it stands, for a question asked in 2015, quoting a twenty-year-old technical limitation (taking up half the answer) seems confusing. Commented Nov 1, 2015 at 16:33
  • To be fair, git's "pack" system mitigates a lot of these issues. Theoretically, git could be using only a single directory, and just repacking when the number of files in that directory exceeded a certain (possibly FS-dependent) limit. Commented Nov 1, 2015 at 17:37
  • @AvnerShahar-Kashtan if you read the linked SO article you can see that dealing with directories containing a large number of files is problematic on multiple file systems and operating systems, not just NT 3.5. File limits aside, even just listing the files can incur a large amount of overhead. Commented Nov 1, 2015 at 18:37
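
The sharding these comments are circling is exactly what git's loose-object layout does: the first two hex digits of an object's hash name a subdirectory under .git/objects, so objects fan out across at most 256 directories instead of accumulating in one flat directory that is slow to list. A minimal Python sketch of that path scheme (illustrative only; real git hashes a typed, NUL-terminated header along with the content, zlib-compresses what it writes, and may keep the object in a packfile instead):

    import hashlib
    import os

    def loose_object_path(git_dir: str, data: bytes) -> str:
        """Where a loose object named by the SHA-1 of `data` would live."""
        digest = hashlib.sha1(data).hexdigest()
        # The first two hex digits pick the subdirectory and the remaining 38
        # the file name, spreading loose objects over at most 256 directories.
        return os.path.join(git_dir, "objects", digest[:2], digest[2:])

    print(loose_object_path(".git", b"hello world"))
    # .git/objects/2a/ae6c35c94fcfb415dbe95f408b9ce91ee846ed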
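
The single-directory-plus-repack idea raised in the comments is also close to what git actually does for maintenance: git gc --auto repacks loose objects once their count crosses the gc.auto threshold (6700 by default). A rough sketch of that policy in Python, shelling out to git; the hard-coded limit and the bare git repack -d call are simplifications of git's own logic:

    import subprocess

    LOOSE_LIMIT = 6700  # mirrors git's default gc.auto threshold

    def loose_object_count(repo: str = ".") -> int:
        """Parse the loose-object count from `git count-objects -v`."""
        out = subprocess.run(["git", "count-objects", "-v"], cwd=repo,
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            if line.startswith("count:"):
                return int(line.split()[1])
        raise RuntimeError("unexpected count-objects output")

    if loose_object_count() > LOOSE_LIMIT:
        # Fold loose objects into a packfile and drop the now-redundant loose
        # copies, roughly what git gc --auto triggers.
        subprocess.run(["git", "repack", "-d"], check=True)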