
Thread: is there some limit to total size of files in a directory?

  1. #1

    Default is there some limit to total size of files in a directory?

    Hello,

    I have an application that creates a number of files. The files are reasonably large, between 4 and 11 MB. The entire directory tree may contain up to 1500 or so files of this size. Another application would add another 1500 files of about half the size. The script that runs this is failing because, at some point in the process, bash is not able to access files that should exist. Since the errors arise late in the process, I am assuming the issue is the total size of the files. There is plenty of space in the partition, and if I decrease the total number of files on the system at any one time, the errors go away.

    I am just trying to track down what the issue is and what the limitations may be. I am not that familiar with ext4 file systems.

    LMHmedchem

  2. #2

    Default Re: is there some limit to total size of files in a directory?

    Hi,

    There might be a limit, but if speed is what you're after, the shell is not the tool you should be using. Care to tell us why that script is creating so many files? (What type of files?)
    "Unfortunately time is always against us" -- [Morpheus]

    .:https://github.com/Jetchisel:.

  3. #3
    Join Date
    Jun 2008
    Location
    Netherlands
    Posts
    25,240

    Default Re: is there some limit to total size of files in a directory?

    And please, when you get an error message, copy/paste it here in a post (between CODE tags). When you want others to interpret such a message, they must see it, not read your vague interpretation of it.
    Henk van Velden

  4. #4
    Join Date
    Feb 2009
    Location
    Spain
    Posts
    25,547

    Default Re: is there some limit to total size of files in a directory?

    On 2015-01-14 02:46, LMHmedchem wrote:

    > I am just trying to track down what the issue is and what the
    > limitations may be. I am not that familiar with ext4 file systems.


    There is no directory size limit. It may become slower, yes. Other
    filesystem types cope better: you can easily place a million files in a
    single reiserfs directory at top speed.

    The limit is typically inside your script: there is a limit on the size
    of the command line you get when you expand a directory listing, as in
    "ls *".


    --
    Cheers / Saludos,

    Carlos E. R.
    (from 13.1 x86_64 "Bottle" at Telcontar)

  5. #5
    Join Date
    Jun 2008
    Location
    Praha, Czech Republic
    Posts
    45

    Default Re: is there some limit to total size of files in a directory?

    Actually sounds like a lack of inodes to me, especially since you say
    that the problem goes away when you delete files.

    I assume the filesystem is not full, but inodes are also a limiting
    factor, especially when you have a lot of small files.
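
    If you want to rule that out, df can report inode usage instead of
    blocks (replace the path below with the mount point in question):

    Code:
    # show used/free inodes rather than disk blocks
    df -i /path/to/partition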

  6. #6
    Join Date
    Feb 2009
    Location
    Spain
    Posts
    25,547

    Default Re: is there some limit to total size of files in a directory?

    On 2015-01-14 16:06, KunzeS wrote:
    >
    > Actually sounds like a lack of inodes to me, especially since you say
    > that the problem goes away when you delete files.


    Doubtful, with fewer than 3000 files :-)

    Notice that when you delete files, the directory listing also gets
    smaller and can fit in the command-line buffer. I have seen this happen.

    "Just" changing the command so that it never tries to use things like
    "*" works.


    --
    Cheers / Saludos,

    Carlos E. R.
    (from 13.1 x86_64 "Bottle" at Telcontar)

  7. #7

    Default Re: is there some limit to total size of files in a directory?

    Quote Originally Posted by LMHmedchem View Post
    Hello,

    I have an application that creates a number of files. The files are reasonably large, between 4 and 11 MB. The entire directory tree may contain up to 1500 or so files of this size. Another application would add another 1500 files of about half the size. The script that runs this is failing because, at some point in the process, bash is not able to access files that should exist. Since the errors arise late in the process, I am assuming the issue is the total size of the files. There is plenty of space in the partition, and if I decrease the total number of files on the system at any one time, the errors go away.

    I am just trying to track down what the issue is and what the limitations may be. I am not that familiar with ext4 file systems.

    LMHmedchem
    Hi,

    I tested creating 3000 files of more or less 11 MB each, and using the glob * hit no limit when counting or accessing them.

    This creates a dummy file named DummyFile of more or less 11 MB.

    Code:
    dd if=/dev/zero of=DummyFile bs=1M count=11
    Check the file size.

    Code:
    ls -sh DummyFile

    Create an empty directory just for testing purposes.
    Code:
    mkdir testing && cd $_
    Now to create 3000 files of more or less 11 MB each.

    WARNING: Don't attempt this if you have a heavy workload running or any mission-critical work in progress. (Just to be safe.)
    Code:
    for ((i=0;i<3000;i++)); do dd if=/dev/zero of=$i bs=1M count=11; done
    When it is done you can check out the file sizes.

    Code:
    ls -sh *
    or count them:

    Code:
    total=(*); echo "${#total[@]}"
    So even the glob * has no issue with counting. But say you have so many files that at some point you need printf and wc:

    Code:
    printf "%0.0sx" * | wc -c
    So that concludes my test case. In the end, 3000 files of 11 MB each are not an issue even for the shell. PEBCAK, maybe?
    "Unfortunately time is always against us" -- [Morpheus]

    .:https://github.com/Jetchisel:.

  8. #8

    Default Re: is there some limit to total size of files in a directory?

    Hi,

    Hmmm, OK, you might hit a brick wall if you add an & after the dd inside the for loop.

    WARNING: Don't attempt this if you have a heavy workload running or any mission-critical work in progress.

    Code:
    for ((i=0;i<3000;i++)); do dd if=/dev/zero of=$i bs=1M count=11 & done

    That will run dd "asynchronously", i.e. it will not wait for one dd process to finish before starting another. Imagine 3000 dd processes running at the same time; that could send your system down the rabbit hole.

    Again, kids, don't try this at home.
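
    If you really wanted the parallelism without the meltdown, you could cap the number of background jobs; a sketch, needing bash 4.3 or later for "wait -n":

    Code:
    max=8   # arbitrary cap on concurrent dd processes
    for ((i=0;i<3000;i++)); do
        dd if=/dev/zero of="$i" bs=1M count=11 2>/dev/null &
        # once $max jobs are running, wait for one to exit
        (( $(jobs -rp | wc -l) >= max )) && wait -n
    done
    wait    # collect the remaining jobs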
    "Unfortunately time is always against us" -- [Morpheus]

    .:https://github.com/Jetchisel:.

  9. #9
    Join Date
    Jun 2008
    Location
    Auckland, NZ
    Posts
    20,544
    Blog Entries
    1

    Default Re: is there some limit to total size of files in a directory?

    Best left as a thought experiment, or inside a VM, perhaps.

  10. #10
    Join Date
    Feb 2009
    Location
    Spain
    Posts
    25,547

    Default Re: is there some limit to total size of files in a directory?

    On 2015-01-14 23:36, jetchisel wrote:

    > So that concludes my test case. In the end, 3000 files of 11 MB each
    > are not an issue even for the shell


    But your file names are short, just 1, 2, 3, if I got it right. If the
    file names were 12 chars long, 3000 of them would make for some 39000
    chars. I think the limit is 64 KB, perhaps 128 K; I don't remember.
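
    For what it's worth, GNU xargs can print the actual figures for the
    running system (assuming GNU findutils is installed):

    Code:
    # report the command-line length limits xargs will work with
    xargs --show-limits < /dev/null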

    --
    Cheers / Saludos,

    Carlos E. R.
    (from 13.1 x86_64 "Bottle" at Telcontar)

