
Max # of files allowed in directory under UFS & under VxFS (Veritas File System) in Solaris 10

Hello,

What is the maximum number of files allowed in a directory under UFS in
Solaris 10?

What is the maximum number of files allowed in a directory under Veritas
File System 5.0 in Solaris 10?

I'd like to know where to locate this maximum number under UFS and
Veritas File System 5.0 in Solaris 10. Is there any way that we could
modify this maximum number?

Thanks,

Bill

0 Reply underh20 5/18/2010 10:10:18 PM
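As an aside for readers following along on a non-Solaris box: the replies below inspect per-filesystem inode totals with the Solaris-specific "df -F ufs -o i". A rough sketch of the same check using the "df -i" form found on GNU and BSD systems (this portable variant is an assumption on my part, not something from the thread):

```shell
#!/bin/sh
# Show inode capacity/usage for the filesystem holding the current
# directory. On Solaris UFS the equivalent command is "df -F ufs -o i"
# (see the replies in this thread); "df -i" is the GNU/BSD form.
out=$(df -i . | tail -n 1)
echo "$out"
```

The inode total reported here is the hard ceiling on how many files the filesystem can hold, which, as the replies note, is in practice the only firm limit on files per directory.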

On 05/19/10 10:10 AM, underh20 wrote:
> What is the maximum number of files allowed in a directory under UFS
> in Solaris 10?

It depends.

> What is the maximum number of files allowed in a directory under
> Veritas File System 5.0 in Solaris 10?

Pass.

> I'd like to know where to locate this maximum number under UFS and
> Veritas File System 5.0 in Solaris 10. Is there any way that we could
> modify this maximum number?

df -o i reports the inode data for a UFS filesystem. To increase the
number, you have to recreate the filesystem. See df_ufs(1M) and
newfs(1M).

-Ian Collins

0 Reply Ian 5/18/2010 10:34:05 PM

On 18-May-2010, underh20 <underh20.scubadiving@gmail.com> wrote:
> What is the maximum number of files allowed in a directory under UFS
> in Solaris 10?

It depends on the size of the file system, as UFS uses inodes: the
bigger the file system, the larger the number of inodes available. You
can check on the inode usage with "df -F ufs -o i". I don't think ZFS
has a limit on the number of files in a directory... the limit for UFS
is *quite* large; it's *at least* in the tens of thousands. I know I've
had 20k files in a single directory. Not sure about the others, I'm
afraid, but ZFS is definitely good to go :D

Cya

0 Reply Hugo 5/18/2010 10:47:32 PM

underh20 wrote:
> What is the maximum number of files allowed in a directory under UFS
> in Solaris 10?
>
> What is the maximum number of files allowed in a directory under
> Veritas File System 5.0 in Solaris 10?
>
> I'd like to know where to locate this maximum number under UFS and
> Veritas File System 5.0 in Solaris 10. Is there any way that we could
> modify this maximum number?

If there is a limit, it's huge! It's far more files than it would be
reasonable to catalog in a single directory. What problem are you
trying to solve?

0 Reply Richard 5/19/2010 12:37:36 AM

In article <_dydncBiFoRGr27WnZ2dnUVZ_h2dnZ2d@giganews.com>,
"Richard B. Gilbert" <rgilbert88@comcast.net> wrote:
> underh20 wrote:
>> What is the maximum number of files allowed in a directory under UFS
>> in Solaris 10 ?
>>
>> What is the maximum number of files allowed in a directory under
>> Veritas File System 5.0 in Solaris 10 ?
>>
>> I'd like to know where to locate this maximum number under UFS and
>> Veritas File System 5.0 in Solaris 10. Is there any way that we could
>> modify this maximum number ?
>>
>> Thanks,
>>
>> Bill

> If there is a limit, it's huge! It's far more files than it would be
> reasonable to catalog in a single directory.
>
> What problem are you trying to solve?

No one in this thread has mentioned that it's a singularly Bad Idea(tm)
to put a large number of files in a single directory. At some point,
there is an internal directory cache that gets filled and name lookups
get progressively longer (I may have the details of this wrong). I
don't know if VxFS has this problem since, AFAIK, it doesn't use inodes
and linearly searched directory files.

As a sysadmin, I had to clean up many lazy developers' "dirty"
implementations of a project. These are the ones that leave lots of
files in an application's defined temporary directory or, much worse,
the system's /tmp. On Solaris, that sucks up swap space and memory on
systems that stay up for long periods, since /tmp uses swap rather than
real file system space.

Usually by the time I get involved, the application has been running in
production for a while and all of a sudden it starts slowing down,
sucking up memory, or taking forever to do certain things. The memory
loss is from /tmp filling up. A simple rm can take for freak'in ever,
since it has to go linearly through the directory file to find the
file's entry (it uses a linear search). find works better. But once the
damage is cleaned up, a cron job can be implemented to clean up after
the lazy, evil developers.

Rather than put lots of files in a single directory, name them in a
specific way such that you can put them in at least two levels of
hashed subdirectories. Netscape used to do this with the local cache.

What's the magic number where things fall to crap? Dunno. More than
10,000 but probably less than 100,000. You can do performance
measurements with mail, using a /var/mail with 10,000 mboxes, and see
if things get better or worse. sendmail has to read /var/mail every
time it delivers mail to a user's inbox.

All this is pretty old, based on Solaris 7. If things have improved in
Solaris 10, I'm sure someone will jump in here and correct me.

-DeeDee, don't press that button! DeeDee! NO! Dee...

[I filter all Goggle Groups posts, so any reply may be automatically ignored]
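The two-level hashed-subdirectory layout suggested above can be sketched roughly like this. The bucketing scheme (first four hex digits of a POSIX cksum CRC of the file name) and the demo path are my assumptions; the post doesn't specify a particular hash:

```shell
#!/bin/sh
# Sketch of a two-level hashed directory layout: bucket each file by a
# checksum of its name so no single directory grows without bound.
hashed_path() {
    # cksum is POSIX; its first field is a decimal CRC, rendered as hex
    sum=$(printf '%s' "$1" | cksum | awk '{printf "%08x", $1}')
    printf '%s/%s/%s\n' "$(printf '%s' "$sum" | cut -c1-2)" \
                        "$(printf '%s' "$sum" | cut -c3-4)" "$1"
}

base="${TMPDIR:-/tmp}/hashdemo"
p="$base/$(hashed_path report-12345.log)"
mkdir -p "$(dirname "$p")" && : > "$p"
echo "$p"
```

Because the bucket is derived from the name alone, any process can recompute the path later without a lookup table, which is what makes the scheme practical for caches and spool areas.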

0 Reply Michael 5/19/2010 3:09:52 AM

On Tuesday 18 May 2010 21:09, Michael Vilain (vilain@NOspamcop.net) opined:
> No one in this thread has mentioned that it's a singularly Bad Idea(tm)
> to put a large number of files in a single directory.
[snip]
> All this is pretty old based on Solaris 7. If things have improved in
> Solaris 10, I'm sure someone will jump in here and correct me.

Yeah. Just because you CAN do something doesn't mean you SHOULD do it.
Granted, we have terabyte+ hard drives even on our home boxes these
days; that doesn't mean we necessarily should store giga- or terabytes
of whatever in a single directory. Conventional wisdom is multiple
directories, each containing a limited number of related files (once
upon a time this was 4096 files max, but today? Who knows?). This
applies not just to Solaris but to any reasonably administered system.

Bob Melson

-Robert G. Melson | Rio Grande MicroSolutions | El Paso, Texas
----
Nothing astonishes men so much as common sense and plain dealing.
                                        -- Ralph Waldo Emerson

0 Reply Bob 5/19/2010 6:09:37 AM

Michael Vilain wrote:
> No one in this thread has mentioned that it's a singularly Bad Idea(tm)
> to put a large number of files in a single directory.
[snip]
> All this is pretty old based on Solaris 7. If things have improved in
> Solaris 10, I'm sure someone will jump in here and correct me.

I once had to deal with ~70,000 little files on a disk, all in one
directory. Some idiot developer left something running while she went
on vacation. It wasn't Solaris, but NO O/S that I know of would handle
the situation very well. If I had to do it over again, I'd have the
developer killed and then initialize the disk and restore from backup.
It took about three days to delete those files one at a time.

0 Reply Richard 5/19/2010 11:15:35 AM

Michael Vilain wrote:
> No one in this thread has mentioned that it's a singularly Bad Idea(tm)
> to put a large number of files in a single directory. At some point,
> there is an internal directory cache that gets filled and name lookups
> get progressively longer (I may have the details of this wrong). I
> don't know if VxFS has this problem since, AFAIK, it doesn't use
> inodes and linearly searched directory files.

Right. On UFS the directory structure started out as a linked list, so
it could grow without limit until it took up all of the inodes on the
device. I think in some release it was switched to a self-balancing
b-tree, but I never tracked down for sure when, or if. On VxFS it
started out as a self-balancing b-tree. In either case it's possible to
fill the device's inode table before hitting a limit on files in a
directory. But it remains a bad idea if there's any way out of it.

> As a sysadmin, I had to clean up many lazy developers' "dirty"
> implementations of a project. These are the ones that leave lots of
> files in an application's defined temporary directory or, much worse,
> the system's /tmp. On Solaris, that sucks up swap space and memory on
> systems that stay up for long periods.

I had one case of a mount point with about 100K files in various dirs
and one particular dir with 1.5 million plus files, growing by 1000+
per day by the time I was called in. I negotiated an aging policy, but
the initial "find ... | xargs ..." expression to do the initial clean
up ran for hours. I set it in cron daily and it ran for 15 minutes. I
eventually switched it to run weekly.
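The aging clean-up described above can be sketched like this. The spool path and the 7-day retention window are made-up values (the post gives neither), and I've used find's `-exec ... +` where the original used `find | xargs`:

```shell
#!/bin/sh
# Rough sketch of an aging clean-up suitable for a daily cron entry.
# The path and the 7-day retention are assumptions, not from the post.
SPOOL="${TMPDIR:-/tmp}/appspool-demo"
mkdir -p "$SPOOL"

# Simulate the situation: one fresh file and one file dated January 2020.
: > "$SPOOL/fresh.dat"
touch -t 202001010000 "$SPOOL/stale.dat"

# -mtime +7 selects files last modified more than 7 days ago; -exec ... +
# batches names onto each rm invocation, like xargs but immune to
# whitespace in file names.
find "$SPOOL" -type f -mtime +7 -exec rm -f {} +
```

Run from cron, this keeps the directory small enough that the linear-scan pathologies described in this thread never get a chance to develop.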

> Rather than put lots of files in a single directory, name them in a
> specific way such that you can put them in at least two levels of
> hashed subdirectories. Netscape used to do this with the local cache.

It's *far* better to talk the developers into hashing into a directory
tree any time there are 1000+ files in any one dir. There are libraries
available to do that with just a library call, and it reduces the
overhead considerably. Sometimes it's not possible to dictate to the
developers, but at the very least open up a feature request ticket.

There's another fun aspect of directories in UFS, VxFS and, for that
matter, HFS and so on: they do not track depth, because they are just
implemented as a tree. Once I got called in because an installation
process had run all night on a developer's system and it was
complaining about a full drive when he got back in the next morning. It
turned out it was in a loop, creating directories and then cd-ing into
them. It ran until the mount point hit 100%. The tree was so deep that
"rm -rf *" dumped core before it got to the bottom. I ended up doing a
loop like

    while true ; do cd * ; done

and waited until the shell ran out of stack space and failed out of the
loop! At that point I moved * to lost+found, returned to the top,
deleted the older of the two trees, and ran the loop again. I put the
nested loop in a script and it ran for most of the day clearing out the
directory chain. Even with only one "file" per directory, things can
get bad.

0 Reply Doug 5/19/2010 4:18:09 PM

On 2010-05-18, underh20 <underh20.scubadiving@gmail.com> wrote:
> What is the maximum number of files allowed in a directory under UFS
> in Solaris 10 ?

32767. See MAXLINK in sys/param.h.

> What is the maximum number of files allowed in a directory under
> Veritas File System 5.0 in Solaris 10 ?

32767.

> I'd like to know where to locate this maximum number under UFS and
> Veritas File System 5.0 in Solaris 10. Is there any way that we could
> modify this maximum number ?

For vxfs, add "set vxfs:vx_maxlink=65534" to /etc/system and reboot.
For UFS, I don't know that you can.

Ceri

-That must be wonderful! I don't understand it at all. -- Moliere

0 Reply Ceri 5/19/2010 7:21:36 PM

On 2010-05-19, Ceri Davies <ceri_usenet@submonkey.net> wrote:
> On 2010-05-18, underh20 <underh20.scubadiving@gmail.com> wrote:
>> What is the maximum number of files allowed in a directory under UFS
>> in Solaris 10 ?
>
> 32767. See MAXLINK in sys/param.h.

Some people here wrote they've seen 70k+ files in a single directory...

Anyway, MAXLINK seems to be the maximum number of (hard?) links to a
file and also the limit on subdirectories; see Solaris Internals,
Second Edition, pages 740-741, "ic_nlink".

I'm still trying to figure out the max number of files in a directory,
though... maybe someone else can shed some light on this :)

0 Reply Stefan 5/19/2010 9:00:46 PM

On 2010-05-19, Stefan Krueger <stadtkind2@gmx.de> wrote:
> On 2010-05-19, Ceri Davies <ceri_usenet@submonkey.net> wrote:
>> 32767. See MAXLINK in sys/param.h.
>
> some people here wrote they've seen 70k+ files in single directory...

Not on UFS.

> anyway, MAXLINK seems to be the maximum number of (hard?)links to a
> file and also the limit of subdirectories, see Solaris Internals,
> Second Edition, Page 740-741 "ic_nlink"
>
> I'm still trying to figure out the max. number of files in a
> directory though... maybe someone else can shed some light on this :)

Or I could, as I'd already looked before replying.

From usr/src/uts/common/fs/ufs/ufs_dir.c:

    804  * Write a new directory entry for DE_LINK, DE_SYMLINK or DE_RENAME operations.
    805  * If tvpp is non-null, return with the pointer to the target vnode.
    806  */
    807 int
    808 ufs_direnter_lr(
    ...
    877         if (sip->i_nlink == MAXLINK) {
    878                 rw_exit(&sip->i_contents);
    879                 return (EMLINK);
    880         }

MAXLINK is defined in (and included from) sys/param.h as:

    #define MAXLINK 32767 /* max links */

Ceri

-That must be wonderful! I don't understand it at all. -- Moliere

0 Reply Ceri 5/19/2010 9:14:48 PM

On 2010-05-19, Ceri Davies <ceri_usenet@submonkey.net> wrote:
> On 2010-05-19, Stefan Krueger <stadtkind2@gmx.de> wrote:
>> some people here wrote they've seen 70k+ files in single directory...
>
> Not on UFS.
[snip the ufs_direnter_lr() quote]
> MAXLINK is defined (and included from) sys/param.h as:
>
> #define MAXLINK 32767 /* max links */

"Write a new directory entry": directory != file, and this basically
proves what I wrote, so thanks for that :-)

Anyway, to stop guessing, I wrote a small shell script which just
touches files (on Solaris 10, UFS). I made it stop at 50,000, I hope
that's OK.

    $ ls | wc -l
    50001

So... I think the max number of files in a directory is only limited by
the number of free inodes.

HTH

0 Reply Stefan 5/19/2010 9:44:52 PM

On 19-May-2010, Stefan Krueger <stadtkind2@gmx.de> wrote:
> $ ls | wc -l
> 50001
>
> So... I think the max. num of files in a directory is only limited by
> the number of free inodes

Agreed... which is indirectly determined by the size of the file system 8]

0 Reply Hugo 5/19/2010 11:14:09 PM

Stefan Krueger wrote:
> On 2010-05-19, Ceri Davies <ceri_usenet@submonkey.net> wrote:
>> 32767. See MAXLINK in sys/param.h.
>
> some people here wrote they've seen 70k+ files in single directory...
>
> anyway, MAXLINK seems to be the maximum number of (hard?)links to a
> file and also the limit of subdirectories, see Solaris Internals,
> Second Edition, Page 740-741 "ic_nlink"
>
> I'm still trying to figure out the max. number of files in a
> directory though... maybe someone else can shed some light on this :)

I don't know what the maximum number of files is that can be cataloged
in a directory. I do know that it's a very poor idea to put thousands
or tens of thousands of files in one directory. Performance, to put it
bluntly, will suck!

0 Reply Richard 5/19/2010 11:54:56 PM

Ceri Davies <ceri_usenet@submonkey.net> writes:
> On 2010-05-18, underh20 <underh20.scubadiving@gmail.com> wrote:
>> Hello,
>>

>> What is the maximum number of files allowed in a directory under UFS
>> in Solaris 10 ?
>
> 32767. See MAXLINK in sys/param.h.

Not correct; that is the limit on the number of subdirectories inside a
single directory. You can create millions of files inside a directory,
but the performance (especially for UFS) will be poor.

Casper

-Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems. Statements on Sun
products included here are not gospel and may be fiction rather than
truth.

0 Reply Casper 5/20/2010 8:45:36 AM

Ceri Davies <ceri_usenet@submonkey.net> writes:
> On 2010-05-19, Stefan Krueger <stadtkind2@gmx.de> wrote:
>> some people here wrote they've seen 70k+ files in single directory...
>
> Not on UFS.
>
>> anyway, MAXLINK seems to be the maximum number of (hard?)links to a
>> file and also the limit of subdirectories, see Solaris Internals,
>> Second Edition, Page 740-741 "ic_nlink"
>>
>> I'm still trying to figure out the max. number of files in a
>> directory though... maybe someone else can shed some light on this :)

> Or I could, as I'd already looked before replying:
>
> From usr/src/uts/common/fs/ufs/ufs_dir.c:
>
>  804  * Write a new directory entry for DE_LINK, DE_SYMLINK or DE_RENAME operations.
>  805  * If tvpp is non-null, return with the pointer to the target vnode.
>  806  */
>  807 int
>  808 ufs_direnter_lr(
>  ...
>  877         if (sip->i_nlink == MAXLINK) {
>  878                 rw_exit(&sip->i_contents);
>  879                 return (EMLINK);
>  880         }
>
> MAXLINK is defined (and included from) sys/param.h as:
>
> #define MAXLINK 32767 /* max links */

That's only for directories, not for files.

Casper

-Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems. Statements on Sun
products included here are not gospel and may be fiction rather than
truth.

0 Reply Casper 5/20/2010 8:46:33 AM

In article <4bf4f6b0$0$22938$e4fe514c@news.xs4all.nl>,
Casper H.S. Dik <Casper.Dik@Sun.COM> writes:
> Ceri Davies <ceri_usenet@submonkey.net> writes:
>> On 2010-05-18, underh20 <underh20.scubadiving@gmail.com> wrote:
>>> What is the maximum number of files allowed in a directory under UFS
>>> in Solaris 10 ?
>>
>> 32767. See MAXLINK in sys/param.h.
>
> Not correct; that is the limit on the number of sub directories inside
> a single directory. You can created millions of files inside a
> directory but the performance (esp for UFS) will be poor.

Even if the filesystem and application handle it efficiently, waiting
for ls(1) to sort a million files into alphabetic order still makes it
an admin's nightmare when they have to dive in to see what's gone
wrong, and things like rm {some shell expression} will blow up with too
many arguments. You just don't want to go there...

-Andrew Gabriel
[email address is not usable -- followup in the newsgroup]
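Stefan's touch experiment upthread, and the "too many arguments" caveat in the last post, can both be reproduced with a small, scaled-down sketch (500 files instead of 50,000; the demo directory name is made up):

```shell
#!/bin/sh
# Scaled-down reconstruction of the many-files experiment upthread (the
# original script wasn't posted): create many files in one directory,
# count them, then remove them with find -exec ... +, which batches
# arguments and so avoids the "argument list too long" failure that a
# plain "rm dir/*" can hit at 50,000+ names.
DIR="${TMPDIR:-/tmp}/manyfiles-demo"
rm -rf "$DIR" && mkdir -p "$DIR"

i=1
while [ "$i" -le 500 ]; do
    : > "$DIR/f$i"        # cheap "touch"
    i=$((i + 1))
done

ls "$DIR" | wc -l          # 500 entries, analogous to Stefan's 50001

# rm "$DIR"/* would expand all 500 names onto one command line; at tens
# of thousands of names that can exceed ARG_MAX. find batches safely:
find "$DIR" -type f -exec rm -f {} +
```

On any POSIX system this confirms the thread's conclusion: nothing but free inodes (and patience) stops you from filling one directory, while the tools for listing and deleting are what fall over first.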
