Linux: Find Out How Many File Descriptors Are Being Used
While administering a box, you may want to find out what a process is doing and how many file descriptors (fd) it is using. You may be surprised to find that a process opens all sorts of files:
=> Actual log files
=> Library files in /lib and /lib64
=> Executables and other programs, etc.
In this quick post, I will explain how to count how many file descriptors are currently in use on your Linux server system.
Step # 1 Find Out PID
To find out PID for mysqld process, enter:
# ps aux | grep mysqld
OR
# pidof mysqld
Output:
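If pidof is not available (for example on a minimal install), the name-to-PID lookup can be sketched with nothing but /proc. Here "init" is an illustrative target name standing in for mysqld:

```shell
# Resolve a process name to its PID(s) using only /proc.
# "init" is an illustrative name; substitute mysqld or any daemon.
name=init
for d in /proc/[0-9]*; do
  if [ "$(cat "$d/comm" 2>/dev/null)" = "$name" ]; then
    echo "PID: ${d#/proc/}"
  fi
done
```

This relies only on /proc/PID/comm, which holds the process's command name.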
Step # 2 List Files Opened By PID # 28290
Use the lsof command or the /proc/$PID/ file system to display open fds (file descriptors); run:
# lsof -p 28290
# lsof -a -p 28290
OR
# cd /proc/28290/fd
# ls -l | less
To count the open files, enter (note that ls -l | wc -l would over-count by one because of the "total" line):
# ls | wc -l
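A self-contained way to get the exact count, using the current shell's PID in place of 28290 (plain `ls` is used because `ls -l` also prints a "total" line, inflating the count by one):

```shell
# Count the open file descriptors of a process via /proc.
# $$ (the current shell) stands in for the PID of interest.
pid=$$
count=$(ls /proc/"$pid"/fd | wc -l)
echo "PID $pid has $count open file descriptors"
```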
Tip: Count All Open File Handles
To count the number of open file handles of any sort, type the following command:
# lsof | wc -l
Sample outputs:
List File Descriptors in Kernel Memory
Type the following command:
# sysctl fs.file-nr
Sample outputs:
- 1020 The number of allocated file handles.
- 0 The number of unused-but-allocated file handles.
- 70000 The system-wide maximum number of file handles.
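The three fields above can be pulled apart directly in the shell; a small sketch:

```shell
# fs.file-nr holds three numbers: allocated handles,
# allocated-but-unused handles, and the system-wide maximum.
read -r allocated unused maximum < /proc/sys/fs/file-nr
echo "in use: $((allocated - unused)) of $maximum file handles"
```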
You can use the following to find out or set the system-wide maximum number of file handles:
# sysctl fs.file-max
Sample outputs:
More about /proc/PID/ & the procfs File System
/proc (or procfs) is a pseudo-file system that is generated dynamically by the kernel; it is used to access kernel and process information. procfs is also used by Solaris, BSD, AIX, and other UNIX-like operating systems. Now you know how many file descriptors are being used by a process. You will find more interesting stuff in the /proc/$PID/ directory:
- /proc/PID/cmdline : process arguments
- /proc/PID/cwd : process current working directory (symlink)
- /proc/PID/exe : path to actual process executable file (symlink)
- /proc/PID/environ : environment used by process
- /proc/PID/root : the root path as seen by the process. For most processes this will be a link to / unless the process is running in a chroot jail.
- /proc/PID/status : basic information about a process including its run state and memory usage.
- /proc/PID/task : hard links to any tasks that have been started by this (the parent) process.
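The entries listed above can be inspected with ordinary tools; a quick sketch using the current shell as the target process:

```shell
# Peek at a few /proc/PID entries for the current shell.
pid=$$
tr '\0' ' ' < /proc/"$pid"/cmdline; echo   # arguments are NUL-separated
readlink /proc/"$pid"/cwd                  # current working directory
readlink /proc/"$pid"/exe                  # the executable behind the process
grep '^State:' /proc/"$pid"/status         # run state (R, S, D, ...)
```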
See also: /proc related FAQ/Tips
/proc is an essential file system for sysadmin work. Just browse through our previous article to get more information about the /proc file system.
What is the number of open files for a user on Linux, and system-wide? [closed]
Sorry, this question has several layers but all deal with the number of open files.
I’m getting a “Too many open files” message in my application log for the app we’re developing. Someone suggested that I:
- Find the number of open files currently being used, system wide and per user
- Find what the limit for open files of the system and user are.
I ran ulimit -n and it returned 1024. I also looked at /etc/limits.conf and there isn’t anything special in that file. /etc/sysctl.conf is also not modified; I’ll list the contents of both files below. I also ran lsof | wc -l, which returned 5000+ lines (if I’m using it correctly).
So, my main questions are:
- How do I find the number of open files allowed per user? Is the soft limit the nofile setting found/defined in /etc/limits.conf? What is the default since I didn’t touch /etc/limits.conf?
- How do I find the number of open files allowed system-wide? Is it the hard limit in limits.conf? What’s the default number if limits.conf isn’t modified?
- What is the number that ulimit returns for open files? It says 1024 but when I run lsof and count the lines, it’s over 5000+ so something is not clicking with me. Are there other cmds I should run or files to look at to get these limits? Thanks in advance for your help.
Content of limits.conf
Content of sysctl.conf
1 Answer
There is no per-user file limit. The ones you need to be aware of are system-wide and per-process. The files-per-process limit multiplied by the processes-per-user limit could theoretically provide a files-per-user limit, but with normal values the product would be so large as to be effectively unlimited.
Also, the original purpose of lsof was to LiSt Open Files, but it has grown and lists other things now, like cwd and mmap regions, which is another reason for it to output more lines than you expect.
The error message “Too many open files” is associated with the errno value EMFILE, the per-process limit, which in your case appears to be 1024. If you can find the right options to limit lsof to just displaying the actual file descriptors of a single process, you’ll probably find that there are 1024 of them, or something very close.
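To see how close a process is to EMFILE, compare its actual descriptor count (from /proc, which excludes lsof's cwd/txt/mem entries) against its limits; a sketch against the current shell:

```shell
# Descriptors in use versus the per-process limits that trigger EMFILE.
pid=$$
in_use=$(ls /proc/"$pid"/fd | wc -l)
echo "in use: $in_use"
echo "soft limit: $(ulimit -Sn)"   # this is what ulimit -n reports
echo "hard limit: $(ulimit -Hn)"   # only root may raise this ceiling
```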
The system-wide file descriptor limit rarely needs to be manually adjusted these days, since its default value is proportional to memory. If you need to, you can find it at /proc/sys/fs/file-max and information about current usage at /proc/sys/fs/file-nr. Your sysctl file has a value of 4096 for file-max, but it’s commented out, so you shouldn’t take it seriously.
If you ever manage to hit the system-wide limit, you’ll get errno ENFILE , which translates to the error message «File table overflow» or «Too many open files in system».
Why is number of open files limited in Linux?
Right now, I know how to:
- find open files limit per process: ulimit -n
- count all opened files by all processes: lsof | wc -l
- get maximum allowed number of open files: cat /proc/sys/fs/file-max
My question is: Why is there a limit of open files in Linux?
3 Answers
The reason is that the operating system needs memory to manage each open file, and memory is a limited resource — especially on embedded systems.
As root user you can change the maximum of the open files count per process (via ulimit -n ) and per system (e.g. echo 800000 > /proc/sys/fs/file-max ).
Please note that lsof | wc -l sums up a lot of duplicated entries (forked processes can share file handles etc). That number could be much higher than the limit set in /proc/sys/fs/file-max .
To get the current number of open files from the Linux kernel’s point of view, do this:
Example: This server has 40096 out of max 65536 open files, although lsof reports a much larger number:
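The command the answer refers to appears to be reading fs.file-nr (its first field is the allocated-handle count that produced the 40096 figure; the third is the maximum):

```shell
# Current open-file handles from the kernel's point of view:
# three fields — allocated, allocated-but-unused, maximum.
cat /proc/sys/fs/file-nr
```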
I think it’s largely for historical reasons.
A Unix file descriptor is a small int value, returned by functions like open and creat, and passed to read, write, close, and so forth.
At least in early versions of Unix, a file descriptor was simply an index into a fixed-size per-process array of structures, where each structure contains information about an open file. If I recall correctly, some early systems limited the size of this table to 20 or so.
More modern systems have higher limits, but have kept the same general scheme, largely out of inertia.
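That a descriptor really is just the lowest free index into that per-process table can be seen from the shell; a small sketch:

```shell
# Open a file on an explicit descriptor number and inspect it.
tmp=/tmp/fd_demo.$$
exec 3>"$tmp"          # fd 3: the lowest number after 0, 1, 2
ls -l /proc/$$/fd/3    # the kernel shows fd 3 as a symlink to $tmp
exec 3>&-              # close it again
rm -f "$tmp"
```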
How to get the count of open files by a user in Linux
Is there any specific command or tool to get the count of open files by a user in Linux?
2 Answers
lsof -u username will return all the open files for the user. If you pipe the result to the wc command, you will have the count you need. So, if the username of the user is test:
# lsof -u test | wc -l
You can use lsof; this command finds out which processes currently have a file open. If a process opens the file, writes to it, and then closes it again, you can use auditing instead:
# auditctl -w /etc/myprogram/cofig.ini -p warx -k config.ini-file
Here -w watches the given path, -p warx watches for write, attribute change, execute or read events, and -k config.ini-file is a search key. Wait until the file changes, then search the audit log (e.g. with ausearch -k config.ini-file).
linux-notes.org
I have run into the “Too many open files” error several times on a heavily loaded server. It means the server is running out of its maximum open file limit (max open file limit). So the question is: how can I increase the open file limits on Linux? It is quite simple, and in this article, “Increase the Max Open File Limit in Unix/Linux”, I will show how it is done.
Increase the Max Open File Limit in Unix/Linux
Below are the commands for various Unix/Linux operating systems.
Increase the Max Open File Limit in Linux
First, check which limit is currently set in the OS:
Increasing the limit in Linux
We can increase the limits for open files:
-===TEMPORARILY===-
If you need to increase the limit temporarily (for testing, for example), you can do it like this:
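A sketch of the temporary approach: the soft limit is changed for the current shell session only and reverts at logout (the value 1024 is illustrative; raising it above the hard limit requires root):

```shell
# Set the soft open-files limit for this shell session only.
ulimit -Sn 1024
ulimit -Sn    # confirm the new soft limit
```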
Here is another example:
-===PERMANENTLY===-
If you need to increase the limit permanently, you can do it like this:
These settings persist even after a system reboot. After adding the configuration to the file, run the following command for the changes to take effect:
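The files being edited above are typically /etc/security/limits.conf and /etc/sysctl.conf; a sketch of the usual entries, with illustrative values:

```
# /etc/security/limits.conf — per-process nofile limits:
*    soft    nofile    65536
*    hard    nofile    65536

# /etc/sysctl.conf — system-wide maximum:
fs.file-max = 500000
```

After editing, `sysctl -p` applies the sysctl change without a reboot; the limits.conf entries take effect at the next login (via pam_limits).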
Setting limits for each user
Checking the configured limits
Use the following command to see the maximum number of open files:
Log in as the user (in my case, nginx):
Check the hard limits:
In the console, you can run this command (it displays the limits very conveniently):
Check the soft limits:
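The hard and soft checks above boil down to two ulimit flags; a sketch:

```shell
# Per-user open-files limits for the current shell:
ulimit -Sn   # soft limit: enforced right now
ulimit -Hn   # hard limit: the ceiling the soft limit may be raised to
```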
Increase the Max Open File Limit in Mac OS X
Check the limits with:
- The first argument is the soft limit.
- The second argument is the hard limit.
You can write it to a file:
Increasing nginx worker_rlimit_nofile (at the Nginx level)
In nginx you can also increase the limit with the worker_rlimit_nofile directive, which lets you raise this limit on the fly, at the process level, when the resource is running short:
Edit the configuration:
Then check the nginx configuration and restart it:
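A sketch of the nginx side (worker_rlimit_nofile is a real nginx directive; the value here is illustrative and should be adapted):

```
# /etc/nginx/nginx.conf — fragment
worker_rlimit_nofile 65536;
```

Then `nginx -t` validates the configuration, and `service nginx reload` (or `systemctl reload nginx`) applies it.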
Save and close the file. Reload nginx web server, enter:
Commenters wrote that these limits cannot be set on Kali Linux. So I decided to show a working example:
ulimit in Kali Linux
Enabling PAM-based limits in Unix/Linux
For Debian/Ubuntu
Edit the file (Debian/Ubuntu):
Open one more file:
And restart the service:
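On Debian/Ubuntu the two files in question are typically /etc/security/limits.conf and /etc/pam.d/common-session; a sketch of the edits (the user name and values are illustrative):

```
# /etc/pam.d/common-session — make PAM apply limits.conf at login:
session required pam_limits.so

# /etc/security/limits.conf — example per-user entries:
nginx    soft    nofile    65536
nginx    hard    nofile    65536
```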
For CentOS/RedHat/Fedora
Edit the file (CentOS/RedHat/Fedora):
Open one more file:
And restart the service:
That’s all! The article “Increase the Max Open File Limit in Unix/Linux” is complete.
3 thoughts on “Increase the Max Open File Limit in Unix/Linux”
On Kali Linux, changing the hard and soft limits does not work. It still reports 4096 and 1024, respectively.
You can! You must be doing something wrong. I posted a screenshot in the article showing that it can all be done.
The guys in the picture accompanying the article clearly had no luck.
The lights are dimmed, and you can clearly see the green glow of the emergency-lighting signs: either it is past midnight, or they have a blackout and are running on a diesel generator.
They are sitting at the console. Apparently the box no longer responds over the network. That’s bad.
They dragged up a chair, which means it has been down for a while. They should have set the MAX OPEN FILE LIMIT in advance, not after everything had already crashed.