
6/9/2019 debian9 netdata dashboard

System Overview
Overview of the key system metrics.

cpu
Total CPU utilization (all cores). 100% here means there is no CPU idle time at all. You can get per-core
usage in the CPUs section and per-application usage in the Applications Monitoring section.
Keep an eye on iowait (0.00%). If it is constantly high, your disks are a
bottleneck and they slow your system down.
Another metric worth monitoring is softirq (0.00%). A constantly
high percentage of softirq may indicate network driver issues.
[Chart: Total CPU utilization (system.cpu); unit: percentage; dimensions: user, system, nice, iowait, irq, softirq, steal, guest, guest_nice]

load
Current system load, i.e. the number of processes using the CPU or waiting for system resources (usually
CPU and disk). The three values are 1-, 5- and 15-minute averages, recalculated once every 5 seconds.
For more information, see this Wikipedia article
(https://en.wikipedia.org/wiki/Load_(computing))
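The "recalculated every 5 seconds" detail is the key to reading these numbers: the kernel keeps three exponentially damped moving averages of the run-queue length. A minimal sketch of that idea (floating point here; the kernel uses fixed-point arithmetic, and the simulated run-queue length is a made-up value):

```python
import math

def update_load(load, runnable, period_seconds):
    # Classic exponentially damped moving average with a 5-second sample
    # interval: old value decays by exp(-5/period), new sample fills the rest.
    factor = math.exp(-5.0 / period_seconds)
    return load * factor + runnable * (1.0 - factor)

# Simulate a steady run queue of 2 processes for one minute (12 samples x 5s):
load1 = 0.0
for _ in range(12):
    load1 = update_load(load1, 2, 60)
```

After one minute of constant demand, the 1-minute average has covered a factor of (1 - 1/e), about 63%, of the way to the true run-queue length; this is why load averages lag behind sudden spikes.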


disk

192.168.43.61:19999/#menu_system;theme=slate;help=false;mode=print 1/52

Total Disk I/O, for all physical disks. You can get detailed information about each disk in the Disks
section, and per-application disk usage in the Applications Monitoring section. Physical disks are all the
disks listed in /sys/block that do not exist in /sys/devices/virtual/block .


Memory paged from/to disk. This is usually the total disk I/O of the system.


ram
System Random Access Memory (i.e. physical memory) usage.


swap


System swap memory usage. Swap space is used when the amount of physical memory (RAM) is full.
When the system needs more memory resources and the RAM is full, inactive pages in memory are
moved to the swap space (usually a disk, a disk partition or a file).


Total Swap I/O. netdata measures both in and out . If either metric is not shown in the chart, it is
zero. You can change the page settings to always render all available dimensions on all charts.


network
Total bandwidth of all physical network interfaces. This does not include lo , VPNs, network bridges,
IFB devices, bond interfaces, etc. Only the bandwidth of physical network interfaces is aggregated.
Physical network interfaces are all those listed in /proc/net/dev that do not exist in
/sys/devices/virtual/net .
[Chart: Physical Network Interfaces Aggregated Bandwidth (system.net); unit: kilobits/s; dimensions: received, sent]


Total IP traffic in the system.


[Chart: IP Bandwidth (system.ip); unit: kilobits/s; dimensions: received, sent]

Total IPv6 Traffic.


[Chart: IPv6 Bandwidth (system.ipv6); unit: kilobits/s; dimensions: received, sent]

processes
System processes. Running are processes currently executing on a CPU. Blocked are processes that are
ready to run but cannot, e.g. because they are waiting for disk activity.
[Chart: System Processes (system.processes); unit: processes; dimensions: running, blocked]

Number of new processes created.


[Chart: Started Processes (system.forks); unit: processes/s; dimension: started]


All system processes.


[Chart: System Active Processes (system.active_processes); unit: processes; dimension: active]

Context switching (https://en.wikipedia.org/wiki/Context_switch) is the switching of the CPU from one
process, task or thread to another. If many processes or threads are ready to execute and only a
few CPU cores are available to handle them, the system performs more context switching to balance the
CPU resources among them. Context switching is computationally intensive: the more context
switches, the slower the system gets.
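The per-second rate charted here is derived from the cumulative `ctxt` counter in /proc/stat: two samples are taken and their difference is the number of switches in the interval. A small sketch using made-up sample text in place of two reads of the file, one second apart:

```python
# Two snapshots of /proc/stat content (sample values, 1 second apart).
sample_t0 = "cpu  2255 34 2290 22625563 6290 127 456\nctxt 1990516\nbtime 1560115200\n"
sample_t1 = "cpu  2259 34 2295 22625860 6292 127 457\nctxt 1990740\nbtime 1560115200\n"

def read_ctxt(text):
    # The "ctxt" line holds the cumulative context-switch count since boot.
    for line in text.splitlines():
        if line.startswith("ctxt "):
            return int(line.split()[1])
    raise ValueError("no ctxt line")

# Switches during the 1-second interval between the two samples:
rate = read_ctxt(sample_t1) - read_ctxt(sample_t0)
```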
[Chart: CPU Context Switches (system.ctxt); unit: context switches/s]

idlejitter
Idle jitter is calculated by netdata. A thread is spawned that requests to sleep for a few microseconds.
When the system wakes it up, it measures how many microseconds have actually passed. The difference
between the requested and the actual duration of the sleep is the idle jitter. This number is useful in
real-time environments, where CPU jitter can affect the quality of the service (like VoIP media
gateways).
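The measurement described above is easy to reproduce. A minimal sketch (the 20 ms request duration is an arbitrary choice, not netdata's actual interval):

```python
import time

def measure_idle_jitter(requested_us=20000):
    # Request a short sleep, then measure how much longer it actually took.
    # The overshoot is the idle jitter.
    start = time.perf_counter()
    time.sleep(requested_us / 1_000_000)
    elapsed_us = (time.perf_counter() - start) * 1_000_000
    return elapsed_us - requested_us

jitter = measure_idle_jitter()  # microseconds of scheduling overshoot
```

On an idle system this is typically well under a millisecond; on a loaded or virtualized system (as in the VMware guest charted here) it can spike to many milliseconds.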
[Chart: CPU Idle Jitter (system.idlejitter); unit: microseconds lost/s; dimensions: min, max, average]

interrupts


Total number of CPU interrupts. Check system.interrupts, which gives more detail about each interrupt,
and the CPUs section, where interrupts are analyzed per CPU core.
[Chart: CPU Interrupts (system.intr); unit: interrupts/s; dimension: interrupts]

CPU interrupts in detail. At the CPUs section, interrupts are analyzed per CPU core.
[Chart: System interrupts (system.interrupts); unit: interrupts/s; dimensions: one per interrupt source (timer, i8042, rtc0, ata_piix, ens33, NMI, LOC, RES, and others)]

softirqs
CPU softirqs in detail. At the CPUs section, softirqs are analyzed per CPU core.
[Chart: System softirqs (system.softirqs); unit: softirqs/s; dimensions: HI, TIMER, NET_TX, NET_RX, BLOCK, TASKLET, SCHED, RCU]

softnet
Statistics for CPU SoftIRQs related to network receive work. A breakdown per CPU core can be found
at CPU / softnet statistics. processed is the number of packets processed. dropped is the number of
packets dropped because the network device backlog was full (to fix this on Linux, use sysctl to
increase net.core.netdev_max_backlog ). squeezed is the number of times packet processing was cut
short because the network device budget ran out (to fix this on Linux, use sysctl to increase
net.core.netdev_budget and/or net.core.netdev_budget_usecs ). More information about identifying
and troubleshooting network driver related issues can be found in the Red Hat Enterprise Linux Network
Performance Tuning Guide
(https://access.redhat.com/sites/default/files/attachments/20150325_network_performance_tuning.pdf).
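These counters come from /proc/net/softnet_stat, one line per CPU, with the values in hexadecimal: column 1 is packets processed, column 2 dropped, column 3 squeezed. A sketch that totals them across CPUs, using made-up sample text in place of the file:

```python
# Sample /proc/net/softnet_stat content for a two-CPU system (hex columns).
sample = (
    "0000272d 00000000 00000001 00000000 00000000 00000000 00000000 00000000 00000000 00000000\n"
    "000034d5 00000000 00000003 00000000 00000000 00000000 00000000 00000000 00000000 00000000\n"
)

def parse_softnet(text):
    # Sum the first three hex columns (processed, dropped, squeezed) over all CPUs.
    totals = {"processed": 0, "dropped": 0, "squeezed": 0}
    for line in text.splitlines():
        cols = [int(c, 16) for c in line.split()]
        totals["processed"] += cols[0]
        totals["dropped"] += cols[1]
        totals["squeezed"] += cols[2]
    return totals

totals = parse_softnet(sample)
```

A non-zero dropped or squeezed total is the signal to consider the sysctl tunables mentioned above.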

[Chart: System softnet_stat (system.softnet_stat); unit: events/s; dimensions: processed, dropped, squeezed, and others]

entropy
Entropy (https://en.wikipedia.org/wiki/Entropy_(computing)) is a pool of random numbers (/dev/random
(https://en.wikipedia.org/wiki//dev/random)) that is mainly used in cryptography. If the entropy pool
gets empty, processes requiring random numbers may run a lot slower (it depends on the interface
each program uses) while waiting for the pool to be replenished. Ideally, a system with high entropy
demands should have a hardware device for that purpose (a TPM is one such device). There are also
several software-only options you may install, like haveged , although these are generally useful only on servers.
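The charted value is what the kernel exposes at /proc/sys/kernel/random/entropy_avail. A minimal sketch of reading it, with a fallback sample value so the snippet also runs on systems without that file:

```python
def available_entropy(path="/proc/sys/kernel/random/entropy_avail"):
    # The file contains a single integer: the current entropy pool estimate in bits.
    try:
        with open(path) as f:
            return int(f.read())
    except OSError:
        return 3425  # fallback sample value for non-Linux systems

bits = available_entropy()
```

Values persistently near zero are the symptom described above; values in the low thousands (the pool maximum is 4096 bits on older kernels) indicate a healthy pool.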
[Chart: Available Entropy (system.entropy); unit: entropy; dimension: entropy]

uptime
[Chart: System Uptime (system.uptime); dimension: uptime]

ipc semaphores


[Chart: IPC Semaphores (system.ipc_semaphores); unit: semaphores]

[Chart: IPC Semaphore Arrays (system.ipc_semaphore_arrays); unit: arrays]

ipc shared memory


[Chart: IPC Shared Memory Number of Segments (system.shared_memory_segments); unit: segments]

[Chart: IPC Shared Memory Used Bytes (system.shared_memory_bytes); unit: bytes]

CPUs
Detailed information for each CPU of the system. A summary of the system for all CPUs can be found at the
System Overview section.

utilization


[Chart: Core utilization (cpu.cpu0); unit: percentage; dimensions: user, system, nice, iowait, irq, softirq, steal, guest, guest_nice]

[Chart: Core utilization (cpu.cpu1); unit: percentage; dimensions: user, system, nice, iowait, irq, softirq, steal, guest, guest_nice]

interrupts
[Chart: CPU0 Interrupts (cpu.cpu0_interrupts); unit: interrupts/s; dimensions: one per interrupt source]

[Chart: CPU1 Interrupts (cpu.cpu1_interrupts); unit: interrupts/s; dimensions: one per interrupt source]


softirqs
[Chart: CPU0 softirqs (cpu.cpu0_softirqs); unit: softirqs/s; dimensions: TIMER, NET_TX, NET_RX, BLOCK, TASKLET, SCHED, RCU]

[Chart: CPU1 softirqs (cpu.cpu1_softirqs); unit: softirqs/s; dimensions: HI, TIMER, NET_TX, NET_RX, BLOCK, TASKLET, SCHED, RCU]

softnet
Statistics for SoftIRQs related to network receive work, per CPU core. Totals for all CPU cores can be
found at System / softnet statistics. processed is the number of packets processed. dropped is
the number of packets dropped because the network device backlog was full (to fix this on Linux, use
sysctl to increase net.core.netdev_max_backlog ). squeezed is the number of times packet
processing was cut short because the network device budget ran out (to fix this on Linux, use sysctl
to increase net.core.netdev_budget and/or net.core.netdev_budget_usecs ). More information about
identifying and troubleshooting network driver related issues can be found in the Red Hat Enterprise
Linux Network Performance Tuning Guide
(https://access.redhat.com/sites/default/files/attachments/20150325_network_performance_tuning.pdf).

[Chart: CPU0 softnet_stat (cpu.cpu0_softnet_stat); unit: events/s; dimensions: processed, dropped, squeezed, and others]


[Chart: CPU1 softnet_stat (cpu.cpu1_softnet_stat); unit: events/s; dimensions: processed, dropped, squeezed, and others]

cpuidle
[Chart: C-state residency time (cpu.cpu0_cpuidle); unit: percentage; dimension: C0 (active)]

[Chart: C-state residency time (cpu.cpu1_cpuidle); unit: percentage; dimension: C0 (active)]

Memory
Detailed information about the memory management of the system.

system


Available Memory is estimated by the kernel as the amount of RAM that can be used by userspace
processes without causing swapping.
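The kernel publishes this estimate as the MemAvailable field of /proc/meminfo, in kB. A minimal parsing sketch, with made-up sample text standing in for the file:

```python
# Sample /proc/meminfo excerpt (values in kB, as the kernel reports them).
sample = (
    "MemTotal:        1014648 kB\n"
    "MemFree:          238260 kB\n"
    "MemAvailable:     387656 kB\n"
)

def meminfo_kb(text, key):
    # Each line is "Key:   value kB"; return the value for the requested key.
    for line in text.splitlines():
        if line.startswith(key + ":"):
            return int(line.split()[1])
    raise KeyError(key)

avail_mib = meminfo_kb(sample, "MemAvailable") / 1024  # convert kB to MiB
```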
[Chart: Available RAM for applications (mem.available); unit: MiB; dimension: avail]

Committed Memory is the sum of all memory which has been allocated by processes.
[Chart: Committed (Allocated) Memory (mem.committed); unit: MiB]

A page fault (https://en.wikipedia.org/wiki/Page_fault) is a type of interrupt, called a trap, raised by
computer hardware when a running program accesses a memory page that is mapped into the virtual
address space but not actually loaded into main memory. If the page is loaded in memory at the time
the fault is generated, but is not marked in the memory management unit as being loaded, it is
called a minor or soft page fault. A major page fault is generated when the system needs to
load the memory page from disk or swap memory.
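The kernel counts faults cumulatively in /proc/vmstat: pgfault is all faults and pgmajfault only the major ones, so minor faults are the difference. A sketch with made-up sample counter values:

```python
# Sample /proc/vmstat excerpt (cumulative counters since boot).
sample = "pgfault 2150005\npgmajfault 1612\n"

def vmstat(text):
    # Each line is "counter_name value".
    return {key: int(value) for key, value in (line.split() for line in text.splitlines())}

counters = vmstat(sample)
minor = counters["pgfault"] - counters["pgmajfault"]  # soft faults only
```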
[Chart: Memory Page Faults (mem.pgfaults); unit: faults/s; dimensions: minor, major]

kernel


Dirty is the amount of memory waiting to be written to disk. Writeback is how much memory is actively
being written to disk.
[Chart: Writeback Memory (mem.writeback); unit: MiB; dimensions: Dirty, Writeback, Bounce, and others]

The total amount of memory being used by the kernel. Slab is the amount of memory used by the
kernel to cache data structures for its own use. KernelStack is the amount of memory allocated for
each task done by the kernel. PageTables is the amount of memory dedicated to the lowest level of
page tables (a page table is used to turn a virtual address into a physical memory address).
VmallocUsed is the amount of memory being used as virtual address space.
[Chart: Memory Used by Kernel (mem.kernel); unit: MiB; dimensions: Slab, KernelStack, PageTables, VmallocUsed]

slab
Reclaimable is the amount of memory which the kernel can reuse. Unreclaimable cannot be reused
even when the kernel is running low on memory.
[Chart: Reclaimable Kernel Memory (mem.slab); unit: MiB; dimensions: reclaimable, unreclaimable]

Disks
Charts with performance information for all the system disks. Special care has been given to present disk
performance metrics in a way compatible with iostat -x . By default, netdata does not render performance
charts for individual partitions and unmounted virtual disks. Disabled charts can still be enabled by
configuring the relevant settings in the netdata configuration file.


sda
Amount of data transferred to and from disk.
[Chart: Disk I/O Bandwidth (disk.sda); unit: MiB/s; dimensions: reads, writes]

Completed disk I/O operations. Keep in mind that the number of operations requested might be higher,
since the system is able to merge operations that are adjacent to each other (see the merged operations chart).
[Chart: Disk Completed I/O Operations (disk_ops.sda); unit: operations/s; dimensions: reads, writes]

I/O operations currently in progress. This metric is a snapshot; it is not an average over the last
interval.
[Chart: Disk Current I/O Operations (disk_qops.sda); unit: operations]

Backlog is an indication of the duration of pending disk operations. On every I/O event the system
multiplies the time spent doing I/O since the last update of this field by the number of pending
operations. While not accurate, this metric can provide an indication of the expected completion time of
the operations in progress.
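The computation just described can be written in one line. A trivial sketch with made-up numbers, to make the units explicit:

```python
def backlog_ms(io_time_since_update_ms, pending_operations):
    # On each I/O event: time spent doing I/O since the last update,
    # multiplied by the number of operations still pending.
    return io_time_since_update_ms * pending_operations

# e.g. 2 ms of I/O time with 3 operations still queued:
estimate = backlog_ms(2.0, 3)
```

The product over-counts when pending operations overlap, which is exactly why the text above calls the metric an indication rather than an accurate measure.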
[Chart: Disk Backlog (disk_backlog.sda); unit: milliseconds; dimension: backlog]


Disk Utilization measures the amount of time the disk was busy with something. This is not related to its
performance. 100% means that the system always had an outstanding operation on the disk. Keep in
mind that depending on the underlying technology of the disk, 100% here may or may not be an
indication of congestion.
[Chart: Disk Utilization Time (disk_util.sda); unit: % of time working; dimension: utilization]

The average time for I/O requests issued to the device to be served. This includes the time spent by the
requests in queue and the time spent servicing them.
[Chart: Average Completed I/O Operation Time (disk_await.sda); unit: milliseconds/operation; dimensions: reads, writes]

The average I/O operation size.


[Chart: Average Completed I/O Operation Bandwidth (disk_avgsz.sda); unit: KiB/operation; dimensions: reads, writes]

The average service time for completed I/O operations. This metric is calculated using the total busy
time of the disk and the number of completed operations. If the disk is able to execute multiple
operations in parallel, the reported average service time will be misleading.
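The definition above reduces to a single division. A sketch with made-up numbers, including the parallel-execution caveat as a comment:

```python
def service_time_ms(busy_time_ms, completed_ops):
    # svctm as defined above: total disk busy time / completed operations.
    # When the disk serves operations in parallel, busy time overlaps across
    # operations and this average understates or misrepresents per-op cost.
    return busy_time_ms / completed_ops if completed_ops else 0.0

svctm = service_time_ms(120.0, 40)  # 120 ms busy across 40 ops = 3 ms/op
```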
[Chart: Average Service Time (disk_svctm.sda); unit: milliseconds/operation; dimension: svctm]

The number of merged disk operations. The system is able to merge adjacent I/O operations; for
example, two 4 KB reads can become one 8 KB read before being given to the disk.
[Chart: Disk Merged Operations (disk_mops.sda); unit: merged operations/s; dimensions: reads, writes]

The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk
is able to execute I/O operations in parallel.
[Chart: Disk Total I/O Time (disk_iotime.sda); unit: milliseconds/s; dimensions: reads, writes]


/
Disk space utilization. reserved for root is automatically reserved by the system to prevent the root user
from running out of space.
[Chart: Disk Space Usage for / [/dev/sda1] (disk_space._); unit: GiB; dimensions: avail, used, reserved for root]

inodes (or index nodes) are filesystem objects (e.g. files and directories). On many types of file system
implementations, the maximum number of inodes is fixed at filesystem creation, limiting the maximum
number of files the filesystem can hold. It is possible for a device to run out of inodes. When this
happens, new files cannot be created on the device, even though there may be free space available.
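Inode totals and usage can be checked directly with the POSIX statvfs interface, which reports the same quantities charted here. A minimal POSIX-only sketch:

```python
import os

# f_files is the filesystem's total inode count, f_ffree the free inodes
# (f_favail additionally excludes the root reservation).
st = os.statvfs("/")
inodes_total = st.f_files
inodes_used = st.f_files - st.f_ffree
```

When inodes_used approaches inodes_total, file creation starts failing with ENOSPC even if `df` shows free space, which is exactly the failure mode described above.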
[Chart: Disk Files (inodes) Usage for / [/dev/sda1] (disk_inodes._); unit: inodes; dimensions: avail, used, reserved for root]

/dev
Disk space utilization. reserved for root is automatically reserved by the system to prevent the root user
from running out of space.
[Chart: Disk Space Usage for /dev [udev] (disk_space._dev); unit: MiB; dimensions: avail, used, reserved for root]


inodes (or index nodes) are filesystem objects (e.g. files and directories). On many types of file system
implementations, the maximum number of inodes is fixed at filesystem creation, limiting the maximum
number of files the filesystem can hold. It is possible for a device to run out of inodes. When this
happens, new files cannot be created on the device, even though there may be free space available.
[Chart: Disk Files (inodes) Usage for /dev [udev] (disk_inodes._dev); unit: inodes; dimensions: avail, used, reserved for root]

/dev/shm
Disk space utilization. reserved for root is automatically reserved by the system to prevent the root user
from running out of space.
[Chart: Disk Space Usage for /dev/shm [tmpfs] (disk_space._dev_shm); unit: MiB; dimensions: avail, used, reserved for root]

inodes (or index nodes) are filesystem objects (e.g. files and directories). On many types of file system
implementations, the maximum number of inodes is fixed at filesystem creation, limiting the maximum
number of files the filesystem can hold. It is possible for a device to run out of inodes. When this
happens, new files cannot be created on the device, even though there may be free space available.
[Chart: Disk Files (inodes) Usage for /dev/shm [tmpfs] (disk_inodes._dev_shm); unit: inodes; dimensions: avail, used, reserved for root]

/run


Disk space utilization. reserved for root is automatically reserved by the system to prevent the root user
from running out of space.
[Chart: Disk Space Usage for /run [tmpfs] (disk_space._run); unit: MiB; dimensions: avail, used, reserved for root]

inodes (or index nodes) are filesystem objects (e.g. files and directories). On many types of file system
implementations, the maximum number of inodes is fixed at filesystem creation, limiting the maximum
number of files the filesystem can hold. It is possible for a device to run out of inodes. When this
happens, new files cannot be created on the device, even though there may be free space available.
[Chart: Disk Files (inodes) Usage for /run [tmpfs] (disk_inodes._run); unit: inodes; dimensions: avail, used, reserved for root]

/run/lock
Disk space utilization. reserved for root is automatically reserved by the system to prevent the root user
from running out of space.
[Chart: Disk Space Usage for /run/lock [tmpfs] (disk_space._run_lock); unit: MiB; dimensions: avail, used, reserved for root]


inodes (or index nodes) are filesystem objects (e.g. files and directories). On many types of file system
implementations, the maximum number of inodes is fixed at filesystem creation, limiting the maximum
number of files the filesystem can hold. It is possible for a device to run out of inodes. When this
happens, new files cannot be created on the device, even though there may be free space available.
[Chart: Disk Files (inodes) Usage for /run/lock [tmpfs] (disk_inodes._run_lock); unit: inodes; dimensions: avail, used, reserved for root]

Networking Stack
Metrics for the networking stack of the system. These metrics are collected from /proc/net/netstat , apply to
both IPv4 and IPv6 traffic, and relate to the operation of the kernel networking stack.

tcp
TCP connection aborts. baddata ( TCPAbortOnData ) happens while the connection is on FIN_WAIT1
and the kernel receives a packet with a sequence number beyond the last one for this connection; the
kernel responds with RST (closes the connection). userclosed ( TCPAbortOnClose ) happens when the
kernel receives data on an already closed connection and responds with RST . nomemory
( TCPAbortOnMemory ) happens when there are too many orphaned sockets (not attached to an fd) and
the kernel has to drop a connection; sometimes it will send an RST , sometimes it won't. timeout
( TCPAbortOnTimeout ) happens when a connection times out. linger ( TCPAbortOnLinger ) happens
when the kernel killed a socket that was already closed by the application and lingered around for long
enough. failed ( TCPAbortFailed ) happens when the kernel attempted to send an RST but failed
because there was no memory available.
[Chart: TCP Connection Aborts (ip.tcpconnaborts); unit: connections/s; dimensions: baddata, userclosed, nomemory, timeout, linger, failed]

192.168.43.61:19999/#menu_system;theme=slate;help=false;mode=print 19/52
6/9/2019 debian9 netdata dashboard

[Chart: TCP Out-Of-Order Queue (ip.tcpofo); unit: pps; dimensions: inqueue, dropped, merged, pruned]

broadcast
[Chart: IP Broadcast Bandwidth (ip.bcast); unit: kilobits/s; dimensions: received, sent]

[Chart: IP Broadcast Packets (ip.bcastpkts); unit: pps; dimensions: received, sent]

ecn
Explicit Congestion Notification (ECN) (https://en.wikipedia.org/wiki/Explicit_Congestion_Notification) is
a TCP extension that allows end-to-end notification of network congestion without dropping packets.
ECN is an optional feature that may be used between two ECN-enabled endpoints when the underlying
network infrastructure also supports it.

[Chart: IP ECN Statistics (ip.ecnpkts); unit: pps; dimensions: CEP, NoECTP, ECTP0, ECTP1]

IPv4 Networking

Metrics for the IPv4 stack of the system. Internet Protocol version 4 (IPv4) (https://en.wikipedia.org/wiki/IPv4) is
the fourth version of the Internet Protocol (IP). It is one of the core protocols of standards-based internetworking
methods in the Internet. IPv4 is a connectionless protocol for use on packet-switched networks. It operates on a
best effort delivery model, in that it does not guarantee delivery, nor does it assure proper sequencing or
avoidance of duplicate delivery. These aspects, including data integrity, are addressed by an upper layer
transport protocol, such as the Transmission Control Protocol (TCP).

sockets
[Chart: IPv4 Sockets Used (ipv4.sockstat_sockets); unit: sockets; dimension: used]

packets
[Chart: IPv4 Packets (ipv4.packets); unit: pps; dimensions: received, sent, forwarded, delivered]

errors
[Chart: IPv4 Errors (ipv4.errors); unit: pps; dimensions: InDiscards, InHdrErrors, and other error counters]

icmp


[Chart: IPv4 ICMP Packets (ipv4.icmp); unit: pps; dimensions: received, sent]

[Chart: IPv4 ICMP Errors (ipv4.icmp_errors); unit: pps; dimensions: InErrors, OutErrors, InCsumErrors]

[Chart: IPv4 ICMP Messages (ipv4.icmpmsg); unit: pps; dimensions: In/Out counters per ICMP message type (echoes, destination unreachable, redirects, router advertisements, time exceeded, timestamps, and others)]

tcp
The number of established TCP connections (known as CurrEstab ). This is a snapshot of the
established connections at the time of measurement (i.e. a connection established and a connection
disconnected within the same iteration will not affect this metric).
[Chart: IPv4 TCP Connections (ipv4.tcpsock); unit: active connections]


[Chart: IPv4 TCP Sockets (ipv4.sockstat_tcp_sockets); unit: sockets; dimensions: alloc, orphan, inuse, timewait]

[Chart: IPv4 TCP Packets (ipv4.tcppackets); unit: pps; dimensions: received, sent]

active or ActiveOpens is the number of outgoing TCP connections attempted by this host. passive
or PassiveOpens is the number of incoming TCP connections accepted by this host.
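ActiveOpens and PassiveOpens come from the Tcp section of /proc/net/snmp, which stores each protocol as a header line of counter names followed by a values line. A parsing sketch with made-up sample counter values:

```python
# Sample /proc/net/snmp excerpt: header line of names, then a values line.
sample = (
    "Tcp: RtoAlgorithm RtoMin RtoMax MaxConn ActiveOpens PassiveOpens\n"
    "Tcp: 1 200 120000 -1 1523 384\n"
)

def tcp_counters(text):
    # Pair the names from the header line with the numbers on the values line.
    lines = [line for line in text.splitlines() if line.startswith("Tcp:")]
    keys, values = lines[0].split()[1:], lines[1].split()[1:]
    return dict(zip(keys, (int(v) for v in values)))

tcp = tcp_counters(sample)  # tcp["ActiveOpens"], tcp["PassiveOpens"], ...
```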
[Chart: IPv4 TCP Opens (ipv4.tcpopens); unit: connections/s; dimensions: active, passive]

InErrs is the number of TCP segments received in error (including header too small, checksum errors,
sequence errors, bad packets - for both IPv4 and IPv6). InCsumErrors is the number of TCP segments
received with checksum errors (for both IPv4 and IPv6). RetransSegs is the number of TCP segments
retransmitted.
IPv4 TCP Errors (ipv4.tcperrors) proc:/proc/net/snmp
[chart: InErrs/InCsumErrors/RetransSegs, pps]

EstabResets is the number of established connection resets (i.e. connections that made a direct
transition from ESTABLISHED or CLOSE_WAIT to CLOSED ). OutRsts is the number of TCP segments
sent with the RST flag set (for both IPv4 and IPv6). AttemptFails is the number of times TCP
connections made a direct transition from either SYN_SENT or SYN_RECV to CLOSED , plus the number of
times TCP connections made a direct transition from SYN_RECV to LISTEN . TCPSynRetrans shows
retries for new outbound TCP connections, which can indicate general connectivity issues or backlog
on the remote host.
IPv4 TCP Handshake Issues (ipv4.tcphandshake) proc:/proc/net/snmp
[chart: events/s]

IPv4 TCP Sockets Memory (ipv4.sockstat_tcp_mem) proc:/proc/net/sockstat
[chart: KiB]

udp
IPv4 UDP Sockets (ipv4.sockstat_udp_sockets) proc:/proc/net/sockstat
[chart: sockets]

IPv4 UDP Packets (ipv4.udppackets) proc:/proc/net/snmp
[chart: received/sent, pps]

IPv4 UDP Errors (ipv4.udperrors) proc:/proc/net/snmp
[chart: events/s]

IPv4 UDP Sockets Memory (ipv4.sockstat_udp_mem) proc:/proc/net/sockstat
[chart: KiB]

IPv6 Networking
Metrics for the IPv6 stack of the system. Internet Protocol version 6 (IPv6) (https://en.wikipedia.org/wiki/IPv6) is
the most recent version of the Internet Protocol (IP), the communications protocol that provides an identification
and location system for computers on networks and routes traffic across the Internet. IPv6 was developed by the
Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion.
IPv6 is intended to replace IPv4.

packets
IPv6 Packets (ipv6.packets) proc:/proc/net/snmp6
[chart: received/sent/forwarded/delivers, pps]

IPv6 ECT Packets (ipv6.ect) proc:/proc/net/snmp6
[chart: pps]

tcp6
IPv6 TCP Sockets (ipv6.sockstat6_tcp_sockets) proc:/proc/net/sockstat6
[chart: sockets]

udp6
IPv6 UDP Sockets (ipv6.sockstat6_udp_sockets) proc:/proc/net/sockstat6
[chart: sockets]

IPv6 UDP Packets (ipv6.udppackets) proc:/proc/net/snmp6
[chart: received/sent, pps]

IPv6 UDP Errors (ipv6.udperrors) proc:/proc/net/snmp6
[chart: events/s]

multicast6

IPv6 Multicast Bandwidth (ipv6.mcast) proc:/proc/net/snmp6
[chart: received/sent, kilobits/s]

IPv6 Multicast Packets (ipv6.mcastpkts) proc:/proc/net/snmp6
[chart: received/sent, pps]

icmp6
IPv6 ICMP Messages (ipv6.icmp) proc:/proc/net/snmp6
[chart: received/sent, messages/s]

IPv6 ICMP Errors (ipv6.icmperrors) proc:/proc/net/snmp6
[chart: errors/s]

IPv6 Neighbor Messages (ipv6.icmpneighbor) proc:/proc/net/snmp6
[chart: messages/s]

IPv6 ICMP MLDv2 Reports (ipv6.icmpmldv2) proc:/proc/net/snmp6
[chart: received/sent, reports/s]

IPv6 ICMP Types (ipv6.icmptypes) proc:/proc/net/snmp6
[chart: messages/s]

Network Interfaces
Performance metrics for network interfaces.

ens33
Bandwidth (net.ens33) proc:/proc/net/dev
[chart: received/sent, kilobits/s]

Packets (net_packets.ens33) proc:/proc/net/dev
[chart: received/sent/multicast, pps]

Applications

Per-application statistics are collected using netdata's apps.plugin . This plugin walks through all processes
and aggregates statistics for applications of interest, defined in /etc/netdata/apps_groups.conf , which can be
edited by running $ /etc/netdata/edit-config apps_groups.conf (the default is here
(https://github.com/netdata/netdata/blob/master/collectors/apps.plugin/apps_groups.conf)). The plugin internally
builds a process tree (much like ps fax does) and groups processes together (evaluating both child and
parent processes), so the result is always a chart with a predefined set of dimensions (of course, only
application groups found running are reported). The reported values are compatible with top , although the
netdata plugin also counts the resources of exited children (unlike top , which shows only the resources of the
currently running processes). So for processes like shell scripts, the reported values include the resources used
by the commands these scripts run within each timeframe.
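The grouping step can be approximated in a few lines: walk /proc, read each process's PPID from its stat file, and bucket children under their parents. A simplified Python sketch (the real apps.plugin is a C program with far more bookkeeping):

```python
import os

def build_process_tree(proc="/proc"):
    """Map each parent PID to a list of its child PIDs, much like `ps fax`.

    The PPID is the second field after the comm field in /proc/<pid>/stat;
    splitting on the last closing parenthesis keeps the parse safe even
    for process names that contain spaces or parentheses.
    """
    children = {}
    for pid in filter(str.isdigit, os.listdir(proc)):
        try:
            with open(os.path.join(proc, pid, "stat")) as f:
                stat = f.read()
        except OSError:
            continue  # the process exited while we were scanning
        after_comm = stat.rsplit(")", 1)[1].split()
        ppid = int(after_comm[1])  # fields after comm: state, ppid, pgrp, ...
        children.setdefault(ppid, []).append(int(pid))
    return children
```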

cpu
Apps CPU Time (200% = 2 cores) (apps.cpu) apps
[chart: percentage]

Apps CPU User Time (200% = 2 cores) (apps.cpu_user) apps
[chart: percentage]

Apps CPU System Time (200% = 2 cores) (apps.cpu_system) apps
[chart: percentage]

disk
Apps Disk Reads (apps.preads) apps
[chart: MiB/s]

Apps Disk Writes (apps.pwrites) apps
[chart: MiB/s]

Apps Disk Logical Reads (apps.lreads) apps
[chart: MiB/s]

Apps I/O Logical Writes (apps.lwrites) apps
[chart: MiB/s]

Apps Open Files (apps.files) apps
[chart: open files]

mem
Real memory (RAM) used by applications. This does not include shared memory.
Apps Real Memory (w/o shared) (apps.mem) apps
[chart: MiB]

Virtual memory allocated by applications. Please check this article


(https://github.com/netdata/netdata/tree/master/daemon#virtual-memory) for more information.
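The per-process numbers behind these charts come from fields such as VmSize and VmRSS in /proc/<pid>/status. A hedged sketch of reading them (hypothetical helper, not part of netdata):

```python
def parse_mem_status(status_text):
    """Return VmSize and VmRSS (in KiB) parsed from /proc/<pid>/status."""
    sizes = {}
    for line in status_text.splitlines():
        if line.startswith(("VmSize:", "VmRSS:")):
            key, value = line.split(":", 1)
            sizes[key] = int(value.split()[0])  # values are reported in kB
    return sizes

# On a Linux host:
# with open("/proc/self/status") as f:
#     print(parse_mem_status(f.read()))
```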
Apps Virtual Memory Size (apps.vmem) apps
[chart: GiB]

Apps Minor Page Faults (apps.minor_faults) apps
[chart: page faults/s]

processes
Apps Threads (apps.threads) apps
[chart: threads]

Apps Processes (apps.processes) apps
[chart: processes]

Apps Pipes (apps.pipes) apps
[chart: open pipes]

swap
Apps Swap Memory (apps.swap) apps
[chart: KiB]

Apps Major Page Faults (swap read) (apps.major_faults) apps
[chart: page faults/s]

net

Apps Open Sockets (apps.sockets) apps
[chart: open sockets]

User Groups
Per-user-group statistics are collected using netdata's apps.plugin . This plugin walks through all processes
and aggregates statistics per user group. The reported values are compatible with top , although the netdata
plugin also counts the resources of exited children (unlike top , which shows only the resources of the currently
running processes). So for processes like shell scripts, the reported values include the resources used by the
commands these scripts run within each timeframe.

cpu
User Groups CPU Time (200% = 2 cores) (groups.cpu) apps
[chart: percentage]

User Groups CPU User Time (200% = 2 cores) (groups.cpu_user) apps
[chart: percentage]

User Groups CPU System Time (200% = 2 cores) (groups.cpu_system) apps
[chart: percentage]

disk
User Groups Disk Reads (groups.preads) apps
[chart: MiB/s]

User Groups Disk Writes (groups.pwrites) apps
[chart: MiB/s]

User Groups Disk Logical Reads (groups.lreads) apps
[chart: MiB/s]

User Groups I/O Logical Writes (groups.lwrites) apps
[chart: MiB/s]

User Groups Open Files (groups.files) apps
[chart: open files]

mem


Real memory (RAM) used per user group. This does not include shared memory.
User Groups Real Memory (w/o shared) (groups.mem) apps
[chart: MiB]

Virtual memory allocated per user group. Please check this article
(https://github.com/netdata/netdata/tree/master/daemon#virtual-memory) for more information.
User Groups Virtual Memory Size (groups.vmem) apps
[chart: GiB]

User Groups Minor Page Faults (groups.minor_faults) apps
[chart: page faults/s]

processes
User Groups Threads (groups.threads) apps
[chart: threads]

User Groups Processes (groups.processes) apps
[chart: processes]

User Groups Pipes (groups.pipes) apps
[chart: open pipes]

swap
User Groups Swap Memory (groups.swap) apps
[chart: KiB]

User Groups Major Page Faults (swap read) (groups.major_faults) apps
[chart: page faults/s]

net

User Groups Open Sockets (groups.sockets) apps
[chart: open sockets]

Users
Per-user statistics are collected using netdata's apps.plugin . This plugin walks through all processes and
aggregates statistics per user. The reported values are compatible with top , although the netdata plugin also
counts the resources of exited children (unlike top , which shows only the resources of the currently running
processes). So for processes like shell scripts, the reported values include the resources used by the commands
these scripts run within each timeframe.

cpu
Users CPU Time (200% = 2 cores) (users.cpu) apps
[chart: percentage]

Users CPU User Time (200% = 2 cores) (users.cpu_user) apps
[chart: percentage]

Users CPU System Time (200% = 2 cores) (users.cpu_system) apps
[chart: percentage]

disk
Users Disk Reads (users.preads) apps
[chart: MiB/s]

Users Disk Writes (users.pwrites) apps
[chart: MiB/s]

Users Disk Logical Reads (users.lreads) apps
[chart: MiB/s]

Users I/O Logical Writes (users.lwrites) apps
[chart: MiB/s]

Users Open Files (users.files) apps
[chart: open files]

mem
Real memory (RAM) used per user. This does not include shared memory.
Users Real Memory (w/o shared) (users.mem) apps
[chart: MiB]

Virtual memory allocated per user. Please check this article


(https://github.com/netdata/netdata/tree/master/daemon#virtual-memory) for more information.
Users Virtual Memory Size (users.vmem) apps
[chart: GiB]

Users Minor Page Faults (users.minor_faults) apps
[chart: page faults/s]

processes
Users Threads (users.threads) apps
[chart: threads]

Users Processes (users.processes) apps
[chart: processes]

Users Pipes (users.pipes) apps
[chart: open pipes]

swap
Users Swap Memory (users.swap) apps
[chart: KiB]

Users Major Page Faults (swap read) (users.major_faults) apps
[chart: page faults/s]

net
Users Open Sockets (users.sockets) apps
[chart: open sockets]

Netdata Monitoring
Performance metrics for the operation of netdata itself and its plugins.


netdata
NetData Network Traffic (netdata.net) netdata:stats
[chart: in/out, kilobits/s]

NetData CPU usage (netdata.server_cpu) netdata:stats
[chart: user/system, milliseconds/s]

NetData Web Clients (netdata.clients) netdata:stats
[chart: connected clients]

NetData Web Requests (netdata.requests) netdata:stats
[chart: requests/s]

The netdata API response time measures the time netdata needed to serve requests. This time
includes everything, from the reception of the first byte of a request to the dispatch of the last byte of its
reply, so it includes all network latencies involved (i.e. a client on a slow network will influence these
metrics).
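The same round-trip time can be probed from the client side by timing a call to the data API. The /api/v1/data endpoint below follows netdata's documented v1 API; verify the parameters against your version:

```python
from urllib.parse import urlencode

def netdata_data_url(host, chart="system.cpu", after=-60, points=10):
    """Build a query URL for netdata's v1 data API (default port 19999)."""
    query = urlencode({"chart": chart, "after": after,
                       "points": points, "format": "json"})
    return f"http://{host}:19999/api/v1/data?{query}"

# Timing one round trip (includes network latency, like the chart below):
# import time, urllib.request
# t0 = time.perf_counter()
# urllib.request.urlopen(netdata_data_url("192.168.43.61")).read()
# print((time.perf_counter() - t0) * 1000, "ms")
```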
NetData API Response Time (netdata.response_time) netdata:stats
[chart: average/max, milliseconds/request]

NetData API Responses Compression Savings Ratio (netdata.compression_ratio) netdata:stats
[chart: percentage]

queries
NetData API Queries (netdata.queries) netdata:stats
[chart: queries/s]

NetData API Points (netdata.db_points) netdata:stats
[chart: read/generated, points/s]

cgroups

NetData CGroups Plugin CPU usage (netdata.plugin_cgroups_cpu) cgroups:stats
[chart: user/system, milliseconds/s]

proc
NetData Proc Plugin CPU usage (netdata.plugin_proc_cpu) netdata:stats
[chart: user/system, milliseconds/s]

NetData Proc Plugin Modules Durations (netdata.plugin_proc_modules) netdata:stats
[chart: milliseconds/run]

web

NetData web server thread No 1 CPU usage (netdata.web_thread1_cpu) web:stats
[chart: user/system, milliseconds/s]

NetData web server thread No 2 CPU usage (netdata.web_thread2_cpu) web:stats
[chart: user/system, milliseconds/s]

statsd
NetData statsd charting thread CPU usage (netdata.plugin_statsd_charting_cpu) statsd:stats
[chart: user/system, milliseconds/s]

NetData statsd collector thread No 1 CPU usage (netdata.plugin_statsd_collector1_cpu) statsd:stats
[chart: user/system, milliseconds/s]

Metrics in the netdata statsd database (netdata.statsd_metrics) statsd:stats
[chart: metrics]

Useful metrics in the netdata statsd database (netdata.statsd_useful_metrics) statsd:stats
[chart: metrics]

Events processed by the netdata statsd server (netdata.statsd_events) statsd:stats
[chart: events/s]

Read operations made by the netdata statsd server (netdata.statsd_reads) statsd:stats
[chart: tcp/udp, reads/s]

Bytes read by the netdata statsd server (netdata.statsd_bytes) statsd:stats
[chart: tcp/udp, kilobits/s]

Network packets processed by the netdata statsd server (netdata.statsd_packets) statsd:stats
[chart: tcp/udp, pps]

statsd server TCP connects and disconnects (netdata.tcp_connects) statsd:stats
[chart: events]

statsd server TCP connected sockets (netdata.tcp_connected) statsd:stats
[chart: sockets]

Private metric charts created by the netdata statsd server (netdata.private_charts) statsd:stats
[chart: charts]

diskspace
NetData Disk Space Plugin CPU usage (netdata.plugin_diskspace) diskspace
[chart: user/system, milliseconds/s]

NetData Disk Space Plugin Duration (netdata.plugin_diskspace_dt) diskspace
[chart: milliseconds/run]

tc.helper
NetData TC CPU usage (netdata.plugin_tc_cpu) tc
[chart: user/system, milliseconds/s]

NetData TC script execution (netdata.plugin_tc_time) tc
[chart: milliseconds/run]

apps.plugin
Apps Plugin CPU (netdata.apps_cpu) apps
[chart: user/system, milliseconds/s]

Apps Plugin Files (netdata.apps_sizes) apps
[chart: files/s]

Apps Plugin Normalization Ratios (netdata.apps_fix) apps
[chart: percentage]

Apps Plugin Exited Children Normalization Ratios (netdata.apps_children_fix) apps
[chart: percentage]

Netdata (https://github.com/netdata/netdata/wiki)

Copyright 2018, Netdata, Inc (mailto:info@netdata.cloud).


Copyright 2016-2018, Costa Tsaousis (mailto:costa@tsaousis.gr).

Released under GPL v3 or later (http://www.gnu.org/licenses/gpl-3.0.en.html). Netdata uses third party tools
(https://github.com/netdata/netdata/blob/master/REDISTRIBUTED.md).
