OSSEC Wazuh Documentation
Release 0.1
Wazuh, Inc.
2 Installation guide
  2.1 OSSEC HIDS
  2.2 Wazuh HIDS
  2.3 First steps
8 OSSEC deployment with Puppet
  8.1 Puppet master installation
  8.2 PuppetDB installation
  8.3 Puppet agents installation
  8.4 Puppet certificates
  8.5 OSSEC Puppet module
CHAPTER 1
Welcome to the Wazuh documentation. Here you will find instructions to install and deploy OSSEC HIDS, both the
official version and our forked one. Please note that this documentation is not intended to substitute for the OSSEC
HIDS documentation or the reference manual, which are currently maintained by the project team members and external
contributors.
The Wazuh team currently supports OSSEC enterprise users, and decided to develop and publish additional capabilities
as a way to contribute back to the Open Source community. Below is a list and description of our main projects,
which have been released under the terms of the GPLv2 license.
• OSSEC Wazuh Ruleset: Includes new rootchecks, decoders and rules, increasing OSSEC monitoring and detection
capabilities. These have also been tagged for the PCI Data Security Standard, allowing users to monitor
compliance with each of the standard's requirements. Users can contribute to this ruleset by submitting pull
requests to our Github repository. Our team will continue to maintain and update it periodically.
• Wazuh HIDS: Our OSSEC fork. It implements bug fixes and new features, and provides extended JSON logging
capabilities for easy integration with the ELK Stack and third-party log management tools. It also includes
compliance support, and the modifications to OSSEC binaries needed by the OSSEC RESTful API.
• Wazuh RESTful API: Used to monitor and control your OSSEC deployment, providing an interface to interact
with the manager from anything that can send an HTTP request.
• Pre-compiled installation packages, for both the OSSEC agent and manager, including repositories for RedHat,
CentOS, Fedora, Debian, Ubuntu and Windows.
• Puppet scripts for automatic OSSEC deployment and configuration.
• Docker containers to virtualize and run your OSSEC manager and an all-in-one integration with ELK Stack.
Note: If you want to contribute to this documentation or our projects please head over to our Github repositories.
You can also join our users mailing list, by sending an email to wazuh+subscribe@googlegroups.com, to
ask questions and participate in discussions.
Installation guide
There are two different installation options: OSSEC HIDS and Wazuh HIDS. Please read carefully below to learn the
differences between these two options, since the choice may be key for using other items of interest in this
documentation.
OSSEC HIDS installers contain the latest stable version, as published in the OSSEC project Github repository. Wazuh
creates and maintains OSSEC installers for the Open Source community, and you can find instructions on how to use
them in this section of the documentation.
Wazuh HIDS is an OSSEC fork that contains additional features for the OSSEC manager, such as compliance
support and extended JSON logging capabilities, which allow integration with the ELK Stack (Elasticsearch, Logstash
and Kibana) and other log management tools. This installation is also ready for use with the Wazuh RESTful API.
OSSEC HIDS
OSSEC is an Open Source Host-based Intrusion Detection System that performs log analysis, file integrity checking,
policy monitoring, rootkit detection, real-time alerting and active response. It runs on most operating systems,
including Linux, MacOS, Solaris, HP-UX, AIX and Windows.
You can find more information at OSSEC HIDS project documentation, or the reference manual.
Note: For the OSSEC manager, this version does not support integration with the ELK Stack or use of the Wazuh
RESTful API. If you plan to use either of these, or both, follow the Wazuh HIDS installation guide instead.
Debian packages
If this is your first installation from the Wazuh repository, you need to import the GPG key:
$ wget -qO - https://ossec.wazuh.com/repos/apt/conf/ossec-key.gpg.key | sudo apt-key add -
Debian repositories
To add the Debian repository, depending on your distribution, run the appropriate command:
For Wheezy:
$ echo -e "deb https://ossec.wazuh.com/repos/apt/debian wheezy main" >> /etc/apt/sources.list.d/ossec.list
For Jessie:
$ echo -e "deb https://ossec.wazuh.com/repos/apt/debian jessie main" >> /etc/apt/sources.list.d/ossec.list
For Stretch:
$ echo -e "deb https://ossec.wazuh.com/repos/apt/debian stretch main" >> /etc/apt/sources.list.d/ossec.list
For Sid:
$ echo -e "deb https://ossec.wazuh.com/repos/apt/debian sid main" >> /etc/apt/sources.list.d/ossec.list
Ubuntu repositories
To add the Ubuntu repository, depending on your distribution, run the appropriate command:
For Precise:
$ echo -e "deb https://ossec.wazuh.com/repos/apt/ubuntu precise main" >> /etc/apt/sources.list.d/ossec.list
For Trusty:
$ echo -e "deb https://ossec.wazuh.com/repos/apt/ubuntu trusty main" >> /etc/apt/sources.list.d/ossec.list
For Vivid:
$ echo -e "deb https://ossec.wazuh.com/repos/apt/ubuntu vivid main" >> /etc/apt/sources.list.d/ossec.list
For Wily:
$ echo -e "deb https://ossec.wazuh.com/repos/apt/ubuntu wily main" >> /etc/apt/sources.list.d/ossec.list
For Xenial:
$ echo -e "deb https://ossec.wazuh.com/repos/apt/ubuntu xenial main" >> /etc/apt/sources.list.d/ossec.list
For Yakkety:
$ echo -e "deb https://ossec.wazuh.com/repos/apt/ubuntu yakkety main" >> /etc/apt/sources.list.d/ossec.list
Once the repository is added, update the package lists:
$ apt-get update
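As a quick sketch, the repository line above can also be derived from the codename reported by lsb_release, rather than typed by hand (this assumes lsb_release is installed and that your codename has a matching repository; on Ubuntu systems substitute the ubuntu path):

```shell
# Sketch: build the repository line from the running distribution's codename.
# Falls back to "jessie" if lsb_release is not available.
codename=$(lsb_release -cs 2>/dev/null || echo jessie)
echo "deb https://ossec.wazuh.com/repos/apt/debian $codename main"
```

The printed line can then be appended to /etc/apt/sources.list.d/ossec.list as shown above.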
To install the OSSEC manager Debian package from our repository, run this command:
$ apt-get install ossec-hids
To install the OSSEC agent Debian package from our repository, run this command:
$ apt-get install ossec-hids-agent
RPM packages
Yum repository
To add the Wazuh yum repository, depending on your Linux distribution, create a file named
/etc/yum.repos.d/wazuh.repo with the following content:
For Amazon Linux AMI:
[wazuh]
name = WAZUH OSSEC Repository - www.wazuh.com
baseurl = http://ossec.wazuh.com/el/7/x86_64
gpgcheck = 1
gpgkey = http://ossec.wazuh.com/key/RPM-GPG-KEY-OSSEC
enabled = 1
For CentOS, RHEL and Fedora:
[wazuh]
name = WAZUH OSSEC Repository - www.wazuh.com
baseurl = http://ossec.wazuh.com/el/$releasever/$basearch
gpgcheck = 1
gpgkey = http://ossec.wazuh.com/key/RPM-GPG-KEY-OSSEC-RHEL5
enabled = 1
To install the OSSEC manager using the Yum package manager, run the following command:
$ yum install ossec-hids
On Fedora 23, to install the OSSEC manager with DNF packages manager, run the following command:
$ dnf install ossec-hids
To install the OSSEC agent using the Yum package manager, run the following command:
$ yum install ossec-hids-agent
On Fedora 23, to install the OSSEC agent with the DNF packages manager, run the following command:
$ dnf install ossec-hids-agent
Note: If it is your first installation from our repository, you will need to accept our repository GPG key when prompted
during the installation. This key can be found at: http://ossec.wazuh.com/key/RPM-GPG-KEY-OSSEC
Windows agent
You can find a pre-compiled version of the OSSEC agent for Windows, for both 32- and 64-bit architectures, at our
repository.
The current version is 2.8.3, and these are its MD5 and SHA1 checksums:
• md5sum: 633d898d51eb49050c735abd278e08c8
• sha1sum: 4ebcb31e4eccd509ae34148dd7b1b78d75b58f53
This section describes how to download and compile your OSSEC HIDS Windows agent (version 2.8.3). You can use
either a CentOS or a Debian system as a compilation environment.
Note: Both checksums need to match, confirming that the data has not been corrupted during the download. If
that is not the case, please try again over a reliable connection.
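The digests can be computed locally and compared against the values published above. A sketch (shown on a stand-in file so the commands are self-contained; on your system, point them at the downloaded ossec-win32-agent.exe):

```shell
# Sketch: compute the digests of a downloaded file.
# /tmp/demo.bin stands in for ossec-win32-agent.exe here.
printf 'demo' > /tmp/demo.bin
computed_md5=$(md5sum /tmp/demo.bin | awk '{print $1}')
computed_sha1=$(sha1sum /tmp/demo.bin | awk '{print $1}')
# With the real installer, compare these against the published values:
#   633d898d51eb49050c735abd278e08c8 (md5)
#   4ebcb31e4eccd509ae34148dd7b1b78d75b58f53 (sha1)
echo "md5: $computed_md5"
echo "sha1: $computed_sha1"
```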
First, you need to install MinGW and NSIS (to build the installer). Let's start by installing the EPEL repository:
$ wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
$ rpm -i epel-release-latest-7.noarch.rpm
After that, we install MinGW gcc and the other libraries needed for the NSIS compilation:
$ yum install gcc-c++ gcc scons mingw32-gcc mingw64-gcc zlib-devel bzip2 unzip
$ wget http://downloads.sourceforge.net/project/nsis/NSIS%203%20Pre-release/3.0b2/nsis-3.0b2.zip
$ mkdir /usr/local/nsis
$ mv nsis-3.0b2-src.tar.bz2 nsis-3.0b2.zip /usr/local/nsis
$ cd /usr/local/nsis
$ tar -jxvf nsis-3.0b2-src.tar.bz2
$ unzip nsis-3.0b2.zip
Then we need to build makensis, which will actually build the OSSEC Installer Package for Windows:
$ cd /usr/local/nsis/nsis-3.0b2-src/
$ scons SKIPSTUBS=all SKIPPLUGINS=all SKIPUTILS=all SKIPMISC=all NSIS_CONFIG_CONST_DATA=no PREFIX=/usr/local/nsis/nsis-3.0b2 install-compiler
$ mkdir /usr/local/nsis/nsis-3.0b2/share
$ cd /usr/local/nsis/nsis-3.0b2/share
$ ln -s /usr/local/nsis/nsis-3.0b2 nsis
$ cp ../bin/makensis /bin
Output: "ossec-win32-agent.exe"
Install: 7 pages (448 bytes), 3 sections (3144 bytes), 586 instructions (16408 bytes), 287 strings (31800 bytes), 1 language table (346 bytes).
Now you should have the OSSEC agent installer for Windows, ossec-win32-agent.exe, ready to be used.
$ wget https://bintray.com/artifact/download/ossec/ossec-hids/ossec-hids-2.8.3.tar.gz
$ wget https://raw.githubusercontent.com/ossec/ossec-docs/master/docs/whatsnew/checksums/2.8.3/ossec-hids-2.8.3.tar.gz.sha256
Note: Both checksums need to match, confirming that the data has not been corrupted during the download. If
that is not the case, please try again over a reliable connection.
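With both files downloaded, sha256sum -c can do the comparison for you. A sketch, assuming the downloaded .sha256 file uses sha256sum's standard checklist format, and demonstrated on a stand-in file so the commands are self-contained:

```shell
# Sketch: verify a tarball against its .sha256 checklist file.
# sample.tar.gz stands in for ossec-hids-2.8.3.tar.gz here; on your
# system run: sha256sum -c ossec-hids-2.8.3.tar.gz.sha256
cd /tmp
printf 'data' > sample.tar.gz
sha256sum sample.tar.gz > sample.tar.gz.sha256
sha256sum -c sample.tar.gz.sha256
```

sha256sum prints "OK" next to each file whose digest matches, and exits non-zero on a mismatch.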
Build environment
Now we need to prepare our build environment, so we can compile the downloaded OSSEC source code.
On Debian based distributions install the build-essential package:
$ apt-get install build-essential
Or if you use the DNF package manager (Fedora 23), run this command:
$ dnf groupinstall "Development tools"
Note: On OS X you are required to install Xcode command line tools, which include GCC compiler.
Compiling OSSEC
Now run the installation script, ./install.sh; it will ask multiple questions, which may vary depending on your installation type:
Choose language:
1.-What kind of installation do you want (server, agent, local, hybrid or help)?
3.2- Do you want to run the integrity check daemon? (y/n) [y]:
3.3- Do you want to run the rootkit detection engine? (y/n) [y]:
- Do you want to add more IPs to the white list? (y/n)? [n]:
Note: If you select yes for Active response you are enabling some basic Intrusion Prevention capabilities. This is
generally a good thing, but only recommended if you know what you are doing.
Wazuh HIDS
The Wazuh team has developed an OSSEC fork, implementing new features to improve the OSSEC manager capabilities.
These modifications do not affect OSSEC agents, meaning that if you are looking to install an agent, you just need
to run a standard OSSEC installation and do not need to follow the next steps. Documentation to perform a standard
OSSEC installation can be found here.
If you are installing an OSSEC manager, however, we strongly recommend using our forked OSSEC version. It
provides compliance support, extended logging, and additional management features. Some of these capabilities are
required for integration with the ELK Stack and the Wazuh RESTful API.
To start with this installation, first we need to set up the compilation environment by installing development tools and
compilers. On Linux this can easily be done using your distribution's package manager:
For RPM based distributions:
Now we are ready to clone our Github repository and compile the source code, to install OSSEC:
$ cd ~
$ mkdir ossec_tmp && cd ossec_tmp
$ git clone -b stable https://github.com/wazuh/wazuh.git ossec-wazuh
$ cd ossec-wazuh
$ sudo ./install.sh
Choose server when asked about the installation type, and answer the rest of the questions as desired. Once
installed, you can start your OSSEC manager by running:
$ sudo /var/ossec/bin/ossec-control start
Here are some useful commands to check that everything is working as expected. You should get a similar output in
your system.
$ ps aux | grep ossec
root 31362 0.0 0.1 27992 824 ? S 23:01 0:00 /var/ossec/bin/ossec-execd
root 31407 0.0 0.1 7832 876 pts/0 S+ 23:02 0:00 grep ossec
$ lsof /var/ossec/logs/alerts/alerts.json
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ossec-ana 31366 ossec 10w REG 202,0 245 274582 /var/ossec/logs/alerts/alerts.json
$ cat /var/ossec/logs/alerts/alerts.json
{"rule":{"level":3,"comment":"Ossec server started.","sidid":502,"groups":["ossec","pci_dss"],"PCI_DSS":["10.6.1"]},"full_log":"ossec: Ossec started.","hostname":"vpc-
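Since each alert is one JSON object per line, the file is easy to post-process with any JSON-aware tool. A sketch with python3, run against a stand-in alert file (field names taken from the sample output above; the real file is /var/ossec/logs/alerts/alerts.json):

```shell
# Sketch: pull the rule level and description out of a JSON alert line.
# /tmp/alert.json stands in for /var/ossec/logs/alerts/alerts.json.
cat > /tmp/alert.json <<'EOF'
{"rule":{"level":3,"comment":"Ossec server started.","sidid":502,"groups":["ossec","pci_dss"],"PCI_DSS":["10.6.1"]},"full_log":"ossec: Ossec started."}
EOF
python3 -c '
import json
with open("/tmp/alert.json") as f:
    for line in f:
        alert = json.loads(line)
        print(alert["rule"]["level"], alert["rule"]["comment"])
'
```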
First steps
In this documentation you will find the instructions to add a new agent, and to configure it to report to your
OSSEC/Wazuh manager. For more information on OSSEC HIDS configuration options, please go to the project
documentation, or the reference manual.
Run /var/ossec/bin/manage_agents on the manager. You will then be presented with the options shown below; choose "A" to add an agent:
****************************************
* OSSEC HIDS v2.8 Agent manager. *
* The following options are available: *
****************************************
(A)dd an agent (A).
(E)xtract key for an agent (E).
(L)ist already added agents (L).
(R)emove an agent (R).
(Q)uit.
Choose your action: A,E,L,R or Q: A
You need to type a name for the agent, an IP address and an ID:
- Adding a new agent (use '\q' to return to the main menu).
Please provide the following:
* A name for the new agent: agent-name
* The IP Address of the new agent: 10.0.0.1
* An ID for the new agent[001]:
Agent information:
ID:001
Name:agent-name
IP Address:10.0.0.1
Note: The agent IP address should always match the one the agent will connect from. If unsure, you can use
any. You can also inspect your network traffic with tcpdump, to see the IP headers of incoming packets.
Now you have to extract the agent’s key, which will be displayed on the screen. See below an example:
****************************************
* OSSEC HIDS v2.8 Agent manager. *
* The following options are available: *
****************************************
(A)dd an agent (A).
(E)xtract key for an agent (E).
(L)ist already added agents (L).
(R)emove an agent (R).
(Q)uit.
Choose your action: A,E,L,R or Q:e
Available agents:
ID: 001, Name: agent-name, IP: 10.0.0.1
Provide the ID of the agent to extract the key (or '\q' to quit): 001
Now copy the key (the whole line ending in ==), because you will have to import it on the agent.
Your agent needs the IP address of your manager, in order to know where to send the data. Please check
your agent configuration file, which is located at /var/ossec/etc/ossec.conf, and set server-ip to
the right value:
<ossec_config>
  <client>
    <server-ip>XXX.XXX.XXX.XXX</server-ip>
  </client>
</ossec_config>
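The same edit can be made non-interactively with sed. A sketch, demonstrated on a stand-in copy so it does not touch a live system (the real file is /var/ossec/etc/ossec.conf, and 10.0.0.2 is a placeholder manager IP):

```shell
# Sketch: set the manager IP in the agent configuration non-interactively.
# /tmp/ossec.conf stands in for /var/ossec/etc/ossec.conf.
cat > /tmp/ossec.conf <<'EOF'
<ossec_config>
  <client>
    <server-ip>XXX.XXX.XXX.XXX</server-ip>
  </client>
</ossec_config>
EOF
sed -i 's|<server-ip>.*</server-ip>|<server-ip>10.0.0.2</server-ip>|' /tmp/ossec.conf
grep server-ip /tmp/ossec.conf
```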
Now you can run manage_agents (remember we are on your agent system, not on the manager), and paste the
previously copied key:
$ /var/ossec/bin/manage_agents
****************************************
* OSSEC HIDS v2.8 Agent manager. *
* The following options are available: *
****************************************
(I)mport key from the server (I).
(Q)uit.
Choose your action: I or Q: I
Agent information:
ID:001
Name:agent-name
IP Address:10.0.0.1
Now your agent has been properly added. You can restart it running:
$ /var/ossec/bin/ossec-control restart
Documentation structure
This document will guide you through the installation, configuration and integration of ELK Stack and Wazuh HIDS
(our OSSEC fork). We will make use of expanded logging features that have been implemented for the manager,
along with custom Logstash/Elasticsearch configurations, our OSSEC Wazuh Ruleset, our Wazuh RESTful API
and Kibana with hardcoded modifications.
Components
See below a brief description of the components and tools involved in the integration of our OSSEC Wazuh fork with
ELK Stack, for long term data storage, alerts indexing, management and visualization.
• Wazuh HIDS: Performs log analysis, file integrity checking, policy monitoring, rootkits/malware detection and
real-time alerting. The alerts are written in an extended JSON format, and stored locally on the box running as
the OSSEC manager.
• Logstash: Is a data pipeline used for processing logs and other event data from a variety of systems. Logstash
will read and process OSSEC JSON files, adding IP Geolocation information and modeling data before sending
it to the Elasticsearch Cluster.
• Elasticsearch: Is the search engine used to index and store our OSSEC alerts. It can be deployed as a cluster,
with multiple nodes, for better performance and data replication.
• Kibana: Kibana is a WEB framework used to explore all elasticsearch indexes. We will use it to analyze OSSEC
alerts and to create custom dashboards for different use cases, including compliance regulations like PCI DSS
or benchmarks like CIS.
These components are meant to communicate with each other, so the original data generated by your systems and
applications is centralized, analyzed, indexed, stored and made available for you at the Kibana interface. See below a
graph describing this data flow:
Architecture
The components for OSSEC and ELK Stack integration can be deployed all on a single host, or distributed across
multiple systems. The latter type of deployment is useful for load balancing, high availability and data replication.
In most cases Elasticsearch will only be indexing OSSEC alerts, as opposed to every event processed by the system
(also possible using the archives.json output). This approach considerably reduces the performance and storage
requirements, making it perfectly possible to deploy all the components on a single server. In this case, the same
system would run the OSSEC manager, the Logstash server and an Elasticsearch single-node cluster with the Kibana
user interface on top of it.
In an effort to cover all possible scenarios, this guide describes both options to deploy OSSEC with ELK Stack
(distributed and single-host).
See below our recommended deployment when using four different hosts (which includes a 3-node Elasticsearch
cluster):
• Host 1: OSSEC Manager + Logstash Forwarder
• Host 2: Logstash Server + Elasticsearch Node 1 + Kibana
• Host 3: Elasticsearch Node 2
• Host 4: Elasticsearch Node 3
Requirements
• Operating System: This document includes a detailed description of the steps you need to follow to install the
components both in Debian (latest stable is version 8) and CentOS (latest stable is version 7) Linux distributions.
• RAM memory: Elasticsearch tends to use a large amount of memory for data sorting and aggregation and,
according to its documentation, less than 8GB of RAM is counterproductive. For single-host deployments,
considering that Elasticsearch will share resources with OSSEC, Logstash and Kibana, we recommend provisioning
your server with at least 16GB of RAM (more if possible). Less than 16GB of RAM will only work for small
OSSEC deployments.
• OSSEC Wazuh fork: Required for the integration with the ELK Stack. You can install it by following the
instructions in our documentation.
• Java 8 JRE: Java 8 is required by both the Logstash server and Elasticsearch. This guide also includes a
description of how to install it.
Kibana offers interactive visualization capabilities, which we have used to put together an OSSEC alerts dashboard with
visualization of alert geolocation and timeline. In addition, you will be able to see the evolution of alert levels, and
charts showing aggregated information for easy analysis. Filters can also be applied, as all alert fields are indexed
by the search engine. See below a screenshot of this dashboard.
OSSEC HIDS can be used to become compliant with PCI DSS, especially due to its intrusion detection, file integrity
monitoring and policy enforcement capabilities. This dashboard makes use of the OSSEC rules mapping to the
compliance controls, showing useful information to identify which systems are not fully compliant with the regulation.
Java 8 JRE
Java 8 JRE is required by the Logstash server and the Elasticsearch engine in order to run. That is why we need
to install it both for single-host and distributed deployments (in the latter case, only on those systems running the
Logstash server or Elasticsearch).
To install Java 8 JRE on Debian based distributions, we just need to add the webupd8team Java repository to our
sources and then install Java 8 via apt-get install:
To install Java 8 JRE on CentOS, download and run the Oracle Java 8 JDK RPM, following these steps:
$ cd ~
$ wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u60-b27/jdk-8u60-linux-x64.rpm"
Finally, to set the JAVA_HOME environment variable for all users, we need to add this line at the end of our
/etc/profile file:
export JAVA_HOME=/usr/java/jdk1.8.0_60/jre
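A quick sanity check that the variable is picked up after sourcing the profile. A sketch, demonstrated against a temporary copy of the profile so it does not touch /etc/profile (the path assumes the jdk-8u60 RPM layout used above):

```shell
# Sketch: confirm JAVA_HOME is exported after sourcing the profile.
# /tmp/profile.demo stands in for /etc/profile.
echo 'export JAVA_HOME=/usr/java/jdk1.8.0_60/jre' > /tmp/profile.demo
. /tmp/profile.demo
echo "$JAVA_HOME"
```

On a real system, log out and back in (or run `. /etc/profile`) for the change to take effect, and confirm the JRE itself with `java -version`.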
What’s next
Once you have Java 8 JRE installed you can move forward and install Logstash, Elasticsearch and Kibana:
• Logstash
• Elasticsearch
• Kibana
• OSSEC Wazuh RESTful API
• OSSEC Wazuh Ruleset
Logstash
When integrating OSSEC HIDS with the ELK Stack, we use Logstash to model the OSSEC alerts output, using an
Elasticsearch template that lets the indexer know how to process each alert field.
For single-host deployments, we install the Logstash server directly on the same system where the
OSSEC manager and Elasticsearch are running. This type of installation does not require the Logstash forwarder
component, which is only necessary when deploying the OSSEC manager on a different server from the one where the
Logstash server and Elasticsearch are running.
Note: Remember Java 8 JRE is required by Logstash server. You can see instructions to install it at our documentation.
Distributed architectures
For distributed deployments, with multiple servers, this is where you need to install Logstash components:
• Elasticsearch main cluster node: Logstash server
• OSSEC manager server: Logstash forwarder
To install Logstash server version 2.1 on Debian based distributions run the following commands on your
system:
Logstash forwarder
Only for distributed architectures you need to install Logstash forwarder, on the system where you run your
OSSEC manager, running the following commands:
To install the Logstash server version 2.1 RPM package, start by importing the repository GPG key:
$ rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=https://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Logstash forwarder
Only for distributed architectures do you need to install the Logstash forwarder, on the system where you run your
OSSEC manager. Start by importing the necessary GPG key:
$ rpm --import https://packages.elasticsearch.org/GPG-KEY-elasticsearch
[logstash-forwarder]
name=logstash-forwarder repository
baseurl=https://packages.elasticsearch.org/logstashforwarder/centos
gpgcheck=1
gpgkey=https://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
Note: This step is only necessary when deploying the OSSEC manager and Elasticsearch on different systems. If you
are using a single host deployment, with OSSEC manager and ELK Stack on the same box, you can skip this section.
Since we are going to use Logstash forwarder to ship logs from our hosts to our Logstash server, we need to create an
SSL certificate and key pair. The certificate is used by the Logstash forwarder to verify the identity of Logstash server
and encrypt communications.
SSL Certificate
The SSL certificate needs to be created on your Logstash server, and then copied to your Logstash forwarder
machine. See below how to create this certificate when you run your Logstash server on a Debian or a CentOS Linux
distribution.
To create the SSL certificate on a Debian system, open /etc/ssl/openssl.cnf and find the [ v3_ca ] sec-
tion, adding the following line below it (replacing logstash_server_ip with your Logstash Server IP):
[ v3_ca ]
subjectAltName = IP: logstash_server_ip
Now generate the SSL certificate and private key, and copy it to your Logstash forwarder system via scp (substituting
user and logstash_forwarder_ip by their real values):
$ cd /etc/ssl/
$ sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/logstash/logstash-forwarder.key -out /etc/logstash/logstash-forwarder.crt
Then log into your Logstash forwarder system, via SSH, and move the certificate to the right directory:
To create the SSL certificate on a CentOS system, open /etc/pki/tls/openssl.cnf and find the [ v3_ca ]
section, adding the following line below it (replacing logstash_server_ip with your Logstash server IP):
[ v3_ca ]
subjectAltName = IP: logstash_server_ip
Now generate the SSL certificate and private key, and copy them to your Logstash forwarder system via scp (substituting
user and logstash_forwarder_ip with their real values):
$ cd /etc/pki/tls/
$ sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/logstash/logstash-forwarder.key -out /etc/logstash/logstash-forwarder.crt
Then log into your Logstash forwarder system, via SSH, and move the certificate to the right directory:
Now, on your Logstash forwarder system (the same one where you run the OSSEC manager), open the configuration
file /etc/logstash-forwarder.conf and, in the network section, modify the servers array, adding your
Logstash server IP address (substitute logstash_server_ip with the real value). Also, don't forget to uncomment
the line.
Below those lines you will find the CA configuration settings. We use the ssl ca variable to specify the path to our
Logstash forwarder SSL certificate.
Once that is done, in the same file, uncomment the timeout option line to increase connection reliability:
Finally, to set the Logstash forwarder to read the OSSEC alerts file, modify the list of files configuration to look like this:
At this point, save and exit the Logstash forwarder configuration file. Let's now give it permission to read the alerts
file, by adding the logstash-forwarder user to the ossec group:
$ sudo usermod -a -G ossec logstash-forwarder
We are now done with the configuration, and just need to restart the Logstash forwarder to apply the changes:
$ sudo service logstash-forwarder restart
Logstash configuration is based on three different plugins: input, filter and output. You can find the plugins already
preconfigured, to integrate OSSEC with ELK Stack, in our public github repository.
Depending on your architecture, single-host or distributed, we will configure Logstash server to read OSSEC alerts
directly from OSSEC log file, or to read the incoming data (sent by Logstash forwarder) from port 5000/udp (remember
to open your firewall to accept this traffic).
For single-host deployments (everything running on the same box), just copy the configuration file
01-ossec-singlehost.conf to the right directory. For distributed architectures, instead, copy the
configuration file 01-ossec.conf.
By default, the Logstash server is bound to the loopback address 127.0.0.1. If your Elasticsearch server is on a different
host, remember to modify 01-ossec.conf or 01-ossec-singlehost.conf to set your Elasticsearch IP.
Note: Remember that, for both single-host and distributed deployments, we recommend running the Logstash server and
Elasticsearch on the same server. This means that elasticsearch_server_ip would match your logstash_server_ip.
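Swapping the loopback address for the Elasticsearch host can be done with sed. A sketch, demonstrated on a stand-in file (the real file normally lives under Logstash's configuration directory, commonly /etc/logstash/conf.d/, which is an assumption here; 10.0.0.2 is a placeholder IP):

```shell
# Sketch: point the Logstash output section at the Elasticsearch host.
# /tmp/01-ossec.conf stands in for the real pipeline configuration file.
cat > /tmp/01-ossec.conf <<'EOF'
output {
  elasticsearch {
    hosts => ["127.0.0.1"]
  }
}
EOF
sed -i 's/127\.0\.0\.1/10.0.0.2/' /tmp/01-ossec.conf
grep 'hosts' /tmp/01-ossec.conf
```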
Copy the Elasticsearch custom mapping from the extensions folder to the Logstash folder:
$ sudo cp ~/ossec_tmp/ossec-wazuh/extensions/elasticsearch/elastic-ossec-template.json /etc/logstash/
And now download and install GeoLiteCity from the Maxmind website. This will add geolocation support for public
IP addresses:
In single-host deployments, you also need to grant the logstash user access to the OSSEC alerts file:
$ sudo usermod -a -G ossec logstash
Note: We are not going to start the Logstash service yet; we need to wait until we import the Wazuh template into
Elasticsearch (see the next guide).
What’s next
Once you have Logstash installed and configured you can move forward with Elasticsearch and Kibana:
• Elasticsearch
• Kibana
• OSSEC Wazuh RESTful API
• OSSEC Wazuh Ruleset
Elasticsearch
In this guide we describe how to install Elasticsearch as a single-node cluster (with no shard replicas). This is
usually enough to process OSSEC alerts data. For very large deployments we recommend using a multi-node
cluster, which provides load balancing and data replication.
As a reminder, for a single-host OSSEC integration with the ELK Stack, we run all components on the same server,
which also acts as an Elasticsearch single-node cluster. On the other hand, for distributed deployments, we recommend
running the Elasticsearch engine and the OSSEC manager on different systems. Please go to the components and
architecture documentation for more information.
To install the Elasticsearch version 2.x Debian package using the official repositories, run the following commands:
To install the Elasticsearch version 2.x RPM package, start by importing the repository GPG key:
$ rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=https://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
Once the installation is complete, we can configure some basic settings in
/etc/elasticsearch/elasticsearch.yml. Open this file and look for the following variables, uncommenting
the lines and assigning them the right values:
cluster.name: ossec
node.name: ossec_node1
By default, the Elasticsearch server is bound to the loopback address 127.0.0.1; remember to modify it if necessary.
The default number of shards is 5 and the default number of replicas is 1. If you are deploying a single-node
Elasticsearch cluster, in order to get a green status you have to set shards and replicas to 1 and 0 respectively:
index.number_of_shards: 1
index.number_of_replicas: 0
Elasticsearch performs poorly with memory swapping. To disable memory swapping and lock some memory for
Elasticsearch, set the mlockall option to true and follow the next steps:
bootstrap.mlockall: true
Also, open your Elasticsearch service default configuration file (/etc/default/elasticsearch on Debian,
and /etc/sysconfig/elasticsearch on CentOS) and edit the following settings (please note that
ES_HEAP_SIZE should be set to half your server's memory):
MAX_LOCKED_MEMORY=unlimited
MAX_OPEN_FILES=65535
LimitMEMLOCK=infinity
Now that we are done with the Elasticsearch configuration and tuning, we must start the service to apply the changes;
Elasticsearch will then be up and running:
Elasticsearch uses port 9200/tcp (by default) for API queries and ports in the range 9300-9400/tcp to communicate
with other cluster nodes. Remember to open those ports in your firewall for this type of deployments.
On the other hand, for multi-node clusters, it is recommended to have as many shards per index
(index.number_of_shards) as nodes in your cluster. It is also good practice to use at least one replica
(index.number_of_replicas).
Cluster health
To be sure our single-node cluster is working properly, wait a couple of minutes and check that Elasticsearch is running:
$ curl -XGET http://localhost:9200
Expected result:
{
"name": "node1",
"cluster_name": "ossec",
"version": {
"number": "2.1.1",
"build_hash": "40e2c53a6b6c2972b3d13846e450e66f4375bd71",
"build_timestamp": "2015-12-15T13:05:55Z",
"build_snapshot": false,
"lucene_version": "5.3.1"
},
"tagline": "You Know, for Search"
}
Expected result:
{
"cluster_name": "ossec",
"status": "green",
"timed_out": false,
"number_of_nodes": 2,
"number_of_data_nodes": 2,
"active_primary_shards": 281,
"active_shards": 562,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
"delayed_unassigned_shards": 0,
"number_of_pending_tasks": 0,
"number_of_in_flight_fetch": 0,
"task_max_waiting_in_queue_millis": 0,
"active_shards_percent_as_number": 100
}
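As a sketch of how these health fields can be consumed programmatically (the response body below reuses the sample above; the field names come from the Elasticsearch cluster health API):

```python
import json

# Sample _cluster/health response (values taken from the example above).
health_json = '''
{
    "cluster_name": "ossec",
    "status": "green",
    "timed_out": false,
    "number_of_nodes": 2,
    "active_shards": 562,
    "unassigned_shards": 0
}
'''

health = json.loads(health_json)

# "green" means all primary and replica shards are allocated;
# "yellow" means replicas are missing; "red" means primaries are missing.
assert health["status"] == "green", "cluster is not healthy"
print("Cluster %s is %s with %d active shards" % (
    health["cluster_name"], health["status"], health["active_shards"]))
```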
It's time to integrate the OSSEC Wazuh custom mapping. This is an Elasticsearch template that pre-maps
all possible OSSEC alert fields, as they are generated by the OSSEC Wazuh fork JSON output. This way the indexer
automatically knows how to process the data, which will be displayed with user-friendly names in your Kibana
interface.
Add the template via a cURL request to the Elasticsearch API:
{"acknowledged":true}
To make sure it has actually been added successfully, you can check the template using the Elasticsearch API:
Start Logstash server
Now that we have inserted our custom Elasticsearch template, containing about 72 OSSEC fields, we can start the
Logstash server.
What’s next
Once you have Elasticsearch installed and configured you can move forward with Kibana:
• Kibana
• OSSEC Wazuh RESTful API
• OSSEC Wazuh Ruleset
Kibana
This is your last step in the process of setting up your ELK cluster. In this section you will find the instructions to
install Kibana, version 4.3, and to configure it to provide a centralized OSSEC alerts dashboard. In addition you will
find dashboards for CIS security benchmark and PCI DSS compliance regulation.
Furthermore, the documentation also includes extra steps to secure your Kibana interface with username and password,
using Nginx web server.
To install the Kibana version 4.5 Debian package from the official repositories, run the following commands:
Configure Kibana to automatically start during bootup. If your distribution is using the System V version of init, run
the following command:
$ sudo update-rc.d kibana defaults 95 10
To install the Kibana version 4.5 RPM package, start by importing the repository GPG key:
$ sudo rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
Kibana 4.3 and later are based on Node.js (V8), which uses a lazy and greedy garbage collector with a default heap
limit of about 1.5 GB. On systems with low RAM (below 2 GB) Kibana may not run properly. Kibana developers
included a fix for this, but later decided to remove the patch. If your host's total RAM is below 2 GB, we recommend
limiting the Node.js maximum heap size: open the file /opt/kibana/bin/kibana and add the following line:
NODE_OPTIONS="${NODE_OPTIONS:=--max-old-space-size=250}"
Kibana configuration
Kibana is bound by default to the 0.0.0.0 address (listening on all addresses), uses port 5601 and tries to
connect to Elasticsearch using the URL http://localhost:9200. If you need to change any of these settings,
open the /opt/kibana/config/kibana.yml configuration file and set the following variables:
# Kibana is served by a back end server. This controls which port to use.
server.port: 80
Note: Please note that the IP address we use in elasticsearch.url variable needs to match the one we used for
network.bind_host and network.host when we configured the Elasticsearch component.
To create OSSEC alerts index, access your Kibana interface at http://your_server_ip:5601, Kibana will ask you to
“Configure an index pattern”, set it up following these steps:
- Check "Index contains time-based events".
- Insert Index name or pattern: ossec-*
- On "Time-field name" list select @timestamp option.
- Click on "Create" button.
- You should see the fields list with about ~72 fields.
- Go to the "Discover" tab on the top bar.
Note: Kibana will search for Elasticsearch indices matching the pattern ossec-yyyy.mm.dd. You need to have at least
one OSSEC alert before you set up the index pattern on Kibana; otherwise it won't find any index on Elasticsearch. If
you want to generate one, you could for example run sudo -s and fail the password on purpose several times.
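Since Logstash creates one index per day named ossec-YYYY.MM.DD, you can compute the index name that an alert generated on a given day would land in (a small illustrative sketch):

```python
from datetime import date

def ossec_index_name(day):
    # Logstash's default date-based naming used by this setup: ossec-YYYY.MM.DD
    return "ossec-{:%Y.%m.%d}".format(day)

print(ossec_index_name(date(2016, 2, 23)))  # ossec-2016.02.23
```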
OSSEC Dashboards
Custom dashboards for OSSEC alerts, GeoIP maps, file integrity, alert evolution, PCI DSS controls and CIS bench-
mark.
Import the custom dashboards. Access Kibana web interface on your browser and navigate to “Objects”:
- Click at top bar on "Settings".
- Click on "Objects".
- Then click the button "Import"
- Select the file ~/ossec_tmp/ossec-wazuh/extensions/kibana/kibana-ossecwazuh-dashboards.json
- Optional: You can download the dashboards JSON file directly from the repository: https://raw.githubusercontent.com/wazuh/ossec-wazuh/stable/extensions/kibana/kibana-ossecwazuh-dashboards.json
Refresh the Kibana page and you should be able to load your imported Dashboards.
Note: Some dashboard visualizations require time and specific alerts to work. Please don't worry if some visualiza-
tions do not display data immediately after the import.
We are going to use the Nginx web server to build a secure proxy for our Kibana web interface, establishing a
secure connection with SSL certificates and HTTP authentication.
To install Nginx on Debian systems, update your repositories and install Nginx and apache2-utils (for htpasswd):
Nginx configuration
- On CentOS: /etc/nginx/conf.d/kibana.conf
- On Debian: /etc/nginx/sites-available/default
server {
listen 80 default_server; #Listen on IPv4
listen [::]:80; #Listen on IPv6
return 301 https://$host$request_uri;
}
server {
listen *:443;
listen [::]:443;
ssl on;
ssl_certificate /etc/pki/tls/certs/kibana-access.crt;
ssl_certificate_key /etc/pki/tls/private/kibana-access.key;
server_name "Server Name";
access_log /var/log/nginx/kibana.access.log;
error_log /var/log/nginx/kibana.error.log;
location ~ (/|/app/kibana|/bundles/|/kibana4|/status|/plugins) {
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/conf.d/kibana.htpasswd;
proxy_pass http://127.0.0.1:5601;
}
}
On CentOS we also need to edit /etc/nginx/nginx.conf, including the following line inside the server
block:
include /etc/nginx/conf.d/*.conf;
SSL Certificate
Now we can create the SSL certificate to encrypt our connection via HTTPS. This can be done with the following
steps:
$ cd ~
$ sudo openssl genrsa -des3 -out server.key 1024
Enter the password again, fill the certificate information, and continue:
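The remaining certificate steps were elided above; the following is a minimal, non-interactive sketch of the whole sequence. The -subj and -passout values and the filenames are placeholder assumptions — adjust them, and move the resulting files into /etc/pki/tls as referenced by the Nginx configuration:

```shell
# Generate a passphrase-protected RSA key (passphrase is a placeholder).
openssl genrsa -des3 -passout pass:changeme -out server.key 1024
# Create a certificate signing request (subject fields are placeholders).
openssl req -new -key server.key -passin pass:changeme \
    -subj "/C=US/ST=CA/O=Example/CN=kibana.example.com" -out server.csr
# Strip the passphrase so Nginx can read the key unattended.
openssl rsa -in server.key -passin pass:changeme -out kibana-access.key
# Self-sign the certificate for one year.
openssl x509 -req -days 365 -in server.csr -signkey kibana-access.key -out kibana-access.crt
```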
Password authentication
To generate your .htpasswd file, run this command, replacing kibanaadmin with your own username:
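With apache2-utils installed, `sudo htpasswd -c /etc/nginx/conf.d/kibana.htpasswd kibanaadmin` does this interactively. As an alternative sketch using only OpenSSL (the username, password and output path below are placeholders; write the final file to /etc/nginx/conf.d/kibana.htpasswd):

```shell
# Build an htpasswd entry with an Apache-compatible MD5 (apr1) hash.
# "kibanaadmin" and "changeme" are placeholder credentials.
HASH=$(openssl passwd -apr1 changeme)
printf 'kibanaadmin:%s\n' "$HASH" > kibana.htpasswd
cat kibana.htpasswd
```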
Try to access the Kibana web interface via HTTPS. It will ask for the username and password you just created.
Note: If you are running SELinux in enforcing mode, you might need to do some additional configuration in order to
allow connections to 127.0.0.1:5601.
What’s next
Now that you have finished your ELK cluster installation, we recommend going to your OSSEC Wazuh manager
and installing the OSSEC Wazuh RESTful API and OSSEC Wazuh Ruleset modules:
• OSSEC Wazuh RESTful API
• OSSEC Wazuh Ruleset
Manage agents
Introduction
We have introduced new features into the manage_agents OSSEC binary to prevent adding two agents with the same
IP address.
manage_agents will not allow you to add an agent if its IP is already assigned to another agent; in that case it will
generate a log message and warn you about it.
Forcing insertion
In case you want to overwrite an existing agent, we have created a way to force the agent registration: the option [-d
<seconds>] will remove the old agent if it has been disconnected for longer than <seconds>. Using the value 0 will
replace the agent in any case.
Usage example
Adding a new agent called MyNewAgent; if the IP 10.0.0.100 already exists, replace that agent if it has been discon-
nected for the last 3600 seconds.
See also:
For a complete description of every manage_agents option, please read OSSEC documentation: manage_agents.
Data backup
Before OSSEC removes an agent by forcing, it will back up the old agent's data in /var/ossec/backup/agents, in a
new folder named with the agent's name, IP and the current timestamp. The saved data is the following:
• Agent’s operating system.
• Version of the agent.
• Timestamp when it was added.
• Syscheck database.
• Rootcheck database.
See also:
There is a compile option that allows a new agent to inherit the ID of the agent that was removed by forcing insertion.
To learn more about this, please read Agent ID reusage.
OSSEC Authd
Configuration
On server-side
New options:
-i Register the agent with the client's IP instead of 'any'.
-f <seconds> Remove old agents with the same IP if they have not connected within <seconds>.
This only makes sense together with option -i.
-P Enable shared password authentication.
Option -f forces insertion on IP collision: if OSSEC finds another agent with the same IP that has not connected
within the specified time, that agent will be deleted automatically and the new agent added. To always force insertion
(regardless of the time of the last agent connection), use -f 0.
See also:
For a complete description of every option, please read OSSEC documentation: ossec-authd.
On client-side
New options:
-P <password> Use the specified password instead of searching for it at authd.pass.
If a password is provided neither in the file nor on the console, the client will connect to the server without a password
(insecure mode).
See also:
For a complete description of every option, please read OSSEC documentation: agent-auth.
Data backup
Before OSSEC removes an agent by forcing, it will back up the old agent's data in
/var/ossec/backup/agents/<id> <name> <ip> <delete timestamp>, a new folder named with the agent's name, IP and the
current timestamp. The saved data is the following:
• Agent’s operating system.
• Version of the agent.
• Timestamp when it was added.
• Syscheck database.
• Rootcheck database.
See also:
There is a compile option that allows a new agent to inherit the ID of the agent that was removed by forcing insertion.
To learn more about this, please read Agent ID reusage.
Integrator
Enabling Integrator
Integrator is not enabled by default, but it can be enabled with the following command:
Configuration
Integrations are configured in the file etc/ossec.conf, which is located inside your OSSEC installation directory.
Add your integration inside the <ossec_config></ossec_config> tags, like this:
<integration>
<name> </name>
<hook_url> </hook_url>
<api_key> </api_key>
<rule_id> </rule_id>
<level> </level>
<group> </group>
<event_location> </event_location>
</integration>
Basic configuration
<name>
<hook_url>
The URL provided by Slack when the integration was enabled. Mandatory for Slack.
<api_key>
The key that you retrieved from the PagerDuty API. Mandatory for PagerDuty.
<integration>
<name>slack</name>
<hook_url>https://hooks.slack.com/services/...</hook_url>
</integration>
<integration>
<name>pagerduty</name>
<api_key>MYKEY</api_key>
</integration>
Optional filters
<level>
Filter rules by level: push only alerts with the specified level or above.
<rule_id>
<group>
<event_location>
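For instance, a Slack integration restricted by alert level and rule group might look like the following sketch (the hook URL and filter values are placeholders, not taken from the text above):

```xml
<integration>
    <name>slack</name>
    <hook_url>https://hooks.slack.com/services/...</hook_url>
    <!-- Only push alerts of level 10 or above -->
    <level>10</level>
    <!-- Only push alerts belonging to this rule group (placeholder) -->
    <group>authentication_failed</group>
</integration>
```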
Agent ID reusage
When enabled, deleting an agent will remove the corresponding key from client.keys. Every time
manage_agents or ossec-authd removes an agent in order to add another with the same IP, the new agent will get the ID
of the former, and the key in client.keys will be overwritten.
This feature doesn’t affect the backup: the old agent’s data will still be backed up.
See also:
• OSSEC Authd
• Manage agents
Introduction
OSSEC Wazuh RESTful API provides a new mechanism to manage OSSEC Wazuh. The goal is to manage your
OSSEC deployment remotely (e.g. through a web browser), or to control OSSEC from external systems.
Everyday actions like adding an agent, restarting OSSEC or checking the configuration are now simpler using the
Wazuh RESTful API.
OSSEC Wazuh API RESTful capabilities:
• Agents management
• Manager control & overview
• Rootcheck control
• Syscheck control
• Statistical Information
• HTTPS and User authentication
• Error Handling
Documentation sections
Installation
Pre-requisites
In order to install and run the API you will need some packages; the following steps will guide you through installing
them.
• Wazuh HIDS
OSSEC Wazuh RESTful API requires you to have previously installed our OSSEC fork as your manager. You can
download and install it following these instructions.
The API will operate on port 55000/tcp by default, and NodeJS service will be protected with HTTP Authentication
and encrypted by HTTPS.
NodeJS
Most distributions include a version of NodeJS in their default repositories, but we prefer to use the repositories
maintained by NodeSource because they offer more recent versions. Follow the official guide to install it.
Python packages
In case you need the pip tool, you can install it following these steps:
RESTful API
Proceed to download the API and copy API folder to OSSEC folder:
$ cd ~
$ wget https://github.com/wazuh/wazuh-api/archive/v1.2.1.tar.gz -O wazuh-API-1.2.1.tar.gz
Once you have installed NodeJS, NPM and the API, you must install the NodeJS modules:
$ sudo -s
$ cd /var/ossec/api
$ npm install
Configuration
// Security
// Use HTTP protocol over TLS/SSL
config.https = "yes";
// If the API runs behind a proxy server, set this option to "yes".
config.BehindProxyServer = "no";
// Paths
config.ossec_path = "/var/ossec";
config.log_path = "/var/ossec/logs/api.log";
config.api_path = __dirname;
// Logs
// Values for API log: disabled, info, warning, error, debug (each level includes the previous level).
config.logs = "info";
config.logs_tag = "WazuhAPI";
Basic Authentication
By default you can log in with user "foo" and password "bar". We recommend generating new credentials, which
can be done easily with the following steps:
First, please make sure that you have the htpasswd tool installed.
On Debian, update your repositories and install apache2-utils package:
SSL Certificate
At this point you will create the certificates used by the API. If you prefer to use the out-of-the-box certificates,
skip this section.
Follow the next steps to generate them (the OpenSSL package is required):
$ cd /var/ossec/api/ssl
$ sudo openssl genrsa -des3 -out server.key 1024
$ sudo openssl req -new -key server.key -out server.csr
The password must be entered every time you run the server. If you don't want to enter it each time, you can
remove it by running these commands:
$ sudo cp server.key server.key.org
$ sudo openssl rsa -in server.key.org -out server.key
Running API
Service
We recommend running the API as a service. To install the service, execute the following script:
$ sudo /var/ossec/api/scripts/install_daemon.sh
Background
Note: Sometimes the NodeJS binary is called "nodejs", or it is located in /usr/bin/; if the API does not start, please
check this.
Reference
• Response with errors: { "error": "NOT 0", "data": "", "message": "..." }
• All responses have an HTTP status code: 2xx (success), 4xx (client error), 5xx (server error), etc.
Find some examples of how to use this API with cURL, PowerShell and Python.
Request List
• Agents
– DELETE /agents/:agent_id
– GET /agents
– GET /agents/:agent_id
– GET /agents/:agent_id/key
– POST /agents
– PUT /agents/:agent_id/restart
– PUT /agents/:agent_name
• Manager
– GET /manager/configuration
– GET /manager/configuration/test
– GET /manager/stats
– GET /manager/stats/hourly
– GET /manager/stats/weekly
– GET /manager/status
– PUT /manager/restart
– PUT /manager/start
– PUT /manager/stop
• Rootcheck
– DELETE /rootcheck
– DELETE /rootcheck/:agent_id
– GET /rootcheck/:agent_id
– GET /rootcheck/:agent_id/last_scan
– PUT /rootcheck
– PUT /rootcheck/:agent_id
• Syscheck
– DELETE /syscheck
– DELETE /syscheck/:agent_id
– GET /syscheck/:agent_id/files/changed
– GET /syscheck/:agent_id/last_scan
– PUT /syscheck
– PUT /syscheck/:agent_id
Agents
List
GET /agents
Query:
• status: Status of the agents to return. Possible values: Active, Disconnected or Never connected.
Example Request:
GET https://IP:55000/agents?status=never+connected
Example Response:
{
"error": "0",
"data": [
{
"id": "001",
"name": "Host1",
"ip": "any",
"status": "Never connected"
},
{
"id": "002",
"name": "Host2",
"ip": "10.0.0.4",
"status": "Never connected"
}
],
"message": ""
}
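A sketch of consuming this response, for example picking out the agents that have never connected (the payload below reuses the example above):

```python
import json

response = json.loads('''
{
    "error": "0",
    "data": [
        {"id": "001", "name": "Host1", "ip": "any", "status": "Never connected"},
        {"id": "002", "name": "Host2", "ip": "10.0.0.4", "status": "Never connected"}
    ],
    "message": ""
}
''')

# The API reports success with error "0"; anything else carries a message.
assert response["error"] == "0", response["message"]
never_connected = [a["name"] for a in response["data"]
                   if a["status"] == "Never connected"]
print(never_connected)  # ['Host1', 'Host2']
```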
Info
GET /agents/:agent_id
Example Response:
{
"error": "0",
"data": {
"id": "000",
"name": "LinMV",
"ip": "127.0.0.1",
"status": "Active",
"os": "Linux LinMV 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1 (2015-05-24) x86_64",
"...": "..."
},
"message": ""
}
Key
GET /agents/:agent_id/key
Example Response:
{
"error": "0",
"data": "MDAxIEhvc3QxIGFueSBkMDZlYjRkNTk4MzU2YjAwYWQzNzcxZTdiMDJiMmRiZDhkM2ZhNjA3ZGU0NGU4YTQyZGVkYTJjMGY0N",
"message": ""
}
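The data field is the Base64-encoded client.keys entry for the agent (id, name, ip, key). A sketch decoding such a value (the key below is made up for illustration only; the real one comes from the API response):

```python
import base64

# Hypothetical Base64-encoded client.keys line, built for this example only.
encoded = base64.b64encode(b"001 Host1 any d06eb4d598356b00ad3771e7b02b2dbd").decode()

# A client.keys entry has four space-separated fields: id, name, ip, key.
agent_id, name, ip, key = base64.b64decode(encoded).decode().split(" ", 3)
print(agent_id, name, ip)  # 001 Host1 any
```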
Restart
PUT /agents/:agent_id/restart
Example Response:
{
"error": "0",
"data": "Restarting agent",
"message": ""
}
Add
PUT /agents/:agent_name
Add a new agent with name :agent_name. This agent will use ANY as IP.
Parameters:
• agent_name
Query:
• N/A
Example Request:
PUT https://IP:55000/agents/Host_005
Example Response:
{
"error": 0,
"data": {
"id": "002",
"message": "Agent added"
},
"message": ""
}
POST /agents
Example Response:
{
"error": 0,
"data": {
"id": "003",
"message": "Agent added"
},
"message": ""
}
Remove
DELETE /agents/:agent_id
Removes an agent.
Internally it uses manage_agents with option -r <id>. You must restart OSSEC after removing an agent.
Parameters:
• agent_id
Query:
• N/A
Example Request:
DELETE https://IP:55000/agents/005
Example Response:
{
"error": "0",
"data": "Agent removed",
"message": ""
}
Manager
Start
PUT /manager/start
PUT https://IP:55000/manager/start
Example Response:
{
"error": "0",
"data": [
{
"daemon": "ossec-maild",
"status": "running"
},
{
"daemon": "ossec-execd",
"status": "running"
},
{
"daemon": "ossec-analysisd",
"status": "running"
},
{
"daemon": "ossec-logcollector",
"status": "running"
},
{
"daemon": "ossec-remoted",
"status": "running"
},
{
"daemon": "ossec-syscheckd",
"status": "running"
},
{
"daemon": "ossec-monitord",
"status": "running"
}
],
"message": ""
}
Stop
PUT /manager/stop
Example Response:
{
"error": "0",
"data": [
{
"daemon": "ossec-monitord",
"status": "killed"
},
{
"daemon": "ossec-logcollector",
"status": "killed"
},
{
"daemon": "ossec-remoted",
"status": "killed"
},
{
"daemon": "ossec-syscheckd",
"status": "killed"
},
{
"daemon": "ossec-analysisd",
"status": "killed"
},
{
"daemon": "ossec-maild",
"status": "stopped"
},
{
"daemon": "ossec-execd",
"status": "killed"
}
],
"message": ""
}
Restart
PUT /manager/restart
Example Response:
{
"error": "0",
"data": [
{
"daemon": "ossec-maild",
"status": "running"
},
{
"daemon": "ossec-execd",
"status": "running"
},
{
"daemon": "ossec-analysisd",
"status": "running"
},
{
"daemon": "ossec-logcollector",
"status": "running"
},
{
"daemon": "ossec-remoted",
"status": "running"
},
{
"daemon": "ossec-syscheckd",
"status": "running"
},
{
"daemon": "ossec-monitord",
"status": "running"
}
],
"message": ""
}
Status
GET /manager/status
Example Response:
{
"error": "0",
"data": [
{
"daemon": "ossec-monitord",
"status": "running"
},
{
"daemon": "ossec-logcollector",
"status": "running"
},
{
"daemon": "ossec-remoted",
"status": "running"
},
{
"daemon": "ossec-syscheckd",
"status": "running"
},
{
"daemon": "ossec-analysisd",
"status": "running"
},
{
"daemon": "ossec-maild",
"status": "stopped"
},
{
"daemon": "ossec-execd",
"status": "running"
}
],
"message": ""
}
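A sketch that flags non-running daemons from this response (sample values abridged from the example above):

```python
import json

status = json.loads('''
{
    "error": "0",
    "data": [
        {"daemon": "ossec-monitord", "status": "running"},
        {"daemon": "ossec-maild", "status": "stopped"},
        {"daemon": "ossec-execd", "status": "running"}
    ],
    "message": ""
}
''')

# Collect every daemon whose status is not "running" (e.g. "stopped", "killed").
not_running = [d["daemon"] for d in status["data"] if d["status"] != "running"]
print(not_running)  # ['ossec-maild']
```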
Configuration
GET /manager/configuration
Example Response:
{
"error": "0",
"data": [
{
"$t": "rules_config.xml"
},
{
"$t": "pam_rules.xml"
},
{
"$t": "..._rules.xml"
}
],
"message": ""
}
GET /manager/configuration/test
Example Response:
{
"error": 82,
"data": "",
    "message": "[\"2016/02/23 12:30:57 ossec-testrule(1226): ERROR: Error reading XML file '/var/ossec/etc/ossec.conf': XMLERR: Element 'globaaaal' not closed. (line '/var/ossec/etc/ossec.conf'. Exiting.\"]"
}
Stats
GET /manager/stats
Query:
• date: Date of the statistical information to return. Format: YYYYMMDD
Example Request:
GET https://IP:55000/manager/stats?date=20160223
Example Response:
{
"error": "0",
"data": [
{
"hour": 10,
"firewall": 0,
"alerts": [
{
"times": 2,
"sigid": 600,
"level": 0
},
{
"times": 2,
"sigid": 1002,
"level": 2
},
{
"times": 8,
"sigid": 530,
"level": 0
},
{
"times": 1,
"sigid": 535,
"level": 1
},
{
"times": 1,
"sigid": 502,
"level": 3
},
{
"times": 1,
"sigid": 515,
"level": 0
}
],
"totalAlerts": 15,
"syscheck": 1126,
"events": 1144
},
{
"hour": 11,
"firewall": 0,
"alerts": [
{
"...": "..."
}
],
"totalAlerts": 432,
"syscheck": 1146,
"events": 1607
        }
],
"message": ""
}
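Each hourly entry aggregates per-rule counters; totalAlerts is the sum of the times fields, a relationship you can verify when consuming the data (values taken from the hour-10 sample above):

```python
# Hour-10 entry from the sample response above.
hour_10 = {
    "hour": 10,
    "alerts": [
        {"times": 2, "sigid": 600, "level": 0},
        {"times": 2, "sigid": 1002, "level": 2},
        {"times": 8, "sigid": 530, "level": 0},
        {"times": 1, "sigid": 535, "level": 1},
        {"times": 1, "sigid": 502, "level": 3},
        {"times": 1, "sigid": 515, "level": 0},
    ],
    "totalAlerts": 15,
}

# totalAlerts should equal the sum of the per-rule "times" counters.
total = sum(a["times"] for a in hour_10["alerts"])
assert total == hour_10["totalAlerts"]
print(total)  # 15
```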
GET /manager/stats/hourly
Returns OSSEC statistical information per hour. Each item in the averages field represents the average number of
alerts for that hour of the day.
Parameters:
• N/A
Query:
• N/A
Example Request:
GET https://IP:55000/manager/stats/hourly
Example Response:
{
"error":"0",
"response":{
"averages":[
974,
1291,
886,
784,
1013,
843,
880,
872,
805,
681,
1094,
868,
609,
659,
1455,
1382,
1465,
2092,
1475,
1879,
1548,
1854,
1849,
1020
],
"interactions":20
},
"message":null
}
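The averages array holds one value per hour of the day (index 0 = midnight). For example, finding the busiest hour from the sample response above:

```python
# The 24 hourly averages from the sample response above.
averages = [974, 1291, 886, 784, 1013, 843, 880, 872, 805, 681, 1094, 868,
            609, 659, 1455, 1382, 1465, 2092, 1475, 1879, 1548, 1854, 1849, 1020]

# Index of the largest value = hour of day with the most alerts on average.
busiest_hour = max(range(len(averages)), key=averages.__getitem__)
print(busiest_hour, averages[busiest_hour])  # 17 2092
```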
GET /manager/stats/weekly
Returns OSSEC statistical information per week day. Each item in the hours field represents the average number of
alerts per hour for that week day.
Parameters:
• N/A
Query:
• N/A
Example Request:
GET https://IP:55000/manager/stats/weekly
Example Response:
{
"error": "0",
"data": {
"Mon":{
"hours":[
948,
838,
711,
1091,
589,
574,
888,
665,
522,
428,
593,
638,
446,
757,
401,
443,
1439,
1114,
648,
1047,
629,
483,
2641,
649
],
"interactions":0
},
"...": {
...
},
"Sun":{
"hours":[
1066,
1684,
901,
652,
1078,
1236,
1052,
920,
803,
686,
391,
800,
736,
558,
418,
703,
591,
...
]
}
},
"message": ""
}
Rootcheck
Database
GET /rootcheck/:agent_id
Example Response:
{
    "error": "0",
    "data": [
        {
            "status": "outstanding",
            "readDay": "2016 Feb 23 12:52:58",
            "oldDay": "2016 Feb 22 19:41:05",
            "event": "(null)System Audit: CIS - Testing against the CIS Debian Linux Benchmark v1.0. File: /etc/debian_version. Reference: http://www.ossec.net/wiki/index.php/CIS_DebianLinux ."
        },
        {
            "status": "outstanding",
            "readDay": "2016 Feb 23 12:52:58",
            "oldDay": "2016 Feb 22 19:41:05",
            "event": "(null)System Audit: CIS - Debian Linux - 1.4 - Robust partition scheme - /tmp is not on its own partition {CIS: 1.4 Debian Linux}. File: /etc/..."
        },
        {
            "status": "outstanding",
            "readDay": "2016 Feb 23 12:52:58",
            "oldDay": "2016 Feb 22 19:41:05",
            "event": "(null)System Audit: CIS - Debian Linux - 1.4 - Robust partition scheme - /opt is not on its own partition {CIS: 1.4 Debian Linux}. File: /opt. ..."
        },
        {
            "status": "outstanding",
            "readDay": "2016 Feb 23 12:52:58",
            "oldDay": "2016 Feb 22 19:41:05",
            "event": "(null)System Audit: CIS - Debian Linux - 1.4 - Robust partition scheme - /var is not on its own partition {CIS: 1.4 Debian Linux}. File: /etc/..."
        },
        {
            "status": "outstanding",
            "readDay": "2016 Feb 23 12:52:58",
            "oldDay": "2016 Feb 22 19:41:05",
            "event": "(null)System Audit: CIS - Debian Linux - 4.13 - Disable standard boot services - Web server Enabled {CIS: 4.13 Debian Linux} {PCI_DSS: 2.2.2}. ..."
        }
    ],
    "message": ""
}
Last scan
GET /rootcheck/:agent_id/last_scan
Example Response:
{
"error": "0",
"data": {
"rootcheckTime": "Tue Feb 23 15:54:13 2016",
"rootcheckEndTime": "Tue Feb 23 15:58:52 2016"
},
"message": ""
}
Run
PUT /rootcheck
Example Response:
{
"error": "0",
"data": "Restarting Syscheck/Rootcheck on all agents",
"message": ""
}
PUT /rootcheck/:agent_id
Example Response:
{
"error": "0",
"data": "Restarting Syscheck/Rootcheck on agent",
"message": ""
}
Clear Database
DELETE /rootcheck
Example Response:
{
"error": "0",
"data": "Policy and auditing database updated",
"message": ""
}
DELETE /rootcheck/:agent_id
Parameters:
• agent_id
Query:
• N/A
Example Request:
DELETE https://IP:55000/rootcheck/001
Example Response:
{
"error": "0",
"data": "Policy and auditing database updated",
"message": ""
}
Syscheck
Database
GET /syscheck/:agent_id/files/changed
Returns the changed files for an agent. If a filename is specified, returns the changes to that file.
Parameters:
• agent_id
Query:
• filename
Example Request:
GET https://IP:55000/syscheck/000/files/changed?filename=/home/test/passwords.txt
Example Response:
{
"error": "0",
"data": [
{
"date": "2016 Feb 23 15:42:46",
"file": "/home/test/passwords.txt",
"changes": 0,
"attrs": {
"event": "added",
"size": "2",
"mode": 33188,
"perm": "rw-r--r--",
"uid": "0",
"gid": "0",
"md5": "60b725f10c9c85c70d97880dfe8191b3",
"sha1": "3f786850e387550fdab836ed7e6dc881de23001b"
}
},
{
"date": "2016 Feb 23 15:53:41",
"file": "/home/test/passwords.txt",
"changes": 0,
            "attrs": {
"event": "modified",
"size": "53",
"mode": 33279,
"perm": "rwxrwxrwx",
"...": "..."
}
}
],
"message": ""
}
Last scan
GET /syscheck/:agent_id/last_scan
Example Response:
{
"error": "0",
"data": {
"syscheckTime": "Tue Feb 23 15:37:42 2016",
"syscheckEndTime": "Tue Feb 23 15:42:58 2016"
},
"message": ""
}
Run
PUT /syscheck
Example Response:
{
"error": "0",
"data": "Restarting Syscheck/Rootcheck on all agents",
"message": ""
}
PUT /syscheck/:agent_id
Example Response:
{
"error": "0",
"data": "Restarting Syscheck/Rootcheck on agent",
"message": ""
}
Clear Database
DELETE /syscheck
Example Response:
{
"error": "0",
"data": "Integrity check database updated",
"message": ""
}
DELETE /syscheck/:agent_id
Parameters:
• agent_id
Query:
• N/A
Example Request:
DELETE https://IP:55000/syscheck/001
Example Response:
{
"error": "0",
"data": "Integrity check database updated",
"message": ""
}
Examples
CURL
cURL is a command-line tool for transferring data using various protocols. It can be used to interact with this API.
It is pre-installed on many Linux and Mac systems. Some examples:
GET
$ curl -u foo:bar -k https://127.0.0.1:55000
{"error":"0","data":"OSSEC-API","message":"wazuh.com"}
PUT
$ curl -u foo:bar -k -X PUT https://127.0.0.1:55000/agents/new_agent
{"error":0,"data":{"id":"004","message":"Agent added"},"message":""}
POST
$ curl -u foo:bar -k -X POST -d 'name=NewHost&ip=10.0.0.8' https://127.0.0.1:55000/agents
{"error":0,"data":{"id":"004","message":"Agent added"},"message":""}
DELETE
$ curl -u foo:bar -k -X DELETE https://127.0.0.1:55000/rootcheck/001
Python
Code:
#!/usr/bin/env python
import json
import requests  # Install requests: pip install requests
# Configuration
base_url = 'https://IP:55000'
auth = requests.auth.HTTPBasicAuth('foo', 'bar')
verify = False
requests.packages.urllib3.disable_warnings()
# Request
url = '{0}{1}'.format(base_url, "/agents/000")
r = requests.get(url, auth=auth, params=None, verify=verify)
print(json.dumps(r.json(), indent=4, sort_keys=True))
print("Status: {0}".format(r.status_code))
Output:
{
"error": "0",
"message": "",
"data": {
"id": "000",
"ip": "127.0.0.1",
"lastKeepAlive": "Not available",
"name": "LinMV",
"os": "Linux LinMV 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1 (2015-05-24) x86_64",
"rootcheckEndTime": "Unknown",
"rootcheckTime": "Unknown",
"status": "Active",
"syscheckEndTime": "Unknown",
"syscheckTime": "Unknown",
"version": "OSSEC HIDS v2.8"
}
}
Status: 200
Powershell
The Invoke-RestMethod cmdlet sends requests to the API and handles the response easily. This cmdlet was introduced
in Windows PowerShell 3.0.
Code:
function Ignore-SelfSignedCerts {
    # The original listing was truncated here; this completion (including the
    # class name) is the usual trust-all-certificates snippet and is an assumption.
    add-type @"
using System.Net;
using System.Security.Cryptography.X509Certificates;
public class TrustAllCertsPolicy : ICertificatePolicy {
    public bool CheckValidationResult(ServicePoint sp, X509Certificate cert, WebRequest req, int problem) {
        return true;
    }
}
"@
    [System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy
}
# Configuration
$base_url = "https://IP:55000"
$username = "foo"
$password = "bar"
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $username, $password)))
Ignore-SelfSignedCerts
# Request
$url = $base_url + "/syscheck/000/last_scan"
$method = "get"
try{
$r = Invoke-RestMethod -Headers @{Authorization=("Basic {0}" -f
˓→$base64AuthInfo)} -Method $method -Uri $url
}catch{
$r = $_.Exception
}
Write-Output $r
Output:
error data                                                                                message
----- ----                                                                                -------
0     @{syscheckTime=Wed Feb 24 09:55:04 2016; syscheckEndTime=Wed Feb 24 10:00:42 2016}
What’s next
Once you have your OSSEC RESTful API running, we recommend you to check our OSSEC Wazuh ruleset:
• OSSEC Wazuh Ruleset installation guide
Introduction
This documentation explains how to install, update, and contribute to the OSSEC HIDS Ruleset maintained by Wazuh.
These rules are used by the system to detect attacks, intrusions, software misuse, configuration problems, application
errors, malware, rootkits, system anomalies or security policy violations. OSSEC provides an out-of-the-box set of
rules that we update by modifying them or including new ones, in order to increase OSSEC detection capabilities.
In the ruleset repository you will find:
• OSSEC out-of-the-box rule/rootcheck updates and compliance mapping: We update and maintain the out-of-
the-box rules provided by OSSEC, both to eliminate false positives and to increase their accuracy. In
addition, we map those rules to PCI-DSS compliance controls, making it easy to identify when an alert is
related to a compliance requirement.
• New rules/rootchecks: The default set of OSSEC rules and decoders is limited. For this reason, we centralize,
test and maintain decoders and rules submitted by Open Source contributors. We also create new rules
and rootchecks periodically and add them to this repository so they can be used by the user community.
Some examples are the new rules for Netscaler and Puppet.
Resources
• Visit our repository to view the rules in detail at Github OSSEC Wazuh Ruleset
• Find a complete description of the available rules: OSSEC Wazuh Ruleset Summary
Log analysis rule for Netscaler with PCI DSS compliance mapping:
<rule id="80102" level="10" frequency="6">
    <if_matched_sid>80101</if_matched_sid>
    <same_source_ip />
    ...
</rule>
Rootcheck rule for SSH Server with mapping to CIS security benchmark and PCI DSS compliance:
[CIS - Debian Linux - 2.3 - SSH Configuration - Empty passwords permitted {CIS: 2.3 Debian Linux} {PCI_DSS: 4.1}] [any] [http://www.ossec.net/wiki/index.php/CIS_DebianLinux]
Manual installation
In the Github repository you will find two different kinds of rules under the ossec-rules/rules-decoders/
directory:
These rules can be found under the ossec-rules/rules-decoders/ossec directory, and you can install them
manually following these steps:
If you do not use the OSSEC Wazuh fork, after the above steps copy the decoders
ossec/decoders/compatibility/*_decoders.xml to /var/ossec/etc/ossec_decoders/.
These rules are located at ossec-rules/rules-decoders/software (where software is the name of your log
message source) and can be installed manually following these steps.
Copy the new rule files into the OSSEC directories and add the new rules file to the ossec.conf configuration file:
- If there are additional instructions to install these rules and decoders, you will find them in an instructions.md file in the same directory.
Decoder paths
Configure the decoder paths by adding the following lines after the <rules> tag in /var/ossec/etc/ossec.conf:
<decoder_dir>etc/ossec_decoders</decoder_dir>
<decoder>etc/local_decoder.xml</decoder>
<decoder_dir>etc/wazuh_decoders</decoder_dir>
If you do not use the OSSEC Wazuh fork, you must move the file decoder.xml to the directory
etc/ossec_decoders. Also, if you do not use local_decoder.xml, remove that line from ossec.conf. Remember
that local_decoder.xml cannot be empty.
Rootcheck rules
Rootchecks can be found in the ossec-rules/rootcheck/ directory. There you will find both updated out-of-the-box OSSEC rootchecks and new ones.
To install a rootcheck file, go to your OSSEC manager and copy the .txt file to /var/ossec/etc/shared/. Then modify /var/ossec/etc/ossec.conf by adding the path to the .txt file inside the <rootcheck> section.
Examples:
- <rootkit_files>/var/ossec/etc/shared/rootkit_files.txt</rootkit_files>
- <system_audit>/var/ossec/etc/shared/cis_rhel5_linux_rcl.txt</system_audit>
- <windows_malware>/var/ossec/etc/shared/win_malware_rcl.txt</windows_malware>
- <windows_audit>/var/ossec/etc/shared/win_audit_rcl.txt</windows_audit>
- <windows_apps>/var/ossec/etc/shared/win_applications_rcl.txt</windows_apps>
Automatic installation
Run the ossec_ruleset.py script to update the OSSEC Wazuh Ruleset without manually changing OSSEC internal files.
Getting the script:
Usage examples
Update decoders/rules/rootchecks:
./ossec_ruleset.py
./ossec_ruleset.py -a
Restore a backup:
Select ruleset:
-r, --rules Update rules
-c, --rootchecks Update rootchecks
*If neither -r nor -c is indicated, both rules and rootchecks will be updated.
Activate:
-a, --activate Prompt an interactive menu for selecting the rules and rootchecks to activate.
*If neither -a nor -A is indicated, NEW rules and rootchecks will NOT be activated.
Restart:
-s, --restart Restart OSSEC when required.
-S, --no-restart Do not restart OSSEC when required.
Backups:
-b, --backups Restore backups. Use 'list' to show the available backups.
Additional Params:
-f, --force-update Force an update of all rules and rootchecks. By default, only new or changed rules/rootchecks are updated.
Keep your OSSEC Wazuh Ruleset installation up to date by adding a crontab job to your system that runs ossec_ruleset.py weekly.
Run sudo crontab -e and, at the end of the file, add the following line:
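For example, an entry like the following would run the script every week (the path to where you cloned the ossec-rules repository is an assumption; adjust it to your setup):

```console
@weekly /usr/bin/python /opt/ossec-rules/ossec_ruleset.py -s
```

The -s flag lets the script restart OSSEC when the update requires it.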
Wazuh rules
Netscaler
NetScaler is a network appliance (or hardware device) manufactured by Citrix, whose primary role is to provide Layer 4 load balancing. It also supports firewall, proxy and VPN functions.
Puppet
Puppet is an open-source configuration management utility. After installing the Puppet rules (automatically or manually) you need to perform one additional manual step, because some rules need to read the output of a command.
Copy the code below to /var/ossec/etc/shared/agent.conf on your OSSEC manager to allow OSSEC to execute this command and read its output:
<agent_config>
<localfile>
<log_format>full_command</log_format>
<command>timestamp_puppet=`cat /var/lib/puppet/state/last_run_summary.yaml | grep last_run | cut -d: -f 2 | tr -d '[[:space:]]'`;timestamp_current_date=$(date +"%s");diff_min=$((($timestamp_current_date-$timestamp_puppet)/60));if [ "$diff_min" -le "30" ];then echo "Puppet: OK. It runs in the last 30 minutes";else echo "Puppet: KO. It does not run in the last 30 minutes";fi</command>
<frequency>2100</frequency>
</localfile>
</agent_config>
You must also configure the logcollector option on every agent to accept remote commands from the manager. To do this, add the following line to /var/ossec/etc/internal_options.conf:
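The relevant setting is logcollector.remote_commands, which is disabled by default:

```console
# Accept remote commands from the manager (0 = disabled, 1 = enabled)
logcollector.remote_commands=1
```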
Serv-U
FTP Server software (FTP, FTPS, SFTP, Web & mobile) for secure file transfer and file sharing on Windows & Linux.
Amazon
Before installing our Amazon rules, you need to follow the steps below in order to enable logging through the AWS API and download the JSON data files. A detailed description of each of the steps can be found further below.
1. Turn on CloudTrail.
2. Create a user with permission over S3.
3. Install the AWS CLI in your OSSEC manager.
4. Configure the previous user's credentials with the AWS CLI in your OSSEC manager.
5. Run a Python script to download the gzipped JSON log files and convert them into a flat file.
6. Install the Wazuh Amazon rules.
In this section you will learn how to create a trail for your AWS account. Trails can be created using the AWS
CloudTrail console or the AWS Command Line Interface (AWS CLI). Both methods follow the same steps but we will
be focusing on the first one:
• Turn on CloudTrail. By default, when creating a trail in one region in the CloudTrail console, this one will
apply to all regions.
• Create a new Amazon S3 bucket for storing your log files, or specify an existing bucket where you want your
log files to be stored. By default, log files from all AWS regions in your account will be stored in the bucket you
specified.
S3 bucket names are shared among all Amazon users, so don't worry if you get the error Bucket already exists.
Select a different bucket name., even if you have never created a bucket before.
From now on all your actions in Amazon AWS console will be logged. You can search logs manually inside
CloudTrail/API activity history. Also, notice that every 7 min a .json file will be stored in your bucket.
Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. In
the navigation panel, choose Users and then choose Create New Users. Type the user names for the users you
would like to create. You can create up to five users at one time.
Note: User names can only use a combination of alphanumeric characters and these characters: plus (+), equal (=),
comma (,), period (.), at (@), and hyphen (-). Names must be unique within an account.
The users require access to the API. For this they must have access keys. To generate access keys for new users, select
Generate an access key for each user and choose Create.
(Optional) To view users’ access keys (access key IDs and secret access keys), choose Show User Security
Credentials. To save the access keys, choose Download Credentials and then save the file to a safe location
on your computer.
Warning: This is your only opportunity to view or download the secret access keys, and you must provide this
information to your users before they can use the AWS Console. If you don’t download and save them now, you
will need to create new access keys for the users later. Save the new user’s access key ID and secret access key in
a safe and secure place. You will not have access to the secret access keys again after this step.
Give the user(s) permission to manage security policies, press Attach Policy and select
AmazonS3FullAccess policy.
To download and process the Amazon AWS logs that are already archived in the S3 bucket, we need to install the AWS CLI on your system and configure it for use with AWS.
The AWS CLI comes pre-installed on the Amazon Linux AMI. Run $ sudo yum update after connecting to the instance to get the latest version of the package available via yum. If you need a more recent version of the AWS CLI than the one available in the Amazon updates repository, uninstall the package with $ sudo yum remove aws-cli and then install it using pip as follows.
Prerequisites for AWS CLI Using Pip
$ python --version
If Python 2.7 or later is not installed, install it with your distribution's package manager. The command and package name vary:
• On Debian derivatives such as Ubuntu, use APT:
Open a command prompt or shell and run the following command to verify that Python has been installed correctly:
$ python --version
Python 2.7.9
$ curl -O https://bootstrap.pypa.io/get-pip.py
Now that we have Python and pip installed, use pip to install the AWS CLI:
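For instance, using the get-pip.py bootstrap script downloaded above:

```console
$ sudo python get-pip.py
$ sudo pip install awscli
```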
Note: If you installed a new version of Python alongside an older version that came with your distribution, or updated pip to the latest version, you may get the error sudo: pip: command not found when trying to invoke pip with sudo. You can work around this issue by using which pip to locate the executable, and then invoking it directly by an absolute path when installing the AWS CLI:
$ which pip
/usr/local/bin/pip
$ sudo /usr/local/bin/pip install awscli
This command is interactive, prompting you to enter additional information. Enter each of your access keys in turn and press Enter. The region name is not necessary, so press Enter, and press Enter once again to skip the output format setting. The last Enter is shown as replaceable text because there is no user input for that line.
The result should be something like this:
To download the JSON files from the S3 bucket and convert them into a flat file to be used with OSSEC, we use a Python script written by Xavier Mertens (@xme) with minor modifications made by Wazuh. The script is located in our repository at wazuh/ossec-rules/tools/amazon/getawslog.py.
The command to use this script is:
Where s3bucketname is the name of the bucket created when CloudTrail was activated and /var/log/amazon/amazon.log is the path where the log is stored after being converted by the script.
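A typical invocation could look like the sketch below; the exact flags are an assumption based on the script's options, so run ./getawslog.py -h to confirm them for your version:

```console
$ ./getawslog.py -b s3bucketname -d -j -D -l /var/log/amazon/amazon.log
```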
Note: In case you don't want to use an existing folder, the folder where the log is stored needs to be created manually before running the script.
CloudTrail delivers log files to your S3 bucket approximately every 5 minutes. CloudTrail does not deliver log files if no API calls are made on your account, so you can run the script every 5 minutes or more by adding a crontab job to your system.
Note: If after executing the first time getawslog.py the result is:
Traceback (most recent call last):
File "/root/script/getawslog.py", line 16, in <module>
import boto
ImportError: No module named boto
To work around this issue, install the boto module with this command: $ sudo pip install boto
Run vi /etc/crontab and, at the end of the file, add the following line
Note: This script downloads and deletes the files from your S3 Bucket, but you can always review the last 7 days logs
through CloudTrail.
To install Wazuh Amazon rules follow either the Automatic installation section or Manual installation section in this
guide.
If you have created new rules, decoders or rootchecks and you would like to contribute to our repository, please fork
our Github repository and submit a pull request.
If you are not familiar with Github, you can also share them through our users mailing list, which you can subscribe to by sending an email to wazuh+subscribe@googlegroups.com. Also, do not hesitate to request new rules or rootchecks that you would like to see running in OSSEC, and our team will do its best to make it happen.
Note: In our repository you will find that most of the rules contain one or more groups called pci_dss_X. This
is the PCI DSS control related to the rule. We have produced a document that can help you tag each rule with its
corresponding PCI requirement: http://www.wazuh.com/resources/PCI_Tagging.pdf
What’s next
Once you have your ruleset up to date, we encourage you to move forward and try out the ELK integration or the RESTful API; check them out at:
• ELK Stack integration guide
• OSSEC Wazuh RESTful API installation Guide
Docker installation
Docker requires a 64-bit installation regardless of your CentOS or Debian version. Also, your kernel must be at least version 3.10.
To check your current kernel version, open a terminal and use uname -r to display your kernel version:
$ uname -r
3.10.0-229.el7.x86_64
Note: These Docker containers are based on “xetus-oss” dockerfiles, which can be found at https://github.com/
xetus-oss/docker-ossec-server. We created our own fork, which we test and maintain. Thank you Terence Kent for
your contribution to the community.
If you would like to use Docker as a non-root user, you should now consider adding your user to the “docker” group
with something like:
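For example, assuming your user name is ubuntu:

```console
$ sudo usermod -aG docker ubuntu
```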
Note: Remember that you will have to log out and back in for this to take effect!
OSSEC-ELK Container
These Docker container source files can be found in our ossec-wazuh Github repository. It includes both an OSSEC
manager and an Elasticsearch single-node cluster, with Logstash and Kibana. You can find more information on how
these components work together in our documentation.
To install the ossec-elk container run this command:
The /var/ossec/data directory allows the container to be replaced without configuration or data loss: logs, etc, stats, rules, and queue (all OSSEC files). In addition to those directories, the bin/.process_list file is symlinked to process_list in the data volume.
Other available configuration parameters are:
• AUTO_ENROLLMENT_ENABLED: Specifies whether or not to enable auto-enrollment via ossec-authd. De-
faults to true.
• AUTHD_OPTIONS: Options passed to ossec-authd, other than -p and -g. No default.
• SYSLOG_FORWADING_ENABLED: Specifies whether Syslog forwarding is enabled or not. Defaults to
false.
• SYSLOG_FORWARDING_SERVER_IP: The IP address for the Syslog server. No default.
• SYSLOG_FORWARDING_SERVER_PORT: The destination port for Syslog messages. Default is 514.
• SYSLOG_FORWARDING_FORMAT: The Syslog message format to use. Default is default.
Note: All SYSLOG configuration variables are only applicable to the first time setup. Once the container’s data
volume has been initialized, all the configuration options for OSSEC can be changed.
Note: You can also use agents auto enrollment with ossec-authd
Access to Kibana 4.5
If you get an error the first time you log in to Kibana, move to a different menu and return to Discover; it should then work properly.
Note: Some dashboard visualizations require time and specific alerts to work. Please don't worry if some visualizations do not display data immediately after the import.
These Docker container source files can be found in our ossec-server Github repository. To install it run this command:
The /var/ossec/data directory allows the container to be replaced without configuration or data loss: logs, etc, stats, rules, and queue. In addition to those directories, the bin/.process_list file is symlinked to process_list in the data volume.
Other available configuration parameters are:
• AUTO_ENROLLMENT_ENABLED: Specifies whether or not to enable auto-enrollment via ossec-authd. De-
faults to true.
• AUTHD_OPTIONS: Options passed to ossec-authd, other than -p and -g. No default.
• SYSLOG_FORWADING_ENABLED: Specifies whether Syslog forwarding is enabled or not. Defaults to
false.
• SYSLOG_FORWARDING_SERVER_IP: The IP address for the Syslog server. No default.
• SYSLOG_FORWARDING_SERVER_PORT: The destination port for Syslog messages. Default is 514.
• SYSLOG_FORWARDING_FORMAT: The Syslog message format to use. Default is default.
• SMTP_ENABLED: Whether or not to enable SMTP notifications. Defaults to true if ALERTS_TO_EMAIL
is specified, otherwise defaults to false.
• SMTP_RELAY_HOST: The relay host for SMTP messages, required for SMTP notifications. This host must
support non-authenticated SMTP. No default.
• ALERTS_FROM_EMAIL: The email address the alerts should come from. Defaults to ossec@$HOSTNAME.
• ALERTS_TO_EMAIL: The destination email address for SMTP notifications, required for SMTP notifications.
No default.
Note: All SMTP and SYSLOG configuration variables are only applicable for the first time setup. Once the con-
tainer’s data volume has been initialized, all the configuration options for OSSEC can be changed.
Once the system starts up, you can execute the standard OSSEC commands using docker. For example, to list active
agents:
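For example, assuming the container was started with the name ossec-server, the standard agent_control tool can be reached through docker exec:

```console
$ docker exec ossec-server /var/ossec/bin/agent_control -l
```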
Before we get started with Puppet, check the following network requirements:
• Private network DNS: Forward and reverse DNS must be configured, and every server must have a unique
hostname. If you do not have DNS configured, you must use your hosts file for name resolution. We will
assume that you will use your private network for communication within your infrastructure.
• Firewall open ports: The Puppet master must be reachable on port 8140.
Installation on CentOS
Install the Yum repository and the puppet-server package for your Enterprise Linux distribution. For example, for EL7:
Installation on Debian
To install your Puppet master on Debian/Ubuntu systems, we first need to add our distribution repository. This can be done by downloading and installing a package named puppetlabs-release-distribution.deb, where "distribution" needs to be substituted by your distribution codename (e.g. wheezy, jessie, trusty, utopic). See below the commands to install the Puppet master package for a "trusty" distribution:
$ wget https://apt.puppetlabs.com/puppetlabs-release-pc1-trusty.deb
$ sudo dpkg -i puppetlabs-release-pc1-trusty.deb
$ sudo apt-get update && sudo apt-get install puppetserver
Memory Allocation
By default, Puppet Server is configured to use 2GB of RAM. However, if you want to experiment with Puppet Server on a VM, you can safely allocate as little as 512MB of memory. To change the Puppet Server memory allocation, edit the init config file.
• /etc/sysconfig/puppetserver – RHEL
• /etc/default/puppetserver – Debian
Replace 2g with the amount of memory you want to allocate to Puppet Server. For example, to allocate 1GB of
memory, use JAVA_ARGS="-Xms1g -Xmx1g"; for 512MB, use JAVA_ARGS="-Xms512m -Xmx512m".
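For example, on Debian the line in /etc/default/puppetserver would end up as:

```console
JAVA_ARGS="-Xms512m -Xmx512m"
```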
Configuration
[main]
dns_alt_names = puppet,puppet.example.com
Note: If found in the configuration file, remove the line templatedir=$confdir/templates, which has been
deprecated.
PuppetDB installation
After configuring your Puppet master to run on Apache with Passenger, the next step is to add PuppetDB so that you can take advantage of exported resources, as well as have a central storage place for Puppet facts and catalogs.
Installation on CentOS
Installation on Debian
Configuration
The next step is to edit pg_hba.conf and change the METHOD to md5 in the following two lines:
/var/lib/pgsql/9.4/data/pg_hba.conf -- CentOS
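After the change, the two local connection entries should look similar to this (the surrounding layout of a default pg_hba.conf is assumed):

```console
host    all    all    127.0.0.1/32    md5
host    all    all    ::1/128         md5
```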
The user is created so that it cannot create databases (-D) or roles (-R) and doesn't have superuser privileges (-S). It will prompt for a password (-P). Let's assume a password of "yourpassword" has been used. The database is created and owned (-O) by the puppetdb user.
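The commands behind that description could be sketched as follows, run as the postgres system user ("yourpassword" is entered at the -P prompt):

```console
$ sudo -u postgres createuser -DRSP puppetdb
$ sudo -u postgres createdb -O puppetdb puppetdb
```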
Test the database access and create the extension pg_trgm:
# psql -h 127.0.0.1 -p 5432 -U puppetdb -W puppetdb
Password for user puppetdb:
psql (8.4.13)
Type "help" for help.
Configure /etc/puppetlabs/puppetdb/conf.d/database.ini:
[database]
classname = org.postgresql.Driver
subprotocol = postgresql
subname = //127.0.0.1:5432/puppetdb
username = puppetdb
password = yourpassword
log-slow-statements = 10
Create /etc/puppetlabs/puppet/puppetdb.conf:
[main]
server_urls = https://puppetdb.example.com:8081
Create /etc/puppetlabs/puppet/routes.yaml:
---
master:
facts:
terminus: puppetdb
cache: yaml
[master]
storeconfigs = true
storeconfigs_backend = puppetdb
Once all steps are completed, restart your Puppet master and run puppet agent --test:
In this section we assume you have already installed APT and Yum Puppet repositories.
Installation on CentOS
Installation on Debian
Configuration
Add the server value to the [main] section of the node's /etc/puppet/puppet.conf file, replacing puppet.example.com with your Puppet master's FQDN:
[main]
server = puppet.example.com
Puppet certificates
Run Puppet agent to generate a certificate for the Puppet master to sign:
Log in to your Puppet master, and list the certificates that need approval:
Back on the Puppet agent node, run the puppet agent again:
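The whole exchange can be sketched with the standard Puppet commands (agent.example.com is a placeholder for your agent's certname):

```console
agent$  puppet agent --test            # generates a CSR and sends it to the master
master$ puppet cert list               # lists pending certificate requests
master$ puppet cert sign agent.example.com
agent$  puppet agent --test            # retrieves the signed certificate and a catalog
```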
Note: Remember that the private network DNS is a prerequisite for correct certificate signing.
Note: This Puppet module has been authored by Nicolas Zin, and updated by Jonathan Gazeley and Michael Porter.
Wazuh has forked it with the purpose of maintaining it. Thank you to the authors for the contribution.
This module installs and configures OSSEC HIDS agent and manager.
The manager is configured by installing the ossec::server class, optionally using:
• ossec::command: to define an active response command (such as firewall-drop.sh).
• ossec::activeresponse: to link rules to active response commands.
• ossec::addlog: to define additional log files to monitor.
Example
node "server.yourhost.com" {
class { 'ossec::server':
mailserver_ip => 'localhost',
}
ossec::command { 'firewallblock':
command_name => 'firewall-drop',
command_executable => 'firewall-drop.sh',
command_expect => 'srcip'
}
ossec::activeresponse { 'blockWebattack':
command_name => 'firewall-drop',
ar_level => 9,
ar_rules_id => [31153,31151],
ar_repeated_offenders => '30,60,120'
}
ossec::addlog { 'monitorLogFile':
logfile => '/var/log/secure',
logtype => 'syslog'
}
class { '::mysql::server':
root_password => 'yourpassword',
remove_default_accounts => true,
}
mysql::db { 'ossec':
user => 'ossec',
password => 'yourpassword',
host => 'localhost',
grant => ['ALL'],
sql => '/var/ossec/contrib/sqlschema/mysql.schema'
}
}
OSSEC agent:
node "client.yourhost.com" {
class { "ossec::client":
ossec_server_ip => "192.168.209.166"
}
}
Reference
class ossec::server
function ossec::command
• $command_name: Human readable name for ossec::activeresponse usage.
• $command_executable: Name of the executable. OSSEC comes preloaded with
disable-account.sh, host-deny.sh, ipfw.sh, pf.sh, route-null.sh,
firewall-drop.sh, ipfw_mac.sh, ossec-tweeter.sh, restart-ossec.sh.
• $command_expect (default: srcip).
• $timeout_allowed (default: true).
function ossec::activeresponse
• $command_name.
• $ar_location (default: local): It can be set to local, server, defined-agent, or all.
• $ar_level (default: 7): Can take values between 0 and 16.
• $ar_rules_id (default: []): List of rules ID.
• $ar_timeout (default: 300): Usually an active response blocks for a certain amount of time.
• $ar_repeated_offenders (default: empty): A comma separated list of increasing timeouts in min-
utes for repeat offenders. There can be a maximum of 5 entries.
function ossec::addlog
• $log_name.
• $agent_log: (default: false)
• $logfile /path/to/log/file.
• $logtype (default: syslog): The OSSEC log_format of the file.
ossec_scanpaths configuration
Leaving this unconfigured will result in OSSEC using the module defaults. By default, it will monitor /etc, /usr/bin, /usr/sbin, /bin and /sbin on the OSSEC server, with real-time monitoring disabled and report_changes enabled.
To overwrite the defaults or add in new paths to scan, you can use hiera to overwrite the defaults.
To tell OSSEC to enable real time monitoring of the default paths:
ossec::server::ossec_scanpaths:
  - path: /etc
    report_changes: 'no'
    realtime: 'no'
  - path: /usr/bin
    report_changes: 'no'
    realtime: 'no'
  - path: /usr/sbin
    report_changes: 'no'
    realtime: 'no'
  - path: /bin
    report_changes: 'yes'
    realtime: 'yes'
  - path: /sbin
    report_changes: 'yes'
    realtime: 'yes'
Note: Configuring the ossec_scanpaths variable will overwrite the defaults, i.e. if you want to add a new directory to monitor, you must also include the default paths above.
This section provides instructions to integrate OSSEC with Amazon AWS. It also explains different use cases as examples of how the rules developed by Wazuh can be used to alert on specific events. In our Github repository there are rules for the IAM, EC2 and VPC services.
The diagram below explains how a log message, generated by an AWS event, flows until it arrives at the OSSEC agent. Once the agent reads the message, it sends it to the OSSEC manager, which performs the analysis using the rules. When a rule matches, an alert is triggered (if the level is high enough).
1. CloudTrail is a web service that records AWS API calls for your account and delivers log files. This means that when an AWS event occurs, CloudTrail generates the log message. Using CloudTrail we can get more visibility into AWS user activity, tracking changes made to AWS resources.
2. Once an event takes place, CloudTrail delivers the log message to Amazon S3, writing it to a log file. S3 allows
log files to be stored durably and inexpensively.
3. The script getawslog.py downloads the log files from Amazon S3 to the OSSEC agent, uncompressing them and appending the new data to a local plain-text file.
This diagram makes it easier to understand the integration process described below.
Prior to the installation of the OSSEC rules for Amazon AWS, follow the steps below in order to enable the AWS API to generate log messages and store them as JSON data files in an Amazon S3 bucket. A detailed description of each of the steps can be found further below.
1. Turn on CloudTrail.
2. Create a user with permission to access S3.
3. Install Python Boto in your OSSEC agent.
4. Configure the previous user's credentials with the AWS CLI in your OSSEC agent.
5. Run the script getawslog.py to download the JSON log files and convert them into flat files.
6. Install the Wazuh Amazon rules.
Turn on CloudTrail
Create a trail for your AWS account. Trails can be created using the AWS CloudTrail console or the AWS Command
Line Interface (AWS CLI). Both methods follow the same steps. In this case we will be focusing on the first one:
• Turn on CloudTrail. Note that, by default, when creating a trail in one region in the CloudTrail console, this
one will apply to all regions.
Warning: Please do not enable the Enable log file validation parameter; it is not supported by the provided Python script.
• Create a new Amazon S3 bucket or specify an existing bucket to store all your log files. By default, log files
from all AWS regions in your account will be stored in the bucket selected.
Note: When naming a new bucket, if you get the error Bucket already exists. Select a different bucket name., try a different name, since the one you selected is already in use by another Amazon AWS user.
From now on, all the events in your Amazon AWS account will be logged. You can search log messages manually
inside CloudTrail/API activity history. Note that every 7 min a JSON file containing new log messages
will be stored in your bucket.
Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/. In
the navigation panel, choose Users and then choose Create New Users. Type the user names for the users you
would like to create.
Note: User names can only use a combination of alphanumeric characters and these characters: plus (+), equal (=),
comma (,), period (.), at (@), and hyphen (-). Names must be unique within an account.
The users require access to the API. For this, they must have access keys. To generate access keys for new users, select Generate an access key for each user and choose Create.
Warning: This is your only opportunity to view or download the secret access keys, and you must provide this
information to your users before they can use the AWS Console. If you don’t download and save them now, you
will need to create new access keys for the users later. You will not have access to the secret access keys again
after this step.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": ["arn:aws:s3:::YOURBUCKETNAME"]
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": ["arn:aws:s3:::YOURBUCKETNAME/*"]
}
]
}
To download and process the Amazon AWS logs that are already archived in the S3 bucket, we need to install Python Boto in the OSSEC agent and configure it to enable the connection with AWS S3.
Prerequisites for Python Boto installation using Pip
• Windows, Linux, OS X, or Unix
• Python 2 version 2.7+ or Python 3 version 3.3+
• Pip
Check if Python is already installed:
$ python --version
If Python 2.7 or later is not installed, install it with your distribution's package manager as shown below:
• On Debian derivatives such as Ubuntu, use APT:
Open a command prompt or shell and run the following command to verify that Python has been installed correctly:
$ python --version
Python 2.7.9
$ curl -O https://bootstrap.pypa.io/get-pip.py
Now that Python and pip are installed, use pip to install boto:
To configure the user credentials you need to create a file called /etc/boto.cfg with the following content:
[Credentials]
aws_access_key_id = <your_access_key_here>
aws_secret_access_key = <your_secret_key_here>
We use a Python script to download the JSON files from the S3 bucket and convert them into flat files that can be used with OSSEC. This script was written by Xavier Mertens (@xme) and contains minor modifications made by Wazuh. It is located in our repository at wazuh/ossec-rules/tools/amazon/getawslog.py.
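The core transformation the script performs can be illustrated with a minimal Python sketch. This is not the actual getawslog.py, just the flattening idea: CloudTrail stores each delivery as a gzipped JSON document with a Records array, and OSSEC wants one JSON object per line in a flat file.

```python
import gzip
import io
import json


def flatten_records(gz_bytes):
    """Decompress a gzipped CloudTrail file and return one JSON string per record."""
    with gzip.GzipFile(fileobj=io.BytesIO(gz_bytes)) as f:
        data = json.load(f)  # the file holds {"Records": [...]}
    return [json.dumps(rec) for rec in data.get("Records", [])]


if __name__ == "__main__":
    # Build a tiny gzipped sample in memory to demonstrate the flattening.
    sample = {"Records": [{"eventName": "RunInstances"},
                          {"eventName": "StopInstances"}]}
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as f:
        f.write(json.dumps(sample).encode())
    for line in flatten_records(buf.getvalue()):
        print(line)
```

Each printed line is a self-contained JSON event, which is what the OSSEC decoders expect to read from the flat log file.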
Run the following command to use this script:
Where s3bucketname is the name of the bucket created when CloudTrail was activated (see the first step in this section: "Turn on CloudTrail") and /path-with-write-permission/amazon.log is the path where the flat log file is stored once it has been converted by the script.
Note: In case you don’t want to use an existing folder, create it manually before running the script.
CloudTrail delivers log files to your S3 bucket approximately every 7 minutes. Run the script by adding a crontab job, and note that running it more frequently than once every 7 minutes would be useless: CloudTrail does not deliver log files if no API calls are made on your account.
Run crontab -e and, at the end of the file, add the following line
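For example, an entry like the following would run the script every 7 minutes (the script path and options are assumptions; adjust them to your setup):

```console
*/7 * * * * /usr/bin/python /var/ossec/getawslog.py -b s3bucketname -d -j -D -l /var/log/amazon/amazon.log
```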
Note: This script downloads and deletes the files from your S3 Bucket. However, you can always review the log
messages generated during the last 7 days through CloudTrail.
To install Wazuh Amazon rules follow either the Automatic installation section or Manual installation section in this
guide.
Use Cases
Our rules focus on providing the desired visibility into the Amazon AWS platform.
The following describes some use cases for IAM, EC2 and VPC services. The structure followed is always the same.
You will see the definition of the rule that matches the log message generated by the AWS event. You can check how this log message flows in the diagram at the beginning of this section. Also, in each of the examples, you will see a screenshot of how Kibana shows the corresponding alert. Remember that an alert is triggered when the log message matches a specific rule, if its level is high enough.
AWS Identity and Access Management (IAM) enables you to securely control access to AWS services and resources
for your users. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny
their access to AWS resources.
Below are some use cases for the Wazuh rules built for IAM.
When we create a new user account in IAM, an AWS event is generated. As per the diagram at the beginning of this section, the log message flows until the OSSEC agent gets the log file and sends it to the OSSEC manager. The latter analyzes the log file and finds that the log message generated by this event matches the rule with id number 80861. Due to this match, an alert is generated and Kibana will show it as seen below.
If the user that is creating a new user account doesn’t have permissions to create new users, then the log message
generated will match the rule id 80862 and Kibana will show the alert as follows:
When a user tries to log in with an invalid password, a new event, and therefore a new log message, will be generated. Once this log message is analyzed by the OSSEC manager, it will match rule id 80802, generating an alert that will be shown in Kibana as follows:
When there are more than 4 failed access attempts in less than 360 seconds, rule id 80803 will apply and an alert will be generated:
<group>amazon,authentication_failures,pci_dss_11.4,pci_dss_10.2.4,pci_dss_10.2.5,</group>
</rule>
Login success
After a successful login, rule id 80801 will match the log message generated by this event and a new alert will be shown in Kibana:
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud.
It is designed to make web-scale cloud computing easier for developers.
Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It
provides you with complete control of your computing resources and lets you run on Amazon’s proven computing
environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing
you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes
the economics of computing by allowing you to pay only for capacity that you actually use. Amazon EC2 provides
developers the tools to build failure resilient applications and isolate themselves from common failure scenarios.
Below are some use cases illustrating the Wazuh rules built for EC2.
When a user runs a new instance in EC2, an AWS event is generated. As per the diagram at the beginning of this
section, the log message flows until the OSSEC agent gets the log file and sends it to the OSSEC manager. The latter
analyzes the log file and finds that the log message generated by this event matches the rule with id number
80301. Due to this match, an alert is generated and Kibana will show it as seen below:
Definition of rule 80301
When a user without permissions tries to run an instance, then the log message will match the rule id 80303
and an alert will be generated as seen below:
When an EC2 instance is started, the log message will match rule id 80305 and an alert will be generated as shown below:
If a user without permission to start instances tries to start one, rule id 80306 will apply and an alert will be generated as shown below:
When an EC2 instance is stopped, rule id 80308 will apply and an alert will be generated as shown below:
If a user without permission to stop instances tries to stop one, rule id 80309 will apply and an alert will be generated as shown below:
When a new security group is created, rule id 80404 will match the log message generated by this event and an alert will be shown as follows:
If a new Elastic IP address is allocated, rule id 80411 will apply:
If an Elastic IP address is associated, rule id 80446 will apply, generating the corresponding alert:
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web
Services (AWS) Cloud where you can launch AWS resources in a virtual network that you define. You have complete
control over your virtual networking environment, including selection of your own IP address range, creation of
subnets, and configuration of route tables and network gateways.
Create VPC
If a VPC is created, rule id 81000 will apply and an alert will be generated as shown below:
Definition of rule 81000
If the user doesn’t have permissions, rule id 81001 will apply:
If you have created new rules, decoders or rootchecks and you would like to contribute to our repository, please fork
our Github repository and submit a pull request.
If you are not familiar with Github, you can also share them through our users mailing list, to which you can subscribe by sending an email to wazuh+subscribe@googlegroups.com. Also, do not hesitate to request new rules or rootchecks that you would like to see running in OSSEC, and our team will do its best to make them happen.
Note: In our repository you will find that most of the rules contain one or more groups called pci_dss_X. This
is the PCI DSS control related to the rule. We have produced a document that can help you tag each rule with its
corresponding PCI requirement: http://www.wazuh.com/resources/PCI_Tagging.pdf
What’s next
Once you have your rules for Amazon AWS up to date, we encourage you to move forward and try out the ELK integration or the RESTful API; check them out at:
• ELK Stack integration guide
• OSSEC Wazuh RESTful API installation Guide
Introduction
The Payment Card Industry Data Security Standard (PCI DSS) is a proprietary information security standard for
organizations that handle branded credit cards from the major card schemes including Visa, MasterCard, American
Express, Discover, and JCB. The standard was created to increase controls around cardholder data to reduce credit
card fraud.
OSSEC helps to implement PCI DSS by performing log analysis, file integrity checking, policy monitoring, intrusion
detection, real-time alerting and active response. This guide (pdf, excel) explains how these capabilities help with each
of the standard requirements.
In the following section we will elaborate on some specific use cases. They explain how to use OSSEC capabilities to
meet the standard requirements.
Log analysis
Here we will use the OSSEC log collection and analysis capabilities to meet the following PCI DSS controls:
• 10.2.4 Invalid logical access attempts
• 10.2.5 Use of and changes to identification and authentication mechanisms—including but not limited to cre-
ation of new accounts and escalation of privileges—and all changes, additions, or deletions to accounts with
root or administrative privileges
These controls require us to log invalid logical access attempts, multiple invalid login attempts (possible brute force
attacks), escalation privileges, changes in accounts, etc. To achieve this, we have added PCI DSS tags to OSSEC log
analysis rules, mapping them to the corresponding requirement. This way, it will be easy to analyze and visualize our
PCI DSS related alerts.
The syntax used for rule tagging is pci_dss_ followed by the number of the requirement. In this case those would be:
pci_dss_10.2.4 and pci_dss_10.2.5.
See below examples of OSSEC rules tagged for PCI requirements 10.2.4 and 10.2.5:
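As an illustration, a tagged rule could look like the sketch below, modeled on the stock OSSEC sshd authentication-failure rule (the id, match pattern, and level follow the standard ruleset, but treat the exact values as illustrative):

```xml
<!-- Illustrative sketch: sshd authentication failure, tagged for PCI DSS -->
<rule id="5716" level="5">
  <if_sid>5700</if_sid>
  <match>^Failed|^error: PAM: Authentication</match>
  <description>SSHD authentication failed.</description>
  <group>authentication_failed,pci_dss_10.2.4,pci_dss_10.2.5,</group>
</rule>
```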
Use cases
In this scenario, we try to open the file cardholder_data.txt. Since our current user doesn’t have read access to the file,
we run sudo to elevate privileges.
Using sudo log analysis decoder and rules, OSSEC will generate an alert for this particular action. Since we have
JSON output enabled, we can see the alert in both files alerts.log and alerts.json. Using the rule tags we can also see
which PCI DSS requirements are specifically related to this alert.
Kibana displays information in an organized way, allowing filtering by different types of alert fields, including compliance controls. We have also developed specific dashboards to display the PCI DSS related alerts.
The OSSEC rootcheck module can be used to enforce and monitor your security policy. This is the process of verifying
that all systems conform to a set of pre-defined rules surrounding configuration settings and approved application
usage.
There are several PCI DSS requirements to verify that systems are properly hardened. An example would be:
2.2 Develop configuration standards for all system components. Assure that these standards address all known security
vulnerabilities and are consistent with industry-accepted system hardening standards. Sources of industry-accepted
system hardening standards may include, but are not limited to: Center for Internet Security (CIS), International Orga-
nization for Standardization (ISO), SysAdmin Audit Network Security (SANS) Institute, National Institute of Standards Technology (NIST).
OSSEC includes out-of-the-box CIS baselines for Debian and Red Hat, and baselines for other systems or applications can be created just by adding the corresponding rootcheck file:
<rootcheck>
<system_audit>/var/ossec/etc/shared/cis_debian_linux_rcl.txt</system_audit>
<system_audit>/var/ossec/etc/shared/cis_rhel_linux_rcl.txt</system_audit>
<system_audit>/var/ossec/etc/shared/cis_rhel5_linux_rcl.txt</system_audit>
</rootcheck>
Other PCI DSS requirements will ask us to check that applications (especially network services) are configured in a
secure way. One example is the following control:
2.2.4 Configure system security parameters to prevent misuse.
The following are good examples of rootcheck rules developed to check the configuration of SSH services:
In our OSSEC Wazuh fork, the rootcheck rules use this syntax in the rootcheck name: {PCI_DSS: X.Y.Z}, meaning that all rootchecks already carry the PCI DSS requirement tag.
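A hypothetical system_audit check following that naming convention might look like this (the check name and conditions are illustrative; the real system_audit_ssh file in the Wazuh ruleset is the reference):

```
# Hypothetical sketch of a system_audit entry tagged for PCI DSS 2.2.4.
# Alerts if PermitRootLogin is explicitly enabled in sshd_config.
[SSH Hardening: Root login allowed {PCI_DSS: 2.2.4}] [any] [1]
f:/etc/ssh/sshd_config -> !r:^# && r:PermitRootLogin\.+yes;
```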
Use cases
In order to check the security parameters of SSH (and meet requirement 2.2.4), we have developed the system_audit_ssh rootcheck. In our example, when OSSEC runs the rootcheck scan, it detects some errors in the SSH configuration.
Rootkit and trojan detection is performed using two files: rootkit_files.txt and rootkit_trojans.txt. Some additional tests are also performed to detect kernel-level rootkits. You can enable these capabilities by adding the files to ossec.conf:
<rootcheck>
<rootkit_files>/var/ossec/etc/shared/rootkit_files.txt</rootkit_files>
<rootkit_trojans>/var/ossec/etc/shared/rootkit_trojans.txt</rootkit_trojans>
</rootcheck>
Use cases
OSSEC performs several tests to detect rootkits; one of them is to check for hidden files in /dev. The /dev directory
should only contain device-specific files such as the primary IDE hard disk (/dev/hda), the kernel random number
generators (/dev/random and /dev/urandom), etc. Any additional files, outside of the expected device-specific files,
should be inspected because many rootkits use /dev as a storage partition to hide files. In the following example we
have created the file .hid which is detected by OSSEC and generates the corresponding alert.
File integrity monitoring (syscheck) is performed by comparing the cryptographic checksum of a known good file
against the checksum of the file after it has been modified. The OSSEC agent scans the system at an interval you
specify, and it sends the checksums of the monitored files and registry keys (for Windows systems) to the OSSEC
server. The server stores the checksums and looks for modifications by comparing the newly received checksums
against the historical checksum values of that file or registry key. An alert is sent if anything changes.
Syscheck can be used to meet PCI DSS requirement 11.5:
11.5 Deploy a change-detection mechanism (for example, file-integrity monitoring tools) to alert personnel to unautho-
rized modification (including changes, additions, and deletions) of critical system files, configuration files, or content
files; and configure the software to perform critical file comparisons at least weekly.
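The comparison interval is controlled by the syscheck frequency option, expressed in seconds; for example, a daily scan comfortably meets the weekly minimum of requirement 11.5:

```xml
<syscheck>
  <!-- Scan interval in seconds: 86400 = once a day,
       well within the weekly minimum of PCI DSS 11.5 -->
  <frequency>86400</frequency>
</syscheck>
```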
Use cases
In this example, we have configured OSSEC to detect changes in the file /home/credit_cards.
<syscheck>
<directories check_all="yes">/home/credit_cards</directories>
</syscheck>
As you can see, syscheck alerts are tagged with the requirement 11.5.
Active response
Although active response is not explicitly discussed in PCI DSS, it is important to mention that an automated remedi-
ation to security violations and threats is a powerful tool that reduces the risk. Active response allows a scripted action
to be performed whenever a rule matches in your OSSEC ruleset. Remedial action could be a firewall block/drop, traffic
shaping or throttling, account lockout, etc.
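As a sketch, blocking an offending source IP could be wired up in ossec.conf as follows (rule id 80803 is used here purely for illustration; firewall-drop is one of the response scripts shipped with OSSEC):

```xml
<!-- Command definition: the bundled firewall-drop script,
     which expects the source IP extracted from the alert -->
<command>
  <name>firewall-drop</name>
  <executable>firewall-drop.sh</executable>
  <expect>srcip</expect>
</command>

<!-- Trigger: run the command locally when rule 80803 fires,
     and undo the block after 600 seconds -->
<active-response>
  <command>firewall-drop</command>
  <location>local</location>
  <rules_id>80803</rules_id>
  <timeout>600</timeout>
</active-response>
```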
ELK
OSSEC Wazuh integration with ELK Stack comes with out-of-the-box dashboards for PCI DSS compliance and CIS
benchmarking. You can do forensic and historical analysis of the alerts and store your data for several years, in a
reliable and scalable platform.
The following requirements can be met with a combination of OSSEC + ELK Stack:
• 10.5 Secure audit trails so they cannot be altered.
• 10.6.1 Review the following at least daily: All security events, Logs of all critical system components, etc.
• 10.7 Retain audit trail history for at least one year, with a minimum of three months immediately available for analysis.
What’s next
Once you know how OSSEC can help with PCI DSS, we encourage you to move forward and try out the ELK integration or the OSSEC Wazuh ruleset; check them out at:
• ELK Stack integration guide
• OSSEC Wazuh Ruleset