
Balage Tech blog

Tutorials and guides about *nix software

Monitor home network traffic with OpenWRT and Syslog-ng

(Last Updated On: 08/16/2019)
I wanted to see what happens on my home network. Is there something going on I should be aware of? Is
there any device which creates suspicious connections, like phoning home? I will use OpenWRT and syslog-ng
to get the answers, and Elasticsearch to get analytics.

SOHO routers are usually not really resourceful, and neither is mine. Therefore I needed a solution that uses
as little resource as possible but is still capable of answering those questions. My solution uses connection
tracking data from the main OpenWRT router: offload the information from OpenWRT to a central syslog
server, enrich it with GeoIP, Reverse DNS and session length metadata by using syslog-ng, then analyze the
logs with Elasticsearch.

The first part of this blog series answers where the packets come from and go to, along with some metrics.
What is inside the packets is a topic for other posts.


Logging connection tracking data with OpenWRT and syslog-ng

My original idea was to log the SYN and ACK/FIN packets with iptables on the FORWARD chain and correlate
them. However, it did not work as I planned. Although the most important data is included in the syslog
messages (network source, destination, port numbers), the logs cannot easily be correlated with each other
to get session data, because there is no common identifier in iptables logs that would be unique to a given
connection. (Stream ID would be, but it is encoded in TCP options.) Logging all packets would simply kill the
performance, so it is not worth it. I needed an alternative solution.

Finding an essay about Flow-based network accounting in Linux turned the tide. I realized that
'nf_conntrack' in the Linux kernel (netfilter) actually keeps track of every connection throughout its lifetime
(even UDP, which is stateless). I only needed a tool to get that data off the OpenWRT router, preferably to
syslog-ng. The essay mentioned many tools, but ulogd looked the most promising.

Ulogd is capable of logging connection tracking data to the local syslog. The following example shows a NEW
and a DESTROY event of a specific connection logged by ulogd.

Mar 13 15:03:57 openwrt ulogd[21765]: [NEW] ORIG: SRC=172.18.0.227 DST=1.2.3.4 PROTO=TCP


Mar 13 15:09:00 openwrt ulogd[21765]: [DESTROY] ORIG: SRC=172.18.0.227 DST=1.2.3.4 PROTO=

Note: 1.2.3.4 represents a website, while 5.6.7.8 represents the public IP of my home network.

Configuring ulogd on OpenWRT to send conntrack events to syslog-ng

My OpenWRT systems already send their syslog to a remote central syslog server. OpenWRT uses logread
to send syslog messages remotely. The remote server runs syslog-ng. Therefore I only have to configure ulogd
to send the connection tracking events to the local syslog instead of a file.

Fortunately ulogd can send the events to many destinations. I found a post about logging connection
tracking events with ulogd, and it helped me to configure the service properly.
On the following link you can find the complete configuration of ulogd I created for OpenWRT 18.06.
Nevertheless, I describe the details below as well.


1. First you have to install ulogd and some of its modules. You can do this either in LuCI or on the
command line.

root@openwrt:~# opkg update \
  && opkg install ulogd ulogd-mod-nfct ulogd-mod-syslog ulogd-mod-extra

2. The configuration of ulogd uses an INI-style syntax. Two sections will be important for us, [global] and
[ct1].
In the [global] section, after the list of plugins, there are presets of stacks. Stacks are lists of plugins, and
they work like commands piped together. There are input plugins, filter plugins and output plugins.
3. Look for the comment below. We are going to adjust the stack belonging to that comment like this.

# this is a stack for flow-based logging via LOGEMU


stack=ct1:NFCT,ip2str1:IP2STR,print1:PRINTFLOW,sys1:SYSLOG

4. Look for the section called [ct1]. We are adding a new configuration element called hash_enable.
Disabling hashes makes ulogd log both NEW and DESTROY events separately. Otherwise it would
only log DESTROY events. Although DESTROY contains everything, we need the NEW events as well
because of their timestamps: we will make use of them for building session metadata.

[ct1]
hash_enable=0

5. You can check the configuration by starting ulogd manually.

root@openwrt:~# ulogd -v
Mon Mar 11 15:42:51 2019 <5> ulogd.c:843 building new pluginstance stack: 'ct1:NFCT,
Mon Mar 11 15:42:51 2019 <5> ulogd_inpflow_NFCT.c:1399 NFCT plugin working in event

6. The last step is to enable the service and start it.

root@openwrt:~# /etc/init.d/ulogd enable


root@openwrt:~# /etc/init.d/ulogd start

That is all you have to do to make OpenWRT send its connection tracking events to syslog-ng.
Processing ulogd log messages from OpenWRT with syslog-ng

Note: the complete syslog-ng configuration can be found on GitHub.

My goal is to parse and enrich the log messages with metadata. Syslog-ng provides many parsers to use out
of the box. Here is a short overview of the parsers I use in this case.

Metadata we add to logs and the syslog-ng parser providing it:

- separate upstream and downstream metrics: csv-parser()
- parsing key-value pairs from all streams: kv-parser()
- creating session start, session end and length: grouping-by()
- GeoIP metadata: geoip2()
- Reverse DNS for clients and servers: python()

Some parsers can be chained together in a single parser{} block. They behave like commands piped
together in a Linux shell. One parser’s output will feed the input of the next parser. Thus their order is
important.

The first block of parsers looks like this. Please refer to the ulogd log samples above to get an idea of why I
do this. I also added comments to the config for clarification.

parser {
    # Split a single log line into two parts around the comma character
    # Name the first part as ORIG and the second as REPLY
    csv-parser(
        columns(ORIG, REPLY)
        delimiters(chars(","))
    );
    # Parse the part of the log message available in macro ${ORIG}
    # the separator is the = character
    # Prefix it with "outbound."
    kv-parser(
        prefix("outbound.")
        template("${ORIG}")
    );
    # Do the same with the other part of the log message
    kv-parser(
        prefix("inbound.")
        template("${REPLY}")
    );
    # Look up GeoIP data for the destination (server)
    geoip2(
        "${outbound.destination.ip}",
        prefix( "geo." )
        database( "/etc/syslog-ng/GeoLite2-City.mmdb" )
    );
};

Further parsers like grouping-by() and python() will be discussed later.

Correlating log messages from OpenWRT with syslog-ng

The central syslog server receives two types of log messages for each connection: one from the NEW event
and another from the DESTROY event. These two mark the beginning and the end of a session. I will use the
grouping-by() parser to correlate these messages into one context and get the session length metadata. The
admin guide has a flow chart about how messages are added to the context and how it gets terminated. You
may want to read that in advance.

The parser uses key() and scope() to build up a context and identify which messages need to be added to
the context.

Specifying key() requires already parsed data. My setup can be translated to this: "Messages containing
the same SRC, DST, SPT and DPT values in the ORIG part of the message belong to the same connection,
provided they come from the same host."

parser p_correlate_session_data {
grouping-by(
key("${outbound.source.ip}/${outbound.destination.ip}/${outbound.source.ip}/${out
scope("host")
where(match("ORIG" value("MESSAGE")))
trigger(match("DESTROY" value("MESSAGE")))
having( "${UNIXTIME}@2" ne "1" )
aggregate(
value("event.start" "${ISODATE}@2")
value("event.end" "${ISODATE}@1")
value("event.duration", "$(- ${UNIXTIME}@1 ${UNIXTIME}@2)")
value("MESSAGE" "Session completed; client='${outbound.source.ip}'; server='$
inherit-mode("context")
)
inject-mode("pass-through")
# destroy events sometimes arrive later than 2 minutes, even when a client app is
timeout(600)
);
};

The context will be closed and evaluated either when a message arrives which matches the filter specified
in trigger() or when the timeout() occurs.
Important! The timeout is currently set to 10 minutes. Connections longer than 10 minutes will be capped at 10
minutes in Elasticsearch.

The evaluation aggregates the context and creates the new name-value pairs specified with value(). For
example, it creates a new MESSAGE. This message is logged to the same place where the received logs are
stored. This is how such a new message looks.

{
"tags" : "openwrt",
"host" : {
"name" : "firewall",
"ip" : "10.1.1.1"
},
"inbound" : {
"source" : {
"port" : "80",
"bytes" : "44",
"packets" : "1",
"ip" : "10.1.1.2"
},
"destination" : {
"ip" : "1.2.3.4",
"port" : "36351"
}
},
"message" : "Session completed; client='1.2.3.4'; server='5.6.7.8'; destination_port='
"ecs" : {
"version" : "1.0.0"
},
"@timestamp" : "2019-07-03T14:55:41+02:00",
"outbound" : {
"source" : {
"packets" : "1",
"port" : "36351",
"address" : "example.com",
"bytes" : "40",
"ip" : "1.2.3.4"
},
"destination" : {
"geo" : {
"region_iso_code" : "EX",
"location" : "11.000000,12.000000",
"city_name" : "Smithwille",
"continent_name" : "Antarctica",
"country_iso_code" : "AN",
"region_name" : "",
"country_name" : ""
},
"address" : "foobar.com",
"port" : "80",
"ip" : "5.6.7.8"
}
},
"event" : {
"duration" : "3",
"end" : "2019-07-03T14:55:41+02:00",
"start" : "2019-07-03T14:55:38+02:00"
},
"network" : {
"transport" : "TCP"
}
}

Actually, I only need the parsed name-value pairs for Elasticsearch. The existence of this message indicates
that everything is available. Therefore I will filter on this message later.
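As a sketch of that later step, a syslog-ng filter matching only these aggregated messages could look like this (the filter name is my own; it relies on the "Session completed;" prefix set in the aggregate() block above):

```
filter f_session_completed {
    message("^Session completed;");
};
```

Attaching such a filter in the log path before the Elasticsearch destination ensures that only the enriched session records get indexed.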

Add reverse DNS data to OpenWRT messages with syslog-ng’s Python parser

Although the value of reverse DNS data can be questionable, I still find it useful. You just need to keep in mind
that the domain name returned by a reverse DNS query does not necessarily correspond to the domain name
of the server you visited. What you usually get is the domain name of an edge router or a reverse proxy of the
CDN network your OpenWRT connected to.

Unfortunately, syslog-ng does not support running name resolution on arbitrary macros. However, we can
write such a parser in Python. By using such a parser, I managed to resolve host names for clients and servers
and add them to the session metadata.

There is existing Python code to do reverse DNS resolution, but it needs some changes to work on arbitrary
macros. The current license of that post does not explicitly permit changes and redistribution of the
content, so I only provide a patch you may need to apply on top of it. (I am in discussion with the owner to
get proper permissions and will update this post accordingly.)

10a11,14
> def init(self, options):
> self.ip = options["ip"]
> self.result_key = options["result"]
> return True
16c20
< ipaddr_b = log_message['suricata.dest_ip']
---
> ipaddr_b = log_message[self.ip]
23c27

< log_message['parsed.dest.hostname'] = hostname
---
> log_message[self.result_key] = hostname
Specifying the python parsers should come after the previous parsers. You can do it like this.

parser p_reversedns_server {
python(
class("SngResolver")
options(
"ip" "outbound.destination.ip"
"result" "outbound.destination.address"
)
);
};
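Since I cannot republish the patched code, here is an independent sketch of what such a resolver class might look like (my own illustration, not the original author's code; syslog-ng's python() parser instantiates the named class, then calls init() with the options and parse() for every message):

```python
import socket

class SngResolver(object):
    # syslog-ng calls init() once, passing the options() block as a dict
    def init(self, options):
        self.ip_key = options["ip"]          # name of the macro holding the IP
        self.result_key = options["result"]  # macro to store the host name in
        return True

    # syslog-ng calls parse() for every message; return True to keep it
    def parse(self, log_message):
        ipaddr = log_message[self.ip_key]
        if isinstance(ipaddr, bytes):        # syslog-ng may hand over bytes
            ipaddr = ipaddr.decode("utf-8")
        try:
            hostname = socket.gethostbyaddr(ipaddr)[0]
        except (socket.herror, socket.gaierror, OSError):
            hostname = ipaddr                # fall back to the raw IP
        log_message[self.result_key] = hostname
        return True
```

A production version would likely add a local cache, since a blocking DNS lookup per message can slow down the pipeline.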

My OpenWRT server assigns domain names to clients with fixed IP addresses. Therefore I do reverse
lookups on client IP addresses too. If you would like to do the same, then create another parser, but with
options pointing to nf_orig.SRC and hostname.client. Again, for the complete configuration check the
GitHub repository.
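With the outbound.* names used elsewhere in this post, such a client-side parser might look like this (the exact field names are my assumption; check the repository for the ones the final config uses):

```
parser p_reversedns_client {
    python(
        class("SngResolver")
        options(
            "ip" "outbound.source.ip"
            "result" "outbound.source.address"
        )
    );
};
```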

Sending conntrack sessions from OpenWRT to Elasticsearch

Sending network traffic data of your home network from OpenWRT to syslog-ng is a great thing. But what is
even cooler is to send it to Elasticsearch and create visualizations and reports. Creating nice visualizations
requires proper data type mappings to be set. Regardless of what software one uses for log transfer, data
type mapping must be done for Elasticsearch. Therefore I manually set explicit data type mappings in
advance.
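The transfer itself can be done with syslog-ng's elasticsearch-http() destination (available since syslog-ng 3.21). A minimal sketch, with a placeholder URL and index name:

```
destination d_elastic {
    elasticsearch-http(
        url("http://127.0.0.1:9200/_bulk")
        index("network-${YEAR}-${MONTH}-${DAY}")
        type("")
        template("$(format-json --scope rfc5424 --scope nv-pairs)")
    );
};
```

The empty type() is needed for Elasticsearch 7.x, where mapping types were removed.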

Creating data type mappings for connection session details

Explicit mappings should be created manually by using Elasticsearch's PUT API. You can do this in Dev Tools
→ Console. If this is new to you, then you should check my previous step-by-step guide about it.
Because of the length of the mapping file, I provide a downloadable version in the following file on GitHub.

Please note that this mapping also specifies a template for the index names it will match on. In my case it is
network-*.
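To give an idea of the shape of such a mapping, here is a heavily trimmed excerpt in Elasticsearch 7.x syntax (field names are taken from the sample session document shown earlier; the full file on GitHub covers many more fields):

```
PUT _template/network_openwrt
{
  "index_patterns": ["network-*"],
  "mappings": {
    "properties": {
      "event": {
        "properties": {
          "start":    { "type": "date" },
          "end":      { "type": "date" },
          "duration": { "type": "integer" }
        }
      },
      "outbound": {
        "properties": {
          "destination": {
            "properties": {
              "ip":  { "type": "ip" },
              "geo": {
                "properties": {
                  "location": { "type": "geo_point" }
                }
              }
            }
          }
        }
      }
    }
  }
}
```

The geo_point type on the location field is what makes the Coordinate Map visualization possible later on.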

Monitoring my home network with examples

I already created videos about how you can make different types of visualizations in Kibana. You should
definitely check the videos if you are stuck. This time I picked two recent cases, because I wanted to focus on
what benefits I could get from having network data in Elastic.

Where and how are my images uploaded by using a web printing service?

I wanted to print dozens of family photos on paper. I decided to use CEWE's (Rossmann) service to do that.
They even have Qt-based software for Linux to place orders.

I wanted to feel secure, as I was about to upload private data to someone else's computer. Where is it
uploaded? Do those services use HTTPS for transferring my precious family photos? Let's find out.

Note: My actions took place between 21:00 and 22:30 on March 17th, 2019.

In the browser’s address bar I noticed that registration and uploading takes place on “photoprintit”
domains via HTTPS. Let’s check the tra c belonging to those domains. Using the following regex
query outbound.destination.address:*photoprint* on the dashboard gave me the following
results.

In the top left corner, in the Coordinate Map visualization, we can see that all traffic goes to Germany (I
am located in Hungary). I then checked the service's Data Privacy document, which also shows
that both their web site and the hosting are located in Germany.



I am not surprised, as Rossmann is a German company; however, it is quite funny that I had to send
my photos to Germany for printing so they could deliver them back to Hungary.
Do they use HTTPS for the traffic? Let's use a Data Table visualization to see. It seems that there is
some traffic on port 80; however, most of the traffic goes to 443 (HTTPS).

I did not have to do any extra work to get this information, because GeoIP, Reverse DNS, traffic size and
details are all there. As a result, I could do the same exercise with any other web site.

Is there any device on my network phoning home?


Is there any network traffic going to a specific country? For instance, is there anything going to China
unattended? (I could choose any other country.)

Note: In the following examples I excluded a client host from which I made explicit connections to
Chinese websites. The data covers an 11-day period.

We can filter on a country by using outbound.destination.geo.country_name:China.

According to the results, there are two devices which communicate with that country. The device
tplink.lan is a WiFi access point running OpenWRT. The other device, Galaxy-A5-2016.lan, is a
smartphone connecting to that access point over WiFi.
Let's see the details in a different Data Table visualization.

Apparently there is communication going out of my home network; however, I still do not know
what is inside it. That WiFi access point used NAT until I disabled it a couple of days ago, so there
may be duplicate entries.
A logical next step would be to run tcpdump on OpenWRT, parameterized to only capture and save TCP
packets going to those IP addresses. You can further analyze the packet captures with other tools. But
that is the topic of another post I will write later.
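As a teaser, such a capture might be started along these lines (the interface name and addresses are placeholders; the real IPs would come from the Data Table above):

```
root@openwrt:~# tcpdump -i br-lan -w /tmp/suspect.pcap 'tcp and (host 203.0.113.10 or host 203.0.113.11)'
```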

Software issues you may need to be aware of

There were several issues I experienced with syslog-ng in this setup. They have already been fixed in
syslog-ng version 3.22.1 and above.

Fixed. Users of distributions using AppArmor (openSUSE, Debian, Ubuntu and derivatives) may
experience crashes, because the Python parser requires AppArmor profile updates. A workaround
exists. Further references can be found in the relevant GitHub issue: python-parser requires updated
apparmor profile otherwise got SIGSEGV #2625
Fixed. The grouping-by() parser, when used in conjunction with a file destination, could cause deadlocks
due to locking problems. Further information can be found in the relevant GitHub issue: syslog-ng-
3.20 deadlock caused by grouping-by() parser #2630

Verdict

I hope that with the help of this blog post anyone can monitor their home network with OpenWRT and
syslog-ng. However, it is not necessary to use Elasticsearch; you can use any other tool you like.

If this post was helpful for you, then please share it. Should you have anything to ask, feel free to
comment below. I will highly appreciate it.


This entry was posted in Syslog and tagged Elasticsearch, Kibana, Network, OpenWRT, syslog-ng on
03/20/2019 [https://balagetech.com/monitor-network-traffic-openwrt-syslog-ng/].

22 thoughts on “Monitor home network traffic with OpenWRT and Syslog-ng”

Gandalf
07/01/2019 at 07:24

Hi,


Great tutorial…
I have installed SELKS 5, which is Debian-based, and upgraded syslog-ng to the latest from the unofficial repository
to get the Python parser support.
Still no luck; syslog-ng fails with an error:

selks-user@SELKS:/etc/syslog-ng$ sudo syslog-ng -Fedv


[2019-07-01T09:22:17.096428] Error loading Python module; module='_syslogng_main',

exception='exceptions.ImportError: No module named _syslogng_main'


ImportError: No module named _syslogng_main

[2019-07-01T09:22:17.096736] Error looking Python parser class;


parser='p_reversedns_server', class='SngResolver', exception='None'

[2019-07-01T09:22:17.096939] Error printing proper Python traceback for the exception,


printing the error caused by print_exception() itself;

SystemError: NULL object passed to Py_BuildValue


[2019-07-01T09:22:17.097286] Error initializing message pipeline; plugin_name='python',
location='/etc/syslog-ng/conf.d/network.conf:36:5'

I have taken the config files from the GitHub…

Can you help with this?

Do you have plans to simplify the monitoring tutorial from OpenWRT with ELK (or equivalent)?

Thanks

Balázs Németh Post author

07/01/2019 at 07:43

Hi,

I remember that in 3.22 the developers incorporated some changes regarding Python module support.

However, my code still works. I am going to update the GitHub repo with the latest changes I have. Give it a
try and let me know if it still breaks.

You should also compare the available plugins in your installation with mine.

# syslog-ng -V
syslog-ng 3 (3.22.1)

Config version: 3.22


Installer-Version: 3.22.1
Revision:
Module-Directory: /usr/lib64/syslog-ng
Module-Path: /usr/lib64/syslog-ng

Include-Path: /usr/share/syslog-ng/include
Available-Modules: add-contextual-

data,affile,afprog,afsocket,afstomp,afuser,appmodel,basicfuncs,cef,confgen,cryptofuncs,csvpa
rser,date,dbparser,disk-buffer,examples,graphite,hook-commands,json-plugin,kvformat,linux-

kmsg-format,map-value-pairs,pseudofile,sdjournal,snmptrapd-
parser,stardate,syslogformat,system-source,tags-parser,tfgetent,xml,mod-python,geoip-

plugin,geoip2-plugin,http
Enable-Debug: off

Enable-GProf: off
Enable-Memtrace: off

Enable-IPv6: on
Enable-Spoof-Source: on

Enable-TCP-Wrapper: on
Enable-Linux-Caps: on

Enable-Systemd: on

PS: I have some plans to move away from the Python parser. I know that Logstash can do GeoIP resolving;
however, it is way too resource-heavy.
Recently I changed my logging settings and I am getting correct DNS data from dnsmasq running on
OpenWRT. The only thing preventing me from posting it here is the lack of integration with ulogd2 logs.

Gandalf
07/01/2019 at 12:49

Looks better and more promising with this version of the Python parser…

Now I still have an issue, but it is in the Kibana Console:

The PUT _template… gets me stuck in an error:

{
"error": {

"root_cause": [
{

"type": "mapper_parsing_exception",
"reason": "Root mapping definition has unsupported parameters: [nf_orig : {properties=

{BYTES={type=integer}, DST={type=ip}, SRC={type=ip}, SPT={type=integer}, DPT={type=integer},


PKTS={type=integer}}}] [nf : {properties={SESSION_END={type=date}, SESSION_START=
{type=date}, SESSION_LENGTH={type=integer}}}] [geoip2 : {properties={location2=
{type=geo_point}}}] [nf_reply : {properties={BYTES={type=integer}, DST={type=ip}, SRC=
{type=ip}, SPT={type=integer}, DPT={type=integer},
PKTS={type=integer}}}]"
}
],

"type": "mapper_parsing_exception",
"reason": "Failed to parse mapping [properties]: Root mapping definition has unsupported

parameters: [nf_orig : {properties={BYTES={type=integer}, DST={type=ip}, SRC={type=ip}, SPT=


{type=integer}, DPT={type=integer}, PKTS={type=integer}}}] [nf : {properties={SESSION_END=

{type=date}, SESSION_START={type=date}, SESSION_LENGTH={type=integer}}}] [geoip2 :


{properties={location2={type=geo_point}}}] [nf_reply : {properties={BYTES={type=integer},

DST={type=ip}, SRC={type=ip}, SPT={type=integer}, DPT={type=integer}, PKTS=


{type=integer}}}]",

"caused_by": {
"type": "mapper_parsing_exception",

"reason": "Root mapping definition has unsupported parameters: [nf_orig : {properties=


{BYTES={type=integer}, DST={type=ip}, SRC={type=ip}, SPT={type=integer}, DPT={type=integer},

PKTS={type=integer}}}] [nf : {properties={SESSION_END={type=date}, SESSION_START=


{type=date}, SESSION_LENGTH={type=integer}}}] [geoip2 : {properties={location2=

{type=geo_point}}}] [nf_reply : {properties={BYTES={type=integer}, DST={type=ip}, SRC=


{type=ip}, SPT={type=integer}, DPT={type=integer}, PKTS={type=integer}}}]"

}
},

"status": 400
}

Hope you’ll help me on this also, tanks you…

PS : will try to get test on your next release, do you pre er issue on your github ? or comment here is ok ?

Balázs Németh Post author

07/01/2019 at 13:20

Hi,

I think the problem is with the syntax. The mapping syntax changed in Elastic 7.x. The one uploaded to
GitHub is for v7.x.
If you are using an older version, then try to add back the nested type directive. In my case it was called
'test'. Check the diff for hints.

I plan to rework the current mapping, as it is totally ad hoc and does not follow ECS. I will try to do it this
week, but it is not a promise.

Commenting here is okay. Once the visibility of the GitHub repos grows, I will switch over there.
Gandalf
07/02/2019 at 08:49

Hi,

Following this: https://www.elastic.co/guide/en/elasticsearch/reference/7.0/deb.html

I will upgrade my SELKS 5 Debian 9 (stretch) with the latest from Elasticsearch and give it a try…

Thanks…

Gandalf
07/02/2019 at 09:31

Just upgraded from 6.8.1 to 7.2.0 with success…

I now get
{
"acknowledged" : true
}

But I still get no network-* indexes nor any data…

Maybe I still misunderstand how Kibana and Elasticsearch work…

Balázs Németh Post author

07/02/2019 at 10:12

The basic procedure is this:

1) Upload the index mapping, which expects index names matching "network-*". As I understand, you
already did that.
2) Send data into Elastic with syslog-ng.
3) Create an index pattern in Kibana matching the index names "network-*", then go to Discover. You should
be able to see the documents there.


Gandalf
07/02/2019 at 16:04

I had to verify the hostname in syslog-ng/network.conf, like the IP from /var/log/network/IP.
I also put the URL to Elastic as 127.0.0.1 in syslog-ng/network.conf.
Then I got data uploaded from my OpenWRT (ulogd > syslog-ng > elasticsearch).
I have created the index with network and now I can discover data…

Will follow up with making dashboards! (I hope to)

thanks

Gandalf
07/02/2019 at 16:57

It works… thanks

Balázs Németh Post author

07/03/2019 at 07:49

Great to hear that!

Gandalf
07/03/2019 at 07:46

I got some errors in syslog-ng like:


[2019-07-03T09:44:42.267530] Parsing failed, template function's second argument is not a
number; function='-', arg2=''

Balázs Németh Post author


07/03/2019 at 07:49


It is hard to determine the cause of this without context. I've been reworking the network.conf and the
Elastic part since yesterday to make it easier to use and to better comply with ECS.
If you are interested, I will upload the changes to GitHub in the following days and update the blog post
accordingly.

Gandalf
07/03/2019 at 07:52

Yes, very interested to test all this!

Could you also post some dashboards on GitHub…

THANKS

Balázs Németh Post author

07/03/2019 at 13:17

I have updated the blog post as well as the GitHub page with the latest changes.
It is better to start from scratch. You may need to DELETE _template/network_openwrt before you upload
the new one.
I also exported my visualizations and dashboards, which you can import in Kibana → Saved Objects.
Let me know how it works for you.

Gandalf
07/03/2019 at 16:28

all works fine!

great job…

Thank you

Balázs Németh Post author

07/03/2019 at 19:24


Thank you very much.

I am happy that what I’ve done could be useful for others too.

Gandalf
07/03/2019 at 07:50

I also got these errors logged:


[2019-07-03T09:49:17.774850] Incoming log entry from journal;
message='{"type":"log","@timestamp":"2019-07-03T07:48:59Z","tags":

["error","task_manager"],"pid":373,"message":"Failed to poll for work:


[cluster_block_exception] index [.kibana_task_manager] blocked by: [FORBIDDEN/12/index read-

only / allow delete (api)]; :: {\"path\":\"/.kibana_task_manager/_update/Maps-


maps_telemetry\",\"query\":

{\"if_seq_no\":91,\"if_primary_term\":13,\"refresh\":\"true\"},\"body\":\"{\\\"doc\\\":
{\\\"type\\\":\\\"task\\\",\\\"task\\\":

{\\\"taskType\\\":\\\"maps_telemetry\\\",\\\"state\\\":\\\"
{\\\\\\\"runs\\\\\\\":1,\\\\\\\"stats\\\\\\\":

{\\\\\\\"mapsTotalCount\\\\\\\":0,\\\\\\\"timeCaptured\\\\\\\":\\\\\\\"2019-07-
02T09:58:11.782Z\\\\\\\",\\\\\\\"attributesPerMap\\\\\\\":{\\\\\\\"dataSourcesCount\\\\\\\":
{\\\\\\\"min\\\\\\\":0,\\\\\\\"max\\\\\\\":0,\\\\\\\"avg\\\\\\\":0},\\\\\\\"layersCount\\\\\

\\":
{\\\\\\\"min\\\\\\\":0,\\\\\\\"max\\\\\\\":0,\\\\\\\"avg\\\\\\\":0},\\\\\\\"layerTypesCount\

\\\\\\":{},\\\\\\\"emsVectorLayersCount\\\\\\\":{}}}}\\\",\\\"params\\\":\\\"
{}\\\",\\\"attempts\\\":0,\\\"'

[2019-07-03T09:49:17.774864] json-parser(): no marker at the beginning of the message,


skipping JSON parsing ; input='{"type":"log","@timestamp":"2019-07-03T07:48:59Z","tags":

["error","task_manager"],"pid":373,"message":"Failed to poll for work:


[cluster_block_exception] index [.kibana_task_manager] blocked by: [FORBIDDEN/12/index read-

only / allow delete (api)]; :: {\"path\":\"/.kibana_task_manager/_update/Maps-


maps_telemetry\",\"query\":

{\"if_seq_no\":91,\"if_primary_term\":13,\"refresh\":\"true\"},\"body\":\"{\\\"doc\\\":
{\\\"type\\\":\\\"task\\\",\\\"task\\\":

{\\\"taskType\\\":\\\"maps_telemetry\\\",\\\"state\\\":\\\"
{\\\\\\\"runs\\\\\\\":1,\\\\\\\"stats\\\\\\\":

{\\\\\\\"mapsTotalCount\\\\\\\":0,\\\\\\\"timeCaptured\\\\\\\":\\\\\\\"2019-07-
02T09:58:11.782Z\\\\\\\",\\\\\\\"attributesPerMap\\\\\\\":{\\\\\\\"dataSourcesCount\\\\\\\":

{\\\\\\\"min\\\\\\\":0,\\\\\\\"max\\\\\\\":0,\\\\\\\"avg\\\\\\\":0},\\\\\\\"layersCount\\\\\
\\":
{\\\\\\\"min\\\\\\\":0,\\\\\\\"max\\\\\\\":0,\\\\\\\"avg\\\\\\\":0},\\\\\\\"layerTypesCount\
\\\\\\":{},\\\\\\\"emsVectorLayersCount\\\\\\\":{}}}}\\\",\\\"params\\\":\\\"

{}\\\",\\\"attempts\\\":0,\\\"scheduledAt\\\":\\\"2019-07-
02T09:58:11.518Z\\\",\\\"runAt\\\":\\\"2019-07-

03T07:49:59.504Z\\\",\\\"status\\\":\\\"running\\\"},\\\"kibana\\\":
{\\\"uuid\\\":\\\"21ca7904-ac5c-4314-b75e-

3996fd5c3c42\\\",\\\"version\\\":7020099,\\\"apiVersion\\\":1}}}\",\"statusCode\":403,\"resp
onse\":\"{\\\"error\\\":{\\\"root_cause\\\":

[{\\\"type\\\":\\\"cluster_block_exception\\\",\\\"reason\\\":\\\"index
[.kibana_task_manager] blocked by: [FORBIDDEN/12/index read-only / allow delete

(api)];\\\"}],\\\"type\\\":\\\"cluster_block_exception\\\",\\\"reason\\\":\\\"index
[.kibana_task_manager] blocked by: [FORBIDDEN/12/index read-only / allow delete

(api)];\\\"},\\\"status\\\":403}\"}"}', marker='@cim:'

Balázs Németh Post author

07/03/2019 at 09:08

This looks to be unrelated to the topic. It seems there is a permission issue in Kibana.

Gandalf
07/03/2019 at 09:45

Yes, it looks like my upgrade of SELKS broke some access…

I will try to do a fresh install of ELK on a new Debian…

If you have advice about a howto or tutorial, thanks… 😉

Gandalf
07/03/2019 at 15:31

Looks okay with the latest from your GitHub…

I used a fresh SELKS install, upgraded syslog-ng to the latest from the unofficial repository, and upgraded
Elasticsearch, Logstash and Kibana to 7.2 from the official repository.
I also upgraded to the latest java-open.


Then I downloaded the GeoIP database with your script from the fail2ban post.
I configured syslog-ng and added the network-* index from Kibana (after a small amount of data arrived).
I also added your templates and dashboards (fine, thank you)…

Thanks again…

A small tip to help: I have done this in network.conf to make the configuration more "generic/portable":
root@SELKS:~# cat /etc/syslog-ng/conf.d/network.conf
@define network_ip "0.0.0.0"
@define router_hostname "netgear"
@define elastic_host "127.0.0.1:9200"

Gandalf
07/09/2019 at 16:49

A new question arises.

Is it possible to log any traffic with ulogd, like in a switch listening mode?
Not the traffic which goes through the router, but the traffic listened to on a switch?

It's a little off-topic for your tutorial, but it might quickly be adapted to get a mirror-port device passively
listening to the home network traffic, without requiring an OpenWrt router, using a specific device for
network traffic analysis…

Thanks if you can help me understand how to do this…

Balázs Németh Post author

07/10/2019 at 08:29

Depending on what logs a given switch can produce, you can integrate them into Elasticsearch too.
If it can run ulogd, then it should be easy.
If it is NetFlow, then that should also be achievable.
I picked my main router, as it has the single point of view of all connections.

OpenWRT can also be used to produce NetFlow data. It can also mirror the traffic via iptables to another host
where you can analyze it with Suricata. I have plans to do that, but ulogd seemed to be low-hanging fruit
compared to the complexity and network requirements of port mirroring.

