Examples & quickstart guides

Table of contents

I.      Plugins included with pmacct distribution
II.     Configuring pmacct for compilation and installing
III.    Brief SQL (MySQL, PostgreSQL, SQLite 3.x) and noSQL (MongoDB) setup examples
IV.     Running the libpcap-based daemon (pmacctd)
V.      Running the NetFlow and sFlow daemons (nfacctd/sfacctd)
VI.     Running the ULOG-based daemon (uacctd)
VII.    Running the pmacct client (pmacct)
VIII.   Running the RabbitMQ/AMQP plugin
IX.     Quickstart guide to packet/stream classifiers
X.      Quickstart guide to setup a NetFlow agent/probe
XI.     Quickstart guide to setup a sFlow agent/probe
XII.    Quickstart guide to setup the BGP daemon
XIII.   Quickstart guide to setup a NetFlow/sFlow replicator
XIV.    Quickstart guide to setup the IS-IS daemon
XV.     Running the print plugin to write to flat-files
XVI.    Quickstart guide to setup GeoIP lookups
XVII.   Using pmacct as traffic/event logger

I. Plugins included with pmacct distribution

Given its open and pluggable architecture, pmacct is easily extensible with new
plugins. Here is a list of plugins included in the official pmacct distribution:

'memory':  data is stored in a memory table and can be fetched via the pmacct
           command-line client tool, 'pmacct'. This plugin also makes it easy to
           inject data into 3rd party tools like GNUplot, RRDtool or a Net-SNMP
           server.
'mysql':   a working MySQL installation can be used for data storage.
'pgsql':   a working PostgreSQL installation can be used for data storage.
'sqlite3': a working SQLite 3.x or BerkeleyDB 5.x (compiled in with the SQLite
           API) installation can be used for data storage.
'print':   data is printed at regular intervals to flat-files or standard output
           in tab-spaced, CSV and JSON formats.
'mongodb': a working MongoDB installation can be used for data storage. Installing
           the MongoDB C API driver is required.
'amqp':    data is sent to a RabbitMQ message exchange, running AMQP protocol,
           for delivery to consumer applications or tools.

II. Configuring pmacct for compilation and installing

The simplest way to configure the package for compilation is to let the configure
script probe default headers and libraries for you. Switches you are likely to want
are already enabled by default, ie. 64-bit counters and multi-threading (a pre-
requisite for the BGP and IGP daemon code). SQL plugins and IPv6 support are instead
disabled by default. A few examples follow; as usual, the list of available switches
can be obtained with the following command-line:

shell> ./configure --help

Examples on how to enable the support for (1) MySQL, (2) PostgreSQL, (3) SQLite,
(4) MongoDB and any (5) mixed compilation:

(1) shell> ./configure --enable-mysql
(2) shell> ./configure --enable-pgsql
(3) shell> ./configure --enable-sqlite3
(4) shell> ./configure --enable-mongodb
(5) shell> ./configure --enable-mysql --enable-pgsql

Then to compile and install simply:

shell> make; make install

Once daemons are installed you can check:
* how to instrument each daemon via its usage help page:
  shell> pmacctd -h
* review version and build details:
  shell> sfacctd -V
* the traffic aggregation primitives supported by the daemon, and their description:
  shell> nfacctd -a

III. Brief SQL and noSQL setup examples

Scripts for setting up databases (MySQL, PostgreSQL and SQLite) are in the 'sql/'
tree. For further guidance read the relevant README files in that directory. One of
the crucial concepts to deal with, when using default IP or BGP SQL tables, is table
versioning: please read more about it in the FAQS document (Q16).

IIIa. MySQL examples

shell> cd sql/

- To create v1 tables:
shell> mysql -u root -p < pmacct-create-db_v1.mysql
shell> mysql -u root -p < pmacct-grant-db.mysql

Data will be available in 'acct' table of 'pmacct' DB.

- To create v2 tables:
shell> mysql -u root -p < pmacct-create-db_v2.mysql
shell> mysql -u root -p < pmacct-grant-db.mysql

Data will be available in 'acct_v2' table of 'pmacct' DB.

... And so on for the newer versions.
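
Once one of the daemons is writing to the database, a quick check that data is landing
in the expected table can be done with the mysql client; the query below is just an
example and assumes the default 'pmacct' user created by the pmacct-grant-db.mysql
script and the v1 'acct' table:

shell> mysql -u pmacct -p pmacct -e "SELECT * FROM acct LIMIT 10;"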

IIIb. PostgreSQL examples

Which user has to execute the following two scripts and how to authenticate with the
PostgreSQL server depend upon your current configuration. Keep in mind that both
scripts need postgres superuser permissions to execute some commands successfully:
shell> cp -p *.pgsql /tmp
shell> su - postgres

To create v1 tables:
shell> psql -d template1 -f /tmp/pmacct-create-db.pgsql
shell> psql -d pmacct -f /tmp/pmacct-create-table_v1.pgsql

To create v2 tables:
shell> psql -d template1 -f /tmp/pmacct-create-db.pgsql
shell> psql -d pmacct -f /tmp/pmacct-create-table_v2.pgsql

... And so on for the newer versions.

A few tables will be created into 'pmacct' DB. 'acct' ('acct_v2' or 'acct_v3') table is
the default table where data will be written when in 'typed' mode (see 'sql_data' option
in CONFIG-KEYS document; default value is 'typed'); 'acct_uni' ('acct_uni_v2' or
'acct_uni_v3') is the default table where data will be written when in 'unified' mode.

Since v6, PostgreSQL tables are greatly simplified: unified mode is no longer supported
and a single table ('acct_v6', for example) is created instead.

IIIc. SQLite examples

shell> cd sql/

- To create v1 tables:
shell> sqlite3 /tmp/pmacct.db < pmacct-create-table.sqlite3

Data will be available in the 'acct' table of the '/tmp/pmacct.db' DB. Of course, you
can change the database filename to suit your preferences.

- To create v2 tables:
shell> sqlite3 /tmp/pmacct.db < pmacct-create-table_v2.sqlite3

Data will be available in 'acct_v2' table of '/tmp/pmacct.db' DB.

... And so on for the newer versions.

IIId. Custom SQL tables

Custom SQL tables can be built by creating your own SQL schema and indexes. This allows
you to freely mix and match the primitives relevant to your accounting scenario. Specifying
the SQL table version and type (sql_table_version, sql_table_type) is not required; instead,
a new directive, sql_optimize_clauses, is introduced to flag table customization to
pmacct. This is a simple configuration snippet:

sql_optimize_clauses: true
sql_table: <table name>
aggregate: <aggregation primitives list>

How to build the custom schema? Let's say the aggregation method of choice
(aggregate directive) is "vlan, in_iface, out_iface, etype", the table name is
"acct" and the database of choice is MySQL. The SQL schema is composed of four
main parts, explained below:

1) A fixed skeleton needed by pmacct logics:

CREATE TABLE <table_name> (
        packets INT UNSIGNED NOT NULL,
        bytes BIGINT UNSIGNED NOT NULL,
        stamp_inserted DATETIME NOT NULL,
        stamp_updated DATETIME,
);

2) Indexing: primary key (of your choice, this is only an example) plus
   any additional index you may find relevant.

3) Primitives enabled in pmacct, in this specific example the ones below; should
   you need more or different ones, these can be looked up in the sql/README.mysql
   file, in the section named "Aggregation primitives to SQL schema mapping:":

        vlan INT(2) UNSIGNED NOT NULL,
        iface_in INT(4) UNSIGNED NOT NULL,
        iface_out INT(4) UNSIGNED NOT NULL,
        etype INT(2) UNSIGNED NOT NULL,

4) Any additional fields, ignored by pmacct, that may be of use, for example for
   lookup purposes or auto-increment; these can of course also be part of the
   indexing you might choose.

Putting the pieces together, the resulting SQL schema is below along with the
required statements to create the database:

DROP DATABASE IF EXISTS pmacct;
CREATE DATABASE pmacct;

USE pmacct;

DROP TABLE IF EXISTS acct;

CREATE TABLE acct (
        vlan INT(2) UNSIGNED NOT NULL,
        iface_in INT(4) UNSIGNED NOT NULL,
        iface_out INT(4) UNSIGNED NOT NULL,
        etype INT(2) UNSIGNED NOT NULL,
        packets INT UNSIGNED NOT NULL,
        bytes BIGINT UNSIGNED NOT NULL,
        stamp_inserted DATETIME NOT NULL,
        stamp_updated DATETIME,
        PRIMARY KEY (vlan, iface_in, iface_out, etype, stamp_inserted)
);
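
As per item 2) above, further indexes can be added to the custom table to speed up
reporting queries; the statement below is just a hypothetical example (index name and
column choice are arbitrary):

CREATE INDEX acct_idx_stamp ON acct (stamp_inserted, stamp_updated);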

To grant the default pmacct user permission to write into the database look at the
file sql/pmacct-grant-db.mysql.

IIIe. Historical accounting

Enabling historical accounting allows data to be aggregated over time (ie. 5 mins, hourly,
daily) in a flexible and fully configurable way. Timestamps are lodged into two fields:
'stamp_inserted', which represents the basetime of the timeslot, and 'stamp_updated', which
says when a given timeslot was updated for the last time. Below is a fairly standard
configuration fragment to slice data into nicely aligned (or rounded-off) 5-minute
timeslots:

sql_history: 5m
sql_history_roundoff: m
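
With such a setup each row in the SQL table refers to a specific 5-minute timeslot; a
hypothetical query to retrieve the traffic accounted against a single timeslot could
then look as follows (table name and timestamp are just examples):

SELECT * FROM acct WHERE stamp_inserted = '2014-01-01 00:05:00';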

IIIf. INSERTs-only

UPDATE queries are demanding in terms of resources; this is why, even if they are
supported by pmacct, a savvy approach is to cache data for longer times in memory and
write them off once per timeslot (sql_history): this produces a much lighter INSERTs-
only environment. This is an example based on 5-minute timeslots:

sql_refresh_time: 300
sql_history: 5m
sql_history_roundoff: m
sql_dont_try_update: true

Note that sql_refresh_time is always expressed in seconds.

IIIg. MongoDB examples

MongoDB is a document-oriented noSQL database. The defining feature of document-oriented
databases is that they are schemaless, hence this section only needs to focus on a
simple configuration with historical accounting support:

...
plugins: mongodb
aggregate: ...
mongo_history: 5m
mongo_history_roundoff: m
mongo_refresh_time: 300
mongo_table: pmacct.acct
...

MongoDB release >= 2.2.0 is recommended. Installation of the MongoDB C driver is
required.
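
Once data is being written, collected documents can be inspected with the standard
mongo shell; the query below is just a hypothetical example matching the mongo_table
value used above:

shell> mongo pmacct
> db.acct.find().limit(5).pretty()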

IV. Running the libpcap-based daemon (pmacctd)

pmacctd, like the other daemons, can be run with commandline options, using a config
file, or a mix of the two. Sample configuration files are in the examples/ tree. Note
also that most of the new features are available only as configuration directives. To be
aware of the existing configuration directives, please read the CONFIG-KEYS document.

Show all available pmacctd commandline switches:
shell> pmacctd -h

Run pmacctd reading configuration from a specified file (see the examples/ tree for a
brief list of some commonly used keys; refer to CONFIG-KEYS for the full list).
This example applies to all daemons:
shell> pmacctd -f pmacctd.conf

Daemonize the process; listen on eth0; aggregate data by src_host/dst_host; write to a
MySQL server; account only traffic with source IP in network 10.0.0.0/16. Note that
filters work the same as in tcpdump, so refer to the libpcap/tcpdump man pages for
examples and further reading.

shell> pmacctd -D -c src_host,dst_host -i eth0 -P mysql src net 10.0.0.0/16

Or written the configuration way:
!
daemonize: true
plugins: mysql
aggregate: src_host, dst_host
interface: eth0
pcap_filter: src net 10.0.0.0/16
! ...

Print collected traffic data, aggregated by src_host/dst_host, to the screen; refresh
data every 30 seconds and listen on eth0.

shell> pmacctd -P print -r 30 -i eth0 -c src_host,dst_host

Or written the configuration way:
!
plugins: print
print_refresh_time: 30
aggregate: src_host, dst_host
interface: eth0
! ...

Daemonize the process; let pmacct aggregate traffic in order to show in vs out traffic
for network 192.168.0.0/16; send data to a PostgreSQL server. This configuration is not
possible via commandline switches; the corresponding configuration follows:

!
daemonize: true
plugins: pgsql[in], pgsql[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 192.168.0.0/16
aggregate_filter[out]: src net 192.168.0.0/16
sql_table[in]: acct_in
sql_table[out]: acct_out
! ...

The previous example looks nice! But how to make data historical? Simple enough: let's
suppose you want to split traffic by hour and write data into the DB every 60 seconds.

!
daemonize: true
plugins: pgsql[in], pgsql[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 192.168.0.0/16
aggregate_filter[out]: src net 192.168.0.0/16
sql_table[in]: acct_in
sql_table[out]: acct_out
sql_refresh_time: 60
sql_history: 1h
sql_history_roundoff: h
! ...

Let's now translate the same example into the memory plugin world. Its use is valuable
especially when bytes/packets/flows counters need to be fed to external programs.
Examples about the client program will follow later in this document. For now, note that
each memory table needs its own pipe file in order to be correctly contacted by the
client:

!
daemonize: true
plugins: memory[in], memory[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 192.168.0.0/16
aggregate_filter[out]: src net 192.168.0.0/16
imt_path[in]: /tmp/pmacct_in.pipe
imt_path[out]: /tmp/pmacct_out.pipe
! ...

As a further note, check the CONFIG-KEYS document for more imt_* directives, as they
help in fine-tuning the size and boundaries of the memory tables, should the default
values not be OK for your setup.

Now, fire multiple instances of pmacctd, each on a different interface; again, because
each instance will have its own memory table, it will require its own pipe file for
client queries as well (as explained in the previous examples):
shell> pmacctd -D -i eth0 -m 8 -s 65535 -p /tmp/pipe.eth0
shell> pmacctd -D -i ppp0 -m 0 -s 32768 -p /tmp/pipe.ppp0

Run pmacctd logging what happens to syslog and using "local2" facility:
shell> pmacctd -c src_host,dst_host -S local2

NOTE: superuser privileges are needed to execute pmacctd correctly.

V. Running the NetFlow and sFlow daemons (nfacctd/sfacctd)

All examples about pmacctd are also valid for nfacctd and sfacctd with the exception
of directives that apply exclusively to libpcap. If you've skipped examples in section
'IV', please read them before continuing. All configuration keys available are in the
CONFIG-KEYS document. Some examples:

Run nfacctd reading configuration from a specified file.
shell> nfacctd -f nfacctd.conf

Daemonize the process; aggregate data by sum_host (by host, summing inbound + outbound
traffic); write to a local MySQL server. Listen on port 5678 for incoming NetFlow
datagrams (from one or multiple NetFlow agents). Let's make pmacct refresh data every
two minutes and let's make data historical, divided into timeslots of 10 minutes each.
Finally, let's make use of a SQL table, version 4.
shell> nfacctd -D -c sum_host -P mysql -l 5678

And now written the configuration way:
!
daemonize: true
plugins: mysql
aggregate: sum_host
nfacctd_port: 5678
sql_refresh_time: 120
sql_history: 10m
sql_history_roundoff: mh
sql_table_version: 4
! ...

VI. Running the ULOG-based daemon (uacctd)

All examples about pmacctd are also valid for uacctd with the exception of directives
that apply exclusively to libpcap. If you've skipped examples in section 'IV', please
read them before continuing. All configuration keys available are in the CONFIG-KEYS
document.

The Linux ULOG infrastructure requires a couple of parameters in order to work properly.
These are the ULOG multicast group (uacctd_group) to which captured packets have to be
sent and the Netlink buffer size (uacctd_nl_size). The default buffer size (4KB)
typically works OK for small environments. If the uacctd user is not already familiar
with the iptables ULOG target, it is advisable to start with a tutorial, like the one
at the following URL ("6.5.15. ULOG target" section):

http://www.faqs.org/docs/iptables/targets.html

Apart from determining how and what traffic to capture with iptables, which is a topic
outside the scope of this document, the most relevant point is that the "--ulog-nlgroup"
iptables setting has to match the "uacctd_group" uacctd one.
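
A minimal iptables sketch follows; the table, chain and interface below are purely
illustrative and should be adapted to the traffic to be captured (the group number
given to "--ulog-nlgroup" is the one that has to match uacctd_group):

shell> iptables -t mangle -A PREROUTING -i eth0 -j ULOG --ulog-nlgroup 5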

A couple of examples follow:

Run uacctd reading configuration from a specified file.
shell> uacctd -f uacctd.conf

Daemonize the process; aggregate data by sum_host (by host, summing inbound + outbound
traffic); write to a local MySQL server. Listen on ULOG multicast group #5. Let's make
pmacct divide data into historical time-bins of 5 minutes. Let's disable UPDATE queries
and hence align refresh time with the timeslot length. Finally, let's make use of a SQL
table, version 4:
!
uacctd_group: 5
daemonize: true
plugins: mysql
aggregate: sum_host
sql_refresh_time: 300
sql_history: 5m
sql_history_roundoff: mh
sql_table_version: 4
sql_dont_try_update: true
! ...

VII. Running the pmacct client (pmacct)

The pmacct client is used to retrieve data from memory tables. Requests and answers
are exchanged via a pipe file: authorization is strictly connected to permissions on
the pipe file. Note: when writing queries on the commandline, you may need characters
that have a special meaning for the shell itself (ie. ; or *). Remember to either
escape them ( \; or \* ) or put them in quotes ( " ).

Show all available pmacct client commandline switches:
shell> pmacct -h

Fetch data stored into the memory table:
shell> pmacct -s

Match data between source IP 192.168.0.10 and destination IP 192.168.0.3 and return
a formatted output; display all fields (-a), so that the output is easy to parse with
tools like awk/sed; each unused field will be zero-filled:
shell> pmacct -c src_host,dst_host -M 192.168.0.10,192.168.0.3 -a

Similar to the previous example; it is requested to reset data for matched entries;
the server will return the actual counters to the client, then will reset them:
shell> pmacct -c src_host,dst_host -M 192.168.0.10,192.168.0.3 -r

Fetch data for IP address dst_host 10.0.1.200; we also ask for a 'counter only' output
('-N') suitable, this time, for injecting data into tools like MRTG or RRDtool (sample
scripts are in the examples/ tree). The bytes counter will be returned (the '-n' switch
also allows selecting which counter to display). If multiple entries match the request
(ie. because the query is based on dst_host but the daemon is actually aggregating
traffic as "src_host, dst_host") their counters will be summed:
shell> pmacct -c dst_host -N 10.0.1.200
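
As a hypothetical example of injecting such a counter into a 3rd party tool, the output
of the previous query can be fed straight to an 'rrdtool update' call (this assumes an
RRD file, here mydb.rrd, with a single data source has already been created):

shell> rrdtool update mydb.rrd N:$(pmacct -c dst_host -N 10.0.1.200)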

Another query; this time let's contact the server listening on pipe file /tmp/pipe.eth0:
shell> pmacct -c sum_port -N 80 -p /tmp/pipe.eth0

Find all data matching host 192.168.84.133 as either their source or destination address.
In particular, this example shows how to use wildcards and how to spawn multiple queries
(each separated by the ';' symbol). Take care to follow the same order when specifying
the primitive name (-c) and its actual value ('-M' or '-N'):
shell> pmacct -c src_host,dst_host -N "192.168.84.133,*;*,192.168.84.133"

Find all web and smtp traffic; we are interested in just the total of such traffic
(for example, to split legal network usage from the total); the output will be a single
counter, the sum of the partial values coming from each query.
shell> pmacct -c src_port,dst_port -N "25,*;*,25;80,*;*,80" -S

Show traffic between the specified hosts; this aims to be a simple example of a batch
query; note that both the '-N' and '-M' switches accept a value like
'file:/home/paolo/queries.list': actual values will then be read from the specified
file (written into it one per line) instead of the commandline:
shell> pmacct -c src_host,dst_host -N "10.0.0.10,10.0.0.1;10.0.0.9,10.0.0.1;10.0.0.8,10.0.0.1"
shell> pmacct -c src_host,dst_host -N "file:/home/paolo/queries.list"

VIII. Running the RabbitMQ/AMQP plugin

The Advanced Message Queuing Protocol (AMQP) is an open standard for passing business
messages between applications. RabbitMQ is a messaging broker, an intermediary for
messaging, which implements AMQP. The pmacct RabbitMQ/AMQP plugin is designed to send
aggregated network traffic data, in JSON format, through a RabbitMQ server to 3rd
party applications. Requirements to use the plugin are:

* A working RabbitMQ server: http://www.rabbitmq.com/
* RabbitMQ C API, rabbitmq-c: https://github.com/alanxz/rabbitmq-c/
* Libjansson to cook JSON objects: http://www.digip.org/jansson/

Once these elements are installed, pmacct can be configured for compilation as
follows (assumptions: Jansson is installed in /usr/local/lib and RabbitMQ server
and rabbitmq-c are installed in /usr/local/rabbitmq as base path):

./configure --enable-rabbitmq \
            --with-rabbitmq-libs=/usr/local/rabbitmq/lib/ \
            --with-rabbitmq-includes=/usr/local/rabbitmq/include/ \
            --enable-jansson

Then "make; make install" as usual. Following a configuration snippet showing a
basic RabbitMQ/AMQP plugin configuration (assumes: RabbitMQ server is available
at localhost; look all configurable directives up in the CONFIG-KEYS document):

! ..
plugins: amqp
!
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
amqp_exchange: pmacct
amqp_routing_key: acct
amqp_refresh_time: 300
sql_history: 5m
sql_history_roundoff: m
! ..

pmacct will only declare a message exchange and provide a routing key, ie. it
will not get involved with queues at all. A basic consumer script, written in
Python, is provided as a sample to: declare a queue, bind the queue to the
exchange and show consumed data on the screen. The script is located in the
pmacct default distribution tarball at examples/amqp/amqp_receiver.py and
requires the pika Python module to be installed. Should this not be available,
the following page explains how to get it installed:

http://www.rabbitmq.com/tutorials/tutorial-one-python.html

Improvements to the basic Python script provided and/or examples in different
languages are very welcome at this stage.

IX. Quickstart guide to packet/stream classifiers

pmacct 0.10.0 sees the introduction of a packet classification feature. The approach
is fully extensible: classification patterns are based on regular expressions (RE),
must be placed into a common directory and must have a .pat file extension. Patterns
for well-known protocols are available and are just a click away. Furthermore, you can
write your own patterns (and share them with the active L7-filter project's community).
Below is the quickstart guide:

a) download pmacct
shell> wget http://www.pmacct.net/pmacct-x.y.z.tar.gz

b) compile pmacct
shell> cd pmacct-x.y.z; ./configure && make && make install

c-1) download regular expression (RE) classifiers as-you-need them: you just need to
     point your browser to http://l7-filter.sourceforge.net/protocols/ then:

     shell> cd /path/to/classifiers/
     shell> wget http://l7-filter.sourceforge.net/layer7-protocols/protocols/[ protocol ].pat

c-2) download all the RE classifiers available: you just need to point your browser to
     http://sourceforge.net/projects/l7-filter (and grab the latest L7-protocol
     definitions tarball).

c-3) download shared object (SO) classifiers (written in C) as-you-need them: you just
     need to point your browser to http://www.pmacct.net/classification/ , download the
     available package, extract files and compile things following the INSTALL
     instructions.
     When everything is finished, install the produced shared objects:

     shell> mv *.so /path/to/classifiers/

d-1) build pmacct configuration, a memory table example:
!
daemonize: true
interface: eth0
aggregate: flows, class
plugins: memory
classifiers: /path/to/classifiers/
snaplen: 700
!...

d-2) build pmacct configuration, a SQL example:
!
daemonize: true
interface: eth0
aggregate: flows, class
plugins: mysql
classifiers: /path/to/classifiers/
snaplen: 700
sql_history: 1h
sql_history_roundoff: h
sql_table_version: 5
sql_aggressive_classification: true
!...

e) Ok, we are done! Fire the pmacct collector daemon:

   shell> pmacctd -f /path/to/configuration/file

   You can now play with the SQL or pmacct client; furthermore, you can add/remove/write
   patterns and load them by restarting the pmacct daemon. If using the memory plugin
   you can check out the list of loaded classifiers with 'pmacct -C'. Don't underestimate
   the importance of the 'snaplen', 'pmacctd_flow_buffer_size', 'pmacctd_flow_buffer_buckets'
   values; take the time to read about them in the CONFIG-KEYS document.

X. Quickstart guide to setup a NetFlow agent/probe

pmacct 0.11.0 sees the introduction of traffic data export capabilities, through both
the NetFlow and sFlow protocols. While NetFlow v5 is fixed by nature, v9 adds flexibility
by allowing custom information (for example, L7 classification tags) to be transported
to a remote collector. Below is the quickstart guide:

a) usual initial steps: download pmacct, unpack it, compile it.

b) build NetFlow probe configuration, using pmacctd:
!
daemonize: true
interface: eth0
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
plugins: nfprobe
nfprobe_receiver: 1.2.3.4:2100
nfprobe_version: 9
! nfprobe_engine: 1:1
! nfprobe_timeouts: tcp=120:maxlife=3600
!
! networks_file: /path/to/networks.lst
! classifiers: /path/to/classifiers/
! snaplen: 700
!...

   This is a basic working configuration. Additional features include: 1) generate ASNs
   by using a networks_file pointing to a valid Networks File (see examples/ directory)
   and adding src_as, dst_as primitives to the 'aggregate' directive; alternatively, as
   of release 0.12.0rc2, it's possible to generate ASNs from the pmacctd BGP thread. The
   following fragment can be added to the configuration above:

pmacctd_as: bgp
bgp_daemon: true
bgp_daemon_ip: 127.0.0.1
bgp_agent_map: /path/to/agent_to_peer.map
bgp_daemon_port: 17917

   The bgp_daemon_port can be changed from the standard BGP port (179/TCP) in order to
   co-exist with other BGP routing software which might be running on the same host.
   Furthermore, they can safely peer each other by using 127.0.0.1 as bgp_daemon_ip.
   In pmacctd, bgp_agent_map does the trick of mapping 0.0.0.0 to the IP address of
   the BGP peer (ie. 127.0.0.1: 'id=127.0.0.1 ip=0.0.0.0'); this setup, while generic,
   was tested working in conjunction with Quagga 0.99.14. Following a relevant fragment
   of the Quagga configuration:

router bgp Y
 bgp router-id X.X.X.X
 neighbor 127.0.0.1 remote-as Y
 neighbor 127.0.0.1 port 17917
 neighbor 127.0.0.1 update-source X.X.X.X
!

   2) encode flow classification information in NetFlow v9 like Cisco does with its
   NBAR/NetFlow v9 tie-up. This can be done by introducing the 'class' primitive to
   the afore mentioned 'aggregate' and add the extra configuration directives:

aggregate: class, src_host, dst_host, src_port, dst_port, proto, tos
classifiers: /path/to/classifiers/
snaplen: 700

   Further information on this topic can be found in the section of this document about
   stream classification; 3) add direction (ingress, egress) awareness to measured IP
   traffic flows. Direction can be inferred either statically (in, out) or dynamically
   (tag, tag2) via nfprobe_direction directive. Let's look at a dynamic example using
   tag2; first, add the following lines to the daemon configuration:

nfprobe_direction: tag2
pre_tag_map: /path/to/pretag.map

   then edit the tag map as follows. A return value of '1' means ingress while '2' is
   translated to egress. It is possible to employ L2 and/or L3 addresses to recognize
   flow directions. The 'id2' primitive (tag2) will be used to carry the return value:

id=1 filter='dst host XXX.XXX.XXX.XXX'
id=2 filter='src host XXX.XXX.XXX.XXX'

id=1 filter='ether src XX:XX:XX:XX:XX:XX'
id=2 filter='ether dst XX:XX:XX:XX:XX:XX'

   Indeed, in such a case the 'id' primitive (tag) can be leveraged for other uses (ie.
   filtering a sub-set of the traffic for flow export); 4) add interface (input, output)
   awareness to measured IP traffic flows - in addition to direction awareness, as
   just discussed. Interfaces can be inferred either statically (<1-4294967295>) or
   dynamically (tag, tag2) via the nfprobe_ifindex directive. Let's look at a dynamic
   example using tag; first add the following lines to the daemon configuration:

nfprobe_ifindex: tag
pre_tag_map: /path/to/pretag.map

   then edit the tag map as follows. It is possible to employ L2 and/or L3 addresses
   to recognize flow directions. The 'id' primitive (tag) will be used to carry the
   return value:

id=100 filter='dst host XXX.XXX.XXX.XXX'
id=100 filter='src host XXX.XXX.XXX.XXX'
id=200 filter='dst host YYY.YYY.YYY.YYY'
id=200 filter='src host YYY.YYY.YYY.YYY'

id=200 filter='ether src YY:YY:YY:YY:YY:YY'
id=200 filter='ether dst YY:YY:YY:YY:YY:YY'

c) build NetFlow collector configuration, using nfacctd:
!
daemonize: true
nfacctd_ip: 1.2.3.4
nfacctd_port: 2100
plugins: memory[display]
aggregate[display]: src_host, dst_host, src_port, dst_port, proto
!
! classifiers: /path/to/classifiers

d) Ok, we are done ! Now fire both daemons:

   shell a> pmacctd -f /path/to/configuration/pmacctd-nfprobe.conf
   shell b> nfacctd -f /path/to/configuration/nfacctd-memory.conf

XI. Quickstart guide to setup a sFlow agent/probe

pmacct 0.11.0 sees the introduction of traffic data export capabilities via sFlow; such
protocol is quite different from NetFlow: in short, it works by exporting portions of
sampled packets rather than building uni-directional flows as it happens in NetFlow;
this less-stateful approach makes sFlow a light export protocol well-tailored for high-
speed networks. Further, sFlow v5 can be extended much like NetFlow v9: meaning, ie.,
L7 classification or basic Extended Gateway information (ie. src_as, dst_as) can be
embedded in the record structure being exported. Below the quickstarter guide:

a) usual initial steps: download pmacct, unpack it, compile it.

b) build sFlow probe configuration, using pmacctd:
!
daemonize: true
interface: eth0
plugins: sfprobe
sfprobe_agentsubid: 1402
sfprobe_receiver: 1.2.3.4:6343
sfprobe_sampling_rate: 20
!
! networks_file: /path/to/networks.lst
! classifiers: /path/to/classifiers/
! snaplen: 700
!...
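
A matching collector-side configuration, using sfacctd, could minimally look as follows;
this mirrors the nfacctd collector example in the previous section and is only a sketch
(adapt addresses, port and aggregation method to your needs):

!
daemonize: true
sfacctd_ip: 1.2.3.4
sfacctd_port: 6343
plugins: memory[display]
aggregate[display]: src_host, dst_host, src_port, dst_port, proto
!...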

XII. Quickstart guide to setup the BGP daemon

pmacct 0.12.0 integrates a BGP daemon into the IP accounting collectors part of
the toolset. The daemon is run as a thread within the collector core process. The
idea is to receive data-plane information, ie. via NetFlow, sFlow, etc., and control-
plane information, ie. full routing tables via BGP, from edge routers. Per-peer BGP
RIBs are maintained to ensure local or regional views of the network (ie. in case
of large networks which are partitioned in BGP clusters or federations).
In the case of routers with default-only or partial BGP views, the default route can
be followed up (bgp_follow_default); also it might be desirable in certain situations,
for example to save resources, to entirely map one or a set of agents to a BGP peer
(bgp_agent_map).

The first requirement is that pmacct has to be configured for compilation with threads;
this line will do it:

./configure --enable-threads

The following configuration fragment is alone sufficient to set up a BGP daemon which
will bind to an IP address and will support up to a maximum number of 100 peers. Once
the PE routers begin sending NetFlow datagrams and peer up, it should be possible to
see the BGP-related fields, ie. src_as, dst_as, as_path, peer_as_dst, local_pref, MED,
etc., correctly populated while querying the memory table:

bgp_daemon: true
bgp_daemon_ip: X.X.X.X
bgp_daemon_max_peers: 100
nfacctd_as_new: bgp
[ ... ]
plugins: memory
aggregate: src_as, dst_as, local_pref, med, as_path, peer_dst_as

The BGP daemon reads the remote ASN upon receipt of a BGP OPEN message and dynamically
presents itself as part of the same Autonomous System - to ensure an iBGP relationship
is established at all times. Also, the BGP daemon acts as a passive BGP neighbor and
hence will never try to re-establish a fallen peering session.

XIIa. Limiting AS-PATH and BGP community attributes length

AS-PATH and BGP communities can by nature easily get long when represented as strings.
Sometimes only a small portion of their content is relevant to the accounting task and
hence a filtering layer was developed to take special care of these attributes. The
bgp_aspath_radius directive cuts the AS-PATH down after a specified number of hops;
whereas bgp_stdcomm_pattern does a simple sub-string match against standard BGP
communities, filtering in only those that match (optionally, for better precision, a
pre-defined number of characters can be wildcarded by employing the '.' symbol, like
in regular expressions). See a typical usage example below:

bgp_aspath_radius: 3
bgp_stdcomm_pattern: 12345:

A detailed description of these configuration directives is, as usual, included in
the CONFIG-KEYS document.

XIIb. The source peer AS case

The peer_src_as primitive adds useful insight into understanding where traffic enters
the network; unfortunately asymmetric routing compromises the accuracy of such
information in NetFlow datagrams when configured with the peer-as feature (as the
router simply performs a lookup on the source IP address in the BGP table and returns
its supposed symmetric entrance point). In this context pmacct offers a few ways to
perform some mapping to easily model private and public peerings, both bi-lateral and
multi-lateral. Find below how to use a map, reloadable at runtime, and its contents
(for full syntax guidelines, please see the 'peers.map.example' file within the
examples section):

nfacctd_bgp_peer_src_as_type: map
nfacctd_bgp_peer_src_as_map: /path/to/peers.map

[/path/to/peers.map]
id=12345 ip=1.2.3.4 in=10 bgp_nexthop=3.4.5.6
id=34567 ip=1.2.3.4 in=10

id=45678 ip=2.3.4.5 in=20 src_mac=00:11:22:33:44:55
id=56789 ip=2.3.4.5 in=20 src_mac=00:22:33:44:55:66

Even though all this mapping is static, it can be auto-provisioned to a good degree
by means of external scripts running at regular intervals and, for example, querying
relevant routers via SNMP. In this sense, the bgpPeerTable MIB is a good starting
point.

NOTE: the peer_src_as primitive doesn't really apply to egress NetFlow (or egress
sFlow) as it mainly relies on either the input ifIndex, the source MAC address, a
reverse BGP next-hop lookup or a combination of these.

XIIc. Tracking entities on the SP's own IP address space

It might happen that not all entities attached to the service provider network are
speaking BGP but rather they get their IP prefixes redistributed into iBGP (from
different routing protocols, statics, directly connected, etc.). These can be private
IP addresses or segments of the SP address space. The common factor to all of them is
that, while being present in iBGP, these prefixes can't be tracked any further due
to the lack of attributes like AS-PATH or an ASN. To overcome this situation the
simplest approach is to employ the nfacctd_bgp_peer_src_as_map directive, described
previously (ie. making use of interface descriptions as a possible way to automate
the process).
Alternatively, the bgp_stdcomm_pattern_to_asn directive was developed to fit into
this scenario: assuming the procedures of an SP are (or can be changed) to uniquely
label the IP prefixes of every relevant non-BGP speaking entity with a BGP standard
community, this directive allows mapping the community to a peer AS/origin AS couple
as per the following example: XXXXX:YYYYY => Peer-AS=XXXXX, Origin-AS=YYYYY.
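
A minimal configuration sketch follows; the community pattern below is purely
hypothetical and assumes the relevant prefixes are labelled with standard communities
in the 65000:YYYYY range:

bgp_stdcomm_pattern_to_asn: 65000: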

XIId. Preparing the router to BGP peer

Once the collector is configured and started up, the remaining step is to let routers
export traffic samples to the collector and BGP peer with it. Configuring the same
source IP address across both NetFlow and BGP features allows the pmacct collector to
perform the required correlations. Also, setting the BGP Router ID accordingly allows
for clearer log messages. It's advisable to configure the collector at the routers
as a Route-Reflector (RR) client.

A relevant configuration example for a Cisco router follows:

ip flow-export source Loopback12345
ip flow-export version 5
ip flow-export destination X.X.X.X 2100
!
router bgp 12345
 neighbor X.X.X.X remote-as 12345
 neighbor X.X.X.X update-source Loopback12345
 neighbor X.X.X.X version 4
 neighbor X.X.X.X send-community
 neighbor X.X.X.X route-reflector-client
 neighbor X.X.X.X description nfacctd

A relevant configuration example for a Juniper router follows:

forwarding-options {
    sampling {
        output {
            cflowd X.X.X.X {
                port 2100;
                source-address Y.Y.Y.Y;
                version 5;
            }
        }
    }
}
protocols bgp {
    group rr-netflow {
        type internal;
        local-address Y.Y.Y.Y;
        family inet {
            any;
        }
        cluster Y.Y.Y.Y;
        neighbor X.X.X.X {
            description "nfacctd";
        }
    }
}

XIIe. A working configuration example writing to a MySQL database

The following setup is a realistic example for a MPLS-enabled IP carrier network
divided into multiple BGP clusters. Samples are aggregated in a way which is suitable
to get an overview of traffic trajectories, collecting much information on where
traffic enters the AS and where it gets out.

daemonize: true
nfacctd_port: 2100
nfacctd_time_new: true

plugins: mysql[5mins], mysql[hourly]

sql_optimize_clauses: true
sql_dont_try_update: true
sql_multi_values: 1024000

sql_history_roundoff[5mins]: m
sql_history[5mins]: 5m
sql_refresh_time[5mins]: 300
sql_table[5mins]: acct_bgp_5mins

sql_history_roundoff[hourly]: h
sql_history[hourly]: 1h
sql_refresh_time[hourly]: 3600
sql_table[hourly]: acct_bgp_1hr

bgp_daemon: true
bgp_daemon_ip: X.X.X.X
bgp_daemon_max_peers: 100
bgp_aspath_radius: 3
bgp_follow_default: 1
nfacctd_as_new: bgp
bgp_peer_src_as_type: map
bgp_peer_src_as_map: /path/to/peers.map

plugin_buffer_size: 10240
plugin_pipe_size: 1024000
aggregate: tag, src_as, dst_as, peer_src_as, peer_dst_as, peer_src_ip, peer_dst_ip, local_pref, as_path

pre_tag_map: /path/to/pretag.map
refresh_maps: true
pre_tag_map_entries: 3840

The content of the maps (bgp_peer_src_as_map, pre_tag_map) is meant to be pretty
standard and will not be shown. As can be grasped from the above configuration,
the SQL schema was customized. Below is a suggestion of how this can be modelled;
additional INDEXes, to speed up the response time of specific queries, remain to
be worked out:

create table acct_bgp_5mins (
        id INT(4) UNSIGNED NOT NULL AUTO_INCREMENT,
        agent_id INT(4) UNSIGNED NOT NULL,
        as_src INT(4) UNSIGNED NOT NULL,
        as_dst INT(4) UNSIGNED NOT NULL,
        peer_as_src INT(4) UNSIGNED NOT NULL,
        peer_as_dst INT(4) UNSIGNED NOT NULL,
        peer_ip_src CHAR(15) NOT NULL,
        peer_ip_dst CHAR(15) NOT NULL,
        as_path CHAR(21) NOT NULL,
        local_pref INT(4) UNSIGNED NOT NULL,
        packets INT UNSIGNED NOT NULL,
        bytes BIGINT UNSIGNED NOT NULL,
        stamp_inserted DATETIME NOT NULL,
        stamp_updated DATETIME,
        PRIMARY KEY (id),
        INDEX ...
) TYPE=MyISAM AUTO_INCREMENT=1;

create table acct_bgp_1hr (
        id INT(4) UNSIGNED NOT NULL AUTO_INCREMENT,
        agent_id INT(4) UNSIGNED NOT NULL,
        as_src INT(4) UNSIGNED NOT NULL,
        as_dst INT(4) UNSIGNED NOT NULL,
        peer_as_src INT(4) UNSIGNED NOT NULL,
        peer_as_dst INT(4) UNSIGNED NOT NULL,
        peer_ip_src CHAR(15) NOT NULL,
        peer_ip_dst CHAR(15) NOT NULL,
        as_path CHAR(21) NOT NULL,
        local_pref INT(4) UNSIGNED NOT NULL,
        packets INT UNSIGNED NOT NULL,
        bytes BIGINT UNSIGNED NOT NULL,
        stamp_inserted DATETIME NOT NULL,
        stamp_updated DATETIME,
        PRIMARY KEY (id),
        INDEX ...
) TYPE=MyISAM AUTO_INCREMENT=1;

XIIf. BGP daemon implementation concluding notes

The implementation supports 4-bytes ASN as well as IPv4, IPv6 and VPNv4 (MP-BGP)
address families; both IPv4 and IPv6 BGP sessions are supported. When storing
data via SQL, BGP primitives can be freely mix-and-matched with other primitives
(ie. L2/L3/L4) when customizing the SQL table (sql_optimize_clauses: true).
Environments making use of BGP Multi-Path are not currently supported; if you
are using this and would like to see it implemented, please get in touch. TCP
MD5 signature for BGP messages is also supported. For a review of all knobs and
features see the CONFIG-KEYS document.

XIII. Quickstart guide to setup a NetFlow/sFlow replicator

The 'tee' plugin is meant, in basic terms, to replicate NetFlow/sFlow data to
remote collectors. The plugin can also act transparently by preserving the
original IP address of the datagrams. Setting up a replicator is very easy. All
that is needed is where to listen for incoming packets, where to replicate them
to and, optionally, a filtering layer. Filtering is based on the standard
pre_tag_map infrastructure; only coarse-grained filtering against the
original source IP address is possible.

nfacctd_port: 2100
nfacctd_ip: X.X.X.X
!
plugins: tee[a], tee[b]
tee_receivers[a]: /path/to/tee_receivers_a.lst
tee_receivers[b]: /path/to/tee_receivers_b.lst
! tee_transparent: true
!
! pre_tag_map: /path/to/pretag.map
!
plugin_buffer_size: 10240
plugin_pipe_size: 1024000

An example of content of a tee_receivers map, ie. /path/to/tee_receivers_a.lst,
is as follows ('id' is the pool ID and 'ip' a comma-separated list of receivers
for that pool):

id=1    ip=X.X.X.X:2100
id=2    ip=Y.Y.Y.Y:2100,Z.Z.Z.Z:2100
! id=1  ip=X.X.X.X:2100                 tag=0
! id=2  ip=Y.Y.Y.Y:2100,Z.Z.Z.Z:2100    tag=100

Selective teeing allows filtering which pool of receivers has to receive which
datagrams. Tags are applied via a pre_tag_map; the one illustrated below applies
tag 100 to packets exported from agents A.A.A.A, B.B.B.B and C.C.C.C; in case
there was also an agent D.D.D.D exporting towards the replicator, its packets
would intuitively remain untagged. Tags are matched by a tee_receivers map, see
above the two pool definitions commented out containing the 'tag' keyword: that
definition would cause untagged packets (tag=0) to be replicated only to pool
#1 whereas packets tagged as 100 (tag=100) would be replicated only to pool #2.
More examples in the pretag.map.example and tee_receivers.lst.example files in
the examples/ sub-tree:

set_tag=100     ip=A.A.A.A
set_tag=100     ip=B.B.B.B
set_tag=100     ip=C.C.C.C

To enable the transparent mode, the tee_transparent line should be uncommented. It
preserves the original IP address of the NetFlow/sFlow sender while replicating
by essentially spoofing it. This feature is not global and can be freely enabled
only on a subset of the active replicators. It requires super-user permissions
in order to run.

Concluding note: the 'tee' plugin is not compatible with other plugins within
the same daemon instance. So, if in need of using pmacct for both collecting
and replicating data, two separate instances must be used (intuitively with the
replicator instance feeding the collector one).

XIV. Quickstart guide to setup the IS-IS daemon

pmacct 0.14.0 integrates an IS-IS daemon into the IP accounting collectors part
of the toolset. The daemon is run as a thread within the collector core process.
The idea is to receive data-plane information, ie. via NetFlow, sFlow, etc., and
control-plane information via IS-IS. Currently a single L2 P2P neighborship, ie.
over a GRE tunnel, is supported. The daemon is currently used for the purpose of
route resolution. A sample scenario could be that more specific internal routes
might be configured to get summarized in BGP while crossing cluster boundaries.

A pre-requisite for the use of the IS-IS daemon is that the pmacct package has to
be configured for compilation with threads; this line will do it:

./configure --enable-threads

XIVa. Preparing the collector for the L2 P2P IS-IS neighborship

It's assumed the collector sits on an Ethernet segment and has no direct link
(L2) connectivity to an IS-IS speaker, hence the need to establish a GRE tunnel.
While extensive literature and OS-specific examples exist on the topic, a brief
example for Linux, consistent with the rest of the chapter, is provided below:

ip tunnel add gre2 mode gre remote 10.0.1.2 local 10.0.1.1 ttl 255
ip link set gre2 up

The following configuration fragment is sufficient to set up an IS-IS daemon
which will bind to a network interface gre2 configured with IP address 10.0.1.1
in an IS-IS area 49.0001 and a CLNS MTU set to 1400:

isis_daemon: true
isis_daemon_ip: 10.0.1.1
isis_daemon_net: 49.0001.0100.0000.1001.00
isis_daemon_iface: gre2
isis_daemon_mtu: 1400
! isis_daemon_msglog: true

XIVb. Preparing the router for the L2 P2P IS-IS neighborship

Once the collector is ready, the remaining step is to configure a remote router
for the L2 P2P IS-IS neighborship. The following bit of configuration (based on
Cisco IOS) will match the above fragment of configuration for the IS-IS daemon:

interface Tunnel0
 ip address 10.0.1.2 255.255.255.252
 ip router isis
 tunnel source FastEthernet0
 tunnel destination XXX.XXX.XXX.XXX
 clns mtu 1400
 isis metric 1000
!
router isis
 net 49.0001.0100.0000.1002.00
 is-type level-2-only
 metric-style wide
 log-adjacency-changes
 passive-interface Loopback0
!

XV. Running the print plugin to write to flat-files

The print plugin was originally conceived to display data on standard output; with
pmacct 0.14 a new 'print_output_file' configuration directive is introduced to
allow the plugin to write to flat-files as well. Dynamic filenames are supported.
Output is text-based (no binary proprietary format) and can be JSON, CSV or
formatted ('print_output' directive). The interval between writes can be configured
via the 'print_refresh_time' directive. An example follows on how to write to
files on a 15-minute basis in CSV format:

print_refresh_time: 900
print_history: 15m
print_output: csv
print_output_file: /path/to/file-%Y%m%d-%H%M.txt
print_history_roundoff: m

Which, over time, would produce a series of files as follows:

-rw-------  1 paolo paolo   2067 Nov 21 00:15 file-20111121-0000.txt
-rw-------  1 paolo paolo   2772 Nov 21 00:30 file-20111121-0015.txt
-rw-------  1 paolo paolo   1916 Nov 21 00:45 file-20111121-0030.txt
-rw-------  1 paolo paolo   2940 Nov 21 01:00 file-20111121-0045.txt

JSON output requires compiling pmacct against Jansson library, which can be
found at the following URL: http://www.digip.org/jansson/ . pmacct can be
configured for compilation against the library using the --enable-jansson
switch. Please refer to the configure script help screen to supply custom
locations of Jansson library and/or headers.

Splitting data into time bins is supported via the print_history directive. When
enabled, time-related variable substitutions of dynamic print_output_file names
are determined using this value. print_refresh_time values shorter than
print_history are supported by setting print_output_file_append to true (which is
generally also recommended to prevent unscheduled writes to disk, ie. due to
caching issues, from overwriting existing file content). A sample config follows:

print_refresh_time: 300
print_output: csv
print_output_file: /path/to/%Y/%Y-%m/%Y-%m-%d/file-%Y%m%d-%H%M.txt
print_history: 15m
print_history_roundoff: m
print_output_file_append: true

XVI. Quickstart guide to setup GeoIP lookups

From pmacct 0.14.2 it is possible to perform GeoIP country lookups against a
Maxmind library database - and as a result of that two new traffic aggregation
primitives have been added to the set: src_host_country and dst_host_country.
Pre-requisites for the feature to work are: a) a working installation of the Maxmind
GeoIP library and headers and b) a Maxmind GeoIP country database (freely available).
Two steps to quickly start with GeoIP lookups in pmacct:

* to compile the pmacct package with support for GeoIP lookups, the code must
  be configured for compilation as follows: ./configure --enable-geoip [ ... ]
  The switches --with-geoip-libs and --with-geoip-includes can be of help if
  the library is installed in some non-standard location.

* include as part of the pmacct configuration the following fragment:

  ...
  geoip_ipv4_file: /path/to/GeoIP/GeoIP.dat
  aggregate: src_host_country, dst_host_country, ...
  ...

Concluding note: more fine-grained GeoIP lookup primitives (ie. cities, states,
counties, metro areas, zip codes, etc.) are not yet supported: should you be
interested into any of these, please get in touch.

XVII. Using pmacct as traffic/event logger

pmacct was originally conceived as a traffic aggregator. From pmacct 0.14.3
it is now possible to use pmacct as a traffic/event logger, such development
being fostered particularly by the use of NetFlow/IPFIX as generic transport,
see for example Cisco NEL and Cisco NSEL. Key to logging are time-stamping
primitives, timestamp_start and timestamp_end: the former records the likes
of libpcap packet timestamp, sFlow sample arrival time, NetFlow observation
time and flow first switched time; timestamp_end currently only makes sense
for logging flows via NetFlow. Still, the exact boundary between aggregation
and logging can be defined via the aggregation method, ie. no assumptions are
made. An example to log traffic flows follows:

! ...
!
plugins: print[traffic]
!
aggregate[traffic]: src_host, dst_host, peer_src_ip, peer_dst_ip, in_iface, out_iface, timestamp_start, timestamp_end, src_port, dst_port, proto, tos, src_mask, dst_mask, src_as, dst_as, tcpflags
print_output_file[traffic]: /path/to/traffic-%Y%m%d_%H%M.txt
print_output[traffic]: csv
print_history[traffic]: 5m
print_history_roundoff[traffic]: m
print_refresh_time[traffic]: 300
! print_cache_entries[traffic]: 9999991
print_output_file_append[traffic]: true
!
! ...

An example to log specifically CGNAT (Carrier Grade NAT) events from a
Cisco ASR1K box follows:

! ...
!
plugins: print[nat]
!
aggregate[nat]: src_host, post_nat_src_host, src_port, post_nat_src_port, proto, nat_event, timestamp_start
print_output_file[nat]: /path/to/nat-%Y%m%d_%H%M.txt
print_output[nat]: json
print_history[nat]: 5m
print_history_roundoff[nat]: m
print_refresh_time[nat]: 300
! print_cache_entries[nat]: 9999991
print_output_file_append[nat]: true
!
! ...

The two examples above can intuitively be merged in a single configuration
so to log down in parallel both traffic flows and events. To split flows
accounting from events, ie. to different files, a pre_tag_map and two print
plugins can be used as follows:

! ...
!
pre_tag_map: /path/to/pretag.map
!
plugins: print[traffic], print[nat]
!
pre_tag_filter[traffic]: 10
aggregate[traffic]: src_host, dst_host, peer_src_ip, peer_dst_ip, in_iface, out_iface, timestamp_start, timestamp_end, src_port, dst_port, proto, tos, src_mask, dst_mask, src_as, dst_as, tcpflags
print_output_file[traffic]: /path/to/traffic-%Y%m%d_%H%M.txt
print_output[traffic]: csv
print_history[traffic]: 5m
print_history_roundoff[traffic]: m
print_refresh_time[traffic]: 300
! print_cache_entries[traffic]: 9999991
print_output_file_append[traffic]: true
!
pre_tag_filter[nat]: 20
aggregate[nat]: src_host, post_nat_src_host, src_port, post_nat_src_port, proto, nat_event, timestamp_start
print_output_file[nat]: /path/to/nat-%Y%m%d_%H%M.txt
print_output[nat]: json
print_history[nat]: 5m
print_history_roundoff[nat]: m
print_refresh_time[nat]: 300
! print_cache_entries[nat]: 9999991
print_output_file_append[nat]: true
!
! ...

In the above configuration both plugins will log their data into 5-minute files,
based on the 'print_history[<plugin name>]: 5m' configuration directive, ie.
traffic-20130802-1345.txt traffic-20130802-1350.txt traffic-20130802-1355.txt
etc. Given that appending to the output file is set to true, data can also be
refreshed at intervals shorter than 300 secs. This is a snippet from
/path/to/pretag.map referred to above:

set_tag=10      ip=A.A.A.A      sample_type=flow
set_tag=20      ip=A.A.A.A      sample_type=event
set_tag=10      ip=B.B.B.B      sample_type=flow
set_tag=20      ip=B.B.B.B      sample_type=event
!
! ...
