There are plenty of resources about SNORT, but most of them treat it as a tool for passively watching network activity. This article describes how to use SNORT as an Intrusion Prevention System (IPS) that watches and controls not all network traffic, but only the traffic that can be selected with iptables (the Linux firewall) rules. I will not teach you how to write iptables rules; instead I will describe how to build and configure SNORT to work as an IPS.
How does it work?
After a network packet is received by SNORT, it passes through decoders and preprocessors, and only then reaches the detector, which applies the rules to it. The main purpose of the decoding stage is to extract the transport- and network-level data (IP, TCP, UDP) from the link-level framing (Ethernet, 802.11, Token Ring, etc.).
Preprocessors prepare the transport- and network-level data for rule matching. We will use the TCP preprocessors to perform the following:
- State tracking (checking that the protocol is honoured)
- Session reassembly (merging the data from the different packets of a session)
- Protocol normalization (fixing packet headers on the fly)
Setting up the preprocessors properly can give you a noticeable performance gain and reduce the amount of junk data reaching the detector. In addition, you can easily plug in your own preprocessor.
As a result, "superpackets" are formed before being sent to the detector. Rule application is then reduced to searching for rule signatures in these packets. A rule consists of a traffic description, a signature to search for, a description of the threat, and the system's reaction when it is detected.
The following instructions were tested on Ubuntu 11.04 x86. First, download the tarballs:
daq-0.5.tar.gz libdnet-1.11.tar.gz libnetfilter_queue-1.0.0.tar libnfnetlink-1.0.0.tar libpcap-1.1.1.tar.gz pcre-8.12.zip snort-2.9.0.3.tar.gz snortrules-snapshot-2903.tar.gz
Install the necessary packages:
$ sudo apt-get install bison flex gcc g++ zlib1g-dev
Configure and install in the following order:
pcre-8.12.zip libpcap-1.1.1.tar.gz libdnet-1.11.tar.gz libnfnetlink-1.0.0.tar libnetfilter_queue-1.0.0.tar daq-0.5.tar.gz
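Assuming all the archives sit in one directory, the whole sequence can be scripted. This is only a sketch of my own (it is not from the original toolchain); note that DAQ goes last because it needs libnfnetlink and libnetfilter_queue to be installed already:

```shell
#!/bin/sh
# Build the dependencies in order; missing archives are skipped with a note.
set -e
skipped=0
for archive in pcre-8.12.zip libpcap-1.1.1.tar.gz libdnet-1.11.tar.gz \
               libnfnetlink-1.0.0.tar libnetfilter_queue-1.0.0.tar daq-0.5.tar.gz; do
  if [ ! -f "$archive" ]; then
    echo "skipping $archive (not found)"
    skipped=$((skipped + 1))
    continue
  fi
  # Unpack according to the archive type and remember the directory name.
  case "$archive" in
    *.zip)    unzip -q "$archive"; dir=${archive%.zip} ;;
    *.tar.gz) tar xzf "$archive";  dir=${archive%.tar.gz} ;;
    *.tar)    tar xf "$archive";   dir=${archive%.tar} ;;
  esac
  # Configure with the forced prefix from the next step, then build and install.
  (cd "$dir" && ./configure --libdir=/usr/lib --includedir=/usr/include && make && sudo make install)
done
```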
You'd better force the install prefix to avoid surprises:
./configure --libdir=/usr/lib --includedir=/usr/include
If you see the message "Build NFQ DAQ module...: yes" and there were no compilation errors, then everything went well. Now build SNORT itself:
./configure --libdir=/usr/lib --includedir=/usr/include --enable-ipv6 --enable-gre --enable-targetbased --enable-decoder-preprocessor-rules --enable-active-response --enable-normalizer --enable-reload --enable-react --enable-zlib
--enable-ipv6 – IPv6 support (thanks, Cap!).
--enable-gre – GRE encapsulation support.
--enable-targetbased – enables reassembly of fragmented packets.
--enable-decoder-preprocessor-rules – enables preprocessor and decoder rules for catching abnormal traffic.
--enable-active-response – support for interrupting a session when a rule is matched.
--enable-normalizer – protocol normalizer support.
--enable-reload – reload rules without restarting SNORT.
--enable-react – support for breaking a session (RST) when a rule is matched.
--enable-zlib – traffic compression support.
Now we can run make && make install, or better, use the checkinstall tool to build a native package.
The SNORT configuration file can be quite daunting for newcomers, but once you understand its structure, things become much easier. Here is a shortened example that suits our needs:
# 1: Global variables to be used in the configuration and rules
var HOME_NET any
var RULE_PATH ../rules
# 2: Decoder configuration
config disable_decode_alerts
……
# 3: Detector setup
config pcre_match_limit: 3500
……
# 4: Preprocessor setup
# Be careful: preprocessors normalize packets on the fly, so you can get unexpected results
preprocessor normalize_ip4
….
# Preprocessor for fragmented packets
preprocessor frag3_global: max_frags 65536
…
# State control and session-building preprocessors
preprocessor stream5_global: max_tcp 8192, track_tcp yes, track_udp ….
…
# 6: Enable detailed output libraries
include classification.config
…..
# 7: Load rules
include $RULE_PATH/test.rules
To keep this guide simple, we will use only one rule, which tells the system to send an RST and break the session if a TCP packet contains the string "abc123". Here is the rule to add to the test.rules file:
reject tcp any any -> any any (msg:"Test pattern for snort abc123"; content:"abc123"; classtype:shellcode-detect; sid:310; rev:1;)
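Reading the rule field by field (the reject action both drops the matching packet and sends an RST back; the comments below are mine and are not valid inside a real rule, so don't copy them):

```
reject tcp any any -> any any (   # action, protocol, source addr/port -> dest addr/port
    msg:"Test pattern for snort abc123";  # text written to the alert log
    content:"abc123";                     # payload signature the detector searches for
    classtype:shellcode-detect;           # class from classification.config (sets the priority)
    sid:310;                              # unique rule ID
    rev:1;                                # rule revision
)
```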
Now we can start SNORT:
snort -Q --daq nfq --daq-var queue=2 -c /home/ubuntu/Downloads/snort/etc/snort.conf -l /var/log/snort -A full
-Q – inline (IPS) mode
--daq – packet acquisition (DAQ) module to use
--daq-var – parameters for the DAQ module
-c – path to the config file
-l – path to the log directory
-A full – full alert mode (verbose, with details and dumps)
-D – daemon (service) mode (use it once everything is configured and working)
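Before switching to daemon mode it is worth validating the configuration first; SNORT has a self-test switch (-T) that parses the config and exits:

```
snort -T -c /home/ubuntu/Downloads/snort/etc/snort.conf
```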
If you get errors, don't panic; just recheck everything.
The packet filter (iptables) can be configured to pass a received packet from kernel space to userspace, where it is handled by a third-party application and then returned to kernel space. In our case that third-party application is SNORT. Since SNORT is configured to work with a numbered queue (NFQUEUE), we have to route traffic into that queue:
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j NFQUEUE --queue-num 2
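While experimenting it helps to see whether packets actually hit the queue, and to be able to remove the rule quickly if SNORT goes down and the queue starts blackholing traffic. These are standard iptables invocations, not part of the original setup:

```
# show the PREROUTING rules with packet counters
iptables -t nat -L PREROUTING -n -v --line-numbers
# delete the rule again (same arguments as above, -D instead of -A)
iptables -t nat -D PREROUTING -p tcp --dport 8080 -j NFQUEUE --queue-num 2
```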
Why PREROUTING?
In the iptables processing model, a received packet hits the PREROUTING chain before any routing decision is made. Using SNORT at this stage lets us decide whether to process the packet locally or redirect it somewhere else, e.g. using NAT. The benefit of numbered queues is that you can create several of them and feed each one to an individual rule set. The disadvantage is that if SNORT dies, the service it protects becomes unavailable, because there is no one left to pass data between kernel space and userspace. Check that everything is OK immediately after SNORT is started. You can do this by using netcat to send the signature from our rule ('abc123'). If the connection is broken, everything works fine.
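A quick smoke test with netcat might look like this (the host and port are the example values used in this guide; adjust them to your setup):

```
printf 'abc123\n' | nc -w 3 172.16.249.130 8080
```

If SNORT is doing its job, the connection is reset as soon as the signature passes through the queue.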
SNORT launched as described above will write an alert to the logs every time the rule is matched:
[**] [1:310:1] Test pattern for snort abc123 [**]
[Classification: Executable Code was Detected] [Priority: 1]
01/19-12:03:12.155213136 172.16.249.1:56473 -> 172.16.249.130:8080
TCP TTL:64 TOS:0x0 ID:1241 IpLen:20 DgmLen:59 DF
***AP*** Seq: 0x9510F391 Ack: 0xC40C0E14 Win: 0x8218 TcpLen: 32
TCP Options (3) => NOP NOP TS: 125531844 9470333
In addition, it will dump some information from the packet where the signature was matched. At this point SNORT's job is finished and it is time for log analysers. There are various scripts, some even with web UIs of a sort (BASE, ACID, ...). Their main pain points are the inability to perform flexible analysis and inefficient database interaction, which makes them unable to cope with high load. In my opinion and experience, Splunk is the tool for this job. It manages your logs: it stores them, indexes them, and provides a promising user interface for working with them. Splunk is proprietary and limits you to 500 MB of logs in the free version (which is quite enough here). It also behaves well under high load and, most importantly, it has a plugin for SNORT logs. You can download it from the Splunk site and install it as follows:
dpkg -i splunk-4.2.1-98164-linux-2.6-intel.deb
Then open the web interface in your browser and install the SNORT plugin:
Add the data source as a file. The rest is easy, but note that you must manually set Source Type to snort_alert_full. When the setup is done, test it by sending a packet to the protected port with netcat, as proposed before. You should see something like this:
In the left column you can see the fields recognised by the parser; these are used to index the input data. You can analyse this data using Splunk's own query language, build graphs, plot sources on a map, and much more.
On the official site you can find guides for writing your own plugins.
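For example, a search over the indexed alerts might look like this (the sourcetype is the one set above; the src_ip field name is an assumption that depends on what the SNORT plugin's parser extracts):

```
sourcetype=snort_alert_full | top limit=10 src_ip
```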
Before launching SNORT in production, you'd better do the following:
- Keep a versioned rule store somewhere outside (e.g. on a server with SVN or another VCS).
- Prepare a plan of operations for the emergency case where SNORT goes down and traffic stops flowing.
- Prepare a recovery plan in case of hardware failure.
- Create a test rule (e.g. our 'abc123' match) and test the system every time you make changes.