Syed Jahanzaib Personal Blog to Share Knowledge !

July 29, 2021

MySQL: DROP tables older than X Period using BASH Script



Notes for Self:

The following script can delete single or multiple tables older than X time from the MySQL DB. It was pretty useful for the DMASOFTLAB RADIUS MANAGER CONNTRACK table, or for a customized SYSLOG-NG logging system, where a table is created daily in the database automagically, named after the current date in YYYY_MM_DD format (dates use the underscore sign). There are other solutions as well, like creating a stored procedure etc., but since this was an older MySQL version, I took this route.


1# FOR DMASOFTLAB RADIUS MANAGER

DMASOFTLAB Radius Manager has its own connection tracking module which stores data in date-wise tables (YYYY-M-DD format for table names). Deleting them from a bash script is not straightforward because older MySQL versions give a syntax error on the dash sign, therefore we had to wrap the table name in BACKTICKS.

Also most importantly: if we delete tables older than X period, note that DMA creates a table to hold the conntrack details called tabidx, and with a plain date criterion it would eventually be deleted too. Therefore I EXEMPTED it in the mysql statement so that it remains safe, else the conntrack module will not work.
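Since the backtick wrapping is the whole trick here, it can be sanity-checked in isolation; the table name below is just an example of the dash-style names involved:

```shell
# Build a DROP statement with the identifier wrapped in backticks,
# because older MySQL rejects bare table names containing a dash.
tbl="2021-07-29"
stmt="DROP TABLE \`$tbl\`;"
echo "$stmt"
```

The printed statement is what the script later passes to the mysql client.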

mkdir /temp
touch /temp/conntrack_trim.sh
chmod +x /temp/conntrack_trim.sh
nano /temp/conntrack_trim.sh

Now paste the following script, and modify it accordingly

#!/usr/bin/env bash
####!/bin/sh
#set -x
#################
# Script to delete TABLES OLDER THAN X period from a particular DB
# Made for syslog-ng to save disk space
# Created on: 2019
# Last Modified: 10-SEP-2021
# Syed Jahanzaib / aacable at hotmail dot com / aacable.wordpress.com
#################
#### MODIFY BELOW
#MYSQL DETAILS
SQLUSER="SQL_ID"
SQLPASS="SQL_PASSWORD"
DB="conntrack"
DATE=`date +'%Y-%m-%d'`
# You can change the unit from months to days as well; note MySQL interval units are singular (DAY, MONTH)
#DAYS="30 DAY"
DAYS="3 MONTH"
TMPFILE="/tmp/conntrack_tables_list-$DATE"
#################
#################
# Don't modify below
export MYSQL_PWD=$SQLPASS
CMD="mysql -u$SQLUSER --skip-column-names -s -e"
logger "$DB trimmer script started $DATE , it will delete tables from $DB older than $DAYS"
echo "$DB trimmer script started $DATE , it will delete tables from $DB older than $DAYS"
# This is one time step.
echo " Script Started @ $DATE "
# --- Now Delete $DEL_TABLE TABLE from $DB table ...
$CMD "SELECT table_name FROM INFORMATION_SCHEMA.TABLES WHERE table_schema = '$DB' AND table_name NOT LIKE '%tabidx%' AND create_time < NOW() - INTERVAL $DAYS;" > $TMPFILE
# Apply Count Loop Formula
num=0
while read -r TABLE_TO_DELETE
do
num=$((num+1))
# In the CMD below the table name is wrapped in backticks; it took an hour to sort out that the DASH - sign is not supported in DB/table names on older mysql versions
$CMD "use $DB; DROP TABLE \`$TABLE_TO_DELETE\`;"
DATE=`date +'%Y-%m-%d'`
echo "$DB TRIMMING script deleted TABLE $TABLE_TO_DELETE FROM $DB , confirm it Please. Jz"
logger "$DB TRIMMING script $DATE , $TABLE_TO_DELETE got deleted from $DB , confirm it plz"
done < $TMPFILE
echo "$DB TRIMMING script ended $DATE "
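You can dry-run the loop logic offline by feeding it sample output instead of the live INFORMATION_SCHEMA query; the table names below are made up:

```shell
# Simulate the --skip-column-names output of the table-list query,
# then walk it the same way the trim loop does (echo instead of DROP)
printf '2021-01-01\n2021-01-02\n' > /tmp/conntrack_tables_list-test
while read -r t
do
  echo "would drop \`$t\`"
done < /tmp/conntrack_tables_list-test
```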

You can now schedule it to run daily at 00:00 hours by editing the crontab

crontab -e

& add following entry

0 0 * * * /temp/conntrack_trim.sh

Save & Exit.


2# FOR SYSLOG-NG, to delete SINGLE table created 30 days before – JUNK TEST CODE

Syslog-NG generally creates tables with the underscore _ sign, therefore I modified the script as below

mkdir /temp
touch /temp/syslog_trim.sh
chmod +x /temp/syslog_trim.sh
nano /temp/syslog_trim.sh

Now paste the following script, and modify it accordingly

#!/usr/bin/env bash
####!/bin/sh
#set -x
# Script to delete a single table older than XX days from a particular DB
# Made for syslog-ng to save disk space
# Syed Jahanzaib / aacable at hotmail dot com / aacable.wordpress.com
#################
#################
#### MODIFY BELOW
#MYSQL DETAILS
SQLUSER="sql_user"
SQLPASS="sql_password"
DB="syslog"
DAYS=30
#################
#################

# Don't modify below
export MYSQL_PWD=$SQLPASS
CMD="mysql -u$SQLUSER --skip-column-names -s -e"
DATE=`date +'%Y-%m-%d'`
DEL_TABLE=`date +'%Y_%m_%d' -d "$DAYS day ago"`
logger "syslog_ng trim script started $DATE , it will delete the $DEL_TABLE table from $DB"
# This is one time step.
echo " Script Started @ $DATE "
# --- Now Delete $DEL_TABLE TABLE from $DB table ...
$CMD "use $DB; SHOW TABLES;"
$CMD "use $DB; DROP TABLE \`$DEL_TABLE\`;"
DATE=`date +'%Y-%m-%d'`
logger "TABLE_TRIMMING script ended $DATE , $DEL_TABLE table dropped from $DB , confirm it plz"
$CMD "use $DB; SHOW TABLES;"
echo "TABLE_TRIMMING script ended at $DATE , $DEL_TABLE table dropped from $DB , confirm it Please. Jz"
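The DEL_TABLE computation relies on GNU date's relative-date arithmetic; pinning the reference date makes the behaviour easy to verify (the date below is arbitrary):

```shell
# 30 days before 2021-02-01, formatted like a daily syslog table name
date -d "2021-02-01 -30 day" +'%Y_%m_%d'
```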

Regards,
Syed JAHANZAIB

March 5, 2021

Ubuntu Default 200GB Partition & its Extension

Filed under: Linux Related — Syed Jahanzaib / Pinochio~:) @ 8:53 AM

Thanks Mr. GerardBeekmans for detailed guidance.

Scenario:

I created one VM guest (on ESXi) & assigned a 900 GB disk to it. Ubuntu 16 Server was installed with the default installation options. But when I see the disk report, it shows only 200 GB of disk space, as shown below …

root@XXX-log:/temp# df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 900K 1.6G 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 196G 92G 94G 50% /
tmpfs 7.9G 8.0K 7.9G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sda2 976M 146M 764M 16% /boot
tmpfs 1.6G 0 1.6G 0% /run/user/0
root@XXX-log:/temp#

But when I run FDISK or other tools, it shows below

root@XXX-log:/temp# fdisk -l
Disk /dev/sda: 920 GiB, 987842478080 bytes, 1929379840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 2C824CE0-94E5-4515-B28D-8FA40983CFF5

Device Start End Sectors Size Type
/dev/sda1 2048 4095 2048 1M BIOS boot
/dev/sda2 4096 2101247 2097152 1G Linux filesystem
/dev/sda3 2101248 1929377791 1927276544 919G Linux filesystem

Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

The regular Ubuntu installer does this by default: it provisions about a 200 GB LVM-based Logical Volume (LV) for use. The rest of the space is left unused until you decide what to do with it (assign it to the existing root and extend it, or create additional volumes later).

If you use the alternative installer you get more advanced abilities regarding partitioning and can configure this right at the start vs. making changes after installation.

It looks like /dev/sda3 is the LVM physical volume. Some commands to run to check the size details:

pvdisplay
vgdisplay

The first command will show you the physical partition details and the volume group (vg) that is attached to it. The second command will show you the volume group details. Check for “VG Size” and “Free PE / Size”.
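If you want to script that check instead of eyeballing it, the "Free PE / Size" line can be parsed with awk; the sample text below is illustrative output, not from a real system:

```shell
# Extract the free-space figure from (sample) vgdisplay output
sample='  VG Size               919.00 GiB
  Free  PE / Size       184064 / 719.00 GiB'
printf '%s\n' "$sample" | awk '/Free  PE \/ Size/ {print $(NF-1), $NF}'
```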

If the VG itself already spans the entire physical volume (i.e. around that 900 GB size of your actual disk), then you can simply expand the Logical Volume named /dev/mapper/ubuntu--vg-ubuntu--lv and afterwards expand the filesystem on top of it.

If you simply want that single LV to take up the entire volume group's space without creating more volumes for now, this should do the trick:

lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

& afterwards df showed the correct space.

root@bbi-log:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 900K 1.6G 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 904G 94G 771G 11% /
tmpfs 7.9G 8.0K 7.9G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sda2 976M 146M 764M 16% /boot
tmpfs 1.6G 0 1.6G 0% /run/user/0
root@bbi-log:~#



These steps were based on default Ubuntu behaviour but your setup may be different.

Regards,
Syed Jahanzaib

March 2, 2021

Bash Script for General Customizable Report via Email

Filed under: Linux Related — Syed Jahanzaib / Pinochio~:) @ 12:37 PM

Note for Myself:

This post contains a sample bash script which, upon execution, queries various system components & sends the report via email. Useful to monitor a remote server. Further functions can be added, or existing ones customized according to requirements. I opted for a LOOP formula to show MySQL DB sizes in MB/GB using IF/ELSE statements, & some other fun stuff for myself as well.

The script is a bit messy & scrambled in terms of organized display, but it works fine. You may customize or trim it as per your taste.

Feel free to use as you like …

Regards,
Syed Jahanzaib


#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/etc
#set -x
# Version 1.1 / 10th January, 2014
# Last Modified / 5th-MARCH-2021
# Syed Jahanzaib / Web: https://aacable.wordpress.com / Email: aacabl AT hotmail DOT com
# This script generalized & customized DISK reports and email to admin
# Adjust below DATA fields accordingly. remove / add desired tasks.
# Settings various VARIABLES for the script
clear
# Colors Config ... [[ JZ ... ]]
COMPANY="ZAIB_LTD"
CREDITS="Powered by Syed Jahanzaib / 0333.3021.909 / aacable at hotmail dot com / https:// aacable . wordpress .com"
#MYSQL DETAILS
SRV="mysql"
SQLUSER="root"
SQLPASS="XXXXXX"
export MYSQL_PWD=$SQLPASS
CMD="mysql -uroot --skip-column-names -e"
ALL_DB_TEMP_LIST="/tmp/mysq_all_dbs.txt"
DB1="radius"
DB2="conntrack"
DB3="syslog"
SQL_ACCOUNTING_TABLE="radacct"
CMD="mysql -u$SQLUSER --skip-column-names -s -e"
EMAILMSG="/tmp/report1.log"
DB_HOLDER="/temp/temp_db_size_holder.log"
> $EMAILMSG
> $DB_HOLDER
HOSTNAME=`hostname`
INT_IP1=`hostname -I`
INT_IP2=`ip route get 1 | awk '{print $NF;exit}'`
EXT_IP=`dig +short myip.opendns.com @resolver1.opendns.com`
URL="google.com"
DNS=$(cat /etc/resolv.conf | sed '1 d' | awk '{print $2}')
# Check OS Type
os=$(uname -o)
###################################
# Check OS Release Version and Name
###################################
OS=`uname -s`
REV=`uname -r`
MACH=`uname -m`
GetVersionFromFile()
{
VERSION=`cat $1 | tr "\n" ' ' | sed s/.*VERSION.*=\ // `
}
if [ "${OS}" = "SunOS" ] ; then
OS=Solaris
ARCH=`uname -p`
OSSTR="${OS} ${REV}(${ARCH} `uname -v`)"
elif [ "${OS}" = "AIX" ] ; then
OSSTR="${OS} `oslevel` (`oslevel -r`)"
elif [ "${OS}" = "Linux" ] ; then
KERNEL=`uname -r`
if [ -f /etc/redhat-release ] ; then
DIST='RedHat'
PSUEDONAME=`cat /etc/redhat-release | sed s/.*\(// | sed s/\)//`
REV=`cat /etc/redhat-release | sed s/.*release\ // | sed s/\ .*//`
elif [ -f /etc/SuSE-release ] ; then
DIST=`cat /etc/SuSE-release | tr "\n" ' '| sed s/VERSION.*//`
REV=`cat /etc/SuSE-release | tr "\n" ' ' | sed s/.*=\ //`
elif [ -f /etc/mandrake-release ] ; then
DIST='Mandrake'
PSUEDONAME=`cat /etc/mandrake-release | sed s/.*\(// | sed s/\)//`
REV=`cat /etc/mandrake-release | sed s/.*release\ // | sed s/\ .*//`
elif [ -f /etc/os-release ]; then
DIST=`awk -F "PRETTY_NAME=" '{print $2}' /etc/os-release | tr -d '\n"'`
elif [ -f /etc/debian_version ] ; then
DIST="Debian `cat /etc/debian_version`"
REV=""
fi
if [ -f /etc/UnitedLinux-release ] ; then
DIST="${DIST}[`cat /etc/UnitedLinux-release | tr "\n" ' ' | sed s/VERSION.*//`]"
fi
OSSTR="${OS} ${DIST} ${REV}(${PSUEDONAME} ${KERNEL} ${MACH})"
fi
# Check Architecture
architecture=$(uname -m)
# Check Kernel Release
kernelrelease=$(uname -r)
#SET DATE TIME
set $(date)
time=`date |awk '{print $4}'`
DT=`date +%d.%b.%Y_time_%H.%M`
DATE=$(date +%Y-%m-%d)
DT_HMS=$(date +'%H:%M:%S')
FULL_DATE=`date`
TODAY=$(date +"%Y-%m-%d")
TODAYYMD=`date +"%d-%b-%Y"`
#Get ip which have default route
logger General report has started @ $DATE / $DT_HMS
# Check FREERADIUS online sessions
#SESSIONS=`$CMD "use radius; SELECT username FROM $SQL_ACCOUNTING_TABLE WHERE acctstoptime IS NULL;" |wc -l`
# Adding OS level Details in email message
# modify below disk name we want to monitor, make sure to change this
DISK="/dev/sda2"
DISKTOT=`df -h $DISK |awk '{print $2}'| sed -n 2p`
DISKUSED=`df -h $DISK |awk '{print $3}'| sed -n 2p`
DISKAVA=`df -h $DISK |awk '{print $4}'| sed -n 2p`
DISKUSEPER=`df -h $DISK |awk '{print $5}'| sed -n 2p`
MEMTOT=`free -m |awk '{print $2}'| sed -n 2p`
MEMUSED=`free -m |awk '{print $3}'| sed -n 2p`
MEMAVA=`free -m |awk '{print $4}'| sed -n 2p`
MEMUSEDPER=`free -m | grep Mem | awk '{print $3/$2 * 100.0}'`
MEMAVAPER=`free -m | grep Mem | awk '{print $4/$2 * 100.0}'`
#GMAIL Details
GMAILID="XXXX@gmail.com"
GMAILPASS="XXXXX"
TO1="aacableXhotmail.com"
SMTP="smtp.gmail.com:587"
#Collect all data in file
echo "
General Report for $HOSTNAME - $INT_IP1 - $EXT_IP

===============
NETWORK DETAILS:
===============

HOST: $HOSTNAME
Operating System Type $os
$OSSTR
Architecture : $architecture
Kernel Release : $kernelrelease
INT_IP1: $INT_IP1
INT_IP2: $INT_IP2
EXT_IP: $EXT_IP
DNS: $DNS

================
MYSQL DB REPORT:
================
" >> $EMAILMSG
# Fetch ALL DBs, calculate their sizes & convert the sizes to MB/GB
$CMD "show databases;" > $ALL_DB_TEMP_LIST
num=0
while read database
do
num=$((num+1))
DB=`echo $database | awk '{print $1}'`
MYSQLDBSIZE=`$CMD "SELECT table_schema '$DB', sum(data_length + index_length)/1024/1024 FROM information_schema.TABLES WHERE table_schema='$DB' GROUP BY table_schema;" | cut -f1 -d"." | sed 's/[^0-9]*//g'`
# Skip schemas that return no size at all
[ -z "$MYSQLDBSIZE" ] && continue
if [ "$MYSQLDBSIZE" -ge 1024 ]; then
MYSQLDBSIZE_FINAL=`echo "scale=2; $MYSQLDBSIZE/1024" |bc -l`
echo "$DB / $MYSQLDBSIZE_FINAL GB" | column -t >> $DB_HOLDER
else
MYSQLDBSIZE_FINAL=$MYSQLDBSIZE
echo "$DB / $MYSQLDBSIZE_FINAL MB" | column -t >> $DB_HOLDER
fi
done < $ALL_DB_TEMP_LIST
cat $DB_HOLDER | column -t >> $EMAILMSG
echo "
=============
Disk Details:
=============
" >> $EMAILMSG
df -h |grep sda2 | column -t >> $EMAILMSG
echo "

==============
MEMORY_REPORT:
==============
Total_RAM = $MEMTOT MB
Total_RAM_Used = $MEMUSED MB
Total_RAM_Available = $MEMAVA MB
Total_RAM_Used_Percent = $MEMUSEDPER %
Total_RAM_Available_Percent = $MEMAVAPER %
" > /tmp/temp_memory_report.log

cat /tmp/temp_memory_report.log | column -t >> $EMAILMSG

echo "
$CREDITS
" >> $EMAILMSG
# PRINT INFO SECTION #########
# Print Fetched Information on Screen , for info to see
cat $EMAILMSG
# EMAIL SECTION ##############
# Make sure you install sendEMAIL tool and test it properly before using email section.
#SEND EMAIL Alert As well using sendEMAIL tool using GMAIL ADDRESS.
# If you want to send email , use below ...
echo " - Sending SMS/EMAIL info ..."
#curl "http://$KHOST/cgi-bin/sendsms?username=$KID&password=$KPASS&to=$CELL2+$CELL3+$CELL4" -G --data-urlencode text@$SMSMSG
sendemail -u "$HOSTNAME - $EXT_IP - General Report- $DATE " -o tls=yes -s $SMTP -t $TO1 -xu $GMAILID -xp $GMAILPASS -f $GMAILID -o message-file=$EMAILMSG -o message-content-type=text
# log entry in /var/log/syslog
logger General Report has ended @ $DATE / $DT_HMS
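The memory-percentage figures are plain awk arithmetic over free's columns; with hard-coded sample values (used=5800 MB, total=16000 MB, purely illustrative) the same formula gives:

```shell
# Same formula as MEMUSEDPER, fed fixed numbers instead of `free` output
awk 'BEGIN { printf "%.2f\n", 5800/16000*100 }'
```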

Make sure to install BC to calculate size

apt-get -y install bc

Sample snapshot for Email Reporting !


Howto install ‘sendemail’ tool to send email via Gmail ID.

Well tested on Ubuntu 12.x; may work on other Ubuntu versions too

Quick copy paste …

apt-get -y install libio-socket-ssl-perl libnet-ssleay-perl perl
apt-get -y install sendemail


January 8, 2020

Syslog-ng – Part 3: Minimized logging to mysql with dynamic tables & trimming

Filed under: Linux Related, Mikrotik Related — Tags: , , — Syed Jahanzaib / Pinochio~:) @ 1:27 PM


Revision: 7th-JAN-2020


In continuation of existing posts related to syslog-ng, the following post illustrates how you can log only particular messages with pattern matching, and let syslog-ng create dynamic tables based on dates so that searching/querying becomes easy.

This task was required in relation to CGNAT logging; you may want to read about it here:

https://aacable.wordpress.com/2020/01/01/mikrotik-cgnat/

Hardware/Software used in this post:

  • Mikrotik Routerboard – firmware 6.46.x
  • Ubuntu 16.04 Server x64 along with syslog-ng version 3.25.1 on some decent hardware

Requirements:

Ubuntu OS


Ref: Installing latest version of syslog-ng

# Make sure to change the version; I used this command on Ubuntu 16.04. For Ubuntu 18, change it to 18.04

wget -qO - http://download.opensuse.org/repositories/home:/laszlo_budai:/syslog-ng/xUbuntu_16.04/Release.key | sudo apt-key add -
touch /etc/apt/sources.list.d/syslog-ng-obs.list
echo "deb http://download.opensuse.org/repositories/home:/laszlo_budai:/syslog-ng/xUbuntu_16.04 ./" > /etc/apt/sources.list.d/syslog-ng-obs.list
apt-get update
apt-get -y install apache2 mc wget make gcc mysql-server mysql-client curl phpmyadmin libdbd-pgsql aptitude libboost-system-dev libboost-thread-dev libboost-regex-dev libmongo-client0 libesmtp6 syslog-ng-mod-sql libdbd-mysql syslog-ng

Note: during the above package installation, it will ask you to enter a mysql/phpmyadmin password; you can use your root password to continue. After the installation finishes, you can check the syslog-ng version.

At the time I did the installation, I got this:

syslog-ng -V

root@nab-syslog:~# syslog-ng -V
syslog-ng 3 (3.30.1)
Config version: 3.29
Installer-Version: 3.30.1
Revision: 3.30.1-2
Compile-Date: Nov 19 2020 16:33:22
Module-Directory: /usr/lib/syslog-ng/3.30
Module-Path: /usr/lib/syslog-ng/3.30
Include-Path: /usr/share/syslog-ng/include
Error opening plugin module; module='mod-java', error='libjvm.so: cannot open shared object file: No such file or directory'
Available-Modules: syslogformat,azure-auth-header,hook-commands,linux-kmsg-format,kafka,afmongodb,json-plugin,cef,secure-logging,afsocket,pseudofile,kvformat,add-contextual-data,afamqp,riemann,http,appmodel,stardate,tfgetent,redis,cryptofuncs,sdjournal,afuser,pacctformat,graphite,confgen,geoip2-plugin,affile,basicfuncs,xml,mod-python,examples,afsmtp,timestamp,map-value-pairs,disk-buffer,afsnmp,system-source,afsql,afstomp,csvparser,tags-parser,afprog,dbparser
Enable-Debug: off
Enable-GProf: off
Enable-Memtrace: off
Enable-IPv6: on
Enable-Spoof-Source: on
Enable-TCP-Wrapper: on
Enable-Linux-Caps: on
Enable-Systemd: on

Status:

root@nab-syslog:~# service syslog-ng status
syslog-ng.service - System Logger Daemon
Loaded: loaded (/lib/systemd/system/syslog-ng.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2021-01-25 00:20:55 EST; 1min 26s ago
Docs: man:syslog-ng(8)
Main PID: 21596 (syslog-ng)
CGroup: /system.slice/syslog-ng.service
21596 /usr/sbin/syslog-ng -F

Jan 25 00:20:55 nab-syslog systemd[1]: Starting System Logger Daemon...
Jan 25 00:20:55 nab-syslog systemd[1]: Started System Logger Daemon.

Create Database in mySQL to store dynamic tables

Create Base Database for storing dynamically created date wise tables

mysql -uroot -pXXX -e "create database syslog;"

Now edit the syslog-ng file

nano /etc/syslog-ng/syslog-ng.conf

& use the following as a sample. I would recommend adding only the relevant parts — just don't do a blind copy/paste. This is a sample for demonstration purposes only …


Syslog-ng Sample File

@version: 3.30
@include "scl.conf"
# First, set some global options.
options { chain_hostnames(off); flush_lines(0); use_dns(no); use_fqdn(no);
dns_cache(no); owner("root"); group("adm"); perm(0640);
stats_freq(0); bad_hostname("^gconfd$");
};
########################
# Sources
########################
# This is the default behavior of sysklogd package
# Logs may come from unix stream, but not from another machine.
#
source s_src {
system();
internal();
};
########################
# Destinations
########################
# First some standard logfile
#
destination d_auth { file("/var/log/auth.log"); };
destination d_cron { file("/var/log/cron.log"); };
destination d_daemon { file("/var/log/daemon.log"); };
destination d_kern { file("/var/log/kern.log"); };
destination d_lpr { file("/var/log/lpr.log"); };
destination d_mail { file("/var/log/mail.log"); };
destination d_syslog { file("/var/log/syslog"); };
destination d_user { file("/var/log/user.log"); };
destination d_uucp { file("/var/log/uucp.log"); };
destination d_mailinfo { file("/var/log/mail.info"); };
destination d_mailwarn { file("/var/log/mail.warn"); };
destination d_mailerr { file("/var/log/mail.err"); };
destination d_newscrit { file("/var/log/news/news.crit"); };
destination d_newserr { file("/var/log/news/news.err"); };
destination d_newsnotice { file("/var/log/news/news.notice"); };
destination d_debug { file("/var/log/debug"); };
destination d_error { file("/var/log/error"); };
destination d_messages { file("/var/log/messages"); };
destination d_console { usertty("root"); };
destination d_console_all { file("/dev/tty10"); };
destination d_xconsole { pipe("/dev/xconsole"); };
destination d_ppp { file("/var/log/ppp.log"); };
########################
# Filters
########################
# Here come the filter options. With these rules, we can set which
# messages go where.

filter f_dbg { level(debug); };
filter f_info { level(info); };
filter f_notice { level(notice); };
filter f_warn { level(warn); };
filter f_err { level(err); };
filter f_crit { level(crit .. emerg); };
filter f_debug { level(debug) and not facility(auth, authpriv, news, mail); };
filter f_error { level(err .. emerg) ; };
filter f_messages { level(info,notice,warn) and
not facility(auth,authpriv,cron,daemon,mail,news); };
filter f_auth { facility(auth, authpriv) and not filter(f_debug); };
filter f_cron { facility(cron) and not filter(f_debug); };
filter f_daemon { facility(daemon) and not filter(f_debug); };
filter f_kern { facility(kern) and not filter(f_debug); };
filter f_lpr { facility(lpr) and not filter(f_debug); };
filter f_local { facility(local0, local1, local3, local4, local5,
local6, local7) and not filter(f_debug); };
filter f_mail { facility(mail) and not filter(f_debug); };
filter f_news { facility(news) and not filter(f_debug); };
filter f_syslog3 { not facility(auth, authpriv, mail) and not filter(f_debug); };
filter f_user { facility(user) and not filter(f_debug); };
filter f_uucp { facility(uucp) and not filter(f_debug); };

filter f_cnews { level(notice, err, crit) and facility(news); };
filter f_cother { level(debug, info, notice, warn) or facility(daemon, mail); };
filter f_ppp { facility(local2) and not filter(f_debug); };
filter f_console { level(warn .. emerg); };
########################
# Log paths
########################
log { source(s_src); filter(f_auth); destination(d_auth); };
log { source(s_src); filter(f_cron); destination(d_cron); };
log { source(s_src); filter(f_daemon); destination(d_daemon); };
log { source(s_src); filter(f_kern); destination(d_kern); };
log { source(s_src); filter(f_lpr); destination(d_lpr); };
log { source(s_src); filter(f_syslog3); destination(d_syslog); };
log { source(s_src); filter(f_user); destination(d_user); };
log { source(s_src); filter(f_uucp); destination(d_uucp); };
log { source(s_src); filter(f_mail); destination(d_mail); };
log { source(s_src); filter(f_news); filter(f_crit); destination(d_newscrit); };
log { source(s_src); filter(f_news); filter(f_err); destination(d_newserr); };
log { source(s_src); filter(f_news); filter(f_notice); destination(d_newsnotice); };
log { source(s_src); filter(f_debug); destination(d_debug); };
log { source(s_src); filter(f_error); destination(d_error); };
log { source(s_src); filter(f_messages); destination(d_messages); };
log { source(s_src); filter(f_console); destination(d_console_all);
destination(d_xconsole); };
log { source(s_src); filter(f_crit); destination(d_console); };
@include "/etc/syslog-ng/conf.d/*.conf"

######## Zaib Section Starts here
# Accept connection on UDP
source s_net { udp (); };

# Adding filter for our Mikrotik Routerboard, store logs in FILE as primary
# MIKROTIK ###########

# This entry will LOG all information coming from this IP, change this to match your mikrotik NAS
filter f_mikrotik_192.168.0.1 { host("192.168.0.1"); };
# add info in LOG (Part1)
destination df_mikrotik_192.168.0.1 {
file("/var/log/zlogs/${HOST}.${YEAR}.${MONTH}.${DAY}.log"
template-escape(no));
};
source s_mysql {
udp(port(514));
tcp(port(514));
};

# Store Logs in MYSQL DB as secondary # add info in MYSQL (Part2)
destination d_mysql {
sql(type(mysql)
host("localhost")
# MAKE SURE TO CHANGE CREDENTIALS
username("root")
password("XXXXX")
database("syslog")
table("${R_YEAR}_${R_MONTH}_${R_DAY}")
columns( "id int(11) unsigned not null auto_increment primary key", "host varchar(40) not null", "date datetime", "message text not null")
values("0", "$FULLHOST", "$R_YEAR-$R_MONTH-$R_DAY $R_HOUR:$R_MIN:$R_SEC", "$MSG")
indexes("id"));
};
log {
source(s_net);
filter(f_mikrotik_192.168.0.1);
destination(d_mysql);
};

IMPORTANT:

Create a ‘zlogs‘ folder in /var/log, so that Mikrotik logs are also saved in a separate file if required by you

mkdir /var/log/zlogs

Mikrotik rule to LOG Forward chain

Now we need to create a rule in the Mikrotik FILTER section so that it can log all packets being forwarded to/from PPPoE users. Make sure that in the source address list you select your local PPPoE users' pool, to avoid unrelated excessive logging. In the example below we log NEW TCP connections only.

LOG SIZE example: at one ISP with around 1200+ online users, the log size for TCP connections was around 25 GB. To lower the size, I configured it to log only new TCP connections, which reduced the DB size by 50%.

/ip firewall filter
add action=log chain=forward connection-state=new protocol=tcp src-address-list=pppoe_allowed_users

Mikrotik rule to send LOG to SYSLOG-NG Server

/system logging action
add name=syslogng remote=192.168.101.1 target=remote
# Change IP address pointed towards syslog server

/system logging
set 0 topics=info,!firewall
add action=syslogng topics=firewall

Restart Syslog-ng server

Now restart syslog-ng service

service syslog-ng restart

and you will see the dynamic tables created as follows

mysql -uroot -pXXXXX
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 411
Server version: 5.7.28-0ubuntu0.18.04.4-log (Ubuntu)
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use syslog;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+------------------+
| Tables_in_syslog |
+------------------+
| 2020_01_08 |
+------------------+
1 row in set (0.00 sec)

mysql> describe 2020_01_08;
+---------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------+------------------+------+-----+---------+----------------+
| id | int(11) unsigned | NO | PRI | NULL | auto_increment |
| host | varchar(40) | NO | | NULL | |
| date | datetime | YES | | NULL | |
| message | text | NO | | NULL | |
+---------+------------------+------+-----+---------+----------------+
4 rows in set (0.00 sec)

& you can then see data being inserted into the table as soon as logs are received from remote devices

2020-01-08T07:49:43.020811Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:49:28', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto TCP (ACK,PSH), 172.16.0.2:57193->172.217.19.174:443, NAT (172.16.0.2:57193->101.11.11.252:2244)->172.217.19.174:443, len 79')
2020-01-08T07:49:43.031281Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:49:28', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto TCP (ACK,FIN), 172.16.0.2:57096->3.228.94.102:443, NAT (172.16.0.2:57096->101.11.11.252:2219)->3.228.94.102:443, len 40')
2020-01-08T07:49:43.041420Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:49:38', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto UDP, 172.16.0.2:49247->216.58.208.234:443, NAT (172.16.0.2:49247->101.11.11.252:2202)->216.58.208.234:443, len 1378')
2020-01-08T07:49:43.051112Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:49:38', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto UDP, 172.16.0.2:49247->216.58.208.234:443, NAT (172.16.0.2:49247->101.11.11.252:2202)->216.58.208.234:443, len 1378')
2020-01-08T07:49:43.061280Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:49:39', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto UDP, 172.16.0.2:49760->172.217.19.1:443, NAT (172.16.0.2:49760->101.11.11.252:2202)->172.217.19.1:443, len 1378')
2020-01-08T07:49:43.071449Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:49:39', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto UDP, 172.16.0.2:49760->172.217.19.1:443, NAT (172.16.0.2:49760->101.11.11.252:2202)->172.217.19.1:443, len 1378')
2020-01-08T07:49:44.828993Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:49:44', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto UDP, 172.16.0.2:53503->216.58.208.234:443, NAT (172.16.0.2:53503->101.11.11.252:2203)->216.58.208.234:443, len 827')
2020-01-08T07:49:44.851034Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:49:44', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto UDP, 172.16.0.2:53503->216.58.208.234:443, NAT (172.16.0.2:53503->101.11.11.252:2203)->216.58.208.234:443, len 827')
2020-01-08T07:51:37.518276Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:51:37', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto TCP (ACK), 172.16.0.2:57202->91.195.240.126:80, NAT (172.16.0.2:57202->101.11.11.252:2260)->91.195.240.126:80, len 41')
2020-01-08T07:51:37.522015Z 430 Query INSERT INTO 2020_01_08 (id, host, date, message) VALUES ('0', '101.11.11.252', '2020-01-08 12:51:37', 'forward: in: out:ether1-agp-wan, src-mac d0:bf:9c:f7:88:76, proto TCP (ACK), 172.16.0.2:57202->91.195.240.126:80, NAT (172.16.0.2:57202->101.11.11.252:2260)->91.195.240.126:80, len 41')

(Screenshot: syslog-ng dynamic table data viewed in phpMyAdmin)


TIPS

Deleting all tables matching a name filter inside a particular DB


#!/bin/bash
# drop tables matching a name filter
force=1
u=root
p=SQLPASS
db=syslog
filter=users_
for t in $(mysql -u $u -p$p -D $db -Bse 'show tables' | grep "$filter"); do
echo "Dropping $t"
[[ $force -eq 1 ]] && mysql -u $u -p$p -D $db -Bse "drop table \`$t\`"
done
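Before pointing the script at a live DB, the filter can be dry-run against a sample table list; the names below are made up:

```shell
# Simulate `show tables` output and apply the same grep filter
printf 'users_2020_01\nusers_2020_02\nradacct\n' | grep 'users_'
```

Only the matching tables come through, so only those would be dropped.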

Regards,
Syed Jahanzaib

December 10, 2019

Short notes for UNBOUND Caching DNS Server under Ubuntu 18

Filed under: Linux Related — Syed Jahanzaib / Pinochio~:) @ 12:05 PM


Installation of the UNBOUND DNS server for a local network is fairly simple, but I encountered some hurdles setting it up on Ubuntu 18, therefore I took notes on how I resolved them in this post, for reference purposes.

After a fresh installation of Ubuntu 18, it's a good idea to keep your system time synced with an NTP source.

apt-get -y install ntp ntpdate
# Change timezone as per your local
cp /usr/share/zoneinfo/Asia/Karachi /etc/localtime
sudo /etc/init.d/ntp restart

Install UNBOUND DNS Server

Step#1

apt-get install -y unbound

Step#2

#Additional notes for Ubuntu 18 version

The problem with Ubuntu 18.04 is the systemd-resolved service, which listens on port 53 and therefore conflicts with the unbound service.

Edit the file /etc/systemd/resolved.conf

nano /etc/systemd/resolved.conf 

& set the following option (uncomment the line if needed):

DNSStubListener=no

Now reboot

shutdown -r now

You can now confirm that port 53 has been freed up:

netstat -tulpn | grep :53

Step#3

Some housekeeping stuff

sudo service systemd-resolved stop
sudo rm -f /etc/resolv.conf
sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
sudo service systemd-resolved start

Step#4

Edit the existing UNBOUND configuration file for customization…

nano /etc/unbound/unbound.conf

Example of unbound.conf

# Unbound configuration file for Debian.
server:
# Use the root servers key for DNSSEC
#auto-trust-anchor-file: "/var/lib/unbound/root.key"
# Enable logs
chroot: ""
#verbosity (log level from 0 to 4, 4 is debug)
#verbosity: 1
#logfile: "/var/log/unbound/unbound.log"
#log-queries: yes
#use-syslog: (do not write logs in syslog file in ubuntu /var/log/syslog -zaib)
use-syslog: no
#interface (interfaces on which Unbound will be launched and requests will be listened to)
# Respond to DNS requests on all interfaces
interface: 0.0.0.0
# DNS request port, IP and protocol
port: 53
do-ip4: yes
do-ip6: no
do-udp: yes
do-tcp: yes

# Authorized IPs to access the DNS Server / access-control (determines whose requests are allowed to be processed)
access-control: 127.0.0.0/8 allow
access-control: 10.0.0.0/8 allow
access-control: 172.16.0.0/16 allow
access-control: 192.168.0.0/16 allow
access-control: 101.0.0.0/8 allow

# Root servers information (To download here: ftp://ftp.internic.net/domain/named.cache)
#root-hints: "/var/lib/unbound/root.hints"

# Hide DNS Server info
hide-identity: yes
hide-version: yes

# Improve the security of your DNS Server (Limit DNS Fraud and use DNSSEC)
harden-glue: yes
harden-dnssec-stripped: yes

# Rewrite URLs written in CAPS
use-caps-for-id: yes

# TTL Min (Seconds, I set it to 7 days)
cache-min-ttl: 604800
# TTL Max (Seconds, I set it to 14 days)
cache-max-ttl: 1209600
# Enable the prefetch
prefetch: yes

# Number of maximum threads CORES to use / zaib
num-threads: 4

### Tweaks and optimizations
# Number of slabs to use (Must be a multiple of num-threads value)
msg-cache-slabs: 8
rrset-cache-slabs: 8
infra-cache-slabs: 8
key-cache-slabs: 8
# Cache and buffer size (in mb)
rrset-cache-size: 51m
msg-cache-size: 25m
so-rcvbuf: 1m

# Make sure your DNS Server treat your local network requests
#private-address: 101.0.0.0/8

# Add an unwanted reply threshold to clean the cache and avoid when possible a DNS Poisoning
unwanted-reply-threshold: 10000

# Authorize or not the localhost requests
do-not-query-localhost: no

# Use the root.key file for DNSSEC
#auto-trust-anchor-file: "/var/lib/unbound/root.key"
val-clean-additional: yes
include: "/etc/unbound/unbound.conf.d/*.conf"
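The cache-min-ttl / cache-max-ttl values above are plain seconds; a quick sanity check of the arithmetic behind the "7 days" and "14 days" comments:

```shell
# cache-min-ttl / cache-max-ttl are specified in seconds (86400 per day)
seconds_per_day=86400
min_ttl=$((7  * seconds_per_day))   # 7 days  = 604800
max_ttl=$((14 * seconds_per_day))   # 14 days = 1209600
echo "cache-min-ttl: $min_ttl"
echo "cache-max-ttl: $max_ttl"
```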

Example of /etc/unbound/myrecords.conf

You can use this file to add your custom records as well.

Create new file at

nano /etc/unbound/myrecords.conf
local-zone: "doubleclick.net" redirect
local-data: "doubleclick.net A 127.0.0.1"
local-zone: "googlesyndication.com" redirect
local-data: "googlesyndication.com A 127.0.0.1"
local-zone: "googleadservices.com" redirect
local-data: "googleadservices.com A 127.0.0.1"
local-zone: "google-analytics.com" redirect
local-data: "google-analytics.com A 127.0.0.1"
local-zone: "ads.youtube.com" redirect
local-data: "ads.youtube.com A 127.0.0.1"
local-zone: "adserver.yahoo.com" redirect
local-data: "adserver.yahoo.com A 127.0.0.1"
local-zone: "1.com" redirect
local-data: "1.com A 0.0.0.0"
local-data: "zaib.com A 1.2.3.4"
local-data: "zaib2.com A 1.2.3.4"

Once all done, restart the unbound service by

service unbound restart
OR
service unbound reload

Test if UNBOUND service is started successfully.

service unbound status

Result:

● unbound.service - Unbound DNS server
Loaded: loaded (/lib/systemd/system/unbound.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2019-12-10 12:28:59 PKT; 2s ago
Docs: man:unbound(8)
Process: 1588 ExecStartPre=/usr/lib/unbound/package-helper root_trust_anchor_update (code=exited, status=0/SUCCESS)
Process: 1576 ExecStartPre=/usr/lib/unbound/package-helper chroot_setup (code=exited, status=0/SUCCESS)
Main PID: 1610 (unbound)
Tasks: 4 (limit: 2290)
CGroup: /system.slice/unbound.service
└─1610 /usr/sbin/unbound -d

Dec 10 12:28:58 u18 systemd[1]: Starting Unbound DNS server...
Dec 10 12:28:59 u18 package-helper[1588]: /var/lib/unbound/root.key has content
Dec 10 12:28:59 u18 package-helper[1588]: success: the anchor is ok
Dec 10 12:28:59 u18 unbound[1610]: [1575962939] unbound[1610:0] warning: so-rcvbuf 1048576 was not granted. Got 425984. To fix: start with root permissions(linux) or sysctl bigger net.core.rmem_max
Dec 10 12:28:59 u18 unbound[1610]: [1575962939] unbound[1610:0] notice: init module 0: subnet
Dec 10 12:28:59 u18 unbound[1610]: [1575962939] unbound[1610:0] notice: init module 1: validator
Dec 10 12:28:59 u18 unbound[1610]: [1575962939] unbound[1610:0] notice: init module 2: iterator
Dec 10 12:28:59 u18 unbound[1610]: [1575962939] unbound[1610:0] info: start of service (unbound 1.6.7).
Dec 10 12:28:59 u18 systemd[1]: Started Unbound DNS server.

Test if DNS server is responding to DNS queries

dig @127.0.0.1 bbc.com

1st Result: [check the Query time]

; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>> @127.0.0.1 bbc.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16313
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;bbc.com. IN A

;; ANSWER SECTION:
bbc.com. 86400 IN A 151.101.192.81
bbc.com. 86400 IN A 151.101.128.81
bbc.com. 86400 IN A 151.101.0.81
bbc.com. 86400 IN A 151.101.64.81

;; Query time: 971 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Tue Dec 10 07:04:21 UTC 2019
;; MSG SIZE rcvd: 100

2nd Result: [check the Query time]

root@u18:/etc/unbound/unbound.conf.d# dig @127.0.0.1 bbc.com

; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>> @127.0.0.1 bbc.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14171
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;bbc.com. IN A

;; ANSWER SECTION:
bbc.com. 86398 IN A 151.101.192.81
bbc.com. 86398 IN A 151.101.128.81
bbc.com. 86398 IN A 151.101.0.81
bbc.com. 86398 IN A 151.101.64.81

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Tue Dec 10 07:04:23 UTC 2019
;; MSG SIZE rcvd: 100

Compare the query time of the first response (971 msec) with the second (0 msec): the second query was answered from cache.
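The query-time comparison can also be pulled out with awk. A small sketch using a hard-coded sample line so it runs without a resolver; in real use you would pipe `dig @127.0.0.1 bbc.com` into the same awk program:

```shell
# Extract the millisecond value from dig's ";; Query time: N msec" line.
# The sample line is hard-coded here so the snippet runs offline.
sample=';; Query time: 971 msec'
qtime=$(printf '%s\n' "$sample" | awk '/Query time:/ {print $4}')
echo "query took ${qtime} ms"   # query took 971 ms
```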

 


Enabling the LOG file [recommended for troubleshooting purposes only]

Create a Log file and assign rights to write logs:

mkdir /var/log/unbound
touch /var/log/unbound/unbound.log
chmod -R 777  /var/log/unbound/

Now enable it in the unbound config file by uncommenting the verbosity, logfile and log-queries lines shown in the example configuration above.

An example of viewing logs:

sudo tail -f /var/log/unbound/unbound.log
sudo tail -f /var/log/syslog

UNBOUND.LOG

[1575963664] unbound[1962:3] info: 101.11.11.161 bbc.com.agp1. A IN
[1575963664] unbound[1962:3] info: resolving bbc.com.agp1. A IN
[1575963664] unbound[1962:3] info: response for bbc.com.agp1. A IN
[1575963664] unbound[1962:3] info: reply from  193.0.14.129#53
[1575963664] unbound[1962:3] info: query response was NXDOMAIN ANSWER
[1575963664] unbound[1962:3] info: validate(nxdomain): sec_status_secure
[1575963664] unbound[1962:3] info: validation success bbc.com.agp1. A IN
[1575963664] unbound[1962:3] info: 101.11.11.161 bbc.com.agp1. AAAA IN
[1575963664] unbound[1962:3] info: resolving bbc.com.agp1. AAAA IN
[1575963664] unbound[1962:3] info: response for bbc.com.agp1. AAAA IN
[1575963664] unbound[1962:3] info: reply from  199.7.83.42#53
[1575963664] unbound[1962:3] info: query response was NXDOMAIN ANSWER
[1575963664] unbound[1962:3] info: validate(nxdomain): sec_status_secure
[1575963664] unbound[1962:3] info: validation success bbc.com.agp1. AAAA IN
[1575963664] unbound[1962:1] info: 101.11.11.161 bbc.com. A IN
[1575963664] unbound[1962:1] info: resolving bbc.com. A IN
[1575963664] unbound[1962:1] info: resolving bbc.com. DS IN
[1575963664] unbound[1962:1] info: NSEC3s for the referral proved no DS.
[1575963664] unbound[1962:1] info: Verified that unsigned response is INSECURE
[1575963672] unbound[1962:0] info: 101.11.11.161 bbc.com. AAAA IN

Example of cache export and import:

unbound-control dump_cache > backup
unbound-control load_cache < backup

#Clear one site from cache

unbound-control flush_zone google.com

# View cached DNS contents or count

unbound-control dump_cache
unbound-control dump_cache | wc -l

Start UNBOUND in DEBUG mode

unbound -d -vvvv

Regards,
Syed Jahanzaib

November 26, 2019

Assigning friendly/fix name to USB device

Filed under: Linux Related — Syed Jahanzaib / Pinochio~:) @ 9:34 AM


[Photo: Teltonika USB modem]


Scenario:

We have a Teltonika USB modem connected to our Linux box (Ubuntu), & Kannel is configured as an SMS gateway for our network devices so that they can send info/alert SMS in case of any failure via the HTTP method.

/dev/ttyACM0 is the default port name on which this modem is detected, BUT it often happens that when the Linux box reboots, the modem is detected on a different port name like ttyACM1, which results in failure of Kannel's modem detection since the name is hardcoded in /etc/kannel/kannel.conf

To settle this issue, we pinned the USB device to a custom name, and in Kannel we used this fixed name, which sorted out the modem port name change issue.

Step#1

Run the following command

udevadm info -a -p $(udevadm info -q path -n /dev/ttyACM1)

& look for the following attributes & note down both

  • idVendor
  • idProduct

Step#2

Now edit or create a new file in /etc/udev/rules.d by

nano /etc/udev/rules.d/99-usb-serial.rules

& paste the following

SUBSYSTEM=="tty", ATTRS{idVendor}=="1d12", ATTRS{idProduct}=="3e11", SYMLINK+="gsm"

Make sure to change the idVendor & idProduct numbers that you noted in step #1.

Save & Exit.

Now issue the below commands to reload & re-apply the udev rules

sudo udevadm control --reload-rules
sudo udevadm trigger

If all goes well, look for your new device name gsm in the /dev folder

ls -l /dev/gsm

A successful result would be similar to this

lrwxrwxrwx 1 root root 7 Nov 26 09:04 /dev/gsm -> ttyACM1
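The same verification can be scripted. Below is a simulated sketch that creates the symlink in a temp directory instead of /dev (so it runs on any machine); in real use you would simply `readlink /dev/gsm`:

```shell
# Simulate the /dev/gsm -> ttyACM1 symlink check in a temp dir; the
# device and link names here mirror the udev rule above for illustration.
check_symlink() { readlink "$1" 2>/dev/null; }

tmp=$(mktemp -d)
touch "$tmp/ttyACM1"                        # stand-in for the real tty device
ln -s ttyACM1 "$tmp/gsm"                    # what the udev SYMLINK+= rule does
echo "gsm -> $(check_symlink "$tmp/gsm")"   # gsm -> ttyACM1
```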

Regards,
Syed Jahanzaib

 

July 17, 2019

BASH: Exporting MYSQL DB to Remote Server

Filed under: Linux Related — Syed Jahanzaib / Pinochio~:) @ 10:28 AM


Disclaimer: This post is shared just for reference & learning purposes. You must modify it and add more failsafe checks before using it in production.

Regards
Syed Jahanzaib


Scenario:

We are using a Freeradius server which uses MySQL as its backend DB. Ideally the MySQL server should have a replica server so that if the primary goes down due to any fault, the secondary replica can take over.

For high availability purposes we want to have a standby server. MySQL Master-Slave or Master-Master replication is ideal for real-time replication. We successfully implemented this model at a few sites, but replication requires constant monitoring, and at one place the secondary replica server backfired & caused data loss.

For one particular remote site we wanted to avoid the complications of replication. What we wanted was a standby server: the DB from the primary should be exported to the secondary server daily in the morning, and emails for the actions taken by the script should be sent to us.

We made a custom script that has been running successfully for quite some time.

The BASH script performs the following functions …

  • Checks secondary server PING response
  • Check secondary server SSH access
  • Checks primary server MYSQL DB access
  • Checks secondary server MYSQL DB access
  • Checks if the exported DB is of a valid size (I set the minimum to 10 KB; you may want to adjust this according to your setup)
  • If all OK, then export primary server DB, and import it to secondary server

Script Requirements:

https://aacable.wordpress.com/2011/11/25/howto-login-on-remote-mikrotik-linux-without-password-to-execute-commands/


BASH Script Code:

  • touch /temp/update_radius_from_10.0.0.1__TO__10.0.0.2.sh
  • chmod +x /temp/update_radius_from_10.0.0.1__TO__10.0.0.2.sh
  • nano /temp/update_radius_from_10.0.0.1__TO__10.0.0.2.sh
#!/bin/bash
clear
#set -x
# Version 1.0 / 10-July-2019
# Syed Jahanzaib / Web: https://aacable.wordpress.com / Email: aacable@hotmail.com
# This script exports mysqldb and restores it to second remote server
# Requires passwordless login on remote server using SSH keys
# Settings various VARIABLES for the script
# adding dns for resolving
echo "nameserver 8.8.8.8" > /etc/resolv.conf
#SET DATE TIME
set $(date)
time=`date |awk '{print $4}'`
YESTERDAY=`date --date='yesterday' +%Y-%m-%d`
TODAY=`date +"%d-%b-%Y__%T"`
SCRIPTST=`date +"%d-%b-%Y__%T"`
HOSTNAME=`hostname | sed 's/ //g'`
IP1=10.0.0.1
IP2=10.0.0.2
IP2ROLE="RADIUS"
IP2_SSH_PORT=22
SQL_DIR="sql_replica"
#MYSQL DETAILS
SQLUSER="root"
SQLPASS="TYPE.YOUR.SQL.ROOT.PASS"
export MYSQL_PWD=$SQLPASS
CMD="mysql -u$SQLUSER --skip-column-names -s -e"
DB="radius"
FILE="/$SQL_DIR/$HOSTNAME.$TODAY.IP.$IP1.sql"
GMAILID="YOUR_SENDER_GMAILID@gmail.com"
GMAILPASS="GMAIL_PASS"
ADMINMAIL1="aacableATATAThotmail.com"
COMPANY="ZAIB"
RESULT="/tmp/$IP2.$IP2ROLE.txt"
CLIENTS_FILE="/usr/local/etc/raddb/clients.conf"
PING_ATTEMPTS="2"
PING_RESULT="/tmp/$IP2.$IP2ROLE.ping.result.txt"
IP2_SSH_CHK="/tmp/$IP2.ssh.chk.txt"
touch $RESULT
touch $PING_RESULT
> $RESULT
> $PING_RESULT
rm -f /$SQL_DIR/*.sql
# Test PING to device
count=$(ping -c $PING_ATTEMPTS $IP2 | awk -F, '/received/{print $2*1}')
if [ $count -eq 0 ]; then
echo "- $COMPANY ALERT: $IP2 - $IP2ROLE is not responding to PING Attempts, cannot continue without it , Please check !"
echo "- $COMPANY ALERT: $IP2 - $IP2ROLE is not responding to PING Attempts, cannot continue without it , Please check !" > $PING_RESULT
sendemail -t $email -u "ALERT: $IP2 $IP2ROLE NOT RESPONDING!" -o tls=yes -s smtp.gmail.com:587 -t $ADMINMAIL1 -xu $GMAILID -xp $GMAILPASS -f $GMAILID -o message-file=$PING_RESULT -o message-content-type=text
exit 1
fi
echo "- Script start time: $SCRIPTST

This report contains DB export results.

- Source Server : $HOSTNAME / $IP1
- Destination Server : $IP2

- PING Result to $IP2 : OK"

echo "- Script start time: $SCRIPTST

This report contains DB export results.

- Source Server : $HOSTNAME / $IP1
- Destination Server : $IP2
- PING Result to $IP2 : OK" >> $RESULT

#Check if SSH is accessible
scp -q -P $IP2_SSH_PORT root@$IP2:/etc/lsb-release $IP2_SSH_CHK
# Verify if file is downloaded from remote server via ssh
if [ ! -f $IP2_SSH_CHK ]; then
echo -e "- $COMPANY ALERT: $IP2 - $IP2ROLE is not responding to passwordless SSH ACCESS, cannot continue without it , Please check !"
exit 1
fi
echo -e "- SSH Access to $IP2 : OK"
echo -e "- SSH Access to $IP2 : OK" >> $RESULT

# Check if $DB (in this case radius )is accessible or not, if NOT, then exit the script
RESULT_DB_CHK=`$CMD "SHOW DATABASES LIKE '$DB'"`
if [ "$RESULT_DB_CHK" != "$DB" ]; then
echo "- ALERT: $IP1 - DB $DB not accessible !!!"
echo "- ALERT: $IP1 - DB $DB not accessible !!!" >> $RESULT
sendemail -t $email -u "- ALERT: $IP1 - DB $DB not accessible" -o tls=yes -t $ADMINMAIL1 -xu $GMAILID -xp $GMAILPASS -f $GMAILID -o message-file=$RESULT -o message-content-type=text
exit 1
fi

echo "- $DB - Database accessed on $IP1 : OK" >> $RESULT

#############################################
######## START the BACKUP PROCESS ... #######
#############################################
# Checking if $SQL_DIR folder is previously present or not . . .
{
if [ ! -d "/$SQL_DIR" ]; then
echo -e "- ALERT: /$SQL_DIR folder not found, Creating it MYSQL EXPORT/DUMP backup should be placed there . . ."
mkdir /$SQL_DIR
else
echo -e "- INFO: $SQL_DIR folder is already present , so no need to create it, Proceeding further . . ."
fi
}

mysqldump -u$SQLUSER -p$SQLPASS --single-transaction=TRUE --ignore-table={radius.radacct} $DB > $FILE
# CHECK FILE SIZE AND COMPARE, IF ITS LESS , THEN ALERT
SIZE=`ls -lh $FILE | awk '{print $5}'`
SIZEB=`ls -l $FILE | awk '{print $5}'`
if [ $SIZEB -lt 1 ]
then
echo "- ALERT: DB export failed on $IP1 - Size = $SIZE OR $SIZEB Bytes"
echo "- ALERT: DB export failed on $IP1 - Size = $SIZE OR $SIZEB Bytes" >> $RESULT
sendemail -t $email -u "ALERT: DB export failed on $IP1 - Size = $SIZE OR $SIZEB Bytes" -o tls=yes -s smtp.gmail.com:587 -t $ADMINMAIL1 -xu $GMAILID -xp $GMAILPASS -f $GMAILID -o message-file=$RESULT -o message-content-type=text
exit 1
fi
#ssh -p $IP2_SSH_PORT root@$IP2 mkdir /$SQL_DIR
#scp -P $IP2_SSH_PORT $FILE_FINAL root@$IP2:/$SQL_DIR
#ssh -p $IP2_SSH_PORT root@$IP2 ls -lh /$SQL_DIR
# Import file in secondary radius
#ssh -p $IP2_SSH_PORT root@$IP2 "mysql -u$SQLUSER -p$SQLPASS $DB < $FILE
#mysql -h $IP2 -u$SQLUSER -p$SQLPASS $DB < $FILE
ssh -p $IP2_SSH_PORT root@$IP2 "mysql -u$SQLUSER -p$SQLPASS $DB" < $FILE
#scp -P $IP2_SSH_PORT $CLIENTS_FILE root@$IP2:/usr/local/etc/raddb/
ssh -p $IP2_SSH_PORT root@$IP2 'service freeradius restart'
SCRIPTET=`date +"%d-%b-%Y___%T"`

echo "- FILE NAME : $FILE
- FILE SIZE : $SIZE

- DONE : Backup from $IP1 to $IP2 have been Exported OK

- Script End Time: $SCRIPTET

Regards,
Syed Jahanzaib"

echo "- FILE NAME : $FILE
- FILE SIZE : $SIZE

- DONE : Backup from $IP1 to $IP2 have been Exported OK

- Script End Time: $SCRIPTET

Regards,
Syed Jahanzaib" >> $RESULT

sendemail -t $email -u "$TODAY $HOSTNAME DB Exported from $IP1 to $IP2 Report OK" -o tls=yes -s smtp.gmail.com:587 -t $ADMINMAIL1 -xu $GMAILID -xp $GMAILPASS -f $GMAILID -o message-file=$RESULT -o message-content-type=text

#cat $RESULT
rm $IP2_SSH_CHK
rm $RESULT
rm $PING_RESULT
rm $FILE
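The ping test at the top of the script depends on awk parsing ping's summary line; here is how that parsing behaves, sketched with a canned summary line so it runs offline:

```shell
# ping's summary line looks like:
#   "2 packets transmitted, 2 received, 0% packet loss, time 1001ms"
# Splitting on commas, field 2 is " 2 received"; multiplying by 1 forces
# awk's numeric conversion, leaving just the received-packet count.
summary='2 packets transmitted, 2 received, 0% packet loss, time 1001ms'
count=$(printf '%s\n' "$summary" | awk -F, '/received/{print $2*1}')
echo "received: $count"   # received: 2
```

A count of 0 means all ping attempts failed, which is exactly the condition the script's `if [ $count -eq 0 ]` branch alerts on.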

Email Report Sample:

[Screenshot: email report of successful replica export]


Cron schedule to run the script Daily at 7am

# To run the script daily at 7 AM in morning
00 07 * * * /temp/update_radius_from_10.0.0.1__TO__10.0.0.2.sh

# To run the script every 6 hours, at 30 minutes past the hour
30 */6 * * * /temp/update_radius_from_10.0.0.1__TO__10.0.0.2.sh

Regards,
Syed Jahanzaib

April 22, 2019

MySql Database Recovery from Raw Files

Filed under: Linux Related, Radius Manager — Syed Jahanzaib / Pinochio~:) @ 2:31 PM



Disclaimer: This worked under particular case. It may or may not work for everyone.

Scenario:

OS: Ubuntu 12.04 Server Edition / x86

MYSQL: Ver 14.14 Distrib 5.5.54, for debian-linux-gnu (i686) using readline 6.2

The OP was running radius for AAA. The disk became faulty for unknown reasons & the system was unable to boot from it. There was no database backup [a real example of bad practices], so restoration from mysqldump to a new system was not an option!

Requirements:

We needed to restore the database using the MySQL raw files. Luckily the faulty disk could be attached to another system & we were able to copy the core /var/lib/mysql/ folder (along with all sub-folders in it).


Quick & Dirty Restoration Step !

[Requires a good level of Linux / DB knowledge]

  • Setup a test SANDBOX, Install same level of OS along with MYSQL on new system/disk. Create databases / tables as required. Verify all is working by logging to mysql
  • Stop the MYSQL service.
  • Copy the folder /var/lib/mysql [copied from faulty disk] to this new box under /var/lib/mysql/  
  • Set the permission on newly copied files/folders
    chown mysql -R /var/lib/mysql/

After this point, try to start the MYSQL service. IF it starts successfully & you can see your DATA, then skip the below steps, ELSE continue through the below steps …

  • Edit the /etc/mysql/my.cnf & add following line under [mysqld] section
    innodb_force_recovery = 6
  • Start the MYSQL service & it will start in Safe Mode with limited functionality. Verify that you can log in to MYSQL by
    mysql -uroot -pPASS
  • If above step works, Export the Database backup using mysqldump cmd e.g:
    mysqldump -uroot -pSQLPASS   radius  >  radius_db_dump_.sql
  • Once done, open the file in nano or any other text editor, & verify that it contains the required data.

Now copy the radius_db_dump_.sql to safe location & you know what to do next 🙂

  • Import this mysqldump file to your working radius system !
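The "verify the dump contains data" step can be scripted as well. A minimal sketch using a fabricated stand-in file (the real file would be the radius_db_dump_.sql exported above):

```shell
# Basic sanity check for a mysqldump file: non-empty AND contains at
# least one CREATE TABLE statement. The dump file here is fabricated
# for the demo; point $dump at your real export instead.
dump=$(mktemp)
printf 'CREATE TABLE radcheck (id INT);\nINSERT INTO radcheck VALUES (1);\n' > "$dump"

if [ -s "$dump" ] && grep -q 'CREATE TABLE' "$dump"; then
  echo "dump looks valid"
else
  echo "dump looks empty or broken"
fi
```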

TIPS:


Make sure you have multistage backup strategies in place for any mission critical server.

Example for mysql Database, You can do following

  • If your server is VM, then VEEAM B&R will be your best friend & guardian, go for it
  • 1st Stage Backup: [Highly recommended for live replication]
    ideally, you should have at least 2 Replica servers & configure either Master-Master or Master-Slave Replication
  • 2nd Stage backup:
    Create bash scripts to export DB backups to a local folder on a daily basis (or hourly if required)
  • 3rd Stage backup:
    Attach external USB disk to the server, and in your backup script, add this usb as additional backup repository
  • 4th Stage backup:
    Configure DROPBOX and add it as additional backup repository
  • 5th Stage backup:
    The admin should manually copy the backup folders to his desktop, so that if all other backups fail, this copy comes in handy.

Regards,
Syed Jahanzaib

 

 

 

January 16, 2019

BASH script to monitor Cisco Switch Port Status

Filed under: Cisco Related, Linux Related — Syed Jahanzaib / Pinochio~:) @ 10:55 AM


The following script was designed for an OP who wanted to monitor his Cisco switch port status via a Linux-based bash script.

  • Created: February, 2016
  • Revision: January, 2019

 

OP Requirements:

  • We need a bash script that can acquire the port status of a Cisco switch using SNMP queries & act accordingly based on the results, e.g. send SMS/email etc,
  • The script should first check target device network connectivity by ping; if PING is NOT responding, exit,
  • If ping is OK, then check SNMP status; if SNMP is NOT responding, report the error & exit,
  • If ping / SNMP respond OK, then check the port status; if the port status is NOT UP, send an email/SMS alert 1 time until the next status change.

Hardware / Software Used in this post:

  • Cisco 3750 24 Gigabit Ports Switch
  • Ubuntu 12.04 Server Edition
  • Bash Script
  • SNMP support enabled on Cisco switch to query port status using MIB names

Solution:

I made the following script which checks PING/SNMP status, and then the port status of a Cisco 3750 switch. This is just an example; you can use your own techniques to achieve the same result. This is a fully tested and working script. There are many other ways to do the same, like using an NMS app such as Nagios or The Dude, which have good GUI control, so no need to do coding in the dark : )

Surely this contains some junk or unwanted sections, so you may want to trim it according to your taste and requirements.

Regards,
Syed Jahanzaib


  1. Install SNMP MIBS

First we need to make sure that the MIBs are installed. Do so by

sudo apt-get install -y snmp
apt-get install -y snmp-mibs-downloader
sudo download-mibs

After this, add the SNMP MIBs entry in

/etc/snmp/snmp.conf

by adding this line

mibs +ALL

Save & Exit

Now query your switch by following command to see if snmpwalk is working …

root@Radius:/temp# snmpwalk -v1 -c wl 10.0.0.1 IF-MIB::ifOperStatus

& you should see something like below if SNMP is working …

IF-MIB::ifOperStatus.1 = INTEGER: up(1)
IF-MIB::ifOperStatus.17 = INTEGER: up(1)
IF-MIB::ifOperStatus.5182 = INTEGER: down(2)
IF-MIB::ifOperStatus.5183 = INTEGER: down(2)
IF-MIB::ifOperStatus.5184 = INTEGER: down(2)
IF-MIB::ifOperStatus.10601 = INTEGER: up(1)
IF-MIB::ifOperStatus.10602 = INTEGER: down(2)
IF-MIB::ifOperStatus.10603 = INTEGER: down(2)
IF-MIB::ifOperStatus.10604 = INTEGER: down(2)
IF-MIB::ifOperStatus.10605 = INTEGER: up(1)
IF-MIB::ifOperStatus.10606 = INTEGER: up(1)
IF-MIB::ifOperStatus.10607 = INTEGER: up(1)
IF-MIB::ifOperStatus.10608 = INTEGER: up(1)
IF-MIB::ifOperStatus.10609 = INTEGER: up(1)
IF-MIB::ifOperStatus.10610 = INTEGER: up(1)
IF-MIB::ifOperStatus.10611 = INTEGER: up(1)
IF-MIB::ifOperStatus.10612 = INTEGER: up(1)
IF-MIB::ifOperStatus.10613 = INTEGER: up(1)
IF-MIB::ifOperStatus.10614 = INTEGER: up(1)
IF-MIB::ifOperStatus.10615 = INTEGER: up(1)
IF-MIB::ifOperStatus.10616 = INTEGER: up(1)
IF-MIB::ifOperStatus.10617 = INTEGER: up(1)
IF-MIB::ifOperStatus.10618 = INTEGER: up(1)
IF-MIB::ifOperStatus.10619 = INTEGER: up(1)
IF-MIB::ifOperStatus.10620 = INTEGER: up(1)
IF-MIB::ifOperStatus.10621 = INTEGER: up(1)
IF-MIB::ifOperStatus.10622 = INTEGER: up(1)
IF-MIB::ifOperStatus.10623 = INTEGER: up(1)
IF-MIB::ifOperStatus.10624 = INTEGER: up(1)
IF-MIB::ifOperStatus.10625 = INTEGER: down(2)
IF-MIB::ifOperStatus.10626 = INTEGER: down(2)
IF-MIB::ifOperStatus.10627 = INTEGER: down(2)
IF-MIB::ifOperStatus.10628 = INTEGER: down(2)
IF-MIB::ifOperStatus.14501 = INTEGER: up(1)

OR get the UP/DOWN result for a particular port (port 10):

snmpwalk -v1 -c wl 10.0.0.1 IF-MIB::ifOperStatus.10610 -Oqv

Output Result:

up
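The monitoring script below branches on exactly this `-Oqv` string. The branch logic can be sketched in isolation; the status value is hard-coded here instead of coming from snmpwalk:

```shell
# Branch on the "up"/"down" string that snmpwalk -Oqv prints for
# IF-MIB::ifOperStatus; the values below are hard-coded for illustration.
port_state_msg() {
  case "$1" in
    up)   echo "port is UP" ;;
    down) echo "port is DOWN - would send alert" ;;
    *)    echo "unknown status: $1" ;;
  esac
}

port_state_msg up     # port is UP
port_state_msg down   # port is DOWN - would send alert
```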

 

 


the Script!

  • mkdir /temp
  • cd /temp
  • touch monitor_sw_port.sh
  • chmod +x monitor_sw_port.sh
  • nano monitor_sw_port.sh

and paste the following; make sure to edit all info accordingly …

#!/bin/bash
#set -x
# Script to check Cisco Switch Port Status and send alert accordingly
# It will first check PING, then SNMP Status, then PORT status & act accordingly
# Email: aacable at hotmail dot com / http : // aacable . wordpress . com
# 15-Jan-2019
HOST="$1"
PORT="$2"
SNMP="public"
DEVNAME="ZAIB_Main_Switch"
HOSTNAME=`hostname`
TEMP="temp"
COMPANY="ZAIB (Pvt) Ltd."
DATE=`date`
# GMAIL DETAILS
GMAILID="MYGMAIL@gmail.com"
GMAILPASS="GMAIL_PASS"
ADMINMAIL1="aacableAThotmail.com"
SENDMAIL="/temp/sendEmail-v1.56/sendEmail"
# SMS RELATED and KANNEL INFO
# KANNEL SMS Gateway Info
KANNELURL="127.0.0.1:13013"
KANNELID="kannel"
KANNELPASS="KANNEL_PASS"
CELL1="03333021909"
PING_ATTEMPTS="2"
HOST_PING_STATUS="/$TEMP/$HOST.$PORT.ping"
HOST_PORT_STATUS="/$TEMP/$HOST.$PORT.port"
LAST_DOWNTIME_HOLDER="/$TEMP/$HOST.$PORT.last_down.status.txt"
touch $HOST_PING_STATUS
touch $HOST_PORT_STATUS
touch $LAST_DOWNTIME_HOLDER
# If ip parameters are missing, then inform & exit
if [ -z "$HOST" ];then
echo "Error: IP missing, Please use this,
./monitor_sw_port.sh 10.0.0.1 10601"
exit 1
fi
# If port parameters are missing, then inform & exit
if [ -z "$PORT" ];then
echo "Error: PORT number missing, Please use this,
./monitor_sw_port.sh 10.0.0.1 10601"
exit 1
fi
# Test PING to device
count=$(ping -c $PING_ATTEMPTS $HOST | awk -F, '/received/{print $2*1}')
if [ $count -eq 0 ]; then
echo "$HOST $DEVNAME is not responding to PING attempts, cannot continue without it [or disable the ping check] !"
exit 1
else
echo "- PING Result : OK"
fi
# Test SNMP Result of device
snmpwalk -v1 -c $SNMP $HOST SNMPv2-MIB::sysDescr.0 > /tmp/$HOST.$PORT.snmp.status.txt
if [ ! -s "/tmp/$HOST.$PORT.snmp.status.txt" ]; then
echo "- ALERT: ..... $HOST $DEVNAME is not responding to SNMP Request, Cannot continue without it ... Exit"
exit 1
else
echo "- SNMP Result : OK"
fi
# If all OK, then pull Port Description
PORT_DERSCRIPTION=`snmpwalk -v1 -c $SNMP $HOST IF-MIB::ifDescr.$PORT -Oqv`
# Check if folder exists, if not create one and continue
if [ ! -d "/$TEMP" ]; then
echo
echo
echo "/$TEMP folder not found, Creating it so all ping results should be saved there . . ."
mkdir /$TEMP
fi
### START ACTION
################################
### CHECK PORT STATUS - for UP #
################################
CHKPORT=`snmpwalk -v1 -c $SNMP $HOST IF-MIB::ifOperStatus.$PORT -Oqv`
#CHKPORT="up"
# If Port number does not exists, then inform and exit
if [ -z "$CHKPORT" ]; then
echo "ALERT: .... Port number $PORT NOT found on $HOST $DEVNAME , Please check Port Number, Exiting ..."
exit 1
fi
#########################################
# SMS/EMAIL Messages for PORT UP / DOWN #
#########################################
# Temporary file holder for PORT DOWN/UP storing sms/email
PORT_DOWN_MSG_HOLDER="/$TEMP/$HOST.$PORT.down.msg"
PORT_UP_MSG_HOLDER="/$TEMP/$HOST.$PORT.up.msg"
echo "ALERT:
$DEVNAME $HOST port $PORT $PORT_DERSCRIPTION is DOWN @ $DATE
$COMPANY" > $PORT_DOWN_MSG_HOLDER
echo "INFO:
$DEVNAME $HOST port $PORT $PORT_DERSCRIPTION is OK @ $DATE!
$COMPANY" > $PORT_UP_MSG_HOLDER

PORT_DERSCRIPTION=`snmpwalk -v1 -c $SNMP $HOST IF-MIB::ifDescr.$PORT -Oqv`
HOST_PORT_DOWN_ALERTONSCREEN="ALERT: .... $HOST $DEVNAME port number $PORT $PORT_DERSCRIPTION is DOWN @ $DATE"
HOST_PORT_UP_ALERTONSCREEN="INFO: .... $HOST $DEVNAME port number $PORT $PORT_DERSCRIPTION is OK @ $DATE"
# Check if port is UP
if [ "$CHKPORT" = "up" ]; then
echo -e "$HOST_PORT_UP_ALERTONSCREEN"
# Check if port is UP and its previous state was DOWN; if so, send UP sms/email
if [ $(grep -c "$HOST" "$HOST_PORT_STATUS") -eq 1 ]; then
echo "INFO: This port was previously DOWN, and now it is UP. Sending UP SMS 1 time only"
# Sending PORT UP ALERT via EMAIL
$SENDMAIL -u "$HOST_PORT_UP_ALERTONSCREEN" -o tls=yes -s smtp.gmail.com:587 -t $ADMINMAIL1 -xu $GMAILID -xp $GMAILPASS -f $GMAILID -o message-file=$PORT_UP_MSG_HOLDER -o message-content-type=text
# Sending PORT UP ALERT via SMS using KANNEL SMS Gateway
cat $PORT_UP_MSG_HOLDER | curl "http://$KANNELURL/cgi-bin/sendsms?username=$KANNELID&password=$KANNELPASS&to=$CELL1" -G --data-urlencode text@-
sed -i "/$HOST/d" "$HOST_PORT_STATUS"
fi
fi
##################################
### CHECK PORT STATUS - for DOWN #
##################################
if [ "$CHKPORT" = "down" ]; then
echo "$HOST_PORT_DOWN_ALERTONSCREEN"
#check if port status was previously UP, then act
if [ $(grep -c "$HOST" "$HOST_PORT_STATUS") -eq 1 ]; then
echo "ALERT: ..... $HOST $DEVNAME port $PORT $PORT_DERSCRIPTION is DOWN. SMS have already been sent."
fi
if [ $(grep -c "$HOST" "$HOST_PORT_STATUS") -eq 0 ]; then
echo "ALERT: ..... $HOST $DEVNAME port $PORT $PORT_DERSCRIPTION is now down! - SENDING PORT DOWN SMS ..."
echo "$HOST" > $HOST_PORT_STATUS
echo "PORT DOWN SMS for $HOST $DEVNAME port $PORT $PORT_DERSCRIPTION has been sent; it will be sent only 1 time until the next status change ..."
# Sending PORT DOWN ALERT via EMAIL
$SENDMAIL -u "$HOST_PORT_DOWN_ALERTONSCREEN" -o tls=yes -s smtp.gmail.com:587 -t $ADMINMAIL1 -xu $GMAILID -xp $GMAILPASS -f $GMAILID -o message-file=$PORT_DOWN_MSG_HOLDER -o message-content-type=text
# Sending PORT DOWN ALERT via SMS
cat $PORT_DOWN_MSG_HOLDER | curl "http://$KANNELURL/cgi-bin/sendsms?username=$KANNELID&password=$KANNELPASS&to=$CELL1" -G --data-urlencode text@-
fi
fi
####################
# SCRIPT ENDS HERE #
# SYED JAHANZAIB #
####################

Usage:

change the IP and port number.

  • /temp/monitor_sw_port.sh 10.0.0.1 10610

You can add a cron entry like this

# Check the remote host port status every 5 minutes
*/5 * * * * /temp/monitor_sw_port.sh 10.0.0.1 10610

RESULT:

SMS Result:

[Screenshot: SMS alert on port down]

Email Result:

[Screenshot: email alert on port down]

# Monitoring Port # 10 , when port is DOWN ...

root@Radius:/temp# ./monitor_sw_port.sh 10.0.0.1 10610
- PING Result : OK
- SNMP Result : OK
ALERT: .... 10.0.0.1 WL_Main_Switch port number 10610 GigabitEthernet2/0/10 is DOWN @ Tue Jan 15 12:44:45 PKT 2019
ALERT: ..... 10.0.0.1 WL_Main_Switch port 10610 GigabitEthernet2/0/10 is DOWN. SMS have already been sent.

root@Radius:/temp# ./monitor_sw_port.sh 10.0.0.1 10610
- PING Result : OK
- SNMP Result : OK
ALERT: .... 10.0.0.1 WL_Main_Switch port number 10610 GigabitEthernet2/0/10 is DOWN @ Tue Jan 15 12:44:51 PKT 2019
ALERT: ..... 10.0.0.1 WL_Main_Switch port 10610 GigabitEthernet2/0/10 is DOWN. SMS have already been sent.

# Monitoring Port # 10 , when port is UP now ...
root@Radius:/temp# ./monitor_sw_port.sh 10.0.0.1 10610
- PING Result : OK
- SNMP Result : OK
INFO: .... 10.0.0.1 WL_Main_Switch port number 10610 GigabitEthernet2/0/10 is OK @ Tue Jan 15 12:45:01 PKT 2019
INFO: This port was previously DOWN, and now it is UP. Sending UP SMS 1 time only
Jan 15 12:45:11 radius sendEmail[18700]: Email was sent successfully!
0: Accepted for delivery

# Monitoring Port # 10 , when port is working UP ...
root@Radius:/temp# ./monitor_sw_port.sh 10.0.0.1 10610
- PING Result : OK
- SNMP Result : OK
INFO: .... 10.0.0.1 WL_Main_Switch port number 10610 GigabitEthernet2/0/10 is OK @ Tue Jan 15 12:45:12 PKT 2019

September 27, 2018

DNSMASQ Short Notes to self

Filed under: Linux Related — Syed Jahanzaib / Pinochio~:) @ 10:15 AM


Dnsmasq is a lightweight, easy to configure DNS forwarder, designed to provide DNS (and optionally DHCP and TFTP) services to a small-scale network.

Compared to `BIND`, which is a bit complex for beginners to configure, `dnsmasq` is very easy and requires minimal configuration. This post is just a reference guide for myself.


Install DNSMASQ in Ubuntu !

apt-get update
sudo apt-get install -y dnsmasq

After this, edit the /etc/dnsmasq.conf file. For brevity I am pasting only the required important options (remove all other lines and use these only):

nano /etc/dnsmasq.conf

domain-needed
#IF you want to listen on multiple interface, add them (by duplicating following line)
interface=enp6s0
bogus-priv
strict-order
expand-hosts
cache-size=10000

After every change in the config, make sure to restart DNSMASQ service.

service dnsmasq restart


Forwarding Queries to Upstream DNS

By default, dnsmasq forwards all queries it cannot answer from /etc/hosts to the upstream DNS servers defined in /etc/resolv.conf, like below

cat /etc/resolv.conf

nameserver 127.0.0.1
nameserver 8.8.8.8

Make sure to set 127.0.0.1 as a nameserver on your network adapter as well, for example: dns-nameservers 127.0.0.1 8.8.8.8
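
To confirm that queries are actually being answered by dnsmasq, a quick check with dig can help; this assumes the dnsutils package is installed and dnsmasq is listening on 127.0.0.1:

```shell
# The ";; SERVER:" line of the output should show 127.0.0.1#53;
# repeating the same query should show a much lower query time (answered from cache)
dig @127.0.0.1 google.com
```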



Add DNS Records (static dns entries if required for local servers like media sharing etc)

To add customized domain entries (local overrides, effectively DNS spoofing), add the records to the /etc/hosts file:

cat /etc/hosts

127.0.0.1 localhost
1.2.3.4 zaib.com
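
Alternatively, dnsmasq's own `address` option in /etc/dnsmasq.conf can answer for an entire domain, including all of its subdomains, without touching /etc/hosts (restart dnsmasq afterwards):

```
# Resolve zaib.com and every subdomain of it to 1.2.3.4
address=/zaib.com/1.2.3.4
```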



Restart DNSMASQ Service

After every change in the config, make sure to restart dnsmasq service.

service dnsmasq restart

Monitor DNS traffic

dnstop is your best friend. For full details read:

http://dns.measurement-factory.com/tools/dnstop/dnstop.8.html
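
A minimal invocation for reference; the interface name `eth0` is an assumption (use your server's actual interface), and dnstop needs root to capture packets:

```shell
# Show top query sources and names on eth0, aggregating domains at 3 label levels
dnstop -l 3 eth0
```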


# ACL / Secure your DNS from open relay / flooding

To allow only specific IP ranges to query your DNS server, you can use the following bash script.

We have multiple IP pools, so we put them in a small text file; a small bash script reads the file and adds iptables rules accordingly.

Sample of /temp/localips.txt

10.0.0.0/8
172.16.0.0/16
192.168.0.0/16
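
A typo in this file would make iptables reject or silently mis-apply a rule, so a quick sanity check before loading can help. This is a rough sketch using a coarse regex (it checks the a.b.c.d/nn shape only, not octet ranges above 255):

```shell
#!/usr/bin/env bash
# check_cidr: coarse check that an entry looks like a.b.c.d or a.b.c.d/nn
check_cidr() {
  echo "$1" | grep -Eq '^[0-9]{1,3}(\.[0-9]{1,3}){3}(/[0-9]{1,2})?$'
}

IPFILE="${IPFILE:-/temp/localips.txt}"
if [ -f "$IPFILE" ]; then
  # Flag each line as OK or BAD before the firewall script consumes it
  while read -r IP; do
    [ -n "$IP" ] || continue
    if check_cidr "$IP"; then echo "OK: $IP"; else echo "BAD: $IP"; fi
  done < "$IPFILE"
fi
```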

Now you can execute the bash script manually or add it in /etc/rc.local file to execute on every reboot.

cat /etc/fw.sh

#!/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Very Basic Level of Firewall to allow DNS only for some ip range
# Script by Syed Jahanzaib
# 26-SEP-2018
#set -x

# Setting various Variables

#Local IP files which contains ip/ranges
IPFILE="/temp/localips.txt"

#Destination Port we want to restrict
DPORT="53"

#Destination Port type we want to restrict
DPORT_TYPE1="udp"
DPORT_TYPE2="tcp"

# Flush all previous iptables Rules
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X

# Allow localhost access to query DNS service
iptables -A INPUT -s 127.0.0.1 -p $DPORT_TYPE1 --dport $DPORT -j ACCEPT
iptables -A INPUT -s 127.0.0.1 -p $DPORT_TYPE2 --dport $DPORT -j ACCEPT

# LOOP - Read from localip.txt file , and apply iptables rules
for IP in $(cat $IPFILE); do echo "Allowing $IP for $DPORT_TYPE1 $DPORT Server queries access ..."; iptables -A INPUT -s $IP -p $DPORT_TYPE1 --dport $DPORT -j ACCEPT; done
for IP in $(cat $IPFILE); do echo "Allowing $IP for $DPORT_TYPE2 $DPORT Server queries access ..."; iptables -A INPUT -s $IP -p $DPORT_TYPE2 --dport $DPORT -j ACCEPT; done

# DROP all other requests going to DNS service
iptables -A INPUT -p $DPORT_TYPE1 --dport $DPORT -j DROP
iptables -A INPUT -p $DPORT_TYPE2 --dport $DPORT -j DROP

# Script ends here
# Syed Jahanzaib

Add this to /etc/rc.local so that it runs on every reboot!

Also note that if you have a large IP pool, it is better to use ipset, which is more efficient.
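
A sketch of the ipset variant (the set name `dns_clients` is my own choice; requires the ipset package and root). Because the kernel matches the whole set in one lookup, iptables then needs only one ACCEPT rule per protocol, no matter how many ranges the file holds:

```shell
# Create a hash:net set (idempotent with -exist) and load the ranges from the file
ipset create dns_clients hash:net -exist
while read -r IP; do
  [ -n "$IP" ] && ipset add dns_clients "$IP" -exist
done < /temp/localips.txt

# One ACCEPT per protocol matching the whole set, then DROP everything else on port 53
iptables -A INPUT -p udp --dport 53 -m set --match-set dns_clients src -j ACCEPT
iptables -A INPUT -p tcp --dport 53 -m set --match-set dns_clients src -j ACCEPT
iptables -A INPUT -p udp --dport 53 -j DROP
iptables -A INPUT -p tcp --dport 53 -j DROP
```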

SIMPLE QUEUE (MikroTik RouterOS) for traffic going to/from the DNS Server

/queue simple
add max-limit=10M/10M name="NS1 DNS Traffic 10mb limit" target=X.X.X.X/32

NSLOOKUP with details on a Windows box

nslookup -all - 8.8.8.8

Regards,
Syed Jahanzaib
