Syed Jahanzaib's Personal Blog to Share Knowledge!

August 26, 2014

MRTG graph for FREERADIUS Online Users

Filed under: Linux Related, Radius Manager — Tags: , — Syed Jahanzaib / Pinochio~:) @ 10:34 AM

Recently, at a network where multiple NAS devices were connected to a single centralized billing system (Radius Manager with FreeRADIUS as the backend engine), I had many MRTG-based graphs for each NAS, and a DUDE system monitoring various instances of the target systems, but there was no single graph showing the overall ONLINE users across all NASes. MRTG was configured on the main billing system, so to sort this out I used the following bash script and tagged it into the MRTG cfg file.

.

SCRIPT TO PRINT ONLINE SESSIONS IN FREERADIUS

First create the script

mkdir /temp
touch /temp/online.sh

chmod +x /temp/online.sh
nano /temp/online.sh

Now paste the following code [make sure to change the IP, username and password].

Note: I used this script with a Radius Manager based FreeRADIUS billing system.


#!/bin/bash

SQL_USERNAME=radius_username
SQL_DATABASE=radius
SQL_PASSWORD=your_password
SQL_SERVER=127.0.0.1
SQL_ACCOUNTING_TABLE=radacct
BACK_DAYS=3

SESSIONS=$(mysql -BN -u$SQL_USERNAME -p$SQL_PASSWORD -h $SQL_SERVER $SQL_DATABASE -e \
"SELECT COUNT(*) FROM $SQL_ACCOUNTING_TABLE \
WHERE acctstoptime IS NULL \
AND acctstarttime > NOW() - INTERVAL $BACK_DAYS DAY;")

# MRTG reads two values per target (in/out), so print the count twice
echo $SESSIONS
echo $SESSIONS

Save & Exit.
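Before wiring it into MRTG, it helps to confirm the output format. MRTG reads two values per target (in and out), which is why the count is printed twice. Here is a minimal sketch with a stubbed query so the format can be checked offline; `fake_query` is a hypothetical stand-in for the real mysql call:

```shell
#!/bin/bash
# Stand-in for the real mysql query so the format can be checked without a DB.
fake_query() { echo 137; }   # pretend 137 sessions are currently online

SESSIONS=$(fake_query)
# MRTG expects two values per target (in/out), hence two lines of output
echo "$SESSIONS"
echo "$SESSIONS"
```

If the output is two identical integer lines, MRTG will accept it as the in/out pair for a gauge target.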

.

MRTG.CFG FILE TO GENERATE MRTG GRAPH

Now create an MRTG cfg file and either tag it into your master MRTG config file or run it individually; it's up to you and your local design.


#Radius.cfg
# Total Radius Users
Target[Radius.users]: `/temp/online.sh`
Title[Radius.users]: Central Billing System Logged in Users (Total)
PageTop[Radius.users]: <H1> Central Billing System  Logged in Users (Total)</H1>
MaxBytes[Radius.users]: 1000
Colours[Radius.users]: B#8888ff,B#8888ff,B#5398ff,B#5398ff
Options[Radius.users]: gauge,nopercent,noo,integer,growright
LegendI[Radius.users]: Radius Logged in Users
LegendO[Radius.users]:
YLegend[Radius.users]: Radius Logged in Users (Total)
Legend1[Radius.users]: Radius Logged in Users (Total)
Legend2[Radius.users]:
Unscaled[Radius.users]: ymwd
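To tag this into a master MRTG setup, the usual approach is an Include directive in the master cfg plus a 5-minute cron entry; the paths below are examples, adjust them to your layout:

```
# in /etc/mrtg/mrtg.cfg (master file) - path is an example
Include: /etc/mrtg/radius.cfg

# crontab entry to poll every 5 minutes
*/5 * * * * env LANG=C /usr/bin/mrtg /etc/mrtg/mrtg.cfg --logging /var/log/mrtg.log
```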

.

.

Regards,
Syed Jahanzaib

August 18, 2014

Vmware ESXI: You cannot use the vSphere client to edit the settings of virtual machines of version 10 or higher

Filed under: VMware Related — Tags: , — Syed Jahanzaib / Pinochio~:) @ 10:38 AM

A few days back, at a remote location, when I converted a physical Linux machine into a virtual machine (on ESXi 5.5, hardware version 10), I received the following error when I tried to edit its properties to add a new network interface card.

Editing virtual machine settings fails with the error: You cannot use the vSphere client to edit the settings of virtual machines of version 10 or higher ...

 


I had the option to downgrade it using the VMware Converter client, but time was really short: the whole network was down and the old physical machine was also out of order, so I used the following hack to add the interface quickly and bring it online.

.

  • Turn OFF the required guest,
  • Remove the guest from the inventory (right-click > Remove from Inventory),
  • Browse the ESXi datastore where the guest files are placed,
  • Download the guest's .vmx file from that location (e.g. guestname.vmx),
  • Open it in any text editor (e.g. Notepad++),
  • Change the following … 

virtualHW.version = "10"

to

 

virtualHW.version = "8"

As shown in the image below …

  • Save the file and upload it back to its original location,
  • Add the guest back to your inventory by right-clicking the .vmx file and selecting "Add to Inventory".

Now try to edit the guest properties; this time you will be able to do it.

There were some other workarounds too, but in that particular situation I found this method the quickest, and above all it worked well :)
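The edit itself can also be scripted with sed. A minimal sketch against a sample file follows; the path /tmp/guest.vmx and its contents are purely illustrative, point the command at your downloaded .vmx instead:

```shell
#!/bin/bash
# Create a sample .vmx line for the demo (replace with your real file).
VMX=/tmp/guest.vmx
printf 'virtualHW.version = "10"\n' > "$VMX"

# Downgrade the hardware version from 10 to 8.
sed -i 's/virtualHW.version = "10"/virtualHW.version = "8"/' "$VMX"
cat "$VMX"
```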

.

Regards,
Syed Jahanzaib

July 18, 2014

Odd Results with Scheduled Batch Files in Windows Server 2008 R2

Filed under: Microsoft Related — Tags: , — Syed Jahanzaib / Pinochio~:) @ 8:44 AM

MS DOS BATCH FILE VIA 2008 R2 Scheduled Task / zaib

Recently I upgraded one of our old file servers from Windows 2003 to Windows 2008 R2 64-bit. This server was a member of the AD domain and was logged in with a domain admin account. Everything went smoothly, but after a few days I faced a strange issue: a few scheduled BATCH files were not running at their given times. If I executed a batch file manually, it gave the proper result, but from the scheduler it didn't; even right-clicking the task and selecting RUN didn't actually execute the batch file. To resolve this I added the admin account to the "Log on as a batch job" right in the domain group policy, and everything now runs fine as expected.

  • Edit Group Policy at the Domain Controller,
  • Go to "Computer Configuration > Policies > Windows Settings > Security Settings > Local Policies > User Rights Assignment",
  • In the right pane, double-click "Log on as a batch job" to open its properties,
  • Click "Add User or Group",
  • Click "Browse",
  • Click "Advanced",
  • Click "Find Now",
  • Add the required user ID / account here, e.g. "administrator",
  • Click "OK",
  • Force the update with gpupdate /force at the DC and the client as well.
  • (Or, if the PC is standalone, go to "Start > Administrative Tools > Local Security Policy".)

.

This solved my problem of batch files not running via Scheduled Tasks.

Regards,

 

July 7, 2014

Smokeping to Monitor Network Latency in UBUNTU

Filed under: Linux Related — Tags: — Syed Jahanzaib / Pinochio~:) @ 11:41 AM


Recently I was troubleshooting a network where the admin complained that they frequently lost connectivity with the Internet. Sometimes ping replies worked okay, but latency got high or timeouts/breaks occurred. So I decided to set up an MRTG-based ping graph to monitor ping latency. The custom-made MRTG ping probe worked fine and gave a nice overview of target ping/RTT and downtime,

BUT . . . . . . . . . . . . . . . . . . .

I was thinking further ahead: I wanted much more advanced and pinpoint graphs that could show ping latency / RTT / loss in far more detail. I recalled the old days when I monitored my network with a variety of tools and scripts, and suddenly a name popped into my mind: "SMOKEPING". Yes, this was the tool I was looking for.

SmokePing is a network latency monitor. It measures network latency to a configurable set of destinations on the network and generates graphs that reveal the quality (packet loss and latency variability) and reachability of your IP addresses, displayed in easy-to-read web pages. It uses RRDtool as its logging and graphing back-end, making the system very efficient. The presentation of the data on the web is done through a CGI with some AJAX capabilities for interactive graph exploration.


  • In this article I will show you how to install Smokeping on Ubuntu 10/12.

 

First install the required components along with smokeping and apache2 (you can remove Apache or any other component if it is not required or is already installed):

aptitude install smokeping curl libauthen-radius-perl libnet-ldap-perl libnet-dns-perl libio-socket-ssl-perl libnet-telnet-perl libsocket6-perl libio-socket-inet6-perl apache2

Once all is installed, we have to modify few configuration files.

Open the following file …

nano /etc/smokeping/config.d/pathnames

Now comment out the sendmail entry by adding a # sign at the start of the sendmail line (usually the first line).
Save and exit.
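The same edit can be done non-interactively. A hedged sketch follows, shown against a throwaway copy (with made-up contents) so you can verify the effect before touching the real pathnames file:

```shell
#!/bin/bash
# Work on a throwaway copy first; point sed at /etc/smokeping/config.d/pathnames
# once you are happy with the result. The file contents here are illustrative.
demo=/tmp/pathnames.demo
printf 'sendmail = /usr/sbin/sendmail\nimgcache = /var/cache/smokeping/images\n' > "$demo"

# Comment out the sendmail line by prefixing it with '#'
sed -i 's/^sendmail/#sendmail/' "$demo"
head -1 "$demo"
```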

Now open following file

nano /etc/smokeping/config.d/Targets

Now REMOVE all previous lines and paste the following:

*** Targets ***
probe = FPing

menu = Top
title = Network Latency Grapher
remark = Welcome to the SmokePing website of <b>ZAIB (Pvt) Ltd.</b> <br> Here you will learn all about the latency of our network.<br><br><br><br><br> This page is maintained by ZAIB. (Pvt) ltd . <br><br>Support Email: aacable@hotmail.com<br>Web: http://aacable.wordpress.com

### YOU CAN CHANGE THE FOLLOWING ACCORDING TO YOUR NETWORK ###

+ Ping

menu = WAN Connectivity
title = WAN Side Network

++ yahoo

menu = yahoo
title = yahoo ping report
host = yahoo.com

++ google

menu = google
title = Google ping report
host = google.com

### YOU CAN CHANGE FOLLOWING ACCORDING TO YOUR NETWORK ###
+ Ping2

menu = LAN Connectivity
title = LAN Side Network

++ Mikrotik

menu = Mikrotik
title = Mikrotik PPP ping report
host = 10.10.0.1

++ Billing

menu = Billing
title = Radius billing Server ping report
host = 10.0.0.2

save and exit.
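The Targets file above sets probe = FPing. Ubuntu's smokeping package normally ships a working FPing probe definition already, but if yours is missing, a minimal /etc/smokeping/config.d/Probes would look like this (the binary path is the usual Ubuntu location; verify it with `which fping`):

```
*** Probes ***

+ FPing

binary = /usr/bin/fping
```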

Now restart the smokeping service:

service smokeping restart

and access it via browser.

http://yourip/smokeping/smokeping.cgi

Results should look something like the images below (LAN and WAN reports) …

 

For more ideas, see my previous Smokeping article, based on Fedora 10 (old version):

http://aacable.wordpress.com/tag/aacable-smokeping/

 

 

MRTG Monitoring with ESXi Hosted Guest Return ‘interface is commented * has no ifSpeed property’

Filed under: Linux Related, Mikrotik Related — Tags: , , — Syed Jahanzaib / Pinochio~:) @ 9:09 AM

Recently at a network I migrated a Mikrotik RouterBoard configuration to an ESXi-based VM guest. Everything went fine; this Mikrotik has SNMP configured and is monitored via Linux-based MRTG for various probes. After the migration, the MRTG graphs for its interfaces stopped with the following (when I re-ran cfgmaker):

### The following interface is commented out because:
### * has no ifSpeed property

After playing with the interfaces and MRTG values, I found two solutions.

Solution # 1

The network adapter needs to be "E1000" rather than "flexible"; SNMP will then report the ifSpeed correctly.
To make the change, it is recommended to turn off the guest first.

Solution # 2

Assign a speed in bits per second to all interfaces that return 0 for ifSpeed and ifHighSpeed.

Create the cfg file with the "--zero-speed=100000000" option:

 cfgmaker --zero-speed=100000000 snmp_community@192.168.1.1 > mikrotik.cfg

[192.168.1.1 is the Mikrotik IP]

 

July 2, 2014

LUSCA Automated Install Script

Filed under: Linux Related — Tags: , — Syed Jahanzaib / Pinochio~:) @ 12:22 PM



 

Following is an automated script to install LUSCA r14942 on Ubuntu with aggressive content-caching support, including some video websites like YouTube and a few others, as described in my other article at
http://aacable.wordpress.com/2014/04/21/howto-cache-youtube-with-squid-lusca-and-bypass-cached-videos-from-mikrotik-queue/

I will add more functions as soon as I get some free time, like configurable options via a choice menu for cache size, memory and other variables.


 

SCRIPT FUNCTIONS . . . 

This script will do the following

  • Update Ubuntu
  • Install the components required to compile the Lusca/Squid package
  • Back up /etc/squid/squid.conf (if present) as squid.conf.old, and stop any running squid instance
  • Download the LUSCA r14942 source package to the /tmp folder and compile it
  • Download squid.conf and storeurl.pl from the internet and place them in /etc/squid/
  • Create the cache directory in /cache-1; the default cache size is 5 GB
  • Add squid to /etc/rc.local so it starts automatically upon system reboot

Note: After installation you should review all options in /etc/squid/squid.conf, like cache_dir, cache_mem and others, as per your network and hardware specifications.
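For reference, the two directives that note refers to look like this in /etc/squid/squid.conf. The values below mirror the script's 5 GB /cache-1 default and are only examples; size them to your RAM and disk:

```
# memory used for hot objects - example value, tune to your RAM
cache_mem 256 MB

# aufs store: 5000 MB in /cache-1 with 16 first-level and 256 second-level dirs
cache_dir aufs /cache-1 5000 16 256
```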


 

REQUIREMENTS . . .

1- A fresh installation of Ubuntu with internet access configured
2- Root access to execute the script
3- REMOVE ANY PREVIOUSLY INSTALLED SQUID INSTALLATION, IF ANY
4- Upload or create the script in any folder on the Ubuntu box,

or create new script with following commands

mkdir /temp
cd /temp
touch lusca_install.sh
chmod +x lusca_install.sh

nano lusca_install.sh

and paste the following code . . .


 


#!/bin/bash
# Version 1.0 / 2nd July, 2014
# LUSCA r14942 Automated Installation Script for Ubuntu flavor / jz
# Syed Jahanzaib / aacable @ hotmail.com  / http://aacable.wordpress.com

# Setting Variables . . . [JZ]
#URL=http://aacable.rdo.pt/files/linux_related/
URL=http://wifismartzone.com/files/linux_related/lusca
SQUID_DIR="/etc/squid"
CACHE_DIR="/cache-1"
pid=`pidof squid`
osver=`cat /etc/issue |awk '{print $1}'`
squidlabel="LUSCA_HEAD-r14942"

# Colors Config  . . . [[ JZ . . . ]]
ESC_SEQ="\x1b["
COL_RESET=$ESC_SEQ"39;49;00m"
COL_RED=$ESC_SEQ"31;01m"
COL_GREEN=$ESC_SEQ"32;01m"

# OS checkup for UBUNTU
echo -e "$COL_GREEN Lusca r14942 Automated Installation Script ver 1.0 for Ubuntu . . .$COL_RESET"
echo -e "$COL_GREEN Checking OS version, as it must be Ubuntu in order to Continue . . .$COL_RESET"
if [[ $osver == Ubuntu ]]; then
echo
echo -e "$COL_GREEN Ubuntu is installed with following information fetched. $COL_RESET"
lsb_release -a
sleep 3
else
echo -e "$COL_RED Sorry, it seems your Linux Distribution is not UBUNTU . Exiting ...$COL_RESET"
exit 1
fi

# Make sure only root can run our script / Checking if user is root, otherwise exit with error [[Jz]]
echo
echo -e "$COL_GREEN Verifying if you are logged in with root privileges  . . .$COL_RESET" 1>&2
FILE="/tmp/out.$$"
GREP="/bin/grep"
if [ "$(id -u)" != "0" ]; then
echo
echo -e "$COL_RED This script must be run as root, switch to root now . . .$COL_RESET" 1>&2
exit 1
fi

# Clearing previous download if any in /tmp folder
echo
echo -e "$COL_GREEN Clearing previous downloads if any in /tmp folder to avoid duplication$COL_RESET"
sleep 3

rm -fr /tmp/squid.conf
rm -fr /tmp/storeurl.txt
rm -fr /tmp/storeurl.pl
rm -fr /tmp/LUSCA_HEAD-r14942*

# Checking IF $URL is accessible m if YES then continue further , otherwise EXIT the script with ERROR ! [[ JZ .. . .]]
echo
echo -e "$COL_GREEN Checking if $URL is accessible in order to proceed further. . .!! $COL_RESET"
cd /tmp
wget -q $URL/squid.conf
{
if [ ! -f /tmp/squid.conf ]; then
echo
echo -e "$COL_RED ERROR: Unable to contact $URL; possibly the internet is not working or your IP is blacklisted at the destination server !! $COL_RESET"
echo -e "$COL_RED ERROR: Please check manually whether $URL is accessible and has the required files, JZ !! $COL_RESET"
exit 1
fi
}
rm -fr /tmp/squid.conf
sleep 6
# Moving further . . .

clear
echo -e "$COL_GREEN You are logged in with root ID, Ok to proceed further . . .!! $COL_RESET"
echo

################################################################## [zaib]
echo
echo -e "$COL_GREEN Updating Ubuntu first . . . !! $COL_RESET"
apt-get update
echo
echo
echo -e "$COL_GREEN Installing required components . . . !! $COL_RESET"
sleep 3
apt-get install  -y gcc  build-essential   libstdc++6   unzip    bzip2   sharutils  ccze  libzip-dev  automake1.9  libfile-readbackwards-perl  dnsmasq

# Clearing OLD data files . . .
{
if [ -f $SQUID_DIR/squid.conf ]; then
echo
echo
echo -e "$COL_RED Previous SQUID configuration file found in $SQUID_DIR ! renaming it for backup purpose . . . $COL_RESET"
mv $SQUID_DIR/squid.conf $SQUID_DIR/squid.conf.old
else
echo
echo
echo -e "$COL_GREEN No Previous Squid configuration have been found in $SQUID_DIR. Proceeding further $COL_RESET"
fi
}

# Checking SQUID status if its already running - check by PID
if [ "$pid" == "" ]; then
echo
echo
echo -e "$COL_GREEN No SQUID instance found in memory , so it seems we are good to GO !!! $COL_RESET"
else
echo
echo -e "$COL_RED SQUID is already running, probably you have some previous copy of SQUID installation, Better to stop and remove all previous squid installation !! $COL_RESET"
echo
echo -e "$COL_RED KILLING PREVIOUS SQUID INSTANCE by killall -9 squid command  !! $COL_RESET"
killall -9 squid
sleep 3
fi

# Downloading Squid source package [zaib]
echo
echo
echo -e "$COL_GREEN Downloading SQUID source package in /tmp folder. . . !! $COL_RESET"
sleep 3

# Checking if /tmp folder is previously present or not . . .
{
if [ ! -d "/tmp" ]; then
echo
echo
echo -e "$COL_RED /tmp folder not found, Creating it so all downloads will be placed here  . . . $COL_RESET"
mkdir /tmp
else
echo
echo -e "$COL_GREEN /tmp folder is already present , so no need to create it, Proceeding further . . . $COL_RESET"
fi
}

cd /tmp

# Checking IF LUSCA_HEAD-r14942.tar.gz  installation file have been ALREADY downloaded in /tmp to avoid duplication! [[ JZ .. . .]]
{
if [ -f /tmp/LUSCA_HEAD-r14942.tar.gz ]; then
rm -fr /tmp/LUSCA_HEAD-r14942.tar.gz
fi
}

wget -c http://wifismartzone.com/files/linux_related/lusca/LUSCA_HEAD-r14942.tar.gz

# Checking IF LUSCA_HEAD-r14942 installation file have been downloaded properly. if YEs continue further , otherwise EXIT the script with ERROR ! [[ JZ .. . .]]
{
if [ ! -f /tmp/LUSCA_HEAD-r14942.tar.gz ]; then
echo
echo

echo -e "$COL_RED ERROR: SQUID source code package file could not be downloaded or was not found in /tmp/ !! $COL_RESET"
exit 1
fi
}
echo
echo

echo -e "$COL_GREEN Extracting Squid from tar archive. . . !! $COL_RESET"
sleep 3
tar zxvf LUSCA_HEAD-r14942.tar.gz
cd LUSCA_HEAD-r14942/
mkdir /etc/squid

echo -e "$COL_GREEN Executing $squidlabel Compiler [jz] . . . !! $COL_RESET"
echo
cd /tmp/LUSCA_HEAD-r14942
./configure --prefix=/usr --exec_prefix=/usr --bindir=/usr/sbin --sbindir=/usr/sbin --libexecdir=/usr/lib/squid --sysconfdir=/etc/squid --localstatedir=/var/spool/squid --datadir=/usr/share/squid --enable-async-io=24 --with-aufs-threads=24 --with-pthreads --enable-storeio=aufs --enable-linux-netfilter --enable-arp-acl --enable-epoll --enable-removal-policies=heap --with-aio --with-dl --enable-snmp --enable-delay-pools --enable-htcp --enable-cache-digests --disable-unlinkd --enable-large-cache-files --with-large-files --enable-err-languages=English --enable-default-err-language=English --enable-referer-log --with-maxfd=65536
echo
echo -e "$COL_GREEN Executing MAKE and MAKE INSTALL commands . . . !! $COL_RESET"
sleep 3
make
make install
echo
echo
echo -e "$COL_GREEN Creating SQUID LOGS folder and assigning permissions . . . !! $COL_RESET"
sleep 3

# Checking if log folder is previously present or not . . .
{
if [ -d "/var/log/squid" ]; then
echo
echo
echo -e "$COL_GREEN LOGS folder found. No need to create, proceeding Further . . . $COL_RESET"
else
echo
echo
echo -e "$COL_GREEN Creating LOG Folder in /var/log/squid and setting permissions accordingly (to user proxy) $COL_RESET"
mkdir /var/log/squid
fi
}
chown proxy:proxy /var/log/squid
## ** DOWNLOAD SQUID.CONF
echo
echo
echo -e "$COL_GREEN Downloading SQUID.CONF file from $URL and copy it to $SQUID_DIR. . . !! $COL_RESET"
sleep 3

# Checking IF SQUID.CONF File have been ALREADY downloaded in /tmp to avoid duplication! [[ JZ .. . .]]
{
if [ -f /tmp/squid.conf ]; then
rm -fr /tmp/squid.conf
fi
}

cd /tmp
wget $URL/squid.conf

# Checking IF SQUID.CONF file have been downloaded. if YEs continue further , otherwise EXIT the script with ERROR ! [[ JZ .. . .]]
{
if [ ! -f /tmp/squid.conf ]; then
echo
echo
echo -e "$COL_RED ERROR: SQUID.CONF file could not be downloaded or was not found in /tmp/ !! $COL_RESET"
exit 1
fi
}
cp -fr squid.conf $SQUID_DIR

## ** DOWNLOAD SQUID.CONF
echo
echo
echo -e "$COL_GREEN Downloading STOREURL.PL file from $URL and copy it to $SQUID_DIR. . . !! $COL_RESET"
sleep 3
cd /tmp

{
if [ -f /tmp/storeurl.txt ]; then
rm -fr /tmp/storeurl.txt
fi
}

wget $URL/storeurl.txt

{
if [ -f /tmp/storeurl.pl ]; then
rm -fr /tmp/storeurl.pl
fi
}

mv storeurl.txt storeurl.pl

# Checking IF STOREURL.PL file have been downloaded. if YEs continue further , otherwise EXIT the script with ERROR ! [[ JZ .. . .]]
{
if [ ! -f /tmp/storeurl.pl ]; then
echo
echo
echo -e "$COL_RED ERROR: STOREURL.PL file could not be downloaded or was not found in /tmp/ !! $COL_RESET"
exit 1
fi
}
cp -fr storeurl.pl $SQUID_DIR

echo
echo
echo -e "$COL_GREEN Setting EXECUTE permission for storeurl.pl . . . !! $COL_RESET"
chmod +x $SQUID_DIR/storeurl.pl

# Creating CACHE folders
echo
echo
echo -e "$COL_GREEN Creating CACHE directory in $CACHE_DIR , in this example,I used 5GB for cache for test ,Adjust it accordingly  . . . !! $COL_RESET"
sleep 3

# Checking if /cache-1 folder exist  . . .
{
if [ ! -d "$CACHE_DIR" ]; then
echo
echo
echo -e "$COL_GREEN Creating cache folder in $CACHE_DIR , Default size is 5GB, you should set it accordingly to your requirements  . . . $COL_RESET"
mkdir $CACHE_DIR
chown proxy:proxy $CACHE_DIR
chmod 777 -R $CACHE_DIR
squid -z
else
echo
echo -e "$COL_RED $CACHE_DIR folder already exists , Clearing it before proceeding. . . $COL_RESET"
rm -fr $CACHE_DIR/*
chown proxy:proxy $CACHE_DIR
echo -e "$COL_GREEN $CACHE_DIR Initializing Cache Directories as per the config  . . . $COL_RESET"
echo
squid -z
chmod 777 -R $CACHE_DIR
fi
}

echo
echo
echo -e "$COL_GREEN Adding squid in /etc/rc.local for auto startup . . . !! $COL_RESET"
sed -i '/exit/d' /etc/rc.local
sed -i '/\/usr\/sbin\/squid/d' /etc/rc.local
echo /usr/sbin/squid >> /etc/rc.local
echo "exit 0" >> /etc/rc.local
echo
echo -e "$COL_GREEN Starting SQUID (with a short pause for proper initialization). . . !! $COL_RESET"
squid
sleep 5

# Checking SQUID status via PID (re-read now that squid has been started) [zaib]
pid=$(pidof squid)
if [ "$pid" == "" ]; then
echo
echo -e "$COL_RED ERROR: UNABLE to start SQUID, try running squid -N -d1 and see where it shows an error !! $COL_RESET"
exit 1
fi
ps aux |grep squid
echo
echo -e "$COL_GREEN $squidlabel is Running OK with PID number "$pid", no further action required, EXITING  . . .$COL_RESET"
echo
echo To view squid web access activity log, use command
echo -e "$COL_GREEN tail -f /var/log/squid/access.log $COL_RESET"
echo OR
echo -e "$COL_GREEN tail -f /var/log/squid/access.log |ccze $COL_RESET"
echo
echo -e "$COL_GREEN Regard's / Syed Jahanzaib . . . !! $COL_RESET"
echo


ALL DONE.

Now execute the script by running

/temp/lusca_install.sh

It will start the installation and show the progress of every action it performs [in colored rows; RED shows errors, GREEN shows OK/INFO].


 

TIP:

To start the Squid server in the foreground with debug output, to check for any errors, use the following:

squid -N -d1

If squid started successfully, you can see its process via the ps command:

ps aux |grep squid

June 19, 2014

SAN attached windows 2008 hangs on boot

Filed under: IBM Related, Microsoft Related — Tags: , , , — Syed Jahanzaib / Pinochio~:) @ 9:37 AM

Just for reference purpose:

Recently I was testing a disaster recovery scenario of restoring Server A to Server B with identical hardware, using the Symantec Backup Exec 2014 Simplified Disaster Recovery [SDR] CD. The hardware specs were as follows …

IBM Xseries 3650 M4, with RAID1
Dual QLogic Fibre Channel cards, model QLE2560, connected to two FC switches for multipath and failover
32 GB RAM,
IBM v3700 storewize SAN Storage

The restore went fine; the system booted with everything intact the first time, but when I rebooted it again it failed to boot and showed only a blinking cursor, as shown in the image below …

I tried to boot it several times with no result. I then removed the FC cables from the server's QLogic FC cards, and this time Windows booted fine.

Solution:

I started the server without the FC cables attached, then removed the Windows MPIO feature via Add/Remove Features, and rebooted with the FC cables attached. This time it booted fine but showed duplicate SAN partitions. I then applied IBM's SDDDSM MPIO driver (MPIO_Win2008_x64_SDDDSM_64_2434-4_130816 for the V3700 Storwize) and everything went fine :)

You may also want to read IBM's article:

http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5081613

 

.

Regards,
Syed Jahanzaib

June 12, 2014

Mikrotik WAN monitoring script with multiple host check

Filed under: Mikrotik Related — Tags: , — Syed Jahanzaib / Pinochio~:) @ 2:31 PM


Recently I added a Mikrotik Netwatch-based script on a network to monitor the WAN link: if no ping is received from the WAN host (example: 8.8.8.8), the down script changes the backup link route to take priority over the primary link. The issue is that Netwatch is a somewhat unreliable way to check internet connectivity, because it can check only a single host at a time; also, if your WAN link is weak or heavily used, a few pings may time out, which is sometimes normal (for example 3 replies out of 10 missed), yet Netwatch may consider the target link DOWN. Netwatch gives a DOWN status immediately upon a missed ping, regardless of the Timeout setting.

To prevent that, we must use a method that checks at least two or more internet hosts, such as the ISP gateway IP and another reliable host like 8.8.8.8 (or any other host in your region). The link is considered DOWN only if ALL pings to BOTH hosts fail; if one host responds while the other is down, the link is still considered UP. It is a kind of cross-verification: if 2 out of 5 pings are missed, the link is still considered UP.

A multiple-host check is recommended because, with a single-host check (script or Netwatch), it can happen that 8.8.8.8 stops replying for various reasons (it is down, or the ISP has blocked it) while the rest of the internet works fine; the script/Netwatch would still declare the link down due to its single-host check. That's why a multi-host check is recommended.
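The decision rule above (DOWN only when all 10 probes fail) can also be sketched outside RouterOS. Here is a bash illustration with simulated ping results (1 = reply, 0 = timeout), purely to show the counting logic; no real pings are sent:

```shell
#!/bin/bash
# Simulated results: 5 pings to host1 followed by 5 pings to host2.
# Here host1 drops one reply but host2 is fully reachable.
results=(1 1 0 1 1  1 1 1 1 1)

fail=0
for r in "${results[@]}"; do
  [ "$r" -eq 0 ] && fail=$((fail + 1))
done

# Only 10/10 failures means the link is down - partial loss keeps it UP.
if [ "$fail" -eq 10 ]; then STATUS=DOWN; else STATUS=UP; fi
echo "$STATUS"
```

With one missed reply out of ten, the link is still reported UP, exactly the behavior the RouterOS script implements with its F counter.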

 

ROS SCRIPT CODE: (Script name= monitor)


# Following script is copied from the Mikrotik forum.
# Thanks to mainTAP and rextended for sharing
# http://forum.mikrotik.com/viewtopic.php?f=9&t=85505
# Modified few contents to suite local requirements and added descriptions
# Regard's / Syed Jahanzaib / http://aacable.wordpress.com

# Script Starts here...
# Internet Host to be checked You can modify them as per required, JZ
:local host1   "8.8.8.8"
:local host2   "208.67.222.123"

# Do not modify data below without proper understanding.
:local i 0;
:local F 0;
:local date;
:local time;
:global InternetStatus;
:global InternetLastChange;

# PING each host 5 times
:for i from=1 to=5 do={
if ([/ping $host1 count=1]=0) do={:set F ($F + 1)}
if ([/ping $host2 count=1]=0) do={:set F ($F + 1)}
:delay 1;
};

# If both links are down and all replies are timedout, then link is considered down
:if (($F=10)) do={
:if (($InternetStatus="UP")) do={
:log error "WARNING : The INTERNET link seems to be DOWN. Please Check";
:set InternetStatus "DOWN";

##      ADD YOUR RULES HERE, LIKE ROUTE CHANGE OR WHAT EVER IS REQUIRED, Example is below ...
##     /ip route set [find comment="Default Route"] distance=3
##     /ip firewall nat disable [find comment="Your Rules, Example"]

:set date [/system clock get date];
:set time [/system clock get time];
:set InternetLastChange ($time . " " . $date);
} else={:set InternetStatus "DOWN";}
} else={

##      If reply is received , then consider the Link is UP
:if (($InternetStatus="DOWN")) do={
:log warning "WARNING : The INTERNET link has been restored";
:set InternetStatus "UP";

##      ADD YOUR RULES HERE, LIKE ROUTE CHANGE OR WHAT EVER IS REQUIRED, Example is below ...
##     /ip route set [find comment="Default Route"] distance=1
##     /ip firewall nat enable  [find comment="Your Rules, Example"]

:set date [/system clock get date];
:set time [/system clock get time];
:set InternetLastChange ($time . " " . $date);
} else={:set InternetStatus "UP";}
}

# Script Ends Here.
# Thank you

.

Scheduler to run script auto

To add a scheduler that runs the script every 5 minutes (or as required), use the following code:


/system scheduler
add disabled=no interval=5m name="Monitor WAN connectivity Scheduler / JZ" on-event=monitor policy=ftp,reboot,read,write,policy,test,winbox,password,sniff,sensitive,api start-date=jun/12/2014 start-time=\
00:00:00

Don't forget to change the script name (monitor) in the above scheduler to match the name you set for the script, e.g. on-event=monitor.

.

Define Static Routes for Monitoring Host – for Route Changing

If you are using this script to change the internet route to a backup link, then you must define static routes for the hosts you are monitoring, so that your monitored hosts always (forcefully) go via the primary link.


/ip route
add comment="Force this HOST via Primary Link" disabled=no distance=1 dst-address=8.8.8.8/32 gateway=192.168.1.1 scope=30 target-scope=10
add comment="Force this HOST via Primary Link" disabled=no distance=1 dst-address=208.67.222.123/32 gateway=192.168.1.1 scope=30 target-scope=10

Note: Make sure to change the gateway 192.168.1.1 to your primary internet link's gateway.

.

.

Regards,
Syed Jahanzaib

June 5, 2014

IBM Storewize v3700 SAN Duplicate partitions showing in Windows 2008

Filed under: Uncategorized — Tags: , , , — Syed Jahanzaib / Pinochio~:) @ 10:13 AM


Recently one of our IBM xSeries 3650 M4 servers faced a hardware failure related to local storage. Two partitions from an IBM Storwize V3700 were assigned to this system, connected via two QLogic FC cards to two Brocade fiber switches for failover.

After reinstalling Windows 2008 R2, the SAN partitions appeared in duplicate. The Windows MPIO feature was enabled, but the partitions still appeared twice. After applying IBM's updated SDDDSM MPIO driver, the problem was solved.

Subsystem Device Driver Device Specific Module (SDDDSM) is IBM's multipath I/O solution based on Microsoft MPIO technology. It is a device-specific module designed to support IBM storage devices; together with MPIO, it supports multipath configuration environments on IBM storage.

The download link is as follows. It is just a small patch; apply it and restart :)

http://www-01.ibm.com/support/docview.wss?uid=ssg1S4000350#Storwize3700

 

Description: SDDDSM v2.4.3.4-4 for Windows Server 2008 / 2008 R2 (32-bit / 64-bit)
Download: SDDDSM 2.4.3.4-4 for Windows Server 2008 (English, 577,711 bytes)
Release date: 8/16/13
Certified: Yes

 

Regards,
Syed Jahanzaib

June 4, 2014

Radius Manager Dealer Panel

Filed under: Radius Manager — Tags: , , — Syed Jahanzaib / Pinochio~:) @ 1:38 PM

In Radius Manager we have an option to add a MANAGER (dealer/reseller) so that the dealer/reseller can access his own management panel (similar to the ACP but with some limitations). The dealer/reseller can create new users, disable them, add deposits/credits to user accounts, access invoices, and so on.

You can assign various permissions to the dealer as required. Following is an example of creating a NEW MANAGER with minimal rights.

Go to Managers and select New Manager,

As shown in the image below …

Assign the necessary permissions; this is important :)

Permission Explanation:

Permissions:
• List users – Can list users.
• Register users – Can register new users.
• Edit users – Can edit basic user data (name, address etc.).
• Edit privileged user data – Allows editing privileged fields (credits, static IP).
• Delete users – Can delete users.
• List managers – Can list managers.
• Register managers – Can register new managers.
• Edit managers – Can edit managers.
• Delete managers – Can delete managers.
• List services – Can list services.
• Register services – Can register new services.
• Edit services – Can edit services.
• Delete services – Can delete services.
• Billing functions – Can generate invoices.
• Allow negative balance – Can refill prepaid accounts even if the reseller account is in negative balance.
• Allow discount prices – Can form the service price freely (discount).
• Enable canceling invoices – Enable canceling invoices (enter negative amount in Add credits form to cancel an invoice).
• Access invoices – Can access invoicing functions.
• Access all invoices – Can access all invoices not only the own ones.
• Shown invoice totals – Display the totals in List invoices view.
• Edit invoices – Can enter the payment date for postpaid invoices.
• Access all users – Can access all users in the system.
• List online users – Can list online users.
• Disconnect users – Can disconnect users.

Card system and IAS:

• Card system and IAS – Can access the prepaid card and IAS system.
• Connection report – Can access CTS functions.
• Overall traffic report – Can access the traffic report.
• Maintain APs – Can access AP functions.

Click the Update manager button to store the manager data.

 

By default this dealer/reseller will have a zero balance, so he won't be able to add credits to user accounts (he can create new accounts, but these accounts are by default EXPIRED; in order to renew a user account, the dealer/reseller MUST have a deposit in his account).

Now add some amount to his account: open Managers and edit that dealer, as shown in the image below …

.

Now test it by logging in with the dealer ID and adding a new user. By default the new user will be expired, and the dealer must add credit to the user's account. (He can also add a DEPOSIT, but then the user has to log in to the user management panel with his own ID and password and refresh his account with the deposited amount added by the dealer.)

As shown in the images below …

.

.

Binding Dealer/Reseller to Use Only Specific Services

You can also bind specific services to a specific dealer/reseller. For example, you may not want Dealer A to use all services, but rather show him specific services only. Log in to the ACP as ADMIN, go to Services, and open the services you do or don't want displayed in Dealer/Reseller A's panel,

As shown in the image below …

.

The result can be seen here …

I will write more in some free time.

.

Regards,
Syed Jahanzaib
