Syed Jahanzaib Personal Blog to Share Knowledge !

July 18, 2014

Odd Results with Scheduled Batch Files in Windows Server 2008 R2

Filed under: Microsoft Related — Tags: , — Syed Jahanzaib / Pinochio~:) @ 8:44 AM

MS DOS BATCH FILE VIA 2008 R2 Scheduled Task / zaib


Recently I upgraded one of our old file servers from Windows 2003 to Windows 2008 R2 64-bit. This server was a member of AD and was logged in with a domain admin account. Everything went smoothly, but after a few days I faced a strange issue: a few scheduled BATCH files were not running properly at the given time. If I executed a batch file manually, it gave the proper result, but from the scheduler it didn't; even right-clicking on the task and selecting RUN didn't actually execute the batch file. To resolve this issue I added the admin account to the "Log on as a batch job" right in the Domain Group Policy, and everything now runs fine as expected.

  • Edit Group Policy at the Domain Controller
  • Go to “Computer Configuration > Policies > Windows Settings > Security Settings > Local Policies > User Rights Assignment”
  • Now in the right-side pane, double-click on “Log on as a batch job” to open its properties,
  • then click the button “Add User or Group”
  • then click the button “Browse”
  • then click the button “Advanced”
  • then the button “Find Now”
  • Add your required user ID / account here, like “administrator” or similar
  • and then “OK”
  • Force the update with gpupdate /force at the DC and the client as well.
  • (Or if the PC is standalone, go to “Start > Administrative Tools > Local Security Policy”)


This solved my problem of BATCH files not running via Scheduled Task.



July 7, 2014

Smokeping to Monitor Network Latency in UBUNTU

Filed under: Linux Related — Tags: — Syed Jahanzaib / Pinochio~:) @ 11:41 AM


Recently I was troubleshooting a network where the concerned admin complained that they frequently lost connectivity with the Internet. Sometimes ping replies worked okay, but latency got high, or timeouts / breaks occurred. So I decided to set up an MRTG-based ping graph to monitor ping latency. The custom-made MRTG ping probe worked fine and can provide an overview of the target's ping / RTT and downtime in a nice manner,

BUT . . . . . . . . . . . . . . . . . . .

I was thinking further ahead: I wanted more advanced, pinpoint graphs which could show ping latency / RTT / loss in a much more detailed way. I recalled the old days when I used to monitor my old network with a variety of tools and scripts, and suddenly a name popped into my mind: “SMOKEPING”. Yes, this was the tool I was looking for.

SmokePing is a network latency monitor. It generates graphs that can reveal the quality (packet loss and latency variability) and reachability of your IP addresses from several distributed locations. It measures network latency to a configurable set of destinations on the network, and displays its findings in easy-to-read web pages. It uses RRDtool as its logging and graphing back-end, making the system very efficient. The presentation of the data on the web is done through a CGI with some AJAX capabilities for interactive graph exploration.

  • In this article I will show you how to install SmokePing on UBUNTU 10/12


First, install the required components along with smokeping and apache2 (you can remove apache2 or any other component if it is not required or is already installed):

aptitude install smokeping curl libauthen-radius-perl libnet-ldap-perl libnet-dns-perl libio-socket-ssl-perl libnet-telnet-perl libsocket6-perl libio-socket-inet6-perl apache2

Once everything is installed, we have to modify a few configuration files.

Open the following file …

nano /etc/smokeping/config.d/pathnames

Now comment out the sendmail entry by adding a # sign at the start of the sendmail line (usually the first line).
Save and exit.
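The same edit can be scripted with sed. The sketch below demonstrates the substitution on a scratch copy (the sample file contents are only illustrative); on a live box, point cfg at /etc/smokeping/config.d/pathnames instead:

```shell
# Comment out the sendmail line, exactly as done by hand above.
cfg=$(mktemp)
printf 'sendmail = /usr/sbin/sendmail\nimgcache = /var/cache/smokeping/images\n' > "$cfg"
sed -i 's/^sendmail/# sendmail/' "$cfg"
head -n 1 "$cfg"   # prints: # sendmail = /usr/sbin/sendmail
rm -f "$cfg"
```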

Now open the following file

nano /etc/smokeping/config.d/Targets

Now REMOVE all previous lines, and paste the following:

*** Targets ***
probe = FPing

menu = Top
title = Network Latency Grapher
remark = Welcome to the SmokePing website of <b>ZAIB (Pvt) Ltd.</b> <br> Here you will learn all about the latency of our network.<br><br><br><br><br> This page is maintained by ZAIB. (Pvt) ltd . <br><br>Support Email:<br>Web:


+ Ping

menu = WAN Connectivity
title = WAN Side Network

++ yahoo

menu = yahoo
title = yahoo ping report
host =

++ google

menu = google
title = Google ping report
host =

+ Ping2

menu = LAN Connectivity
title = LAN Side Network

++ Mikrotik

menu = Mikrotik
title = Mikrotik PPP ping report
host =

++ Billing

menu = Billing
title = Radius billing Server ping report
host =

save and exit.

Now restart the smokeping service:

service smokeping restart

and access it via your browser (with the Ubuntu package, the CGI is typically at http://server-ip/cgi-bin/smokeping.cgi or http://server-ip/smokeping/smokeping.cgi, depending on the Apache configuration).


Results should be something like the image below…





More info is in my previous SmokePing article, based on FEDORA 10 (old version), just for an idea:



MRTG Monitoring with ESXi Hosted Guest Return ‘interface is commented * has no ifSpeed property’

Filed under: Linux Related, Mikrotik Related — Tags: , , — Syed Jahanzaib / Pinochio~:) @ 9:09 AM

Recently at a network, I migrated a Mikrotik RouterBoard (RB) configuration to an ESXi-based VM guest. Everything went fine; this Mikrotik has SNMP configured and is monitored via a Linux-based MRTG for various probes. After the migration, the MRTG graphs for the interfaces stopped with the following (when I re-ran cfgmaker):

### The following interface is commented out because:
### * has no ifSpeed property

After playing with the interfaces & MRTG values, I found two solutions.

Solution # 1

The network adapter needs to be “E1000” rather than “Flexible”. Then SNMP will report the ifSpeed correctly.
To make this change, it is recommended to power off the guest.

Solution # 2

Assign this speed in bits-per-second to all interfaces which return 0 for ifSpeed and ifHighSpeed.

Create the cfg file with the "--zero-speed=100000000" option:

 cfgmaker --zero-speed=100000000 snmp_community@ > mikrotik.cfg

[ is mikrotik ip]
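For reference, --zero-speed takes a value in bits per second, while the MaxBytes[] lines cfgmaker writes into the cfg are in bytes per second. A quick sketch of the conversion (the target name is just an example):

```shell
# --zero-speed is given in bits/s; MRTG's MaxBytes is bytes/s, i.e. bits/8
speed_bps=100000000                  # 100 Mbit/s, as used above
maxbytes=$((speed_bps / 8))
echo "MaxBytes[mikrotik_if]: $maxbytes"   # prints: MaxBytes[mikrotik_if]: 12500000
```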


July 2, 2014

LUSCA Automated Install Script

Filed under: Linux Related — Tags: , — Syed Jahanzaib / Pinochio~:) @ 12:22 PM



Following is an automated script to install LUSCA r14942 on UBUNTU with aggressive content caching support, including some video web sites like YOUTUBE and a few others, as described in my other article @

I will add more functions as soon as I get some free time, like configurable options via a choice menu for cache size, memory, and other variables.



This script will do the following

  • Update Ubuntu
  • Install some components required for compilation of the Lusca/Squid package
  • Back up squid.conf (if already present in /etc) as squid.conf.old, and stop any running squid instance
  • Download the LUSCA r14942 source package to the /tmp folder and compile it
  • Download squid.conf and from the internet and place them in /etc/squid
  • Create the cache directory in /cache-1 (default cache size is 5 GB)
  • Add squid to /etc/rc.local so it starts automatically upon system reboot

Note: You should modify all options in /etc/squid.conf after installation, like cache_dir, cache_mem and others, as per your network and hardware specifications.



1- Fresh installation of UBUNTU OS, with Internet access configured
2- root access to execute the script
3- Upload or create the script in any folder of the Ubuntu box,

or create a new script with the following commands

mkdir /temp
cd /temp
chmod +x


and paste the following code . . .


#!/bin/bash
# Version 1.0 / 2nd July, 2014
# LUSCA r14942 Automated Installation Script for Ubuntu flavor / jz
# Syed Jahanzaib / aacable @  /

# Setting Variables . . . [JZ]
# Set URL to the web server hosting squid.conf, storeurl.txt and the Lusca tarball
URL=""
SQUID_DIR="/etc/squid"
CACHE_DIR="/cache-1"
squidlabel="LUSCA r14942"
pid=`pidof squid`
osver=`cat /etc/issue |awk '{print $1}'`

# Colors Config  . . . [[ JZ . . . ]]
COL_RED='\e[1;31m'
COL_GREEN='\e[1;32m'
COL_RESET='\e[0m'

# OS checkup for UBUNTU
echo -e "$COL_GREEN Lusca r14942 Automated Installation Script ver 1.0 for Ubuntu . . .$COL_RESET"
echo -e "$COL_GREEN Checking OS version, as it must be Ubuntu in order to Continue . . .$COL_RESET"
if [[ $osver == Ubuntu ]]; then
echo -e "$COL_GREEN Ubuntu is installed with following information fetched. $COL_RESET"
lsb_release -a
sleep 3
else
echo -e "$COL_RED Sorry, it seems your Linux Distribution is not UBUNTU . Exiting ...$COL_RESET"
exit 1
fi

# Make sure only root can run our script / Checking if user is root, otherwise exit with error [[Jz]]
echo -e "$COL_GREEN Verifying if you are logged in with root privileges  . . .$COL_RESET" 1>&2
if [ "$(id -u)" != "0" ]; then
echo -e "$COL_RED This script must be run as root, switch to root now . . .$COL_RESET" 1>&2
exit 1
fi

# Clearing previous download if any in /tmp folder
echo -e "$COL_GREEN Clearing previous downloads if any in /tmp folder to avoid duplication$COL_RESET"
sleep 3

rm -fr /tmp/squid.conf
rm -fr /tmp/storeurl.txt
rm -fr /tmp/
rm -fr /tmp/LUSCA_HEAD-r14942*

# Checking IF $URL is accessible, if YES then continue further, otherwise EXIT the script with ERROR ! [[ JZ .. . .]]
echo -e "$COL_GREEN Checking if $URL is accessible in order to proceed further. . .!! $COL_RESET"
cd /tmp
wget -q $URL/squid.conf
if [ ! -f /tmp/squid.conf ]; then
echo -e "$COL_RED ERROR: Unable to contact $URL; possibly the internet is not working or your IP is blacklisted at the destination server !! $COL_RESET"
echo -e "$COL_RED ERROR: Please check manually if $URL is accessible and has the required files, JZ !! $COL_RESET"
exit 1
fi
rm -fr /tmp/squid.conf
sleep 6
# Moving further . . .

echo -e "$COL_GREEN You are logged in with root ID, Ok to proceed further . . .!! $COL_RESET"

################################################################## [zaib]
echo -e "$COL_GREEN Updating Ubuntu first . . . !! $COL_RESET"
apt-get update
echo -e "$COL_GREEN Installing required components . . . !! $COL_RESET"
sleep 3
apt-get install  -y gcc  build-essential   libstdc++6   unzip    bzip2   sharutils  ccze  libzip-dev  automake1.9  libfile-readbackwards-perl  dnsmasq

# Clearing OLD data files . . .
if [ -f $SQUID_DIR/squid.conf ]; then
echo -e "$COL_RED Previous SQUID configuration file found in $SQUID_DIR ! Renaming it for backup purposes . . . $COL_RESET"
mv $SQUID_DIR/squid.conf $SQUID_DIR/squid.conf.old
else
echo -e "$COL_GREEN No previous Squid configuration found in $SQUID_DIR. Proceeding further $COL_RESET"
fi

# Checking SQUID status if its already running - check by PID
if [ "$pid" == "" ]; then
echo -e "$COL_GREEN No SQUID instance found in memory, so it seems we are good to GO !!! $COL_RESET"
else
echo -e "$COL_RED SQUID is already running; you probably have a previous SQUID installation. Better to stop and remove all previous squid installations !! $COL_RESET"
echo -e "$COL_RED KILLING PREVIOUS SQUID INSTANCE by killall -9 squid command  !! $COL_RESET"
killall -9 squid
fi
sleep 3

# Downloading Squid source package [zaib]
echo -e "$COL_GREEN Downloading SQUID source package in /tmp folder. . . !! $COL_RESET"
sleep 3

# Checking if /tmp folder is previously present or not . . .
if [ ! -d "/tmp" ]; then
echo -e "$COL_RED /tmp folder not found, creating it so all downloads will be placed here  . . . $COL_RESET"
mkdir /tmp
else
echo -e "$COL_GREEN /tmp folder is already present, no need to create it. Proceeding further . . . $COL_RESET"
fi

cd /tmp

# Checking IF LUSCA_HEAD-r14942.tar.gz installation file has ALREADY been downloaded in /tmp, to avoid duplication! [[ JZ .. . .]]
if [ -f /tmp/LUSCA_HEAD-r14942.tar.gz ]; then
rm -fr /tmp/LUSCA_HEAD-r14942.tar.gz
fi

# Download the Lusca source; the path below ASSUMES the tarball is hosted on the same $URL server, adjust if hosted elsewhere
wget -c $URL/LUSCA_HEAD-r14942.tar.gz

# Checking IF LUSCA_HEAD-r14942 installation file has been downloaded properly. If YES continue further, otherwise EXIT the script with ERROR ! [[ JZ .. . .]]
if [ ! -f /tmp/LUSCA_HEAD-r14942.tar.gz ]; then
echo -e "$COL_RED ERROR: SQUID source code package could not be downloaded or was not found in /tmp/ !! $COL_RESET"
exit 1
fi

echo -e "$COL_GREEN Extracting Squid from tar archive. . . !! $COL_RESET"
sleep 3
tar zxvf LUSCA_HEAD-r14942.tar.gz
cd LUSCA_HEAD-r14942/
mkdir -p /etc/squid

echo -e "$COL_GREEN Executing $squidlabel Compiler [jz] . . . !! $COL_RESET"
cd /tmp/LUSCA_HEAD-r14942
./configure --prefix=/usr --exec_prefix=/usr --bindir=/usr/sbin --sbindir=/usr/sbin --libexecdir=/usr/lib/squid --sysconfdir=/etc/squid --localstatedir=/var/spool/squid --datadir=/usr/share/squid --enable-async-io=24 --with-aufs-threads=24 --with-pthreads --enable-storeio=aufs --enable-linux-netfilter --enable-arp-acl --enable-epoll --enable-removal-policies=heap --with-aio --with-dl --enable-snmp --enable-delay-pools --enable-htcp --enable-cache-digests --disable-unlinkd --enable-large-cache-files --with-large-files --enable-err-languages=English --enable-default-err-language=English --enable-referer-log --with-maxfd=65536
echo -e "$COL_GREEN Executing MAKE and MAKE INSTALL commands . . . !! $COL_RESET"
sleep 3
make
make install
echo -e "$COL_GREEN Creating SQUID LOGS folder and assigning permissions . . . !! $COL_RESET"
sleep 3

# Checking if log folder is previously present or not . . .
if [ -d "/var/log/squid" ]; then
echo -e "$COL_GREEN LOGS folder found. No need to create, proceeding further . . . $COL_RESET"
else
echo -e "$COL_GREEN Creating LOG folder /var/log/squid and setting permissions accordingly (to user proxy) $COL_RESET"
mkdir /var/log/squid
chown proxy:proxy /var/log/squid
fi
echo -e "$COL_GREEN Downloading SQUID.CONF file from $URL and copying it to $SQUID_DIR. . . !! $COL_RESET"
echo -e "$COL_GREEN Downloading SQUID.CONF file from $URL and copy it to $SQUID_DIR. . . !! $COL_RESET"
sleep 3

# Checking IF SQUID.CONF file has ALREADY been downloaded in /tmp, to avoid duplication! [[ JZ .. . .]]
if [ -f /tmp/squid.conf ]; then
rm -fr /tmp/squid.conf
fi

cd /tmp
wget $URL/squid.conf

# Checking IF SQUID.CONF file has been downloaded. If YES continue further, otherwise EXIT the script with ERROR ! [[ JZ .. . .]]
if [ ! -f /tmp/squid.conf ]; then
echo -e "$COL_RED ERROR: SQUID.CONF file could not be downloaded or was not found in /tmp/ !! $COL_RESET"
exit 1
fi
cp -fr squid.conf $SQUID_DIR

echo -e "$COL_GREEN Downloading STOREURL.PL file from $URL and copy it to $SQUID_DIR. . . !! $COL_RESET"
sleep 3
cd /tmp

if [ -f /tmp/storeurl.txt ]; then
rm -fr /tmp/storeurl.txt
fi

wget $URL/storeurl.txt

if [ -f /tmp/ ]; then
rm -fr /tmp/
fi

mv storeurl.txt

# Checking IF STOREURL.PL file has been downloaded. If YES continue further, otherwise EXIT the script with ERROR ! [[ JZ .. . .]]
if [ ! -f /tmp/ ]; then
echo -e "$COL_RED ERROR: STOREURL.PL file could not be downloaded or was not found in /tmp/ !! $COL_RESET"
exit 1
fi
cp -fr $SQUID_DIR

echo -e "$COL_GREEN Setting EXECUTE permission for . . . !! $COL_RESET"
chmod +x $SQUID_DIR/

# Creating CACHE folders
echo -e "$COL_GREEN Creating CACHE directory in $CACHE_DIR , in this example,I used 5GB for cache for test ,Adjust it accordingly  . . . !! $COL_RESET"
sleep 3

# Checking if /cache-1 folder exists . . .
if [ ! -d "$CACHE_DIR" ]; then
echo -e "$COL_GREEN Creating cache folder in $CACHE_DIR. Default size is 5GB; you should set it according to your requirements . . . $COL_RESET"
mkdir $CACHE_DIR
chown proxy:proxy $CACHE_DIR
chmod 777 -R $CACHE_DIR
squid -z
else
echo -e "$COL_RED $CACHE_DIR folder already exists, clearing it before proceeding. . . $COL_RESET"
rm -fr $CACHE_DIR/*
chown proxy:proxy $CACHE_DIR
echo -e "$COL_GREEN $CACHE_DIR Initializing cache directories as per the config  . . . $COL_RESET"
squid -z
chmod 777 -R $CACHE_DIR
fi

echo -e "$COL_GREEN Adding squid in /etc/rc.local for auto startup . . . !! $COL_RESET"
sed -i '/exit/d' /etc/rc.local
sed -i '/\/usr\/sbin\/squid/d' /etc/rc.local
echo /usr/sbin/squid >> /etc/rc.local
echo exit 0 >> /etc/rc.local
echo -e "$COL_GREEN Starting SQUID (and adding 10 seconds pause for proper initialization). . . !! $COL_RESET"
/usr/sbin/squid
sleep 10

# Checking SQUID status via PID [zaib]
pid=`pidof squid`
if [ "$pid" == "" ]; then
echo -e "$COL_RED ERROR: UNABLE to start SQUID, try to run it with 'squid -N -d1' and see where it shows the error !! $COL_RESET"
else
ps aux |grep squid
echo -e "$COL_GREEN $squidlabel is running OK with PID number "$pid", no further action required, EXITING  . . .$COL_RESET"
fi
echo To view squid web access activity log, use command
echo -e "$COL_GREEN tail -f /var/log/squid/access.log $COL_RESET"
echo OR
echo -e "$COL_GREEN tail -f /var/log/squid/access.log |ccze $COL_RESET"
echo -e "$COL_GREEN Regard's / Syed Jahanzaib . . . !! $COL_RESET"


now execute the script by running


It will start the installation and show you the progress of every action it performs [in colored rows: RED shows errors, GREEN shows OK/INFO].



To start the SQUID server in debug mode in the foreground, to check for any errors, use the following:

squid -N -d1

If squid started successfully, you can see its process via the ps command:

ps aux |grep squid

as shown in the image below …


June 19, 2014

SAN attached windows 2008 hangs on boot

Filed under: IBM Related, Microsoft Related — Tags: , , , — Syed Jahanzaib / Pinochio~:) @ 9:37 AM

Just for reference purpose:

Recently I was testing a disaster recovery scenario: restoring Server A to Server B with identical hardware, using the Symantec Backup Exec 2014 Simplified Disaster Recovery [SDR] CD. The hardware specs were as follows …

IBM xSeries 3650 M4, with RAID1
Dual QLogic Fibre Channel cards, Model: QLE2560, connected with two FC switches for multipath and failover
32 GB RAM,
IBM v3700 Storwize SAN storage

The restore went fine, and the system booted fine the first time with everything intact. But when I rebooted it again, it failed to boot and showed only a blinking cursor, as shown in the image below …


I tried to boot it several times but with no result. I then removed the FC cables from the server's QLogic FC cards, and this time Windows booted fine.


I started the server without the FC cables attached, then removed the Windows MPIO feature from ADD/REMOVE FEATURES, and rebooted again with the FC cables attached. This time it worked fine but showed duplicate SAN partitions. Then I applied IBM's SDDDSM MPIO driver (MPIO_Win2008_x64_SDDDSM_64_2434-4_130816 for v3700 Storwize) and everything went fine :)

You may also want to read the IBM’s article.



Syed Jahanzaib

June 12, 2014

Mikrotik WAN monitoring script with multiple host check

Filed under: Mikrotik Related — Tags: , — Syed Jahanzaib / Pinochio~:) @ 2:31 PM


Recently I added a Mikrotik Netwatch-based script on a network to monitor the WAN link; if no ping is received from the WAN host (example:, the down script changes the backup link route to take priority over the primary link. But the issue is that NETWATCH is a somewhat unreliable method to check internet connectivity, because it can check only a single host at a time. Also, if your WAN link is weak or heavily used, a few pings may time out, which is sometimes normal (for example, 3 out of 10 replies missed), yet Netwatch may consider the target link DOWN: Netwatch gives a “DOWN” status immediately upon a missed ping, regardless of the Timeout setting.

So to prevent that, we should use a method which can check at least two or more hosts on the Internet, like the ISP gateway IP plus another reliable host such as (or any other host in your particular region). The link is considered DOWN only when the replies from BOTH hosts fail; if one host is down but the other is reachable, the link is still considered UP — a kind of cross-verification. Likewise, if only 2 out of 5 pings are missed, the link is still considered UP.

A multiple-host check is recommended because, with a single-host check (script or Netwatch), it can happen that a ping reply is not received for various reasons (the host itself is down, or the ISP has blocked it) while the rest of the internet is working fine; the script/Netwatch would still consider the LINK down due to its single-host check. That's why a multi-host check is recommended.
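For a quick sanity check from any Linux box, the same multi-host rule can be sketched in shell (the host values here are just the examples from the text; the RouterOS script is what actually runs on the router):

```shell
# Ping two reference hosts 5 times each; declare the link DOWN only when
# ALL 10 pings fail, mirroring the multi-host logic described above.
HOST1=   # example hosts only; use your ISP gateway / a regional host
fails=0
for i in 1 2 3 4 5; do
    ping -c 1 -W 1 "$HOST1" >/dev/null 2>&1 || fails=$((fails + 1))
    ping -c 1 -W 1 "$HOST2" >/dev/null 2>&1 || fails=$((fails + 1))
done
if [ "$fails" -eq 10 ]; then echo "LINK DOWN"; else echo "LINK UP"; fi
```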


ROS SCRIPT CODE: (Script name= monitor)

# Following script is copied from the Mikrotik forum.
# Thanks to mainTAP and rextended for sharing
# Modified a few contents to suit local requirements and added descriptions
# Regard's / Syed Jahanzaib /

# Script Starts here...
# Internet hosts to be checked. You can modify them as required, JZ
:local host1   ""
:local host2   ""

# Do not modify data below without proper understanding.
:local i 0;
:local F 0;
:local date;
:local time;
:global InternetStatus;
:global InternetLastChange;

# PING each host 5 times
:for i from=1 to=5 do={
:if ([/ping $host1 count=1]=0) do={:set F ($F + 1)}
:if ([/ping $host2 count=1]=0) do={:set F ($F + 1)}
:delay 1;
}

# If both hosts are down and all replies timed out, then the link is considered down
:if (($F=10)) do={
:if (($InternetStatus="UP")) do={
:log error "WARNING : The INTERNET link seems to be DOWN. Please Check";
:set InternetStatus "DOWN";

##     /ip route set [find comment="Default Route"] distance=3
##     /ip firewall nat disable [find comment="Your Rules, Example"]

:set date [/system clock get date];
:set time [/system clock get time];
:set InternetLastChange ($time . " " . $date);
} else={:set InternetStatus "DOWN";}
} else={

##      If reply is received , then consider the Link is UP
:if (($InternetStatus="DOWN")) do={
:log warning "WARNING :The INTERNET link have been restored";
:set InternetStatus "UP";

##     /ip route set [find comment="Default Route"] distance=1
##     /ip firewall nat enable  [find comment="Your Rules, Example"]

:set date [/system clock get date];
:set time [/system clock get time];
:set InternetLastChange ($time . " " . $date);
} else={:set InternetStatus "UP";}
}

# Script Ends Here.
# Thank you


Scheduler to run script auto

To add a scheduler to run the script every 5 minutes (or as required), use the following code

/system scheduler
add disabled=no interval=5m name="Monitor WAN connectivity Scheduler / JZ" on-event=monitor policy=ftp,reboot,read,write,policy,test,winbox,password,sniff,sensitive,api start-date=jun/12/2014 start-time=\

Don’t forget to change the script name monitor in above scheduler to match the name you set for the script.
Example: on-event=monitor


Define Static Routes for Monitoring Host – for Route Changing

If you are using this script to change the internet route to a backup link, then you must define static routes for the hosts you are monitoring, so that your monitored hosts always (forcefully) go via the primary link.

/ip route
add comment="Force this HOST via Primary Link" disabled=no distance=1 dst-address= gateway= scope=30 target-scope=10
add comment="Force this HOST via Primary Link" disabled=no distance=1 dst-address= gateway= scope=30 target-scope=10

Note: Make sure to change gateway to primary internet link gateway.



Syed Jahanzaib

June 5, 2014

IBM Storwize v3700 SAN Duplicate Partitions Showing in Windows 2008

Filed under: Uncategorized — Tags: , , , — Syed Jahanzaib / Pinochio~:) @ 10:13 AM


Recently one of our IBM xSeries 3650 M4 servers faced a hardware failure related to local storage. Two partitions from an IBM Storwize v3700 were assigned to this system, connected via 2 QLogic FC cards to 2 BROCADE fibre switches for failover.

After reinstalling Windows 2008 R2, the SAN partitions appeared duplicated. The Windows MPIO feature was enabled, but the partitions still appeared twice. After applying IBM's updated SDDDSM MPIO driver, the problem was solved.

Subsystem Device Driver Device Specific Module (SDDDSM) is IBM's multipath I/O solution based on Microsoft MPIO technology; it is a device-specific module designed to support IBM storage devices. Together with MPIO, it supports multipath configuration environments on IBM storage.

The download link is as follows. Just a small patch; apply and restart :)


Platform: Windows Server 2008 / 2008 R2 (32-bit / 64-bit)
SDDDSM v2.4.3.4-4
SDDDSM for Windows Server 2008
Byte Size 577711


Syed Jahanzaib

June 4, 2014

Radius Manager Dealer Panel

Filed under: Radius Manager — Tags: , , — Syed Jahanzaib / Pinochio~:) @ 1:38 PM

In Radius Manager, we have an option to add a MANAGER (Dealer/Reseller) so that the Dealer/Reseller can have access to his own management panel (similar to the ACP but with some limitations). The Dealer/Reseller can create new users, disable them, add deposit/credit to a user account, access invoices, and so on.

You can assign various permissions to the dealer as per requirements. Following is an example of creating NEW MANAGER with minimum rights.

Go to Managers, and select NEW Manager

As shown in the image below …


Assign necessary permissions, this is important :)


Permission Explanation:

• List users – Can list users.
• Register users – Can register new users.
• Edit users – Can edit basic user data (name, address etc.).
• Edit privileged user data – Allows editing privileged fields (credits, static IP).
• Delete users – Can delete users.
• List managers – Can list managers.
• Register managers – Can register new managers.
• Edit managers – Can edit managers.
• Delete managers – Can delete managers.
• List services – Can list services.
• Register services – Can register new services.
• Edit services – Can edit services.
• Delete services – Can delete services.
• Billing functions – Can generate invoices.
• Allow negative balance – Can refill prepaid accounts even if the reseller account is in negative balance.
• Allow discount prices – Can form the service price freely (discount).
• Enable canceling invoices – Enable canceling invoices (enter negative amount in Add credits form to cancel an invoice).
• Access invoices – Can access invoicing functions.
• Access all invoices – Can access all invoices not only the own ones.
• Shown invoice totals – Display the totals in List invoices view.
• Edit invoices – Can enter the payment date for postpaid invoices.
• Access all users – Can access all users in the system.
• List online users – Can list online users.
• Disconnect users – Can disconnect users.

Card system and IAS -

• Card system and IAS – Can access prepaid card and IAS system.
• Connection report – Can access CTS functions.
• Overall traffic report – Can access traffic report.
• Maintain APs – Can access AP functions.
Click the Update manager button to store the manager data.


Now, by default, this Dealer/Reseller will have a zero balance, so he won't be able to add credits to user accounts (he can create new accounts, but these accounts are by default EXPIRED; in order to renew a user account, the Dealer/Reseller MUST have a deposit in his account).

Now add some AMOUNT to his account. Open Managers and edit that dealer.
As shown in the image below …



Now test it by logging in with the dealer ID and adding a new user. By default the new user will be expired, and the dealer must add credit to the user account. (He can also add a DEPOSIT, but then the user has to log in himself with his user ID and password to the user management panel and refresh his account with the deposited amount added by the dealer.)

As shown in the image below …






Binding Dealer/Reseller to Use Only Specific Services

You can also bind a specific service to a specific Dealer/Reseller. For example, you don't want Dealer A to use all services; instead you want to show him specific services only. Log in to the ACP as ADMIN, go to Services, and open the services that you do or don't want to be displayed in Dealer/Reseller A's panel,

As shown in the image below …



The result can be seen here…


I will write more in some free time.


Syed Jahanzaib

Non Payment Reminder for Expired Users in RADIUS MANAGER 4.x.x

Filed under: Radius Manager — Tags: , — Syed Jahanzaib / Pinochio~:) @ 12:31 PM


As requested by many friends, following is a short guide on how to configure a payment reminder for expired users in DMASOFTLAB RADIUS MANAGER 4.x.x.
[I wrote this guide because it is better to explain in detail with snapshots here, rather than explaining to every individual.]

This guide demonstrates that when a user account expires, the user can still log in to your Mikrotik / NAS, but when he tries to browse, he will be redirected to a Non Payment page showing why his access is blocked. Useful in many scenarios.

Scenario -1 :

[Simple one] Mikrotik as PPPoE server

Local Web Server IP =
PPPoE IP Pool =


  • Create a new service according to your requirements, like a 1mb / 1 month limitation
  • In IP pool name, type expired
  • In the Next expired service option, select EXPIRED as the next master service, so when the primary service expires, the user's service will be switched to this one. [Note: the EXPIRED service is already available in RM by default, but if you are unable to find it, you can create it manually; just add a new service named EXPIRED and set its IP pool accordingly]

As shown in the image below …




Now create a user in the Users section and bind it with the new service you just created above, i.e. the 1mb / 1 month limitation







Add IP POOL for Expired Users

Add a new IP pool for EXPIRED PPPoE users:

/ip pool

add name=expired ranges=


As shown in the image below …



Enable WEB PROXY and add rules

Now enable the WEB PROXY and add a deny/redirect rule so that we can redirect the EXPIRED users' pool to a web server showing the non-payment reminder page. You can also use an EXTERNAL proxy like squid to do the redirection, but in this guide I am showing only the Mikrotik-level configuration.

# First Enable Mikrotik Web-Proxy (You can use external proxy server also like SQUID)
/ip proxy
set always-from-cache=no cache-administrator=webmaster cache-hit-dscp=4 cache-on-disk=no enabled=yes max-cache-size=unlimited max-client-connections=600 max-fresh-time=3d max-server-connections=600 \
parent-proxy= parent-proxy-port=0 port=8080 serialize-connections=no src-address=

# Add a rule to allow access to the web server, otherwise users won't be able to access the reminder page. This rule must be on top.
/ip proxy access
add action=allow comment="Allow access to web server so expired users can view the payment reminder page. It can be locally hosted or external (on the internet) as well." disabled=no dst-address= \

# Now add a rule to redirect the expired IP pool users to the local or external web server payment reminder page.
/ip proxy access
add action=deny disabled=no dst-port="" redirect-to=

As shown in the image below …






Now add a REDIRECT rule in the FIREWALL/NAT section.
This is to make sure that users with expired accounts are redirected to the web proxy, which will deny their request and redirect them to the web server reminder page.
Also add the valid PPPoE users' pool to the default NAT rule's src-address, so that only valid PPPoE users can browse the internet.
As shown in the image below …







Now when the client's primary profile expires, it will switch to the NEXT MASTER SERVICE which we configured as EXPIRED; thus he will get an IP from the EXPIRED pool, and Mikrotik will redirect him to the proxy, which will deny his request and redirect it to the local payment reminder page.
As shown in the image below …





If you are using an external squid proxy instead, add these lines in squid.conf before the other ACLs (or at the top):

acl expired-clients src
http_access deny expired-clients
deny_info http://web_server_ip/nonpayment/nonpayment.htm expired-clients

Note: Ideally, the web server should be on the same subnet.




Syed Jahanzaib

Symantec Backup Exec Reference Notes


Last Updated: 25th June, 2014

Recently we upgraded our SAP infrastructure with a new IBM xSeries server and also replaced the old IBM TS3200 tape library with a TS100. On the previous Windows 2003 setup we were using the classic NTBACKUP solution to take backups on the tape library, but with the new Windows 2008 R2 upgrade we found that tape drive support has been removed from the new Windows Server Backup tool. Therefore we were looking for a reliable backup solution which could drive our tape library. After much searching, we selected SYMANTEC BACKUP EXEC 2012 (with SP4 and the latest patches) as our backup solution. We tested its demo last year; it fulfilled our requirements and fit our budget. The installation went smoothly without any errors, but it took me a few days to understand how it actually works. Its GUI looks simple and easy to navigate, but I found it quite tricky to configure the tape library's auto-loading function according to job/day.

Following are short reference notes. I will keep updating them with the day-to-day tasks and issues I face, and how I manage to solve them. Symantec has a great number of guides and postings at their site too, but sometimes it is hard to find the correct solution when it is urgent.




1- The VSS Writer timed out (0x800423f2), State: Failed during freeze operation (9) [4th June, 2014]

2- Simplified Disaster Recovery: How to exclude some folders with SDR ON  [5th June, 2014]

3- Backup Exec (2012 SP4) Services Credentials Lost on every Reboot [6th June, 2014]

4- V-79-57344-42009 – Failed to load the configuration xml file,  [6th June, 2014]

5- Barcode Labeling   [10th June, 2014]

6- Exclude a sub-folder named “xyz”, or one ending with .ft, from everywhere in a specific folder/drive. [15th July, 2014]


1- The VSS Writer timed out (0x800423f2), State: Failed during freeze operation (9)

If backup failed with following error:

V-79-57344-6523314.0.1798.1364eng-systemstate-backupV-79-57344-65233ENRetailWindows_V-6.1.7601_SP-1.0_PL-0x2_SU-0x112_PT-0x3 – Snapshot Technology: Initialization failure on: “\\YOURSERVER\System?State”. Snapshot technology used: Microsoft Volume Shadow Copy Service (VSS).
Snapshot technology error (0xE000FED1): A failure occurred querying the Writer status. See the job log for details about the error.

Check the Windows Event Viewer for details.

Writer Name: COM+ Class Registration Database, Writer ID: {542DA469-D3E1-473C-9F4F-7847F01FC64F}, Last error: The VSS Writer timed out (0x800423f2), State: Failed during freeze operation (9).

Writer Name: Windows Management Instrumentation, Writer ID: {A6AD56C2-B509-4E6C-BB19-49D8F43532F0}, Last error: The VSS Writer timed out (0x800423f2), State: Failed during freeze operation (9).

The following volumes are dependent on resource: “C:” “E:” .
The snapshot technology used by VSS for volume C: – Microsoft Software Shadow Copy provider 1.0 (Version
The snapshot technology used by VSS for volume E: – Microsoft Software Shadow Copy provider 1.0 (Version

        Job ended: Wednesday, June 04, 2014 at 2:49:03 AM
Completed status: Failed
Final error: 0xe000fed1 - A failure occurred querying the Writer status. See the job log for details about the error.



Issue this command and see if any writer is failing:

vssadmin list writers


If the System Writer shows TIMED OUT, then a simple system restart usually fixes the error automatically. In my case, Windows had applied some updates, and when I rebooted the server, the above issue was fixed.
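To spot a failing writer quickly in the long listing, the output can be filtered with findstr (run from an elevated command prompt):

```
vssadmin list writers | findstr /C:"Writer name" /C:"State" /C:"Last error"
```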




2-  Simplified Disaster Recovery: Howto exclude some Folders with SDR ON

Symantec provides a Simplified Disaster Recovery option which you can use to restore the whole backup to a bare-metal system (from scratch) using the SDR boot CD. However, SDR forces you to back up every critical component, including the boot drive, system state, and any folder that SDR considers critical. Sometimes even excluding a non-critical component can turn SDR off: for example, in my case I was excluding a ‘backup folder’ from the G: drive and SDR was turning off, possibly because it considered the whole G: drive a critical component for SDR.

For Example:


So in order to forcefully exclude it, I had to use the following WORKAROUND by adding the drive entry in the REGISTRY manually. IMO it is poor that SYMANTEC has not added this option to the Backup Exec GUI, because playing with the Windows registry can be very dangerous for ordinary administrators.

Here is an example of the registry key. If folders from G: are to be excluded, create a new key called “User-Defined Exclusion Resources“.
Under this key, create another empty key called “G:”
HKEY_LOCAL_MACHINE\SOFTWARE\Symantec\Backup Exec For Windows\Backup Exec\Engine\Simplified System Protection\User-Defined Exclusion Resources\G:
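The same key can also be created from an elevated command prompt instead of regedit; the path is exactly the one quoted above (as always, back up the registry before changing it):

```
reg add "HKLM\SOFTWARE\Symantec\Backup Exec For Windows\Backup Exec\Engine\Simplified System Protection\User-Defined Exclusion Resources\G:" /f
```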

As shown in the image below …

Now if you try to exclude any folder from that particular drive (in my example, G:), SDR will remain ON, as shown in the image below ..

3- Backup Exec (2012 SP4) Services Credentials Lost on every Reboot

This was very annoying: on every reboot I had to re-enter my domain admin credentials in the Backup Exec services section, otherwise I received “Failed to start service due to Logon Failure”. It seems BE keeps forgetting the credentials, or is not storing them.


Make sure the account you are using to manage Backup Exec has the right to log on as a service (and a few others; read the Symantec rights-assignment article). Add the account in your domain controller group policy / local security policy / user rights assignment. After adding it, force an update using gpupdate on both ends, first the server and then the client.


To sort out this issue, I used the BEUTILITY tool provided with the Backup Exec installation.

For Windows 2008 64-bit, go to C:\Program Files\Symantec\Backup Exec and open BEUTILITY.EXE

Add your Backup Exec server to the list (Known Computers group),

After adding it, right-click on the server and click CHANGE SERVICE ACCOUNT

Enter your domain admin account, or any account with equivalent rights, and click OK,
As shown in the images below …





Now restart and check if the services are starting properly :)

At least this trick worked for me.



4- V-79-57344-42009 – Failed to load the configuration xml file [6th June, 2014]

Using Symantec Backup Exec 2012 SP4, when I take a full backup (SDR ON), it completes successfully but with the following error:

Job ended: Thursday, June 05, 2014 at 9:31:29 AM Completed status: Completed with exceptions

Backup- MYSERVER-79-57344-42009 – Failed to load the configuration xml file.
C:\Program Files\Symantec\Backup Exec\Catalogs\AGPSAPDEV\CatalogProcessTemporaryFolder\{6BCA5C76-6547-430D-A0D5-37251330D96D}\p2v.xml

To solve this, I applied Backup Exec 2012 Revision 1798 Hotfix 216746 and the problem was solved. Download and apply it, and don't forget to update the remote agents as well (via the BE GUI). I also had to reboot the BE server after applying this fix.


 5- BARCODE LABELING  [10th June, 2014]

In our company we have an IBM TS3100 library (which has 24 cartridge slots). Using BE, I wanted to auto-label every cartridge after the backup. I also tried the INVENTORY option, but it took much time. During the BE inventory process, each tape is taken from its slot, put into the tape drive to have its internal label read, and then returned to its slot. This process is repeated for every tape, so the inventory process for a TS3100 can take a long time. For my IBM TS3100 tape library with 24 tapes (only 5 used), an inventory of the 5 slots takes around 15-20 minutes. The tape library can, however, identify a tape from its barcode label without having to read the internal label in the tape drive or do anything else.
When there is a need to update the status of the slots in the library in BE, you can use SCAN instead of INVENTORY if you have barcode labels. A scan just reads the barcode labels and is done within a couple of seconds; otherwise, you would have to do a full inventory.

Some Snapshots.



You can download the BARCODE GENERATOR from following link.

Just make sure that you use only an 8-character code, and that the code ends with the letters L5, for example ABC123L5. (FOR IBM LTO5 drives)




For LTO5 cartridge sticker, I used following size for printing the above label.


Put in your tapes with the new barcode labels and do a scan of the entire library. Make sure you don't have a mix of tapes with and without barcode labels.


6- Exclude a sub-folder named “xyz” from everywhere in a specific folder/drive. [25th June, 2014]

Recently I upgraded my file server from Windows 2003 with NTBackup to Windows 2008 R2 with Backup Exec. I have the following directory structure …


-  User1
   -  Daily_Data
   -  Junk_Data

-  User2
   -  Daily_Data
   -  Junk_Data

-  User3
   -  Daily_Data
   -  Junk_Data

and so on; the number of users is around 300. I wanted to exclude “Junk_Data” from every folder, and excluding them one by one is a lengthy task. I excluded Junk_Data from every sub-folder by defining the following criteria.

(which means: for every user folder, exclude Junk_Data)
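The screenshot with the exact selection string has not survived here, but the idea is a single wildcard exclusion instead of 300 individual ones. Assuming the user folders live under G:\Users (the root path is an assumption, adjust it to yours), the exclusion entry would look along these lines, with the option to apply it to subfolders enabled (check the wildcard rules of your BE version):

```
G:\Users\*\Junk_Data
```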


Exclude all sub-folders whose names end with .ft from everywhere in a specific folder/drive. [15th July, 2014]

Lotus Domino keeps, for every user, folder design data which it is not necessary to back up. To exclude every folder whose name ends with .ft, use the following.
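The corresponding screenshot is also missing; the analogous wildcard entry, assuming the Domino data directory is D:\Lotus\Domino\data (a hypothetical path, adjust to your install), would be:

```
D:\Lotus\Domino\data\*.ft
```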




