Monday, March 30, 2015

russians, not employees, are on my internal network sniffing around

sometimes you just want to know who looks at your internal site. you know, the ones behind the dmz. places like that. you use google analytics because it is nice.
then all of a sudden you notice hits from places like mozambique and st. lucia and russia, and you get this feeling in your stomach that bad things have happened. really bad things.
well. the google analytics id you're given is easily guessed, and that's exactly what referral spammers do: run through tracking ids to boost their own sites. it so happens they're doing it with yours. no worries. to figure out who really accesses your site, set up some custom ga code and a cookie.
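this is why the spammer never has to touch your server: a ga hit is just an http request to google's measurement protocol endpoint, so anyone holding a guessed tracking id can fire hits at it. a sketch of a hit carrying custom dimension 1 (the tid and cid values here are placeholders, not real ids):

```shell
# measurement protocol v1 payload, built by hand
# cd1 carries custom dimension 1 -- the bit the cookie filter keys on
payload="v=1&tid=UA-XXXXXXXX-1&cid=555&t=pageview&dp=%2F&cd1=sitename-cookie"
echo "$payload"
# a real hit would be posted with:
#   curl -s https://www.google-analytics.com/collect --data "$payload"
```

ghost spam posts exactly this kind of hit against guessed tid values, which is why the custom-dimension filter below works: the spammer doesn't know your dimension value.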

use cookies for tracking:
google analytics
admin

property > Custom Definitions | Custom Dimensions
+New Custom Dimension

Name: sitename-cookie
Scope: User
Active: checked

after creation:
My index is 1

var dimensionValue = 'SOME_DIMENSION_VALUE';
ga('set', 'dimension1', dimensionValue);

in my case:

SOME_DIMENSION_VALUE = sitename-cookie

(the field name is 'dimension' plus the index -- with index 1 it must stay literally 'dimension1'; it is not a name you pick)

insert the following javascript snippet:

var dimensionValue = 'sitename-cookie';
ga('set', 'dimension1', dimensionValue);

google tag manager
https://www.google.com/tagmanager/
account sitename
container sitename
where to use: web pages
tags: google analytics

tag name: sitename-cookie
tag type: universal
tracking id: UA-XXXXXXXX-1 (google analytics id)

more settings | cookie configuration
cookie name sitename-cookie

more settings | custom dimensions
index 1
dimension value sitename-cookie

firing rules: all pages

create & publish

place the gtm container snippet in your pages as directed by google.

google analytics
admin

Filter | create new
Filter name sitename-cookie
include
Filter field sitename-cookie (custom)

Wednesday, March 25, 2015

nmap subnet scan with os details

this just churns and churns.
 #!/bin/bash  
   
 # %m is month; %M would be minutes  
 date=$(date +%Y%m%d)  
 OUTDIR=ip_audit_$date  
 mkdir -p "$OUTDIR"  
   
 SUBNETS="8 9 10 11 16 20 24 28 32 128"  
   
 for net in $SUBNETS; do  
      for ip in $(seq 1 254); do  
          echo -n "192.168.$net.$ip: "  
          # -W 1: give up after a second so down hosts don't stall the sweep  
          if ping -c 1 -W 1 192.168.$net.$ip > /dev/null 2>&1; then  
              echo "up"  
              nmap -O 192.168.$net.$ip | grep 'OS details: '  
          else  
              echo down  
          fi  
      done  
 done | tee "$OUTDIR/scan.log"  
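by the by, nmap can do the ping sweep itself, which churns a lot less than pinging one address at a time. a sketch of pulling live hosts out of nmap's grepable output -- the here-doc lines below stand in for what `nmap -sn 192.168.8.0/24 -oG -` emits:

```shell
# keep only the hosts nmap reports Up; $2 is the ip address field
awk '/Status: Up/{print $2}' <<'EOF'
Host: 192.168.8.1 ()  Status: Up
Host: 192.168.8.2 ()  Status: Down
Host: 192.168.8.7 ()  Status: Up
EOF
# -> 192.168.8.1
#    192.168.8.7
```

feed that list to nmap -O and you only fingerprint machines that answered.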
   

weechat log to html part deux

so. irclog2html thinks my logs are unimportant (heh i read the css). pretty no more.
cat does it.
 #!/bin/bash  
 # parse logs to html  
 # remove hashes from names  
 # cat log content to html file  
 # create index from directory list  
   
 title="irc logs"  
 source=~/.weechat/logs  
 working=~/scratch  
 dest=~/public_html/logs  
   
 # copy source  
   
 cd $source  
 cp *.log $working  
   
 # rename  
   
 cd $working  
 rename -f 's/[^A-Za-z0-9._-]//g' *  
   
 # copy contents  
 cd $working  
   
 for i in * ; do  
     touch $i.html  
     echo "<html>" >> $i.html  
     echo "<title>$i</title>" >> $i.html  
     echo "<body>" >> $i.html  
     echo "<a href=index.html>$title</a><br><br>" >> $i.html  
     echo "<h3>$i</h3><pre>" >> $i.html  
     cat $i >> $i.html  
     echo "<br><br><br><br><br><br><a href=index.html>$title</a><br><br>" >> $i.html  
     echo "</pre></body></html>" >> $i.html  
     mv $i.html $dest  
 done  
   
 rm *.log  
   
 # create index  
   
 cd $dest  
   
     echo "<html>" > $dest/index.html  
     echo "<title>$title</title>" >> $dest/index.html  
     echo "<body>" >> $dest/index.html  
     echo "<h3>$title</h3>" >> $dest/index.html  
   
 ls $dest | grep -vE 'index\.html|core' | awk -F"." '{print "<a href=" $1 "." $2 "." $3 "." $4 ".html>" $1 "." $2 "." $3 "." $4 "</a><br>"}' \  
 >> $dest/index.html  
   
     echo "</body></html>" >> $dest/index.html  
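for the curious, here is what the rename expression and the index awk actually do to a log name (the name below is a made-up sample):

```shell
# strip anything outside [A-Za-z0-9._-], e.g. the '#' in channel names
# (same character class the rename uses)
name=$(echo '2015-03-25-irc.server.#chan.log' | sed 's/[^A-Za-z0-9._-]//g')
echo "$name"   # -> 2015-03-25-irc.server.chan.log

# the index line built from the resulting html file name
echo "$name.html" | awk -F"." '{print "<a href=" $1 "." $2 "." $3 "." $4 ".html>" $1 "." $2 "." $3 "." $4 "</a><br>"}'
# -> <a href=2015-03-25-irc.server.chan.log.html>2015-03-25-irc.server.chan.log</a><br>
```

note the awk banks on exactly four dot-separated fields before the trailing .html, which is what the log mask produces.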
   

weechat log to html script

and in your handy cron job you run this to convert your logs and make an index from your mosh/screen/weechat logs. irclog2html doesn't like ~, by the by.
   
 #!/bin/bash  
 # parse logs to html  
 # remove hashes from names  
 # create prettified index from directory list  
   
 title="irc logs"  
 dest=~/public_html/logs  
   
 # parse  
 irclog2html --title=' ' --index-url='index.html' --index-title='irc logs' -o '/full/pwd/to/public_html/logs/' /full/pwd/to/.weechat/logs/*.log  
   
 # rename  
 cd $dest  
 rename -f 's/[^A-Za-z0-9._-]//g' *  
   
 # create index  
 echo "<html>" > $dest/index.html  
 echo "<title>$title</title>" >> $dest/index.html  
 echo "<body>" >> $dest/index.html  
 echo "<h3>$title</h3>" >> $dest/index.html  
   
 ls $dest | grep -vE 'index\.html|irclog\.css|core' | awk -F"." '{print "<a href=" $1 "." $2 "." $3 "." $4 ".html>" $1 "." $2 "." $3 "." $4 "</a><br>"}' \  
 >> $dest/index.html  
   
 echo "</body></html>" >> $dest/index.html  
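the cron job itself might look like this -- path and schedule are made up here, use whatever suits you:

```
# hypothetical crontab entry: rebuild the html logs every hour
0 * * * * /home/logger/bin/weechat2html.sh >/dev/null 2>&1
```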
   

weechat & logging

a bothersome thing about weechat is that out of the box it plops all logs into one big file. setting a file name mask takes care of log rotation. default logging also throws in all the mode changes and the like; setting the level to 1 catches conversations only.

 logger.conf

 [file]  
 info_lines = off  
 mask = "%Y-%m-%d-$plugin.$name.log"  
   
 [level]  
 irc = 1  
 plugin.server.#channel = 1  
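instead of editing logger.conf by hand, the same settings can be made from inside weechat (assuming the stock logger plugin):

```
/set logger.file.info_lines off
/set logger.file.mask "%Y-%m-%d-$plugin.$name.log"
/set logger.level.irc 1
/save logger
```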

Tuesday, March 24, 2015

irc logs to intranet; or the joy of nginx, weechat, mosh, screen and irclog2html

what did i say again?
 irc logs to intranet  
   
 nginx homedir  
   
 location ~ ^/~(.+?)(/.*)?$ {  
   alias /home/$1/www$2;  
   autoindex on;  
 }  
   
 ...  
 weechat & mosh  
   
 apt-get update  
 apt-get install weechat  
 apt-get install weechat-curses  
 apt-get install screen  
 apt-get install mosh  
 ufw allow 60000:61000/udp  
   
 logger@host mosh 127.0.0.1 -- screen -D -RR weechat weechat-curses  
     mobile ssh        screen option sessionname program  
   
 ...  
 nb:  
 detach screen session:  
 ctrl+A D  
   
 reattach screen session:  
 screen -ls  
 screen -R sessionname  
   
 ex:  
   
 logger@host:~# screen -ls  
 There are screens on:  
     2171.weechat  (01/30/2015 11:21:00 AM)    (Detached)  
 1 Sockets in /var/run/screen/S-logger.  
   
 ...  
 logger@host irclog2html ~/.weechat/logs/*.log ; mv ~/.weechat/logs/*.html *.css ~/public_html/  
   
 ...  
 if nginx autoindex is off:  
   
 logger@host cd ~/public_html/logs ; tree -H . > index.html  
   
 voila.  

Monday, March 23, 2015

more ghettovcb status fun

so. i'd like to know what's there, what's stale and what's gone missing. and email myself.
 #!/bin/bash  
   
 datestamp=$(date +"%x %r %Z")  
   
 echo "virtual machines $datestamp" > /tmp/vm_report1  
 echo "..........................." >> /tmp/vm_report1  
   
 echo " " > /tmp/vm_report2  
 echo "new virtual machines $datestamp" >> /tmp/vm_report2  
 echo "..............................." >> /tmp/vm_report2  
   
 echo " " > /tmp/vm_report3  
 echo "stale repository backups $datestamp" >> /tmp/vm_report3  
 echo ".............................." >> /tmp/vm_report3  
   
 echo " " > /tmp/vm_report4  
 echo "repository backups $datestamp" >> /tmp/vm_report4  
 echo "............................." >> /tmp/vm_report4  
   
 # seed the state files so the diffs work on the first run  
 touch /tmp/vm_header1 /tmp/vm_backup1  
   
 ls /opt/vm-repo/vm_backups > /tmp/vm_header2  
 diff -c <(sort /tmp/vm_header1) <(sort /tmp/vm_header2) | sed '/^!/ d' > /tmp/vm_header3  
   
 find /opt/vm-repo/vm_backups -type d -mtime 0 | sort > /tmp/vm_backup2  
 diff -c <(sort /tmp/vm_backup1) <(sort /tmp/vm_backup2) | sed '/^!/ d' > /tmp/vm_backup3  
   
 cat /tmp/vm_report1 /tmp/vm_header2 /tmp/vm_report2 /tmp/vm_header3 /tmp/vm_report3 /tmp/vm_backup3 /tmp/vm_report4 /tmp/vm_backup2 | mail -s "vm_backup status" me@there.com  
   
 mv /tmp/vm_backup2 /tmp/vm_backup1  
 mv /tmp/vm_header2 /tmp/vm_header1
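the diff -c | sed '/^!/ d' trick, for the record: in a context diff, in-place changes are flagged with '!', additions with '+' and deletions with '-', so the sed drops the churn and keeps only what appeared or vanished. a toy run:

```shell
# vm list from yesterday vs today; vm-c is new
printf 'vm-a\nvm-b\n' > /tmp/demo_old
printf 'vm-a\nvm-b\nvm-c\n' > /tmp/demo_new
diff -c /tmp/demo_old /tmp/demo_new | sed '/^!/ d'
# the surviving interesting line is:  + vm-c
```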