Thursday, October 26, 2017

openssl is too old. of course.

 I was having a good morning. I got to work on time and had a cup of coffee.  
 The world was good.  
 Then I see this:  
 Downloading: hostname in certificate didn't match: <> != <> OR <> OR <> OR <> OR <> OR <> OR <>  
 browsing is fascinating, to say the least  
 me@:~/certs$ wget  
 --2017-10-26 12:20:05--  
 Connecting to||:443... connected.  
 ERROR: certificate common name `' doesn't match requested host name `'.  
 To connect to insecurely, use `--no-check-certificate'.  
 I see the same across a bunch of build systems. ffs.  
 Maybe it is the firewall doing something weird.  
 me@:/etc/ssl/certs$ openssl version -a  
 OpenSSL 0.9.8k 25 Mar 2009  
 built on: Thu Mar 19 15:32:30 UTC 2015  
 platform: debian-i386-i686/cmov  
 options: bn(64,32) md2(int) rc4(idx,int) des(ptr,risc1,16,long) blowfish(idx)  
 CN -DHAVE_DLFCN_H -DL_ENDIAN -DTERMIO -O3 -march=i686 -Wa,--noexecstack -g -Wall  
 OPENSSLDIR: "/usr/lib/ssl"  
 All certs are here: /etc/ssl/certs  
 All symlinked to: /usr/share/ca-certificates/  
 $JAVA_HOME/lib/security/cacerts is the same.  
 apt-get install --reinstall openssl  
 apt-get install --reinstall ca-certificates  
 cd /usr/lib/ssl/certs  
 me@:~$ openssl s_client -connect  
 depth=2 /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Certification Authority  
 verify error:num=20:unable to get local issuer certificate  
 verify return:0  
 I need the /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Certification Authority certificate.  
 It is present. Very present.  
 me@:~$ openssl s_client -CApath /etc/ssl/certs/ -connect < /dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > zlibnet.pem  
 depth=3 /C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root  
 verify return:1  
 depth=2 /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Certification Authority  
 verify return:1  
 depth=1 /C=US/ST=TX/L=Houston/O=cPanel, Inc./CN=cPanel, Inc. Certification Authority  
 verify return:1  
 depth=0 /  
 verify return:1  
 cat the output and yep. the pem is pem-a-licious.  
 me@:~$ sudo cp zlibnet.pem /usr/lib/ssl/certs  
 me@:~$ wget  
 --2017-10-26 12:15:48--  
 Connecting to||:443... connected.  
 ERROR: certificate common name `' doesn't match requested host name `'.  
 To connect to insecurely, use `--no-check-certificate'.  
 Nope. Weird. Well, that's new. Let's see what happens if we specify the cert dir.  
 me@:~$ wget --ca-directory=/usr/lib/ssl/certs  
 --2017-10-26 12:15:48--  
 Connecting to||:443... connected.  
 ERROR: certificate common name `' doesn't match requested host name `'.  
 To connect to insecurely, use `--no-check-certificate'.  
 No? So. certificate common name doesn't match requested host name. Why?  
 OpenSSL is too old.  
 OpenSSL 0.9.8k 25 Mar 2009 <- too old  
 me@:~$ wget --no-check-certificate  
 me@:~$ curl --insecure  
 curl -L --remote-name  
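In hindsight, the common-name mismatch is the classic symptom of a client stack too old to speak SNI. A rough guard for build scripts, assuming that diagnosis; the version-string check is a sketch, not an exhaustive test:

```shell
# Very coarse: flag pre-1.0 OpenSSL version strings, since tools linked
# against 0.9.x builds commonly fail SNI-dependent hosts like the above.
check_openssl_ver() {
  case "$1" in
    0.9.*) echo "too-old" ;;   # e.g. the 0.9.8k build above
    *)     echo "ok" ;;
  esac
}
# usage: check_openssl_ver "$(openssl version | awk '{print $2}')"
```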

Monday, October 9, 2017

symantec enterprise protection and centos 7 notes

 symantec enterprise protection for linux is way less than nice.  
 there is what i would call "glibc disarray."  
 # yum install glibc libgcc libX11  
 # yum install glibc.i686 libgcc.i686 libX11.i686  
 do your install and check up on it:  
 # /opt/Symantec/symantec_antivirus/sav info -a  
 Enabled <- yes  
 # /opt/Symantec/symantec_antivirus/sav manualscan -s /nfs/mount/ <- scan a decade's worth of work  
 # /opt/Symantec/symantec_antivirus/sav info -s <- is the scan running?  
 # tail -f -n 30 /var/symantec/sep/Logs/10666666.log <- tell me more  
 # ls -la /var/symantec/sep/Quarantine/ <- here be viruses  
 to free nfs mounts from the tight grip of sep after you foolishly  
 scan a decade's worth of work.  
 # lsof |grep /nfs/mount |grep rtvscand |awk '{print $3}' |grep -o '[0-9]*' |sort -n |uniq |xargs kill -9  
 # umount /nfs/mount  
 # /opt/Symantec/symantec_antivirus/sav info -a  
 scan engine is malfunctioning  
 # /etc/init.d/rtvscand restart  
 i dislike logs:
 # cd /var/symantec/sep/Logs
 # for i in *.log ; do echo "" > $i ; done  
 # echo "" > /opt/Symantec/LiveUpdate/Logs/lux.log
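The log-zapping loops above can be wrapped up; a sketch (pass whatever directory you like, the symantec path is just the one from these notes):

```shell
# Zero out every *.log under the directory given as $1 without deleting
# the files, so daemons keep their open handles.
truncate_logs() {
  find "$1" -name '*.log' -exec sh -c ': > "$1"' sh {} \;
}
# usage: truncate_logs /var/symantec/sep/Logs
```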

centos 7 write path & auto eth issue

 # dmesg |grep "WRITE SAME"  
 [  6.984034] sda3: WRITE SAME failed. Manually zeroing.  
 # touch /etc/tmpfiles.d/write_same.conf  
 # find /sys | grep max_write_same_blocks >> /etc/tmpfiles.d/write_same.conf  
 # vi /etc/tmpfiles.d/write_same.conf  
  # type path mode uid  gid  age argument  
  w /sys/devices/pci0000:00/0000:00:10.0/host0/target0:0:0/0:0:0:0/scsi_disk/0:0:0:0/max_write_same_blocks -  -  -  - 0  
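Instead of appending raw paths and hand-editing them into 'w' entries, the whole conf can be generated in one pass; a sketch:

```shell
# Read sysfs paths on stdin, emit one tmpfiles.d 'w' entry per path.
mk_write_same_conf() {
  while read -r p; do
    printf 'w %s - - - - 0\n' "$p"
  done
}
# usage: find /sys -name max_write_same_blocks | mk_write_same_conf \
#          > /etc/tmpfiles.d/write_same.conf
```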
 next, deal with eth  
 # nmcli d  
 ens160 ethernet disconnected --  
 lo   loopback unmanaged  --  
 change the network script (set ONBOOT=yes so the interface comes up on boot)  
 vi /etc/sysconfig/network-scripts/ifcfg-<ethname>  
 restart networking however you do it.  

Wednesday, October 4, 2017

looking at data in a regkey and doing something

 i need to do "stuff" to a lot of systems. some of them i own. some i don't.  
 to make sure i do "stuff" only to the ones i own (members of HELL, HADES or PURGATORY),  
 all i need to do is figure out their domain membership status.  
 happily, domain names are saved in a system's registry.  
 REG QUERY "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultDomainName | findstr "HELL HADES PURGATORY"  
 IF %ERRORLEVEL% == 1 goto end  
 IF %ERRORLEVEL% == 0 goto dosomething  
 goto end  
 :dosomething  
 @echo "Hello World"  
 goto end  
 :end  

 in a nutshell, if an error is returned - that is, the strings defined in findstr are absent - then the script
 skips to the end and we do nothing. if the strings are present, we goto dosomething; in this case, echo "Hello World".

 the REG QUERY statement must be one line.

Tuesday, August 22, 2017

find the most recently modified file in a directory and display its contents.

  this is all i want to do today. just this.  
 cat "$(ls -lt `find $PWD -type f -name "*" ` |awk '{print $9}' | head -1)"  
 gross. the output is too long.  
 tail -n 10 "$(ls -lt `find $PWD -type f -name "*" ` |awk '{print $9}' | head -1)"  
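Parsing ls output breaks on odd filenames; a cleaner sketch using GNU find's -printf (an assumption: GNU findutils is available):

```shell
# Print the path of the most recently modified regular file under $1.
newest_file() {
  find "$1" -type f -printf '%T@ %p\n' | sort -rn | head -1 | cut -d' ' -f2-
}
# usage: tail -n 10 "$(newest_file .)"
```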

Monday, August 21, 2017

Configuring NFS under Linux for Firewall control

because this is still an issue 14 years later. reproduced from:

Configuring NFS under Linux for Firewall control

By: Chris Lowth <>
Date: April 25th 2003


This document has been written with reference to RedHat 7.x and 8.x systems but is appropriate for other Linux distributions. The author warmly invites comments, corrections and (in particular) news of using this approach on other Linuxes.


When setting up IPTABLES firewalling for Linux systems running the NFS service (network file system), you hit the problem that some of the TCP/IP and UDP ports used by components of the service are randomly generated as part of the “SunRPC” mechanism.
This document is part of the LinWiz tool kit, and describes how to set up NFS in such a way that meaningful firewall rules can be applied to the system.


Viewing the used ports.

On a system that is up and running with the NFS service active, the ports used by the components of the service can be listed using the command “rpcinfo -p”. The output will look something like this...
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  32814  status
    100024    1   tcp  33024  status
    100011    1   udp    670  rquotad
    100011    2   udp    670  rquotad
    100011    1   tcp    673  rquotad
    100011    2   tcp    673  rquotad
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100021    1   udp  32816  nlockmgr
    100021    3   udp  32816  nlockmgr
    100021    4   udp  32816  nlockmgr
    100005    1   udp  32818  mountd
    100005    1   tcp  33025  mountd
    100005    2   udp  32818  mountd
    100005    2   tcp  33025  mountd
    100005    3   udp  32818  mountd
    100005    3   tcp  33025  mountd
This listing shows the IP ports for the various versions of the service used in the 4th column. If you view this listing on different systems (or even after rebooting the same one) you may well find that the port numbers are different – this is a real problem when configuring firewalls, which tend to assume that known port numbers are used for the services being configured.

Setting up NFS to use fixed IP ports.

To make it possible to configure a firewall that controls NFS, it is useful to be able to “tie” down the ports used by these services to fixed values. Luckily this is possible in RedHat Linux versions 7 and 8 (and, I suspect; other linux distributions), although the methods for setting these port numbers are different for each of the daemons.
The following table lists the NFS daemons and summarises the relevant information for them. The sections that follow give more detail.
Daemon Name    Standard Port   Suggested Port   What to Change
portmap        111 (fixed)     111              nothing
rpc.statd      random          4000             Edit /etc/init.d/nfslock
rpc.nfsd       2049 (fixed)    2049             nothing (nfs-utils & kernel)
lockd          random          4001             Edit /etc/modules.conf
rpc.mountd     random          4002             Create or Edit /etc/sysconfig/nfs
rpc.rquotad    random          4003             Install "quota" package version 3.08 or later
                                                and edit /etc/rpc and /etc/services

Portmapper [Standard port: 111]

The portmapper is implemented by the “portmap” program which is part of the “portmap” RPM package. The service uses port 111 on both the TCP and UDP protocols.
Portmapper provides the mapping between application names and IP ports, and is therefore analogous to the /etc/service file except that it relates to RPC programs only.
Firewall rules that refer to portmapper should refer to TCP/IP and UDP packets on port 111.

Status [Random port. Suggestion: 4000]

The rpc.statd server implements the NSM (Network Status Monitor) RPC protocol. This service is somewhat misnamed, since it doesn't actually provide active monitoring as one might suspect; instead, NSM implements a reboot notification service. It is used by the NFS file locking service, rpc.lockd, to implement lock recovery when the NFS server machine crashes and reboots.
The rpc.statd program is part of the “nfs-utils” RPM package.
While rpc.statd is normally allocated a random port number by the portmapper, it is possible to configure a fixed port number by supplying the “-p” command line option when the program is launched. This can be done as follows:
Edit the file /etc/init.d/nfslock and change the “start()” procedure to add “-p” and a port number to the line “daemon rpc.statd”. The changed procedure looks like this:
start() {
        # Start daemons.
        if [ "$USERLAND_LOCKD" ]; then
          echo -n $"Starting NFS locking: "
          daemon rpc.lockd
          echo
        fi
        echo -n $"Starting NFS statd: "
        daemon rpc.statd -p 4000
        RETVAL=$?
        echo
        [ $RETVAL -eq 0 ] && touch /var/lock/subsys/nfslock
        return $RETVAL
}
Once the above change has been made, firewall rules should refer to TCP/IP and UDP packets on the chosen port. (You may find the 'LinWiz://ServerFirewall' wizard helpful when setting up a firewall for Linux).

NFS Daemon [Standard port: 2049]

The rpc.nfsd program implements the user level part of the NFS service. The main functionality is handled by the nfsd.o kernel module; the user space program merely starts the specified number of kernel threads.
The rpc.nfsd program normally listens on port number 2049, so firewall rules can be created to refer to that port (unless it is changed from the default value).

NFS Lock Manager [Random port. Suggestion: 4001]

The NFS lock manager is a kernel module. It implements the NLM (NFS Lock Manager) part of the NFS subsystem, used for handling file and resource locks of various types. This component is sometimes referred to as "rpc.lockd", and shows up in the output of rpcinfo as "nlockmgr" (hey - consistency would only make life boring!).
On systems where the lock manager is implemented as a loadable module the port number used is set at module load time, and so is configured by adding (or editing) a line in the /etc/modules.conf file, as follows:
     options lockd nlm_udpport=4001 nlm_tcpport=4001
This sets the udp and tcp/ip port numbers. Conventionally, these two numbers should be set to the same value.
If your system has the lockd code compiled into the main kernel binary rather than as a loadable module, then the settings in modules.conf won't work. You need to add the parameters "lockd.udpport=4001 lockd.tcpport=4001" to the kernel command line in the lilo or grub configuration instead.
Note on linux kernel versions before 2.4.12: the handling of these parameters was introduced in linux kernel version 2.4.11. But since 2.4.11 is flagged as a "don't use" release, you should verify that your system has kernel 2.4.12 or later installed in order for this to work. Use the command "uname -a" to see the kernel version you are running.
To fix the port used by the NFS Lock Manager, add a line (as above) to /etc/modules.conf or lilo.conf (or grub.conf) as appropriate, and configure the firewall to manage the port number selected.

mountd [Random port. Suggestion: 4002]

The rpc.mountd program implements the NFS mount protocol. When receiving a MOUNT request from an NFS client, it checks the request against the list of currently exported file systems. If the client is permitted to mount the file system, rpc.mountd obtains a file handle for the requested directory and returns it to the client.
While rpc.mountd is normally allocated a random port number by the portmapper, it is possible to configure a fixed port number by supplying the “-p” command line option when the program is launched. This can be done by editing or creating the file /etc/sysconfig/nfs and adding the following line (replacing 4002 with your chosen port):
MOUNTD_PORT=4002
Once this edit has been made, configure the firewall to manage the port number selected.

rquotad [Random port. Suggestion: 4003]

rquotad is an rpc(3N) server which returns quotas for a user of a local filesystem which is mounted by a remote machine over NFS. It also allows setting of quotas on NFS-mounted filesystems. The results are used by quota(1) to display user quotas for remote filesystems and by edquota(8) to set quotas on remote filesystems. The rquotad daemon is normally started at boot time from the system startup scripts.

There are two versions of rpc.rquotad that are commonly used with linux systems, one is part of the nfs utilities, and the other comes bundled with the "quota" package. RedHat 7.x and 8.x use the "quota" package - sadly, the version they use does not have any built-in mechanism for tying down the port. Happily - version 3.08 of the quota tools package DOES allow this.

The home page of the linuxquota project is at: To obtain the software, visit the site, download the sources and build them on your platform. If you have RedHat 8.0, then you can download the RPMs from my web site, and install.
To use this package to update the existing one..
  • First verify that your system is not already running "quota" version 3.08 or later (RedHat may have provided the up-to-date version since this document was written).
  • Download the quota rpm from my web site.
  • Install in "update" mode by using the command: rpm -Uhv quota-3.08-1.rpm
Once the updated "quota" package is installed, you can "fix" the port used by rpc.rquotad as follows..
  • Check that the following line is present in the file /etc/rpc. It should be there, but if it isn't, then add it yourself. NB: the number "100011" is NOT the port number but the fixed RPC program number - it is important that you don't change it.  
    • rquotad 100011 rquotaprog quota rquota
  • Add (or modify) the following two lines to the /etc/services file (replacing the number 4003 with the port number you want rpc.rquotad to listen on).
    • rquotad 4003/tcp
    • rquotad 4003/udp
Once these changes have been made, configure the firewall to manage the port numbers selected.
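Pulling the suggested fixed ports together, one way to emit matching iptables rules is sketched below; the port list mirrors the article's suggestions, while the flat ACCEPT policy (no source restriction) is an assumption to adapt:

```shell
# Print ACCEPT rules for the fixed NFS ports suggested above, tcp and udp.
nfs_rules() {
  ports="111,2049,4000,4001,4002,4003"
  for proto in tcp udp; do
    printf 'iptables -A INPUT -p %s -m multiport --dports %s -j ACCEPT\n' \
      "$proto" "$ports"
  done
}
# usage: nfs_rules | sh   # once you trust the output
```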

who is talking to my microsoft windows dns server?

 who is talking to my ms dns server?  
 turn on debug logging on the server. we want incoming client requests.  
 your logs will be here: c:\Windows\System32\dns\   
 since we're doing our work on a linux box...  
 $ sudo mount -t cifs -o username=myname,password=mypass\!, //thedamned/C$ /tmp/amount/  
 copy the log.  
 $ cp /tmp/amount/Windows/System32/dns/dns.log ~  
 remove everything except for the ip addresses. sort the results and   
 remove all duplicate entries. write to file for further processing.  
 $ cat dns.log | egrep -o '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort -nu >> pithyresolv  
 ip addresses are fine? since names are even more useful,  
 create the following bash script:  
 #!/bin/bash  
 while read -r line; do  
   dig -x "$line" +short >> resolved  
 done  
 feed the list into the script:  
 $ ./ < pithyresolv  
 $ cat resolved | mailx me@hell  
 there. sliced, diced, and emailed.  
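The extraction step is worth keeping around as a function; a sketch of the same grep/sort pipe:

```shell
# Pull unique IPv4 addresses out of a log file given as $1.
extract_ips() {
  grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' "$1" | sort -u
}
# usage: extract_ips dns.log > pithyresolv
```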

Thursday, August 3, 2017

make redhat use centos repos

 [root@satan ~]# yum install traceroute  
 Loaded plugins: rhnplugin, security  
 This system is not registered with RHN.  
 RHN support will be disabled.  
 Setting up Install Process  
 No package traceroute available.  
 Nothing to do 
 [root@satan ~]#  

 well flock.  

 [root@satan ~]# cat /etc/redhat-release  

 RedHat Enterprise Crap 6.crap  

 yes. redhat. not centos. let's use centos for yum, shall we?  

 [root@satan ~]# vi /etc/yum.repos.d/centos.repo  

 [centos]  
 name=CentOS $releasever - $basearch  

 nb: in the baseurl, after centos, place 5 or 6 depending on the major version of RHEL.  

 [root@satan ~]# yum install traceroute 

 stuff happens...
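For reference, a complete stanza might look like the sketch below; the baseurl is an assumption (point it at whatever CentOS mirror you use, with the major version matching your RHEL):

```ini
[centos]
name=CentOS $releasever - $basearch
baseurl=http://mirror.centos.org/centos/6/os/$basearch/
enabled=1
gpgcheck=0
```

Hardcoding the 6 matters: on RHEL, $releasever expands to something like "6Server", which no CentOS mirror path carries.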

Tuesday, July 25, 2017

before you go crazy check dnstracer

 # dnstracer -v  

don't forget the -v

 Tracing to [a] via , maximum of 3 retries  
 IP HEADER  
 - Destination address:  
 DNS HEADER (send)  
 - Identifier:      0x3808  
 - Flags:        0x00 (Q )  
 - Opcode:        0 (Standard query)  
 - Return code:     0 (No error)  
 - Number questions:   1  
 - Number answer RR:   0  
 - Number authority RR: 0  
 - Number additional RR: 0  
 QUESTIONS (send)  
 - Queryname:      (12)old-releases(6)ubuntu(3)com  
 - Type:         1 (A)  
 - Class:        1 (Internet)  
 DNS HEADER (recv)  
 - Identifier:      0x3808  
 - Flags:        0x8080 (R RA )  
 - Opcode:        0 (Standard query)  
 - Return code:     0 (No error)  
 - Number questions:   1  
 - Number answer RR:   0  
 - Number authority RR: 4  
 - Number additional RR: 0  
 QUESTIONS (recv)  
 - Queryname:      (12)old-releases(6)ubuntu(3)com  
 - Type:         1 (A)  
 - Class:        1 (Internet)  
 - Domainname:      (6)ubuntu(3)com  
 - Type:         2 (NS)  
 - Class:        1 (Internet)  
 - TTL:         25923 (7h12m3s)  
 - Resource length:   6  
 - Resource data:    (3)ns1(3)p27(6)dynect(3)net  
 - Domainname:      (6)ubuntu(3)com  
 - Type:         2 (NS)  
 - Class:        1 (Internet)  
 - TTL:         25923 (7h12m3s)  
 - Resource length:   6  
 - Resource data:    (3)ns3(3)p27(6)dynect(3)net  
 - Domainname:      (6)ubuntu(3)com  
 - Type:         2 (NS)  
 - Class:        1 (Internet)  
 - TTL:         25923 (7h12m3s)  
 - Resource length:   6  
 - Resource data:    (3)ns4(3)p27(6)dynect(3)net  
 - Domainname:      (6)ubuntu(3)com  
 - Type:         2 (NS)  
 - Class:        1 (Internet)  
 - TTL:         25923 (7h12m3s)  
 - Resource length:   20  
 - Resource data:    (3)ns2(3)p27(6)dynect(3)net  
  |\___ [] (No IP address)  
  |\___ [] (No IP address)  
  |\___ [] (No IP address)  
  \___ [] (No IP address)  

Thursday, July 20, 2017

discover axis webcams when you're clueless

 AXIS cameras have a severe remote compromise bug. I guess the cameras need to be found and patched. But, you know, I don’t recall where they’re at.  
 Let’s find them.  
 I do not remember, off the top of my head, all the subnets around. Happily, I'm in a mixed shop and Active Directory Sites and Services tells me what subnets are which. Cool.  
 On an AD controller, run PowerShell and enable script execution.  
 > Set-ExecutionPolicy RemoteSigned  
 Run the following cmdlet:  
 $Sites = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().Sites  
 $obj = @()  
 foreach ($Site in $Sites) {  
   foreach ($sub in $Site.Subnets) {  
     $obj += New-Object -Type PSObject -Property @{  
       "site"   = $Site.Name  
       "subnet" = $sub.Name  
     }  
   }  
 }  
 $obj | Export-Csv 'ADsites.csv' -NoType  
 The csv output lists each site with its subnets.  
 AXIS cameras have the following ports open by default:  
 TCP 21,80,554,49152  
 We can use nmap to discover and filter hosts that have the above:  
 $ nmap -p 21,80,554,49152 10.97.232.* -oG - | grep open | awk '!/closed/ && !/filtered/' >> axis  
 However, scanning UPnP port 49152 is unreliable. We could then narrow the ports, but we would be left with a guessing game as to whether or not the system is an Axis camera.  
 Luckily, Axis cameras all have a banner on FTP 21. It is either Axis or AXIS. This works better:  
 $ nmap -sS -sV -p 21 -n -Pn --script banner IPRANGE/CIDR -oG - | grep -i axis >> axis  
 To scan all the ranges, all we need to do is create a file and feed it the CIDR notated networks. I'm only concerned about my isolated networks, HELL and HELLS-GATE:  
 $ vi axis.subnet  
 Now, the completed command would be:  
 $ nmap -sS -sV -p 21 -n -Pn --script banner -iL axis.subnet -oG - | grep -i axis >> axis  

Wednesday, July 19, 2017

discover axis webcams

i'm just going to leave this here.

 nmap -sS -sV -p 21 -n -Pn --script banner -iL subnet.list -oG - | grep -i Axis > axis  

Wednesday, June 21, 2017

arg list too long. come on.

 bash-3.00# rm -rf *  
 bash: /usr/bin/rm: Arg list too long  
 what? 1000 entries is too much?  
 bash-3.00# find . -name '*' | xargs rm  

zgrep them all

 grep recursively through a whole lot of gz'd files.  
 # find -name \*.gz -print0 | xargs -0 zgrep "\<>\"  

Tuesday, May 16, 2017

no studio 12.5 for you

The situation can be summarized as "you have installed Solaris Studio 12.5 on a platform, T1000, that is not supported". 

To give you a more detailed explanation of what is happening, everything starts from 

Bug 26080816 - "Backport 25993568 - man page for -xarch=generic is wrong for SPARC to 12.5" 

which has as base bug 

Bug 25993568 - man page for -xarch=generic is wrong for SPARC 

This last bug basically says that the man page for the "CC", "cc" and "f90" compilers is wrong: it says they were compiled with the flag "-xarch=generic" while in reality they were compiled with "-xarch=sparcvis2". The use of this last flag means that the binaries shipped with the Solaris Studio 12.5 installation require that the hardware they run on advertise the VIS instruction set. 

After an internal discussion with Solaris OS and HW Support, it seems that T1 systems (so T1000 and T2000) do not advertise VIS 

The use of VIS instructions on Niagara is deprecated; the performance of even the implemented VIS instructions will often be below a comparable set of non-VIS instructions, so VIS was intentionally not enabled/advertised for T1 in hwcaps/isalist. 
T1 supports a subset of VIS1 and only the siam instruction from VIS2. And the OS fills in the gaps via software emulation for the rest. Those platforms are quite a bit past their "use by" date. And I've found that the very latest compilers don't even run on older versions of Solaris any more. 

Solaris Studio Support confirmed this information was missing from the product release notes, so the following bug has been filed to document that T1000 and T2000 are not supported by Solaris Studio 12.5+ 

Bug 26080849 - Backport 26052198 - release notes need to warn about UltraSPARC T1 to 12.5 docs 

runas someone else

 runas /user:domain\username cmd.exe  

Tuesday, May 9, 2017

rrdtool 32bit to 64bit conversion hack.

 i have a 32-bit system i am retiring and replacing with a 64-bit system.  
 i need to move a scad of rrd files. a simple copy to the new system and a prayer that it still works would be too easy.  
 on the 32-bit system:  
 ~ cd /dir  
 ~ for f in *.rrd; do rrdtool dump ${f} > `echo ${f} | cut -f1 -d .`.xml; done  
 ~ scp *.xml me@64bit:/there  
 convert the xml files to rrd files on the 64-bit system  
 ~ for f in *.xml; do rrdtool restore ${f} `echo ${f} | cut -f1 -d .`.rrd; done  

Friday, May 5, 2017

sudo and sss and email messages

 i've been getting lotsa messages from sudo about sss lookups.  
 nano -w /etc/nsswitch.conf  
 sudoers:    files sss  
 remove the sss  
 sudoers:    files  

Tuesday, May 2, 2017

no more mail from root@domain

vi /etc/email-addresses  
 root: someone@somewhere  
 no more root@domain  

Tuesday, March 14, 2017

live netapp volume copy

 copy a netapp volume on a local system  
 create the volume. then:  
 > vol restrict vol_dst  
 > vol copy vol_src vol_dst <- ignoring snapshots  
 once console reports:  
 vol copy restore 0 : 100% done  
 > vol copy status  
 No operations in progress.  
 > vol online vol_dst  

Wednesday, March 8, 2017

ldapsearch and awk follies

ldapsearch -x -D "CN=serviceacct,OU=here" -w n0tagain\!\!\! -h -p 3268 \
  -LLL "(&(objectCategory=person)(objectclass=user)(memberOf=CN=slack-here,OU=here))" \
   mail departmentNumber | awk '!/CN=/{print $2}' > yes ; awk 'NF > 0' yes > no ; 
   awk 'NR%2{printf "%s,",$0;next;}1' no > yes ; 
   mv yes output-here.`date +"%y%m%d"` ; rm -f no ;  
ldapsearch -x -D "CN=serviceacct,OU=there" -w alw4ys\@d0h\! -h -p 3268 \
   -LLL "(&(objectCategory=person)(objectclass=user)(memberOf=CN=slack-there,OU=there))" \
   mail description | awk '!/CN=/{print $2}' > yes ; awk 'NF > 0' yes > no ; 
   awk 'NR%2{printf "%s,",$0;next;}1' no > yes ; 
   awk -F, '{print $2,$1}' OFS=, yes > maybe ; 
   mv maybe output-there.`date +"%y%m%d"` ; rm -f no ; rm -f yes  
cat output-here.`date +"%y%m%d"` output-there.`date +"%y%m%d"` >> output-all.`date +"%y%m%d"` ;  
mailx -s "service users output.`date +"%y%m%d"`" <output-all.`date +"%y%m%d"` ;  
rm -f output-*  
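The only non-obvious piece above is the `NR%2` awk program, which joins every pair of lines into one comma-separated row; a quick demo:

```shell
# Odd-numbered lines: print with a trailing comma and no newline, skip the
# remaining rules; even-numbered lines: the bare `1` prints them as-is.
printf 'x\ny\nz\nw\n' | awk 'NR%2{printf "%s,",$0;next;}1'
# output:
# x,y
# z,w
```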

Friday, February 17, 2017

install via opencsw nagios nrpe 2.15 on solaris 10/11 sparc

 install via opencsw nagios nrpe 2.15 on solaris 10/11 sparc  
 if compiling is not your friend, just do opencsw.  
 pkgadd -d  
 /opt/csw/bin/pkgutil -U  
 /opt/csw/bin/pkgutil -y -i nrpe   
 /usr/sbin/pkgchk -L CSWnrpe # list files  
 # svcs svc:/network/cswnrpe:default  
 online     8:14:44 svc:/network/cswnrpe:default  
 # echo "nrpe 5666/tcp # NRPE" >> /etc/services  
 pkgadd -d  
 /opt/csw/bin/pkgutil -U  
 /opt/csw/bin/pkgutil -y -i nrpe_plugin   
 /usr/sbin/pkgchk -L CSWnrpe-plugin # list files  
 pkgadd -d  
 /opt/csw/bin/pkgutil -U  
 /opt/csw/bin/pkgutil -y -i nagios_plugins   
 /usr/sbin/pkgchk -L CSWnagios-plugins # list files  
 ls -la /opt/csw/libexec/nagios-plugins/  
 /opt/csw/libexec/nagios-plugins/check_nrpe -H localhost  
 easy. some nice one did everything for you.  

delete elasticsearch data with curl

this is for me and me alone.

 curl -XDELETE "http://graylog:9200/graylog/message/_query" -d'  
 {  
   "query": {  
     "match": {  
       "source": "nginx"  
     }  
   }  
 }'  

Friday, February 10, 2017

lazy man script to change an ip everywhere

 change that ip
 find . -type f -print0 | xargs -0 sed -i 's/'
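The sed expression above lost its pattern; the shape is `s/old/new/g` with the dots escaped. A hypothetical version wrapped in a function (directory and both addresses are placeholders you supply):

```shell
# Swap one IPv4 address for another in every file under the given directory.
swap_ip() {  # swap_ip DIR OLD NEW
  old=$(printf '%s' "$2" | sed 's/\./\\./g')   # escape dots for the regex
  new=$3
  find "$1" -type f -print0 | xargs -0 sed -i "s/$old/$new/g"
}
# usage: swap_ip /etc 10.0.0.1 10.0.0.2
```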


Wednesday, February 8, 2017

linux auth to ad with rfc 2307

Linux-AD Integration with Windows Server 2008

9 July 2007 · Filed in Tutorial
In the event that your organization is considering a migration later this year (or next?) to Windows Server 2008 (formerly “Longhorn”), here are some instructions for integrating Linux login requests against Active Directory on Windows Server 2008. These instructions are based on Linux-AD Integration, Version 4 and utilize Kerberos, LDAP, and Samba.

When this process is complete, AD users can be enabled for use on Linux systems on the network and login to those Linux systems using the same username and password as throughout the rest of Active Directory.

If you are looking for information on using Linux with a previous version of Windows before Windows Server 2008, please refer back to my AD integration index and select the appropriate article. The only significant changes in the process involve the mapping of the LDAP attributes; otherwise, the procedure is very similar between the two versions of Windows.

Preparing Active Directory (One-Time)
The process of installing and configuring Windows Server 2008 is beyond the scope of this article (although I may touch on that in the near future in a separate article). Therefore, I won’t provide detailed instructions on how to perform some of these tasks, but instead provide a high-level overview.

Enable Editing/Display of UNIX Attributes
In order to store UNIX attributes in Active Directory, the schema must be extended. To extend the schema, first install Active Directory (add the Active Directory Domain Services role to an installed server, then use the Active Directory Installation Wizard to setup Active Directory) and then add the “Identity Management for UNIX” role service (this can be done in Server Manager).

Once that role service has been installed, then the AD schema now includes a partially RFC 2307-compliant set of UNIX attributes, such as UID, UID number, GID number, login shell, etc. (Note that it may be that these attributes are already included in the schema for Windows Server 2008; I did not check the schema before installing the Identity Management for UNIX role service. With Windows Server 2003 R2, the schema was present at the time of installation, but the attributes were not visible until installing the UNIX identity services.)

At this point a new tab labeled “UNIX Attributes” will appear in the properties dialog box for users and groups in Active Directory. You’ll use this tab to edit the UNIX-specific attributes that are required for logins to Linux-based systems.

Create an LDAP Bind Account
You’ll also need to create an account in Active Directory that will be used to bind to Active Directory for LDAP queries. This account does not need any special privileges; in fact, making the account a member of Domain Guests and not a member of Domain Users is perfectly fine. This helps minimize any potential security risks as a result of this account. Just be sure that you know the account’s user principal name (UPN) and password.

Prepare Active Directory (Each User)
Each Active Directory account that will authenticate via Linux must be configured with a UID and other UNIX attributes. This is accomplished via the new “UNIX Attributes” tab on the properties dialog box of a user account.

After all the user accounts have been configured, then we are ready to configure Active Directory objects for each of the Linux server(s) that we’ll be integrating with AD.

Prepare Active Directory (Each Server)
Before we started using Samba to join Linux computers to Active Directory and generate a keytab automatically, we had to use the ktpass.exe utility on Windows to generate the keytab. Due to some current Samba-Windows Server 2008 interoperability issues, we can’t use Samba here. That means we’ll be back to using ktpass.exe to map service principals onto accounts in Active Directory. Unfortunately, you’ll need to first disable User Account Control (UAC) on your server, since UAC interferes with ktpass.exe. (Nice, huh?)

Once you’ve disabled UAC (and rebooted your server), you can map the service principal names (SPNs) using the following procedure. First, create a computer account (or a user account; either will work) with the name of the Linux server.

Next, use the following command to map the needed SPN onto this account (backslashes indicate line continuation):

ktpass.exe -princ HOST/server.fqdn@REALM.COM \
-mapuser DOMAIN\AccountName$ -crypto all \
-pass Password123 -ptype KRB5_NT_PRINCIPAL \
-out filename.keytab
Finally, copy the file generated by this command (filename.keytab) to the Linux server (using SCP or SFTP is a good option) and merge it with the existing keytab (if it exists) using ktutil. If there is no existing keytab, simply copy the file to /etc/krb5.keytab and you should be good to go.
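
The merge itself can be done interactively with ktutil. A sketch of such a session, assuming MIT Kerberos and that the transferred file landed in /root (the filenames are placeholders):

```
# ktutil
ktutil:  rkt /etc/krb5.keytab
ktutil:  rkt /root/filename.keytab
ktutil:  wkt /tmp/krb5.keytab.merged
ktutil:  quit
# mv /tmp/krb5.keytab.merged /etc/krb5.keytab
```

Writing to a temporary file and then moving it into place avoids appending duplicate entries to the live keytab, since wkt appends to an existing file.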

Now that Active Directory has computer objects (and, more importantly, SPNs) for the Linux servers and the AD users have been enabled for UNIX (by populating the UNIX attributes), we’re ready to start configuring the Linux server(s) directly.

Prepare Each Linux Server
Follow the steps below to configure the Linux server for authentication against Active Directory. (Note that this configuration was tested on a system running CentOS—a variation of Red Hat Enterprise Linux—version 4.3.)

Edit the /etc/hosts file and ensure that the server’s fully-qualified domain name is listed first after its IP address.

Make sure that the appropriate Kerberos libraries, OpenLDAP, pam_krb5, and nss_ldap are installed. If they are not installed, install them.

Be sure that time is being properly synchronized between Active Directory and the Linux server in question. Kerberos requires time synchronization. Configure the NTP daemon if necessary.
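
A minimal /etc/ntp.conf that syncs against a domain controller (dc1.example.com is a placeholder; AD DCs serve time for the domain) might look like:

```
# sync against an AD domain controller
server dc1.example.com
driftfile /var/lib/ntp/drift
```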

Edit the /etc/krb5.conf file to look something like this, substituting your actual host names and domain names where appropriate.
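
A sketch of such a krb5.conf, with EXAMPLE.COM and the dc1/dc2 hostnames as placeholders for your AD domain and domain controllers (note that DCs are listed explicitly rather than relying on DNS lookups):

```
[libdefaults]
  default_realm = EXAMPLE.COM
  dns_lookup_realm = false
  dns_lookup_kdc = false

[realms]
  EXAMPLE.COM = {
    kdc = dc1.example.com:88
    kdc = dc2.example.com:88
    admin_server = dc1.example.com:749
    default_domain = example.com
  }

[domain_realm]
  .example.com = EXAMPLE.COM
  example.com = EXAMPLE.COM
```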

Edit the /etc/ldap.conf file to look something like this, substituting the appropriate host names, domain names, account names, and distinguished names (DNs) where appropriate.
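
A sketch of an ldap.conf for nss_ldap against AD, mapping the RFC 2307 lookups onto AD's schema; the hostnames, DNs, bind account, and password are all placeholders:

```
# AD domain controller and search base
host dc1.example.com
base dc=example,dc=com
# the LDAP bind account created earlier, as a UPN, plus its password
binddn ldapbind@example.com
bindpw Password123
scope sub
ssl no
# search bases for users and groups
nss_base_passwd dc=example,dc=com?sub
nss_base_shadow dc=example,dc=com?sub
nss_base_group dc=example,dc=com?sub
# map RFC 2307 classes/attributes onto AD's schema
nss_map_objectclass posixAccount user
nss_map_objectclass shadowAccount user
nss_map_objectclass posixGroup group
nss_map_attribute uid sAMAccountName
nss_map_attribute homeDirectory unixHomeDirectory
nss_map_attribute uniqueMember member
pam_login_attribute sAMAccountName
pam_filter objectclass=User
```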

Configure PAM (this varies among Linux distributions) to use pam_krb5 for authentication. Many modern distributions use a stacking mechanism whereby one file can be modified and those changes will be applied to all the various PAM-aware services. For example, in Red Hat-based distributions, the system-auth file is referenced by most other PAM-aware services, so editing /etc/pam.d/system-auth is sufficient.
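
On CentOS 4.x, a system-auth file edited for pam_krb5 might look something like the following; the exact paths and options may vary by distribution and release:

```
auth        required      /lib/security/$ISA/pam_env.so
auth        sufficient    /lib/security/$ISA/pam_unix.so likeauth nullok
auth        sufficient    /lib/security/$ISA/pam_krb5.so use_first_pass
auth        required      /lib/security/$ISA/pam_deny.so

account     required      /lib/security/$ISA/pam_unix.so broken_shadow
account     sufficient    /lib/security/$ISA/pam_succeed_if.so uid < 100 quiet
account     required      /lib/security/$ISA/pam_permit.so

password    requisite     /lib/security/$ISA/pam_cracklib.so retry=3
password    sufficient    /lib/security/$ISA/pam_unix.so nullok use_authtok md5 shadow
password    sufficient    /lib/security/$ISA/pam_krb5.so use_authtok
password    required      /lib/security/$ISA/pam_deny.so

session     required      /lib/security/$ISA/pam_limits.so
session     required      /lib/security/$ISA/pam_unix.so
session     optional      /lib/security/$ISA/pam_krb5.so
```

The key lines are the `pam_krb5.so` entries: `use_first_pass` in the auth stack lets pam_krb5 reuse the password already collected by pam_unix.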

Edit the /etc/nsswitch.conf file to include “ldap” as a lookup source for passwd, shadow, and groups.
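
The relevant lines end up looking like this, with files consulted first and LDAP second:

```
passwd:     files ldap
shadow:     files ldap
group:      files ldap
```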

At this point we are now ready to test our configuration and, if successful, to perform the final step: to join the Linux server to Active Directory for authentication.

Test the Configuration
To test the Kerberos authentication, use the kinit command, as in kinit <AD username>@<AD domain DNS name>. This should return no errors. A klist at that point should then show that you have retrieved a TGT (ticket granting ticket) from the AD domain controller. If this fails, go back and troubleshoot the Kerberos configuration. In particular, if you are seeing references to failed TGT validation, check to make sure that both your Linux servers and AD domain controllers have reverse lookup (PTR) records in DNS and that the Linux server’s /etc/hosts file lists the FQDN of the server first instead of just the nodename.
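
A successful test looks something like this (jdoe and EXAMPLE.COM are placeholders for an actual UNIX-enabled AD user and realm):

```
$ kinit jdoe@EXAMPLE.COM
Password for jdoe@EXAMPLE.COM:
$ klist
Ticket cache: FILE:/tmp/krb5cc_500
Default principal: jdoe@EXAMPLE.COM
```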

<aside>Some readers and some other articles have suggested using the AD domain DNS name in the /etc/krb5.conf file instead of a specific AD domain controller; I recommend against this. First, I believe it may contribute to TGT validation errors; second, it is possible to list multiple KDCs (AD DCs) in the configuration. Since the only major reason to use the AD domain DNS name instead of the DNS name of one or more DCs would be fault tolerance, using the domain name doesn’t really gain you anything.</aside>

To test the LDAP lookups, use the getent command, as in getent passwd <AD username>. This should return a listing of the account information from Active Directory. If this does not work, users will not be able to log in, even if Kerberos is working fine. If you run into errors or failures here, go back and double-check the LDAP configuration. One common source of errors is the name of the LDAP bind account, so be sure that is correct.
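
With a working configuration, the output is a standard passwd-format line built from the user's UNIX attributes in AD (jdoe and the values below are hypothetical):

```
$ getent passwd jdoe
jdoe:*:10000:10000:John Doe:/home/jdoe:/bin/bash
```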

At this point, SSH logins to the Linux system using an account present in Active Directory (one which has had its UNIX attributes specified properly) should be successful. This will be true as long as you used the ktpass.exe command earlier to map the SPN onto the computer object in Active Directory. Even if you didn’t copy the keytab over to the Linux server, logins will work. Why? Because the PAM Kerberos configuration, by default, does not require a client keytab, and does not attempt to validate the tickets granted by the TGT. This means that as long as the SPN(s) are mapped to the accounts in AD, the keytab is not necessarily required.

(Note, however, that not using a keytab and/or not requiring a keytab does leave the Linux server open to potentially spoofed Kerberos tickets from a fake KDC. In addition, “native” Kerberos authentication—i.e., using a Kerberos ticket to authenticate instead of typing in a password—won’t work without a keytab.)

Deal with Home Directories
Unlike on Windows systems, home directories are required on Linux-based systems. As a result, we must provide a home directory for each AD user that will log in to a Linux-based system. We basically have three options here:

Manually create home directories and set ownership/permissions properly before users will be able to log in.

Use the pam_mkhomedir PAM module to automatically create local home directories “on the fly” as users log in. To do this, add an entry for pam_mkhomedir.so in the session portion of the PAM configuration file.

Use the automounter to automatically mount home directories from a network server. This process is fairly complicated (too involved to include the information here), so I’ll refer you to a separate article on using NFS and automounts for home directories. This has the added benefit of providing a foundation for unified home directories across both Windows and Linux systems.
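
For the second option, the session entry would look something like the following line in /etc/pam.d/system-auth (this assumes the stock pam_mkhomedir module; the path and options may vary by distribution):

```
session     required      /lib/security/$ISA/pam_mkhomedir.so skel=/etc/skel/ umask=0077
```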

Once you’ve settled on and implemented a system for dealing with home directories, you are finished! UNIX-enabled users in Active Directory can now log in to Linux-based systems with their Active Directory username and password.

What’s not addressed in this article? Password management. In this configuration, users will most likely not be able to change their password from the Linux servers and have that change properly reflected in Active Directory. In addition, “native” Kerberos authentication using Kerberos tickets won’t work unless the keytab is present. In my testing, I ran into a number of issues with the keytab and TGT validation, but I’m not sure if those are errors in my process or the result of the beta status of Windows Server 2008.

install solaris 11.3 sparc & add zfs disk via ilom

 -> set /HOST/bootmode script="setenv auto-boot? false"  
 -> reset -force /SYS  
 -> start /HOST/console  
 {0} ok boot cdrom  
 [... install ...]  
 # zpool list  
 NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT  
 rpool  556G 25.0G  531G  4% 1.00x ONLINE -  
 # zpool status  
  pool: rpool  
  state: ONLINE  
  scan: none requested  
     NAME            STATE   READ WRITE CKSUM  
     rpool           ONLINE    0   0   0  
      c1t3674DA868583E562d0s0 ONLINE    0   0   0  
 errors: No known data errors  
 # mkdir /zones  
 # format -e  
 0 c1t3674DA868583E562d0 <- in use for rpool  
 1 c3t39742F18D083CBF2d0 <- not in use  
 # zpool create zones c3t39742F18D083CBF2d0  
 # zpool list  
 NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT  
 rpool  556G 25.0G  531G  4% 1.00x ONLINE -  
 zones 1.09T  85K 1.09T  0% 1.00x ONLINE -  
 # zpool status  
  pool: rpool  
  state: ONLINE  
  scan: none requested  
     NAME            STATE   READ WRITE CKSUM  
     rpool           ONLINE    0   0   0  
      c1t3674DA868583E562d0s0 ONLINE    0   0   0  
 errors: No known data errors  
  pool: zones  
  state: ONLINE  
  scan: none requested  
     NAME           STATE   READ WRITE CKSUM  
     zones          ONLINE    0   0   0  
      c3t39742F18D083CBF2d0 ONLINE    0   0   0  
 errors: No known data errors  
 # echo yay  

Monday, January 30, 2017

oracle solaris studio 12.5 installation on oracle solaris 11.3

 set the package publisher  
 # pkg set-publisher -G solaris  
 list the packages to be installed  
 # pkg list -af 'pkg://solarisstudio/developer/developerstudio-125/*'  
 dryrun install because, you know, this is solaris and all.  
 # pkg install -nv developerstudio-125  
 do the install  
 # pkg install --accept developerstudio-125