Get HTTP arguments when using PHP as CGI

December 17th, 2008 No comments

Normally, when you use PHP as a CGI, $_GET and $_POST work as usual. But in certain configurations they may not be populated. In that case, the following code is a starting point for recovering the arguments yourself:

// Fallback when $_GET is empty: rebuild the arguments by hand.
// explode() replaces the deprecated split(), and $_SERVER['QUERY_STRING']
// avoids relying on register_globals.
foreach (explode('&', $_SERVER['QUERY_STRING']) as $pair) {
    $parts = explode('=', $pair, 2);
    $args[urldecode($parts[0])] = isset($parts[1]) ? urldecode($parts[1]) : '';
}

Categories: Uncategorized

Counter reset problem with RRDTool

December 17th, 2008 No comments

A common problem with rrdtool: you have an RRD file with one or more data sources configured as COUNTER, but the counter resets and you get an ugly high peak on the graph.

There are two ways to solve this:

  • In the “script” you use to generate the graph you have something like DEF:A=/path/to/your/file.rrd:dsname:AVERAGE to fetch the data, then you graph it with something like LINE1:A#…..
    Try adding CDEF:B=A,MINVALUE,GT,A,MAXVALUE,LT,A,UNKN,IF,UNKN,IF and graphing B instead of A.
    Replace MINVALUE with the minimum value you want to graph (typically 0) and MAXVALUE with the maximum value you want on the graph (100000000 or something like that).
    Any sample outside that range becomes unknown, which suppresses the ugly peaks on the graph (see the sketch after this list).
  • The second (better) solution is to change the DS from COUNTER to DERIVE. Unlike COUNTER, DERIVE does not assume counter wraparound, so a reset yields a negative rate that a minimum of 0 turns into unknown instead of a huge spike. To do so, follow these commands:
    # cp file.rrd file.rrd.bkup # back up the file first!
    # rrdtool tune file.rrd # with no options, this prints the current DS settings
    # rrdtool tune file.rrd --data-source-type dsname:DERIVE # change DS dsname from COUNTER to DERIVE
    # rrdtool tune file.rrd --minimum dsname:0 --maximum dsname:100000000 # set the min and max values of the data you want to graph
    # rrdtool tune file.rrd # check the new settings
    # rrdtool dump file.rrd > file.xml # dump the data to an XML file
    # rrdtool restore -r file.xml file.rrd.new # restore the XML data into a new RRD file; -r range-checks
    #                                          # the values, dropping any that exceed the --maximum you chose
    # mv file.rrd.new file.rrd # replace the original once you are happy with the result
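
As a sketch of the first solution, here is a complete graph command (the output file name, DS name, and the 0/100000000 bounds are just the placeholders from above):

rrdtool graph /tmp/graph.png \
    DEF:A=/path/to/your/file.rrd:dsname:AVERAGE \
    CDEF:B=A,0,GT,A,100000000,LT,A,UNKN,IF,UNKN,IF \
    LINE1:B#0000FF:dsname
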
Categories: RRDTool

Remote Web Site Incremental Backup

December 17th, 2008 No comments

Problem: you have a web site on a server that you can reach only over FTP, not SSH, and you want to back it up daily, but the site is huge, too huge for a complete daily backup.

My solution is to put a PHP script on the web server that lists the files modified in the last X seconds.
The shell script below then fetches the output of that PHP page and retrieves each listed file over FTP.

This is, I think, the most generic solution. I can see two other approaches:

  • Create a tar archive in the PHP script, then download the .tar directly. This has two problems: it is less secure unless you can put an htaccess auth on the .tar file, and you may not be allowed to execute binaries on the remote server to build the tar.
  • You may also use FTPFS; it is nice, but it requires that the host making the backup can use ftpfs 🙂

This is the source of my shell script:

#!/bin/sh

BK_YEAR="`date +%Y`"
BK_MONTH="`date +%b`"
BK_DAY="`date +%d`"
BK_DIR="/some/path/backup/domain.com/$BK_YEAR/$BK_MONTH/$BK_DAY"

# Fetch the list of recently modified files from the PHP script,
# then download each one over FTP.
for file in `lynx -auth=login:password -source http://www.domain.com/admtool/last-modified.php`
do
  # Strip the document-root prefix so the path is relative to the FTP chroot
  ADL="`echo $file | sed "s|/path/to/www.domain.com/root/document/or/the/ftp/chroot/path/||"`"
  mkdir -p "$BK_DIR/web-page/`dirname $ADL`"
  wget -q --output-document="$BK_DIR/web-page/$ADL" "ftp://login:pass@ftp.domain.com/$ADL"
done

tar -czf "$BK_DIR/web-page.tgz" "$BK_DIR/web-page"
rm -r "$BK_DIR/web-page"
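
To run it daily, a crontab entry like this does the job (the script path and schedule are hypothetical):

# Run the incremental backup every night at 03:00
0 3 * * * /some/path/backup-domain.sh >/dev/null 2>&1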

This is the source of my PHP script:

<?php

$path = '.';
$time = 86400;      // list every file modified in the last 86400 seconds

GetFileList($path, $time);

// Recursively walk $path and print files modified less than $time seconds ago
function GetFileList($path, $time) {
        $curtime = time();
        $handle = opendir($path);
        while ($file = readdir($handle)) {
                if ($file == '.' || $file == '..') continue;
                if (is_dir($path . '/' . $file)) {
                        GetFileList($path . '/' . $file, $time);
                } else {
                        if (($curtime - filemtime($path . '/' . $file)) < $time) {
                                echo $path . '/' . $file . "\n";
                        }
                }
        }
        closedir($handle);
}

?>
Categories: Backup, Unix

How big is your MP3 collection across all your hard drives?

December 17th, 2008 No comments

Want to know the total size of all your .mp3 files? (Or any other kind of file; just change the locate argument.)

Try:

locate .mp3 | perl -e 'while(<STDIN>) { chomp; $tsize += -s $_; } print $tsize/1048576 . " MB\n"'

or

locate .mp3 | perl -e 'foreach (<>) { chop and $_["+"] += -s $_ } print $_["+"]/1048576 . " MB\n"'

or, if you don't have Perl (sorry for you ;)):

locate .mp3 | awk '{print "\"" $0 "\""}' | xargs ls -l | awk 'BEGIN{s=0}{s+=($5/1024/1024)}END{print s " MB"}'
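
or, if the locate database is stale, this find-based variant (assuming GNU find for -printf) computes the same total against the live filesystem:

find / -name '*.mp3' -printf '%s\n' 2>/dev/null | awk '{s+=$1} END{print s/1048576 " MB"}'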
Categories: Perl, Unix

Add IPF rule automatically from log files

December 17th, 2008 No comments

Here is a very simple command to add a rule to your firewall (IPF in my example) when something in a log file matches a pattern (an Apache access log in this case):

for item in `tail -n 150 access_log | grep "c+dir" | awk '{print $1}'` ;
  do echo "block in quick on ne0 proto ip from $item to any" >> /etc/ipf.conf ;
done

This reads the last 150 lines of access_log with tail, uses grep as the matching operator, uses awk to extract the IP (note that you could do /c+dir/{print $1} directly in awk and skip grep), then appends a blocking rule to /etc/ipf.conf.

You may want to append a comment to the blocking rule saying why the IP was blocked.

Don't forget to reload the firewall from time to time, for example from cron, to activate the new rules (/sbin/ipf -Fa -f /etc/ipf.conf for IPF).
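
For example (hypothetical path; the script is the for loop above saved to a file):

# Scan the log and reload IPF every 10 minutes
*/10 * * * * /usr/local/sbin/scan-access-log.sh && /sbin/ipf -Fa -f /etc/ipf.conf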

Or you may reload the firewall each time a rule is added:

for item in `tail -n 150 access_log | grep "c+dir" | awk '{print $1}'` ;
  do echo "block in quick on ne0 proto ip from $item to any" >> /etc/ipf.conf; /sbin/ipf -Fa -f /etc/ipf.conf ;
done

This system has two problems:

  • You must run it from cron, as tail -f cannot feed the for statement.
  • Rules are appended at the end of ipf.conf, which is useless if a rule like pass in quick proto ip any to any port 80 appears before them.

So, here is a Perl script that does a better job:

#!/usr/bin/perl

my $IPF_FILE = "/etc/ipf.conf";
my $TMP_FILE = "/tmp/ipf.new.rules";
my %h;

# Follow the Apache access log and block any new IP whose request matches "c+dir"
open (FILE, "tail -fn 1 /usr/local/apache/logs/access_log|") || die "can't open FILE: $!";
while (<FILE>) {
  if ($_ =~ /^(\S+)\s-\s-.*c\+dir/) {
    if (exists($h{$1})) { $h{$1}++ }
    else {
      $h{$1} = 1;
      open(IPF, "< $IPF_FILE") or die "can't open $IPF_FILE: $!";
      open(TMP, "> $TMP_FILE") or die "can't open $TMP_FILE: $!";
      # Write the new rule first so it lands BEFORE any "pass in quick" rule
      print TMP "block in log quick on ne0 from $1 to any\n" or die "can't write to $TMP_FILE: $!";
      while (<IPF>) { (print TMP $_) or die "can't write to $TMP_FILE: $!"; }
      close(IPF)                  or die "can't close $IPF_FILE: $!";
      close(TMP)                  or die "can't close $TMP_FILE: $!";
      rename("$TMP_FILE", "$IPF_FILE") or die "can't rename $TMP_FILE to $IPF_FILE: $!";
      system("ipf -Fa -f $IPF_FILE");
    }
  }
}
close (FILE);

Incrementing $h{$1} is totally useless here, but you could use it for something (like waiting for more than one attempt from an IP before adding it to IPF). %h is used to avoid firewalling the same IP twice.

You may think that %h is not useful because, once we have blocked the IP, we will not get any new requests from it. Not really:

  • tail does not really work live; it checks for new lines from time to time and then prints them, so between the first request from an IP and the firewall reload you may receive more than one request (don't forget that reloading IPF takes time too);
  • my IPF rule here is very strict; you might prefer to block only port 80, in which case you can still get requests on port 443, and things like that.
Categories: NetBSD, Perl, Unix

Backup through SSH

December 17th, 2008 No comments

Why use SSH to transfer data when you can use SCP/SFTP?
Because SCP is sometimes disabled in the SSH configuration.

So here is an easy way to transfer data with SSH. Run something like this:

ssh -C <host> "cd /path/to/folder/to/transfer; tar cvf - *" | tar xvf -

-C compresses the transfer with gzip at the SSH level. I did not test whether it is better to compress there or to use tar cvzf to compress the tar stream itself; if you do, please share the results!
The trailing | tar xvf - extracts the files into your current directory; of course, you may prefer to keep them in a .tar file.
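
A minimal sketch of that keep-the-archive variant, compressing with tar (z) instead of at the SSH level:

ssh <host> "cd /path/to/folder/to/transfer; tar czf - *" > backup.tar.gz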

Categories: Backup, Unix

Reminder to add a user on NetBSD

December 17th, 2008 No comments

This is just a little reminder about adding a user on NetBSD.
The basic way is to use:

# useradd -G <group_2> -b /home -g <group_1> -k /etc/skel -m -s /usr/pkg/bin/bash -v <user>

This adds <user> with primary group <group_1> and secondary group <group_2>, and creates its home in /home/<user> using /etc/skel as the skeleton. It also sets the shell to bash.

N.B.: /etc/passwd shows the primary group of each user, while /etc/group lists, for each group, the users that have it as a secondary group.
N.B.: You can also use the id command to see all the groups of a user; it displays the uid, the gid (primary group), then all the secondary groups.
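
For example, a concrete run (the user and group names are made up):

# useradd -G wheel -b /home -g users -k /etc/skel -m -s /usr/pkg/bin/bash jdoe
# id jdoe # shows the uid, the gid (primary group), then the secondary groups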

Categories: NetBSD

One Time Password authentication system

December 17th, 2008 No comments

S/Key is a one-time password (OTP) authentication system that prevents you from sending your password in the clear.
It is especially useful with systems like telnet.

It is quite easy to set up on NetBSD. To start, run:

# skeyinit -s <user>
[Adding user]
You need the 6 english words generated from the "skey" command.
Enter sequence count from 1 to 10000: Enter anything you want
Enter new seed [default NetB14423]: Just press return or enter something else
otp-md4 <sequence count> <seed>
s/key access password: <Follow the instructions below>

To get the s/key access password, you have to run the following command, but be careful not to run it on the remote host through telnet; run it locally!

# skey <sequence count you entered for skeyinit> <seed you used in skeyinit>

This will ask you for a password (use a secure one) and give you six English words; use them to answer the “s/key access password:” question.

Your S/Key authentication is ready!


The next time you open a telnet connection to the host you will get this prompt:

login: <put your username>
Password [otp-md4 <sequence number> <seed you used in skeyinit>]:

To find out which six-English-word password to use, run the following command (on your local computer, for example):

# skey <sequence number> <seed you used in skeyinit>

In fact, the sequence number is decremented by one each time you log in. You can easily generate (in advance) all the six-word passwords for sequence numbers X to Z with the following command (again, run this locally):

skey -n (Z-X+1) Z <seed you used in skeyinit>

Z-X+1 is the number of passwords to generate; Z means the last generated password is for sequence number Z.
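
For example, with the default seed NetB14423 from the transcript above, to print the passwords for sequence numbers 95 through 99:

# skey -n 5 99 NetB14423 # 99-95+1 = 5 passwords, the last one for sequence 99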

N.B.: Your regular (clear-text) password will still work if you use it.


Related link: S/Key on Wikipedia

Categories: NetBSD, Unix