Tiff to JPEG
for file in `ls *.tiff`; do file2=`basename $file .tiff`;/bla/bin/tifftopnm "$file" | /bla/bin/pnmtojpeg > "$file2.jpg"; done
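If your filenames can contain spaces, here is a slightly safer variant (a sketch assuming a POSIX shell and the same netpbm tools), using globbing and parameter expansion instead of ls and basename:

# same conversion, but safe for filenames containing spaces
for file in *.tiff; do
    /bla/bin/tifftopnm "$file" | /bla/bin/pnmtojpeg > "${file%.tiff}.jpg"
done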
Context
I use an LDA that supports virtual users and stores email in /some/path/mail/<domaine.tld>/<user>/, which is a fairly standard layout.
I also use SpamAssassin but wanted a per-user Bayes database and configuration. That is still simple with spamc/spamd: run spamd -c --virtual-config-dir=/some/path/to/spamassassin/%d/%l ...
and invoke spamc from your MTA with spamc -u ${recipient} -f -e /path/to/your/LDA
so that I have the user preferences in /some/path/saconf/<domaine.tld>/<user>/.
Now I would like to provide two IMAP folders to users, LearnSpam and LearnHam, so that they can train their Bayes database.
Here the problem starts, especially if you are not using one of the latest SpamAssassin versions.
The bad way
What sa-learn command should you run to take care of the LearnSpam and LearnHam folders? sa-learn has a --username option, and you may want to use it, but it is not intended for this case: it is meant for setups where the Bayes databases are stored in SQL instead of files (this is correctly documented in the latest SA versions). So don't try
sa-learn --username=<user>@<domaine.tld> --spam /some/path/mail/<domaine.tld>/<user>/.INBOX.LearnSpam/cur/*
it will not work. Think about it: how could it? How would sa-learn convert <user>@<domaine.tld> into /some/path/saconf/<domaine.tld>/<user>/?
The good way
So the right command to use is:
sa-learn -p /some/path/saconf/<domaine.tld>/<user>/user_prefs --spam /some/path/mail/<domaine.tld>/<user>/.INBOX.LearnSpam/cur/*
Using the -D (debug) option can be very helpful to check that it works correctly; you should see:
dbg: bayes: tie-ing to DB file R/O /some/path/saconf/<domaine.tld>/<user>/bayes_toks
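If you want to train every user automatically, a small cron job can walk the maildir tree and call sa-learn for each mailbox. Here is a minimal sketch assuming the layouts above; the maildir folder names (.INBOX.LearnSpam / .INBOX.LearnHam) depend on your IMAP server:

#! /bin/sh
# learn spam/ham for every virtual user (sketch; paths are examples)
MAILROOT=/some/path/mail
SACONF=/some/path/saconf
for domaindir in "$MAILROOT"/*; do
    domain=`basename "$domaindir"`
    for userdir in "$domaindir"/*; do
        user=`basename "$userdir"`
        prefs="$SACONF/$domain/$user/user_prefs"
        sa-learn -p "$prefs" --spam "$userdir"/.INBOX.LearnSpam/cur/* 2>/dev/null
        sa-learn -p "$prefs" --ham "$userdir"/.INBOX.LearnHam/cur/* 2>/dev/null
    done
done

You may also want to move or delete the messages once they have been learned; that part is left out here.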
root problem with rsync
Imagine that you want to back up the /home directory of server 'A' to server 'B' using rsync.
There are two ways to do this: run rsync as root on both sides, which preserves owners and permissions but means allowing remote root access, or run it as an unprivileged user, which is safer but loses the uid/gid/permission information on 'B'.
When Tar starts to be your best friend
So what can you do? The solution would be to store the (uid/gid/permission/...) information in a dedicated file, so that you can apply it if you need to restore the data.
How can you do that? I'm sure you are too lazy to write a shell/perl/python/... script for it. You're right! Use tar.
What? What? You want me to tar /home and rsync it? Are you mad? I don't use rsync just to end up transferring 20 GB at each backup.
When 1 option and 2 lines can save you
Tar has an incremental option. This means you can make a first tar file of /home, then a second tar file containing only the files modified since the previous one. This option is -g.
Here is a two-line shell script to do the job:
gtar -g /var/backup/home/home-backup.snar -cpvzf /var/backup/home/home-backup.`/bin/date +%s`.tgz /home/
rsync --delay-updates -avz -e ssh /var/backup/home backupuser@'B':/var/backup/
--delay-updates is very important: without it, if 'A' crashes while rsync is copying the .snar file (used to store the incremental state), that file will be missing on 'B' and you won't be able to restore the tar files correctly.
-g only exists in GNU tar. You may have to install it if you're running *BSD; first check whether you have a gtar binary.
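To restore from such a chain, GNU tar replays the incremental archives oldest first when each one is extracted with -g /dev/null; the epoch timestamps in the file names keep the glob in chronological order. A minimal restore sketch:

# restore /home from the incremental chain, oldest archive first
cd /
for archive in /var/backup/home/home-backup.*.tgz; do
    gtar -xpzf "$archive" -g /dev/null
done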
You may want to establish a full IP connection to a remote host (or a remote LAN) but have no VPN software on the remote host, or on yours.
There is a solution using SSH and PPP with the command
pppd pty 'ssh -x -t -e none user@server /usr/sbin/pppd passive noauth 9600' noauth 10.0.0.1:10.0.0.2
You have to use key authentication because the tty is redirected to pppd, so you can't be prompted for a password.
With this command, you can reach the server at IP 10.0.0.2 (10.0.0.1 is your local end of the link).
By playing with pppd and the routing tables you can extend the IP tunnel to the entire remote LAN.
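For example, assuming the remote LAN is 192.168.1.0/24 (a hypothetical network) and IP forwarding is enabled on the remote host, a single route through the far end of the link is enough on the BSD/Mac OS X side (the route syntax differs on Linux):

# send traffic for the remote LAN through the remote end of the ppp link
route add -net 192.168.1.0 -netmask 255.255.255.0 10.0.0.2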
This has been tested between Mac OS X and NetBSD but should work with any system.
It works but it’s very slow.
On some systems you may have TTY problems; if that's your case, take a look at http://shinythings.com/pty-redir/ or http://www.ishiboo.com/~nirva/Projects/vpn/
Problem: you have a web site on a server that you can only reach with FTP (no SSH), and you want to back it up daily, but the site is huge, too huge to make a complete backup every day.
My solution is to put a PHP script on the web server that lists the files modified in the last X seconds.
Then, with the following shell script, I fetch the output of that PHP page and retrieve the listed files over FTP.
This is, I think, the most generic solution, though there are other ways to do it.
Here is the source of my shell script:
#! /bin/sh
BK_YEAR="`date +%Y`"
BK_MONTH="`date +%b`"
BK_DAY="`date +%d`"
for file in `lynx -auth=login:password -source http://www.domain.com/admtool/last-modified.php`
do
    # strip the document root so we keep only the path relative to the FTP chroot
    ADL="`echo $file | sed "s|/path/to/www.domain.com/root/document/or/the/ftp/chroot/path/||"`"
    mkdir -p "/some/path/backup/domain.com/$BK_YEAR/$BK_MONTH/$BK_DAY/web-page/`dirname $ADL`"
    wget -q --output-document="/some/path/backup/domain.com/$BK_YEAR/$BK_MONTH/$BK_DAY/web-page/$ADL" -r "ftp://login:pass@ftp.domain.com/$ADL"
done
tar -czf "/some/path/backup/domain.com/$BK_YEAR/$BK_MONTH/$BK_DAY/web-page.tgz" "/some/path/backup/domain.com/$BK_YEAR/$BK_MONTH/$BK_DAY/web-page"
rm -r "/some/path/backup/domain.com/$BK_YEAR/$BK_MONTH/$BK_DAY/web-page"
Here is the source of my PHP script:
<?php
$path = '.';
$time = 86400; // will print all files modified in the last 86400 seconds
GetFileList($path);

function GetFileList($path) {
    $curtime = time();
    $handle = opendir($path);
    while ($file = readdir($handle)) {
        if ($file == '.' || $file == '..')
            continue;
        if (is_dir($path . '/' . $file)) {
            // recurse into subdirectories
            GetFileList($path . '/' . $file);
        } else {
            if (($curtime - filemtime($path . '/' . $file)) < 86400) {
                echo $path . '/' . $file . "\n";
            }
        }
    }
    closedir($handle);
}
?>
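To run the whole thing every night, a crontab entry on the backup host is enough; the script path and hour below are only examples:

# hypothetical crontab entry: fetch yesterday's modified files at 03:00
0 3 * * * /some/path/bin/ftp-backup.sh >/dev/null 2>&1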
Do you want to know the total size of all your .mp3 files? (or any kind of file, just change the locate argument)
Try:
locate .mp3 | perl -e 'while(<STDIN>) { chop ; $tsize += -s $_; } print $tsize/1048576 . "Mo\n"'
or
locate .mp3 | perl -e 'foreach (<>) { chop and $_["+"]+=-s$_ } print $_["+"]/1048576 . "Mo\n"'
or, if you don't have perl (sorry for you ;))
locate .mp3 | awk '{print "\"" $0 "\""}' | xargs ls -l | awk 'BEGIN{s=0}{s+=($5/1024/1024)}END{print s "Mo"}'
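If your locate database is stale, a variant using GNU find (assuming its -printf extension is available) computes the same figure directly from the filesystem:

find / -name '*.mp3' -printf '%s\n' 2>/dev/null | awk '{s+=$1} END{print s/1048576 "Mo"}'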
Here is a very simple command to add a rule to your firewall (IPF in my example) when you match something in a log file (Apache in this case):
for item in `tail -n 150 access_log | grep "c+dir" | awk '{print $1}'` ; do echo "block in quick on ne0 proto ip from $item to any" >> /etc/ipf.conf ; done
This reads the last 150 lines of access_log with tail, uses grep as the matching operator, uses awk to extract the IP (note that you could do /c\+dir/{print $1} in awk and skip grep), then appends a blocking rule to /etc/ipf.conf.
You may want to add a comment to the end of the blocking rule saying why it was blocked.
Don't forget to reload the firewall from time to time with cron (/sbin/ipf -Fa -f /etc/ipf.conf for ipf) so that the new rules become active.
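For example, a hypothetical crontab entry (adjust the interval to taste):

# reload the ipf rules every 10 minutes
*/10 * * * * /sbin/ipf -Fa -f /etc/ipf.conf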
Alternatively, you may reload the firewall each time with
for item in `tail -n 150 access_log | grep "c+dir" | awk '{print $1}'` ; do echo "block in quick on ne0 proto ip from $item to any" >> /etc/ipf.conf; /sbin/ipf -Fa -f /etc/ipf.conf ; done
This system has 2 problems: the same IP gets appended again (and the firewall reloaded again) every time it shows up in the last 150 lines, and the rule is only added when cron runs, so there is a delay before the attacker is actually blocked.
So, here is a Perl script that will do a better job.
#! /usr/bin/perl
my $IPF_FILE = "/etc/ipf.conf";
my $TMP_FILE = "/tmp/ipf.new.rules";
my %h;
# follow the Apache access log and watch for the attack pattern
open(FILE, "tail -fn 1 /usr/local/apache/logs/access_log|") || die "can't open FILE: $!";
while (<FILE>) {
    if ($_ =~ /^(.*)\s-\s-.*c\+dir/) {
        if (exists($h{"$1"})) {
            $h{"$1"}++;
        } else {
            $h{"$1"} = 1;
            open(IPF, "< $IPF_FILE") or die "can't open $IPF_FILE: $!";
            open(TMP, "> $TMP_FILE") or die "can't open $TMP_FILE: $!";
            # put the new blocking rule first, then copy the existing rules
            print TMP "block in log quick on ne0 from $1 to any\n" or die "can't write to $TMP_FILE: $!";
            while (<IPF>) {
                (print TMP $_) or die "can't write to $TMP_FILE: $!";
            }
            close(IPF) or die "can't close $IPF_FILE: $!";
            close(TMP) or die "can't close $TMP_FILE: $!";
            rename("$TMP_FILE", "$IPF_FILE") or die "can't rename $TMP_FILE to $IPF_FILE: $!";
            system("ipf -Fa -f $IPF_FILE");
        }
    }
}
close(FILE);
Incrementing $h{"$1"} is totally useless here, but you may use it for something (like waiting for more than one attempt from an IP before adding it to IPF). %h is used to avoid firewalling the same IP twice.
You may think that %h is not useful because, once we have blocked the IP, we will not get any new requests from it. Not really: lines from that IP may already be in the log stream (or arrive over connections established before the reload), and without %h each of them would rewrite ipf.conf and reload the firewall again.
Why use SSH to transfer data when you can use SCP/SFTP?
Because sometimes SCP is disabled in the SSH configuration.
So here is an easy way to transfer data with SSH. Run something like this:
ssh -C <host> "cd /path/to/folder/to/transfer; tar cvf - *" | tar xvf -
-C compresses the transfer with gzip. I haven't tested whether it is better to compress at the SSH level with -C or to compress the tar stream with tar cvzf; if you do, please send me the results!
"| tar xvf -" extracts the files into your current directory; of course, you may prefer to keep them in a .tar file instead.
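The same trick works in the other direction, to push a local directory to the remote host (same assumption: only scp/sftp is disabled, not the shell):

cd /path/to/folder/to/transfer && tar cvf - * | ssh -C <host> "cd /path/to/destination; tar xvf -"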
S/Key is a one-time password (OTP) authentication system that prevents you from sending your password in the clear.
It's especially useful with protocols like telnet.
It's quite easy to set up on NetBSD. To start, run:
# skeyinit -s <user>
[Adding user]
You need the 6 english words generated from the "skey" command.
Enter sequence count from 1 to 10000: <enter anything you want>
Enter new seed [default NetB14423]: <just press return or enter something else>
otp-md4 <sequence count> <seed>
s/key access password: <follow the instructions below>
To get the s/key access password, you have to run the following command, but be careful not to run it on the remote host through telnet; run it locally!
# skey <sequence count you entered for skeyinit> <seed you use in skeyinit>
This will ask you for a password (use a secure one) and give you 6 English words; use them to answer the "s/key access password:" question.
Your S/Key authentication is ready!
The next time you make a telnet connection to the host, you will get this prompt:
login: <put your username>
Password [otp-md4 <random number> <seed you use in skeyinit>]:
To know which 6-English-word password to use, you have to run the following command (on your local computer, for example):
# skey <random number> <seed you use in skeyinit>
In fact the "random" number is simply the sequence count, and it is decremented by one each time you log in. You can easily generate (in advance) all the 6-word passwords for numbers X to Z with the following command (again, run this locally):
skey -n (Z-X) Z <seed you use in skeyinit>
Z-X is the number of passwords to generate; Z means the last generated password is for sequence number Z.
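For example, with the default seed shown earlier (hypothetical numbers; use your own sequence count and seed):

# print several one-time passwords ending at sequence number 99
skey -n 5 99 NetB14423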
N.B.: your clear-text password will still work if you use it.
Related link: S/Key on Wikipedia