> Essentially making the emails read as http://tinyurl.com/2gwyv
> 2gwyv -> http://wikipedia.org
> [slashdot.com] or something like that.
>
IT WORKS!! IT WORKS!! IT WORKS!! IT WORKS!! IT WORKS!! IT WORKS!! IT WORKS!!
IT WORKS!! IT WORKS!! IT WORKS!! IT WORKS!! IT WORKS!! IT WORKS!! IT WORKS!!
I wrote a perl script to parse the emails on the fly and add in the tinyurl
destinations… Created a filter, which is executed as the messages
arrive… and taaa daaa!! (Look above ^^^^ it’s working!!)
I had it place the expansion on the NEXT line, to try to help prevent the
dreaded wrap/break… but with the original tinyurl included, so if there are
multiple tinyurls on a line, you can tell which goes where.
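For example, using the tinyurl from the quote above, a processed message ends
up reading:
http://tinyurl.com/2gwyv
2gwyv -> http://wikipedia.org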
It adds an X- header, so the emails won’t be processed twice, and I’ve tried
very hard to prevent it from destroying email. It only affects the email if
there are VALID tinyurls to expand.
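(The marker header it adds looks like this, where the timestamp is just
whatever gmtime() returns at processing time… the date here is only an
example:
X-TinyUrl-Fix: Wed Jun 13 17:05:59 2007 (GMT)
)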
I’ll rewrite it as a proper plugin for claws-mail tomorrow or so… but this
is great! <happy dance!!>
Yeah yeah, so I get a little excited… shoot me!
So, to make this work in claws-mail (sorry everyone else…)
you must have curl installed. This is a requirement until this is made
into a plugin.
Copy the script (tinyscript, posted below) somewhere… I chose .claws-mail,
since that seemed appropriate. Make it executable (chmod 755 tinyscript).
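In shell terms, the install boils down to this (assuming you saved the script
in your home directory… adjust paths to taste):
cp ~/tinyscript ~/.claws-mail/
chmod 755 ~/.claws-mail/tinyscript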
Create a filter, which has an action of:
execute "/home/lornix/.claws-mail/tinyscript %F"
Of course, with the appropriate path to the script file instead of mine.
With this in place, click on a newsgroup, then click on ‘Tools → Filter all
messages in folder’… voila! It’ll take a moment to process all the
messages… essentially just ‘catching up’… new messages will be filtered
as they arrive and the delay won’t be noticeable.
And now for the script…
=====================================================================
#!/usr/bin/perl
use strict;
use warnings;
# also expand tinyurls whose path contains a dot (probably real addresses)?
my $allowreal=0;
# enables debugging output
my $DEBUG=0;
# Did we get anything on the command line?
if ($#ARGV<0)
{ # fail silently so we don't litter console
  exit(1);
}
# yaay! we've got a filename to work with
my $filename=$ARGV[0];
# Make sure it exists and has content... it could happen!
if ((! -e "$filename" ) || ( -z "$filename" ))
{ # fail silently again
  exit(1);
}
# to remember if we changed file or not,
# don't rewrite it if we didn't change it
my $changed=0;
# This could probably be done better, but...
# read file into an array
open(FIN,"<",$filename) || exit(1);
# slurp!!
my @wholefile=<FIN>;
close(FIN);
# preset minimum line length for unix files
my $linelength=1;
# a quick check for line endings... ^M adds an extra char
$linelength=2 if ($wholefile[0]=~/\015/);
# Have we processed this file before? Look for
# X-tinyurl-fix header
# remember that headers STOP at first EMPTY line, then body starts
my $line=0;
while (($line<=$#wholefile)&&(length($wholefile[$line])>$linelength))
{ print "$line: ".$wholefile[$line] if ($DEBUG);
  if ($wholefile[$line]=~/^X-TINYURL-FIX:/i)
  { # we found it, bail out
    print "File already processed\n" if ($DEBUG);
    exit(0);
  }
  $line++;
}
# wasn't found, but we're pointing to the line AFTER where we need to add the tag
# remember, this won't be written to the file UNLESS we find a valid tag to fix
$wholefile[$line-1].="X-TinyUrl-Fix: ".gmtime()." (GMT)\n";
# scan for tinyurl urls
while ($line<$#wholefile)
{ $line++;
  next if ($wholefile[$line]!~/http:\/\/tinyurl\.com\//);
  # extract the tiny tinyurl url. (haha!)
  # there might be more than one on this line... handle that
  print "$line: ".$wholefile[$line] if ($DEBUG);
  my $urlline=$wholefile[$line];
  $urlline=~s/[\015\012]//g;
  # make sure the 'split' later works since it's not case insensitive
  $urlline=~s/http:/http:/ig;
  my @urlarray=split("http://",$urlline);
  print $#urlarray." entries (add one)\n" if ($DEBUG);
  # whew! got all the entries on the line, sort out which ones are real
  foreach my $u (@urlarray)
  { # skip if it wasn't a http://tinyurl url
    print "Testing for tinyurl address: ?> '$u'\n" if ($DEBUG);
    next if ($u!~/^tinyurl\.com/i);
    # skip if it's got a dot in it, probably a real addr
    print "Testing for real address: ?> '$u'\n" if (($allowreal)&&($DEBUG));
    next if ((!$allowreal)&&($u=~/^tinyurl\.com\/[a-z0-9]*\.[a-z0-9]*/i));
    my $tiny=$u;
    $tiny=~s/^tinyurl\.com\/([a-z0-9.]*).*$/$1/i;
    my $newurl="";
    # we've got a valid tinyurl nugget, now request real url from tinyurl.com
    open(RESULTS,"curl --silent --head http://tinyurl.com/${tiny} |") || exit(2);
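    # (illustrative aside, not from the original post: for a valid code, the
    # piped 'curl --head' output looks something like
    #     HTTP/1.1 301 Moved Permanently
    #     Location: http://wikipedia.org
    # and the loop below just waits for that Location: line)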
    while (<RESULTS>)
    { $newurl=$_;
      # strip off CR/LF ickyness
      $newurl=~s/[\015\012]//g;
      # skip until we see Location entry
      next if ($newurl!~/Location:/i);
      # Remove the header tag
      $newurl=~s/^Location: //i;
      # we found a valid entry, signal that this file is to be changed
      $changed=1;
      last;
      # this all works because curl returns a blank line as last entry...
      # which of course, I use as a non-valid url indicator. yaay!
    }
    close(RESULTS);
    # now we need to insert this back into the array to be written out
    # we know what line it's from, so we'll tack a newline to the end, then the url
    # this makes the new url show up on a new line, hopefully preventing line-wrap
    if ($newurl ne "")
    { $wholefile[$line].="$tiny -> $newurl\n";
      print "======> '$tiny' : '$newurl'\n" if ($DEBUG);
    }
  }
}
# final writeout...
if ($changed)
{ open(FOUT,">",$filename) || exit(1);
  print FOUT @wholefile;
  close(FOUT);
}
print "All done\n" if ($DEBUG);
# successful exit
exit(0);
# vim:tabstop=2:shiftwidth=2:softtabstop=2:
=====================================================================
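If you want to try it by hand before hooking up the filter, set $DEBUG=1 near
the top, then run it against a COPY of a saved message (the path below is just
an example)… remember it rewrites the file in place:
perl ~/.claws-mail/tinyscript /tmp/test-message
Afterwards the copy should have the X-TinyUrl-Fix header, plus a ‘code -> url’
line under each line containing a valid tinyurl.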
Hopefully the forum doesn’t mangle it too badly.
Loni
(malcolm, I’ll email it directly to you…)
L R Nix
lornix@lornix.com