January 24, 2003

I’ve started work on another new personal project – hopefully a quick one :p You know those web trawling programs like TeleportPro which create a full copy of your site, or mass downloaders like ReGet or FlashGet? They have one problem – they don’t seem to work with the newer redirecting sites, where a link is actually a redirection via their server to a different page and the URL is padded with all sorts of extra junk to make it hard for these programs to work properly. I guess this is just another step in the content-protection war – but I hate it :p

So I’m working on a new program which will get around this and still fetch the stuff I want from a site, given a selected list of links. It’s still not up to the standard of one of those programs which will make a perfect copy of a site on your hard disk – I don’t really want that. I just want to be able to download and save a list of linked files without having to click on each link, wait for it to go to another page and then select Save As in the case of images, or have to click multiple links to get downloads started in the case of files.

I’ve done most of the preliminary coding already and have tested some of it, but the part that actually does the work hasn’t been tested yet since I hit a snag in my coding yesterday. I think I’ve got the solution today, however, and so intend to complete the app (which I call Spider) today. If it works and enough people like/want it, I might enhance it from its current rather crude form – but if not, it’ll probably get added to the pile of apps that I use but nobody else does :p
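The basic idea – follow each link’s redirect chain to the real file and save it, stripping the query-string junk when picking a filename – could be sketched in Python something like this. This is just an illustration of the technique, not Spider’s actual code, and the function names are made up for the example:

```python
import os
import urllib.request
from urllib.parse import urlparse, unquote

def filename_from_url(url):
    """Derive a save name from the final URL's path, ignoring query junk."""
    path = urlparse(url).path
    name = unquote(os.path.basename(path))
    return name or "index.html"

def fetch_all(urls, dest="."):
    """Download each link in the list, following redirects to the real file."""
    os.makedirs(dest, exist_ok=True)
    saved = []
    for url in urls:
        # urlopen follows HTTP redirects automatically, so resp.url is
        # the final target after the site's redirection hops.
        with urllib.request.urlopen(url) as resp:
            target = os.path.join(dest, filename_from_url(resp.url))
            with open(target, "wb") as f:
                f.write(resp.read())
        saved.append(target)
    return saved
```

So `fetch_all(list_of_links, "downloads")` would grab everything in one go instead of click–wait–Save As for every link, which is all I really want out of this.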

Posted by Fahim at 6:34 am  |  3 Comments