I need a fairly simple script written in Perl. The script takes two inputs: a URL and a local path. It will harvest the site's pages (including all media) and dump them to the local path, preserving subdirectories.

Most importantly, the script needs to work through all links generated by the site's software, such as '[login to view URL]', and convert each such path into a locally browsable one. This must be done two (2) ways (see the sketch below):

1) Convert the path on the page itself, turning [a href="[login to view URL]"] into [a href="go.cgi_id=35-category=[login to view URL]"]. (Note the .html -- the page can't be browsed locally without it.)

2) Rename the resulting file to that same name: 'go.cgi_id=35-category=[login to view URL]'. (There could be 30 [login to view URL] files -- folding the query string into the filename ensures each one is unique.)

Once all pages and media (GIFs, JPEGs, MIDIs, WAVs, Flash media) have been pulled into the local directory, the dump needs to be archived into a Zip file. We can test whether the software works by unzipping the files locally and attempting to browse the pages.

You are welcome to use any available CPAN modules, including LWP and friends, File::Find, HTML::LinkExtor, and Archive::Zip. In fact, I highly recommend them over regexes.
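To make the conversion requirement concrete, here is a rough, untested sketch of what I have in mind, built on the modules recommended above (LWP::UserAgent, HTML::LinkExtor, Archive::Zip). The `local_name` helper, the `index.html` fallback, and the output archive name are my own assumptions, not hard requirements, and the in-page rewrite is a simplified global substitution; the real script should rewrite attributes more carefully (e.g. via HTML::Parser).

```perl
#!/usr/bin/perl
# Rough sketch only -- an untested outline of the approach, not the deliverable.
use strict;
use warnings;

use LWP::UserAgent;
use HTML::LinkExtor;
use URI;
use File::Path     qw(make_path);
use File::Basename qw(dirname);
use Archive::Zip   qw(:ERROR_CODES);

die "usage: $0 <url> <local-path>\n" unless @ARGV == 2;
my ($start_url, $local_root) = @ARGV;          # the two inputs: URL and local path
my $host = URI->new($start_url)->host;

# Map a URL to a unique, locally browsable filename, e.g.
# /cgi-bin/go.cgi?id=35&category=food -> cgi-bin/go.cgi_id=35-category=food.html
sub local_name {
    my $uri  = URI->new(shift);
    my $path = $uri->path;
    $path =~ s{^/}{};
    $path .= 'index.html' if $path eq '' || $path =~ m{/$};   # assumed fallback
    my $q = $uri->query;
    if (defined $q && length $q) {
        $q =~ tr/&/-/;            # join parameters with '-'
        $path .= "_$q.html";      # '?' becomes '_', and '.html' makes it browsable
    }
    return $path;
}

my $ua    = LWP::UserAgent->new;
my @queue = ($start_url);
my %seen;

while (defined(my $url = shift @queue)) {
    next if $seen{$url}++;
    my $resp = $ua->get($url);
    next unless $resp->is_success;

    my $content = $resp->content;
    if ($resp->content_type eq 'text/html') {
        my $extor = HTML::LinkExtor->new;      # no base, so hrefs stay as written
        $extor->parse($content);
        for my $link ($extor->links) {
            my ($tag, %attrs) = @$link;
            for my $raw (values %attrs) {
                my $abs = URI->new_abs($raw, $url);
                next unless $abs->scheme =~ /^https?$/ && $abs->host eq $host;
                $abs->fragment(undef);
                push @queue, "$abs";
                # Requirement 1: rewrite dynamic links in the saved copy.
                # (Simplified -- a global substitution, not a proper HTML rewrite.)
                if (defined $abs->query) {
                    my $local = local_name($abs);
                    $content =~ s/\Q$raw\E/$local/g;
                }
            }
        }
    }

    # Requirement 2: save the file under the same converted name.
    my $file = "$local_root/" . local_name($url);
    make_path(dirname($file));
    open my $fh, '>', $file or next;
    binmode $fh;
    print {$fh} $content;
    close $fh;
}

# Finally, archive the finished dump into a Zip file.
my $zip = Archive::Zip->new;
$zip->addTree($local_root, '');
$zip->writeToFileNamed("$local_root.zip") == AZ_OK
    or die "could not write $local_root.zip\n";
```

The point of folding the query string into the filename is exactly what `local_name` does above: two hits on go.cgi with different parameters land in two different .html files, so the unzipped dump stays unique and browsable offline.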
## Deliverables
Complete and fully functional working program(s) in executable form, as well as complete source code of all work done. Complete copyright to all work purchased.