Decided to give wget a spin after this thread a year ago
http://forums.opensuse.org/opensuseforums/english/get-technical-help-here/applications/453710-local-copy-web-site.html
Am very impressed with the current version of wget; it's lightning fast now compared to a year ago, and it made flawless local copies of several websites.
But on this website (command below), wget didn't copy any child pages, and am at a loss why. Am hoping someone with wget experience might know why this happened on this specific website.
Here is the problem command:
wget --recursive --domains uima.apache.org --no-parent --page-requisites --html-extension --convert-links https://uima.apache.org/downloads/releaseDocs/2.2.0-incubating/docs/html/overview_and_setup/overview_and_setup.html#ugr.ovv.eclipse_setup.install_eclipse
You'll notice that only the specified root page is copied locally. If you hover over a link in the copied page, it looks as though it first points to a local copy and, since that file doesn't exist, falls back to a link to the original online page.
AFAIK the wget options I've selected should properly download a "full website," following all links from the specified page, restricted only to pages within the uima.apache.org domain.
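The only guess I have so far (untested, and I haven't verified the site layout): the other chapters of that manual appear to sit in sibling directories next to overview_and_setup/, and --no-parent would stop wget from climbing out of the starting page's directory to reach them. If that's what's happening, starting the crawl one level up at docs/html/ should still honor --no-parent while leaving the chapter directories reachable, roughly like this:

wget --recursive --domains uima.apache.org --no-parent --page-requisites --html-extension --convert-links https://uima.apache.org/downloads/releaseDocs/2.2.0-incubating/docs/html/

That's only a hunch, though, not something I've confirmed.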
TIA,
TSU