Using a download manager to download newer files and skip already downloaded files in Linux.

Hi, I am Rupesh from India. I have come across a website, ebooks.tirumala.org, which hosts 2,400 PDF files. Most of them, up to 1,700 files, are stored in the following directory:

HOME/download.

The remaining 700 files are stored in different directories.

There is a website, https://hackertarget.com/extract-links/, which lists all the links present in a web page.

I gave http://ebooks.tirumala.org/ as input and it returned about 1,500 links. One of the lines is as follows:

http://ebooks.tirumala.org/Home/Download/?ID=2800

I have saved all the links in a text file.

Most of the files, up to 1,400, in the site's HOME/download directory are already downloaded. I want to download the remaining 300 files.

What I need now is a download manager that takes a text file containing a list of URLs as input and downloads only the newer files.

Currently I am using openSUSE Leap 15.0.

So far on Linux I have used download managers such as FlareGet, which downloads all the files specified in a list of URLs. Unfortunately, this program does not seem to have a skip option, i.e. an option to skip downloading files that are already downloaded.

Can anyone suggest a download manager that downloads newer files and skips already downloaded ones?

This sounds familiar:
https://forums.opensuse.org/showthread.php/528201-How-to-download-mp3-file-s-from-a-webpage-which-are-hidden-or-not-visible

You posted something really similar two years ago about bulk-downloading from someone else's website. And you also asked about a script to process a list/directory structure:
https://forums.opensuse.org/showthread.php/534053-Converting-mp3-files-to-opus-and-m4a-recursively-using-command-line-tools-and-scripts-in-Linux/page2

And it doesn’t seem like you read the FlareGet documentation, since it does support resuming/skipping already downloaded files. So you can either learn to use FlareGet, or easily write a script, since you’ve been working with Linux and scripting for quite some time now. You have everything you need.
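For what it's worth, the skip logic is only a few lines of shell. A minimal sketch, assuming the list is saved one URL per line as urls.txt, that each URL ends in ?ID=NNNN as in the example above, and that each file is saved as NNNN.pdf (the naming is an assumption about this site):

```shell
#!/bin/sh
# fetch_missing URLFILE DESTDIR
# Reads URLs from URLFILE and fetches only those whose file is not
# already present in DESTDIR. Set FETCH=echo to dry-run.
fetch_missing() {
    urlfile=$1 dest=$2
    while IFS= read -r url; do
        id=${url##*ID=}              # the ID= query parameter, e.g. 2800
        if [ -e "$dest/$id.pdf" ]; then
            echo "skip $id.pdf"      # already downloaded
        else
            ${FETCH:-wget -q -P "$dest"} "$url"
        fi
    done < "$urlfile"
}
```

For example, `FETCH=echo fetch_missing urls.txt ~/ebooks` just prints the URLs that would still be fetched. And wget alone can already do this much: `wget -nc -i urls.txt` downloads every URL in the file, and --no-clobber makes it skip files that already exist locally.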

wget is difficult to use and understand, and it is command-line only.

I was searching for an easy-to-use download manager and found FlareGet. I looked for documentation on the official FlareGet website but could not find any. Is there any documentation or guide available for FlareGet?

I believe FlareGet skips already downloaded files. Is that true or not?

Sorry, there is only so much help people are willing to offer when you don’t show any effort. Yes, wget has a lot of options, and it also has a LOT of documentation, which is available to you. If you don’t want to take the time to read or understand it, that is a problem only you can solve. Asking others to read the documentation and tell you exactly what you want to know, so you can avoid reading and learning, isn’t a nice thing. You have been asking about wget on numerous forums for at least four years, as far as I can tell.

And you need to re-read the previous reply: you were told very plainly that FlareGet does support resume. If you want documentation on FlareGet, go to their website, or contact them if you can’t find it. Again, asking us to look up documentation for you is fairly rude. If you can’t figure out how to use FlareGet, there are plenty of other Linux download managers. You can also refer to your many, MANY other threads about how to write a script, and use the list of files you have now as input against the total list of files, downloading the missing ones.
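Comparing the two lists is a job for comm(1). A rough sketch, again assuming one URL per line ending in ?ID=NNNN and files saved as NNNN.pdf (both are guesses about this particular site):

```shell
#!/bin/sh
# list_missing URLFILE DESTDIR
# Prints the URLs from URLFILE whose ID has no matching NNNN.pdf
# in DESTDIR, i.e. the downloads still to do.
list_missing() {
    sed 's/.*ID=//' "$1" | sort > want.txt        # IDs in the full list
    ls "$2" | sed 's/\.pdf$//' | sort > have.txt  # IDs already on disk
    comm -23 want.txt have.txt |                  # in want but not in have
    while IFS= read -r id; do
        echo "http://ebooks.tirumala.org/Home/Download/?ID=$id"
    done
}
```

Running `list_missing urls.txt ~/ebooks > missing.txt` leaves a shortened URL list that can be fed to any downloader, e.g. `wget -i missing.txt`.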

Sorry to sound nasty, but you have been doing things like this for many years on many forums.

I could not see where to edit my previous post, but I THOUGHT this sounded familiar, and it is. From 2017.
https://www.linuxquestions.org/questions/linux-networking-3/how-to-download-mp3-file's-from-a-webpage-which-are-hidden-or-not-visible-4175618086/#post5788415

Complete with references to FlareGet. And while the website you’re trying to bulk-download from has changed, the task hasn’t.