A small C++ program that takes an HTML file, uses regex to extract file URLs (usually image files), and downloads them all. Options include listing the extracted URLs in a file or on stdout.
Runs in bash and needs curl.
Requires libargmage.
Either drop the libargmage folder into the project folder and run make, or
git clone https://github.com/bin4rym4ge/libargmage.git
into the project folder.
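The extract-then-download idea can be sketched in shell. Everything here is an illustrative stand-in: the regex, the file names, and the sample HTML are not from the actual program, which reads its pattern from a regex file.

```shell
# A stand-in HTML file with two image links
cat > page.html <<'EOF'
<img src="https://example.com/images/cover.jpg">
<a href="https://example.com/images/page01.png">page 1</a>
EOF

# Extract file URLs with an example regex and save the list (like -s)
grep -oE 'https?://[^"]+\.(jpg|png)' page.html > urls.txt

# Download each extracted URL with curl (like -o, minus the path handling)
while read -r url; do
    curl -sfO "$url" || echo "failed: $url"
done < urls.txt
```

The same pipeline in C++ just swaps grep for std::regex iteration over the file contents and the curl invocation for whatever download call the program uses.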
Usage:
-f file.html (or a URL save file, depending on your regex)
-r regex_file.txt
-s url_save_file.txt
-x (don't save or print the URL list)
-o /download/path/
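A typical invocation might look like this. The binary name htmldl is a placeholder for whatever the Makefile actually produces; the flags are the ones listed above.

```shell
# extract URLs from file.html using the patterns in regex_file.txt,
# save the matched URLs to url_save_file.txt, download to /download/path/
./htmldl -f file.html -r regex_file.txt -s url_save_file.txt -o /download/path/

# same, but without writing or printing the URL list
./htmldl -f file.html -r regex_file.txt -x -o /download/path/
```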
Feel free to use whatever you want and learn from it.
Done:
- argv parser
- help page
TODO:
- multidownload mode
- SIGINT/SIGTERM handler (SIGKILL can't be caught)
- convert files to cbz/pdf/other
- error handling
Maybe:
- logging
- download the HTML file itself (that's up to you at the moment)
- download next html (next page/chapter)
- support other media types (text to ebook)