# DeGrabify

Provides an up-to-date list of Grabify domains for use by adblockers. The goal of this project is to reduce the risk of being doxxed by unknowingly clicking on IP logger links.

This project is not associated with Grabify.
This project is made up of two parts:

- A client which periodically retrieves the list of domains offered by the IP logger service Grabify
- A webserver which serves this list of domains to adblock clients

Features:

- Compatible with uBlock Origin / AdBlock Plus (ABP), uBlacklist, and Hosts files
- Easily self-hosted with Docker or compatible virtualization
- Uses Flask, Python requests, and TinyDB
The client/server are run daily by CI, and the updated lists are pushed to this gist. If you only want the current filter list, without running your own server, use this. Note that the filter lists in the gist do not preserve historical changes over time, only the domains that are currently in use.

- uBlock/ABP filter, or Subscribe via uBlock/ABP
- uBlacklist filter, or Subscribe via uBlacklist
- Plain URL list
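If you consume the plain URL list programmatically, converting it into the other formats is mostly string templating. A minimal Python sketch (the domain names below are made-up placeholders, not real Grabify domains, and the served list's exact format may differ):

```python
def to_ublock(domains):
    """Render domains as uBlock Origin / ABP network filters (||domain^)."""
    return [f"||{d}^" for d in domains]

def to_hosts(domains):
    """Render domains as Hosts-file entries pointing at the null address."""
    return [f"0.0.0.0 {d}" for d in domains]

# Placeholder domains for illustration only
domains = ["example-logger.test", "shortener.example"]
print("\n".join(to_ublock(domains)))
print("\n".join(to_hosts(domains)))
```

Subscribing through the server is still preferable, since a pasted snapshot won't update itself.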
## Installation

You can run this project in Docker or locally; Docker is recommended.

TL;DR: `git clone`, edit `.docker/single-container-compose.yml`, run `./.docker/single-build.sh`. For a reverse proxy: forward the server through nginx/caddy/etc and note the forwarded URL. In the compose file, set the base proxied domain, set the proxy level with `-p`, disable the exposed port, and connect the server to your bridge network.
### Docker

- Clone this repository:

  ```
  git clone https://github.com/JonasLong/DeGrabify
  cd DeGrabify
  ```

- Consider using a reverse proxy as described below
- Configure the container in `.docker/single-container-compose.yml` (or `.docker/dual-container-compose.yml`)
- Run `./.docker/single-build.sh` (or `./.docker/dual-build.sh`)
- Continue to the Client section
#### Updating

- `git fetch` and `git pull`
  - You may need to `git stash` if you've made changes to the config, then merge your stashed changes into main
- You may need to rebuild with `./.docker/single-build.sh` (or `./.docker/dual-build.sh`)
#### Reverse proxy

- Change the `-p` value in the `command` section of the server config from `0` to `1`
- Uncomment both `networks:` sections in the compose file, and change the `name: default` value to the name of your bridge network
  - This ensures that the server is connected to the same external bridge network your reverse proxy is running on
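The networks change described above might look roughly like the following compose fragment. This is an illustrative sketch only; the shipped compose file's service names and keys may differ, and `my-bridge-network` is a placeholder for your proxy's network:

```yaml
# Illustrative sketch -- adapt to the actual compose file.
services:
  server:                      # example service name
    networks:
      - default
networks:
  default:
    external: true             # join an existing network instead of creating one
    name: my-bridge-network    # was "name: default"; use your proxy's network name
```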
- Comment out the `ports:` section
- Change the `-s` option in `server/server.py` from `http://127.0.0.1:5000` to the base domain your server will be hosted on (e.g. `https://degrab.example.com`)
  - If this is set incorrectly, the "Subscribe" buttons won't work, but nothing else will be affected
- Configure your reverse proxy:
  - scheme: `http`
  - domain: `degrabify-aio-1` (or `degrabify-server-1`)
  - port: `5000`
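As one way to apply the scheme/domain/port settings above, a hypothetical nginx site config could look like this (the `server_name` is a placeholder, and TLS setup is omitted):

```nginx
# Example only -- adjust names, TLS, and headers to your setup.
server {
    listen 443 ssl;
    server_name degrab.example.com;

    location / {
        # Container name and port from the settings above
        proxy_pass http://degrabify-aio-1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```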
- Clone this repository:

  ```
  git clone https://github.com/JonasLong/DeGrabify
  cd DeGrabify
  ```

- Rebuild the container with `./.docker/single-build.sh` (or `./.docker/dual-build.sh`)
- It may be helpful to convert the `db` volume to a bind mount if you need visibility into the `sites.json`
- To make changes with docker compose, reference the current compose file, e.g. run `docker compose -f .docker/dual-container-compose.yml --project-directory . logs --follow` to view compose logs when running the dual-compose container. If using single-compose, it may be easier to reference the container by name instead of using docker compose
### Client

To install with your adblocker or Hosts file:

- Navigate to the webserver address in a browser
- Click the relevant subscribe link, or follow the instructions in uBlockOrigin-HUGE-AI-Blocklist for the provided URLs
  - Most likely you'll go to uBlock or uBlacklist and import a filter, passing it the URL
- Don't copy-paste the list of domains into your filter list; it won't auto-update as the Grabify domains change
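For reference, a uBlock/ABP filter list generally consists of `!`-prefixed header fields followed by one filter per line. The fields and domains below are illustrative placeholders, not the actual served list:

```
! Title: DeGrabify (example)
! Expires: 1 day
||example-logger.test^
||shortener.example^
```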
### Running locally

- Install Python and the pip packages `flask`, `requests`, and `tinydb`
- Clone this repository:

  ```
  git clone https://github.com/JonasLong/DeGrabify
  cd DeGrabify
  ```

- In the console, run:

  ```
  python crawler/crawler.py -d database/sites.json
  python server/server.py -d database/sites.json
  ```
- If you'd like crawler.py to run on a schedule, install `cron` and run:

  ```
  chmod 700 crawler/cron-install.sh
  ./crawler/cron-install.sh $(pwd)/crawler.py '0 12 * * *' $(pwd)/database/sites.json
  ```

  - The cronjob will be written to `/etc/cron.d/crawl-cron`. Logs will be saved in `/var/log/crawl.log`.
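The exact line that cron-install.sh writes isn't shown here, but a typical `/etc/cron.d` entry matching the schedule above would look roughly like the following. This is a hypothetical reconstruction; the path and interpreter are placeholders:

```
# Hypothetical contents of /etc/cron.d/crawl-cron.
# Unlike a user crontab, cron.d entries include a user field (root here).
0 12 * * * root python /path/to/DeGrabify/crawler/crawler.py -d /path/to/DeGrabify/database/sites.json >> /var/log/crawl.log 2>&1
```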
- Open `http://127.0.0.1:5000` in a browser
## TODO

- Abstract list formatting into the config/`.env`
- Make a ghcr build
- Double check header specs for uBlock and uBlacklist
  - Any missing fields?
  - Figure out if the "access time" field ruins caching
- Use Flask in production
- Pretty up the homepage
- Better disguise the crawler
- See if the "r" param in the URL changes over time as a revision #
- Better naming for the containers(?)
Inspired by the following projects; check them out as well: