Hey, I want to address why this extension is different from other scrapers.

This is for ad hoc generation of EPUBs from websites that don't scrape well using traditional scrapers (think standard request-based command-line scripts, or other Chrome extensions that scrape based on open tabs/windows), for a few reasons:

1. Usually command-line scrapers and other extensions have predefined sites they work for; this one isn't limited to those sites

2. Or they require nontrivial configuration and/or code

3. Some sites use JavaScript to dynamically generate or retrieve the text, in which case you need the browser to run the JS - this was the biggest gap for me.

4. This one runs in the browser, so it's maybe less likely to be detected and blocked

I also don't intend this scraper to be robust or to run repeatedly as a scheduled background job; that's why there's a UI for selecting key elements to scrape. It's meant to be more generalized, so you don't need a per-site configuration to scrape a site relatively easily with just a few mouse clicks.

If the site you're scraping is already handled by other programs/extensions, this one won't perform better, since those are specifically configured for their sites. Otherwise, this extension gives you a tool to scrape something once or twice without spending much time coding or configuring.

I don't find myself sticking to the same site a lot, so I wrote this.

Having written one of these myself, I find the interesting thing about this one is really the UI for iterating on extracting content from an arbitrary site. A full GUI for working through the extraction is much more flexible than the norm.
nik5 · 3 hours ago
I made something similar a while back, kindle-send[0], to send blogs to my Kindle. It also uses Readability under the hood.

Now I use it to send blogs, books, and sometimes whole archives of a website (you can use it in scripts).

You can export Kindle highlights to Obsidian, so one benefit of making these EPUBs is that you accumulate the highlights in one place.

Although the name is kindle-send, it can send to any e-reader that uses email as a mechanism for delivering books.

[0] https://github.com/nikhil1raghav/kindle-send

If this can handle those sites where every section is behind an accordion that must be expanded (and especially where it collapses other sections when you expand one), then this is going to be awesome.
Works on this site: https://docs.ray.io/en/latest/ for me.
Can it remove popups for newsletters, subscriptions, logins, or cookie notifications? Can it read pages that require signing in?
It extracts the main content using Readability by default (you can configure it to use something else). Logins would depend on how you're parsing. It has two modes: it either browses to the page inside the window (for non-refreshing pages), or retrieves it in the background using fetch.
Terrific, thank you.
ffsm8 · 16 hours ago
Heh, I'm currently creating something very similar.

A web scraper for blogs and (mainly) web novels, an ePub parser that persists the data to a database along with categories and tags, and a companion PWA for offline reading that tracks reading progress across stories and lets me keep multiple versions of the same story (web novel and published ePub).

I'll jump on the bandwagon here to shamelessly plug my own little spin on a Readability-based EPUB generator: it's a self-hosted OPDS server offering feeds of articles from HN, Tildes, and Pocket, which are converted to EPUB on the fly (as soon as you try to fetch one). You can add/bookmark it in KOReader, which can run on most e-reader devices. It's simple to self-host (it's published as an image on Docker Hub and GHCR, or you can run it on Node directly).

My local instance just runs quietly on a Synology NAS; I like not having to interact with a computer to use it. Unlike the OP, it can't be used to compile many pages/URLs into a single EPUB, though.

https://github.com/BHSPitMonkey/news2reader

Is there a good tool for scraping a multi-page website (e.g. documentation) into plain text to send to an LLM?
Neat!

I once made a simple version of this concept that saves an epub file on the server's file system, which is then synced to my e-book reader:

https://github.com/solarkraft/webpub

The main ingredient is Postlight Parser, which gives a simplified "document" view for a website.

Every so often, I want to get an epub of Paul Graham’s essays (eg right before a flight). Hopefully I’ll remember to use this
Does it support http://fanfiction.net/ ? I never found an easy solution for that one.
you can export epubs from https://fichub.net/
Fanfiction.net is trivial... apart from it having Cloudflare bot blocking turned up to aggressive levels. I've not seen an approach that works, other than using headless browsers to fetch the content.
Headless browsers won't work by default for Cloudflare captchas.

Open-source stealth plugins don't really work now either.

You have to use real browser fingerprints.

I use a calibre add-in https://www.mobileread.com/forums/showthread.php?t=259221

It sort of works, i.e. some stories just work, others only get the first page.

You can import a CSV of all the chapter links; it looks like it's just incremental numbering in the URL.
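When the chapter URLs really are just an incremented number, building that CSV is a one-liner. A minimal sketch (the URL pattern and chapter count are made-up placeholders; adjust both to the actual story):

```shell
# Emit one hypothetical chapter URL per line into a CSV with a single
# column, ready to import into the Calibre add-in.
for i in $(seq 1 20); do
  echo "https://example.com/s/12345/$i/"
done > chapters.csv

wc -l < chapters.csv   # 20 lines, one URL per chapter
```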
t-3 · 13 hours ago
The issue is most likely Cloudflare blocking most of the best scraping methods. If the site can be pulled down with e.g. wget or curl without a bunch of options that you definitely aren't writing by hand, pandoc can just be used to directly make an EPUB.
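For sites like that, the whole pipeline is roughly two commands. A minimal sketch (the URL and title are placeholders; the fetch is faked with a local file so the sketch is self-contained, and the pandoc step is guarded in case pandoc isn't installed):

```shell
# Stand-in for the fetch step; in real use you'd run something like:
#   curl -L -o page.html https://example.com/docs/intro.html
# or, for a whole doc tree: wget --recursive --level=1 https://example.com/docs/
cat > page.html <<'EOF'
<html><head><title>Example Docs</title></head>
<body><h1>Chapter 1</h1><p>Some content worth keeping offline.</p></body></html>
EOF

# Convert straight to EPUB; pandoc splits chapters on the HTML headings.
if command -v pandoc >/dev/null 2>&1; then
  pandoc page.html -o page.epub --metadata title="Example Docs"
fi
```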
This is an amazing tool! Long gone are the days when I used to force cache many webpages for offline travels.
Gonna love running this on all the documentation-heavy websites like AWS, Vue.js, MDN, W3Schools, Real Python, Better Stack
E-Reader makers, take note. This is a cool feature that should be built in or at least able to be used with an API to get content onto the Kindle/etc. Or even a "send to Kindle" email address that can accept URLs too.
andai · 16 hours ago
I wonder if this would have a positive or negative effect on profits.

On the one hand, they'd be adding a massive amount of free content to a platform where they make money because people pay to consume content.

On the other hand, it might actually increase sales simply because I'd spend more time using it, which would presumably result in more book purchases too.

(Also Kindle store is already full of $0 public domain stuff, so they already don't seem too bothered by that possibility.)

Huh, didn't know that; I guess I never assumed they would bother with it. I'd think of a published work on Kindle like a product page on Amazon, so it doesn't make sense to have $0 items.

Are they an Amazon offering, or do third parties take the time to set that up?

andai · 11 hours ago
It's on Amazon; tons of public domain stuff republished for $0 on Kindle. One click to "purchase" (free download).
You have this with the reMarkable, sort of - https://remarkable.com/blog/introducing-read-on-remarkable
Kobo has Pocket integration, is this substantially different?
anthk · 15 hours ago
I had that, but for the terminal under Unix, and for web pages, Gopher, and Gemini. Offpunk:

https://sr.ht/~lioploum/offpunk/

Instead of EPUB, content gets cached as text files (Gopher), Gemini files (Gemini), and HTML+images (web pages). You can visit the hierarchy from ~/.cache/offpunk or directly from Offpunk.

With the "tour" function, forget about doomscrolling. You'll read all the articles in text mode sequentially until you're done.

Awesome!
It's rather unfair for "first commenters", who got the article up from the pile and left a quick recommendation, to get downvoted by latecomers.

(dartharva's comment was the only thing here when I first looked from the front page)

For those interested in a simple-to-use command-line tool that accomplishes the same, I've had success with percollate - https://github.com/danburzo/percollate
tra3 · 14 hours ago
This looks great!! I've long been looking for something that leverages readability (or similar).

Edit: Tried it with Reuters, and it looks like percollate requires JavaScript, etc. Back to using "Print as PDF" from the browser.

Is it legal?
Tepix · 16 hours ago
If you can read it on a website, why not on an ebook reader?

If you start selling the resulting files, now that would be a copyright violation. German law has a right to create a "Privatkopie", i.e. a private copy. I guess this is similar to fair use in US law?

Depends on where you live.

Where I am, it's perfectly legal.

Before cell service was as widespread as it is today, there were programs that would scrape web pages into ePUBs so you could read them later on your Palm Pilot. I used one every day during my commute. And the best part was that the pages ended. No mind-numbing infinite scroll.

When I switched to a "smart" phone (SonyEricsson m600c), I really missed it.

The Danger Hiptop had a proxy that reformatted websites for its built-in browser, mostly as a way to reduce data transfer.

https://medium.com/@chrisdesalvo/the-future-that-everyone-fo...

I wouldn't want to go back, because having instant access to anything is pretty amazing, but I do miss those days of offline internet.
Fully agree. I recently replaced my doomscrolling with a retro handheld and it really makes me happy. It also pushed me to pick up my ereader again.

I spend enough time at a computer that I shouldn't really need a smartphone outside of 'I need to message ___' or 'I need to go ___'

anthk · 15 hours ago
If you have a GNU/Linux/Mac/BSD machine with Python:

https://sr.ht/~lioploum/offpunk/
