Even though RSS is my main source of news, it’s impossible to get around the fact that the (incredibly, unfathomably stupid) shutdown of Google Reader kneecapped it.
AT Protocol is more than "let's make Twitter more open" — it's an open protocol for anything that's real-time and shareable. I was never a big Google Reader guy; I liked traditional clients like NetNewsWire. But the sociability of Google Reader cannot be dismissed.
I’m not sure that just bridging RSS to Bluesky is the future… a distinct AT Proto lexicon seems like the actual way forward IMHO
Exciting times!
(Being a Mac+iPhone person I've settled in quite happily with NetNewsWire now, but it took me a few years to get there.)
Just about every website and service I use has feeds.
People/geeks are rediscovering RSS.
I think this might be the golden age of RSS.
It's not straightforward to use because of the lack of a test/debug interface, but it's worth the effort to set it up!
[1]: https://freshrss.github.io/FreshRSS/en/users/11_website_scra...
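For anyone who hasn't tried it: the idea behind that FreshRSS feature is scraping a plain HTML page with XPath selectors to synthesize a feed. A minimal sketch of the technique using the stdlib (the markup and selectors are illustrative, not FreshRSS's actual configuration fields):

```python
import xml.etree.ElementTree as ET

# Sample page markup (illustrative) — the kind of page you'd turn
# into a feed by pointing XPath selectors at the recurring elements:
page = """
<div id="news">
  <div class="post"><h2>First headline</h2></div>
  <div class="post"><h2>Second headline</h2></div>
</div>
"""

root = ET.fromstring(page)
# ElementTree supports a small XPath subset, enough for this job:
titles = [p.find("h2").text for p in root.findall(".//div[@class='post']")]
print(titles)  # ['First headline', 'Second headline']
```

The fiddly part, as noted above, is that without a debug interface you iterate on selectors like these blind, re-fetching the feed each time.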
https://openrss.org/blog/bluesky-has-launched-rss-feeds
How would you extend this ("a distinct AT Proto lexicon seems like the actual way forward IMHO") beyond the RSS feed you can get from a Bluesky profile?
So instead of you polling an RSS feed on Bluesky, they would send you a stream of messages
Just in terms of software models and consumer uptake, I think it’s abundantly clear that the way Twitter and Instagram work “won” and RSS “failed.”
Like, just forget about "push/pull" for a minute, what about feed discoverability? A couple years ago my father asked me how I read news, and I told him to install Reeder on his iPad, then he could just start filling in feeds from The Atlantic or whatever. “Okay how do I find those?” “Well, you could install a browser extension — wait you're not going to do that — or view the web page source and look for an RSS link — wait what am I saying —”
RSS depends on website designers making their feeds discoverable, and as a career web dev I can assure you that's simply not on their radar. Only alpha geeks care. The only reason RSS is still around is that it's built into enough CMSes that most web devs don't have to do any work, and even then they can't be bothered to put a little radar icon in the footer.
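To make the "view the page source" step concrete: feed auto-discovery works by sites advertising their feeds with `<link rel="alternate">` tags in the HTML head, which is exactly what those browser extensions scan for. A minimal sketch (sample HTML, not a real site; a real client would use a proper HTML parser rather than a regex):

```python
import re

# A page that supports auto-discovery advertises its feeds in <head>:
html = """
<html><head>
  <title>Some Blog</title>
  <link rel="alternate" type="application/rss+xml"
        title="Posts" href="/feed.xml">
  <link rel="alternate" type="application/atom+xml"
        title="Posts (Atom)" href="/atom.xml">
</head><body>...</body></html>
"""

# Naive scan for feed links of either flavor:
feeds = re.findall(
    r'<link[^>]*type="application/(?:rss|atom)\+xml"[^>]*href="([^"]+)"',
    html,
)
print(feeds)  # ['/feed.xml', '/atom.xml']
```

The mechanism is simple; the problem is that nothing surfaces it to a normal user unless the browser or site chooses to.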
Discoverability is intrinsic to social though. No, I don't know how someone on Bluesky will search accounts on other networks in "the atmosphere," but I’m positive this will get solved. RSS fans deny that discoverability is a problem to this day, because the only people who use it are the same alpha geeks who don’t care about UX at all.
> I expect for this kind of tech to have really novel use cases. For it to sit between me and the internet and remove the ads, nuke time wasting clickbait, and obliterate low-information irrelevant noise. For it to be my personal bodyguard that protects me from any and all forms of attention stealer. ... I want this to be tech to give birth to an anti-Google, anti-social media algo, anti-advertiser terminator from the future. Something that torpedoes the previous paradigm and that does it so quickly that the old purveyors can't adjust in time. [1]
---
> It's now my daily driver for web access. It monitors for content I'm interested in (that's how I found your comment), handles all my searches and feeds, can dynamically adapt its interface, and is working on integrations to submit content for me so that I don't have to leave that interface to write these replies. [2]
We can build and tailor the interface for both discovery and consumption of the data transiting the AT Proto rails. This could be a client on your device (what Apple Intelligence should be, exposing an API to other apps to perform this compute), or an agent that runs on your own PDS [3] and provides a distilled feed based on what it learns about you from client-side signal that is never shared; you would also be able to share your customized discovery feed with others. Plug DeepSeek in for now, while keeping an eye toward drivers so you can swap in any other LLM as the state of the art advances, and you're off to the races.

In the book "The Internet Con", Cory Doctorow explains how to seize the means of computation by forcing Silicon Valley to do the thing it fears most: interoperate. AT Proto and Gen AI allow us, to an extent, to sidestep the adversarial interoperability needed to diminish Big Tech's control of the digital social fabric. We are currently on a path towards seizing the means of computation, with users controlling their experience and consumption. To me, this is the most exciting part of the work ahead.
[1] https://news.ycombinator.com/item?id=42836289
Does that mean that at the protocol level one can share content of larger sizes than what Bluesky does by default?
Even though it's not standard, at least via the ActivityPub protocol one can set more reasonable character limits, though I'm unsure how most Mastodon clients handle that in practice.
Bluesky devs seem set, from some past HN interaction, on keeping the restriction in place for the app.
In a decentralized application, such as Bluesky, as opposed to Mastodon, I can see a future where it would be a replacement for Reddit and HN, without those limits. As to what HN would be in such a future: a stream (or whatever term they use) of content curated/moderated by dang.
> In a decentralized application, such as Bluesky, as opposed to Mastodon, I can see a future where it would be a replacement for Reddit and HN, without those limits.
Not sure if you're saying that Bluesky is decentralized and Mastodon not, which if anything would be the other way around. The other interpretation is that you can see Bluesky as a replacement for HN/Reddit, but not Mastodon. My take is that both are unfit for that, since they're heavily embracing the micro-blogging format. I'm not sure if there are any ATProto applications like this already, but Lemmy and MBin are both examples of decentralized link aggregators using ActivityPub.
Admittedly, when I say decentralized I'm thinking of more peer-to-peer distribution of content (a la torrents), rather than federation. Mastodon instances are islands of content, at the whims of their owners as to what content you can see and what the policies should be. Your identity is also attached to the instance: migration isn't seamless, and you can't just move your account around like a domain name.
My understanding of Bluesky, or rather the AT Protocol, is that there are features that allow you to own your identity (via domain names), which would make migration between instances seamless. At the same time, there are separate deployable services for redistributing (filtered/moderated) content.

Based on posts I've seen on HN, some of these were still partially implemented or planned (?). On the other hand, I'm not even sure there are self-hosted Bluesky instances yet.
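For what it's worth, the domain-name identity piece works by tying a handle to the account's DID, either via an HTTPS file at `/.well-known/atproto-did` or via a DNS TXT record at `_atproto.<domain>`. A sketch of the DNS variant (the domain and DID value below are placeholders, not a real identity):

```
; Ties the handle "example.com" to an atproto account's DID
_atproto.example.com.  300  IN  TXT  "did=did:plc:xxxxxxxxxxxxxxxxxxxxxxxx"
```

Because the DID, not the domain, is the stable identifier, the account can move to a different PDS while the handle keeps resolving to it.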
Which instances? :)
If you're referring to "Personal Data Stores", sure. However, it's the "Relay" (i.e. the aggregator that generates timelines) that everyone relies on, and that is centralized. In contrast, with Mastodon for example, both the "Personal Data Store" and "Relay" functionalities are decentralized, offering a complete solution with no centralized choke-points. At least that's my non-authoritative understanding.
There is this lengthy blog post, _How decentralized is Bluesky really?_ [0], if you want to read more about the differences between Bluesky and ActivityPub (e.g. Mastodon).
[0] https://dustycloud.org/blog/how-decentralized-is-bluesky/
I host my own GoToSocial instance, with only me as its single user. Doesn't get much more decentralized than that.
I guess it's easier to move your identity to a different PDS on Bluesky, but for me Bluesky doesn't count as decentralized as long as there's only one single relay. You may own your identity, but currently Bluesky (the company) owns the network.
I should have said P2P in my comment instead of decentralized, as the broader term captures the concept of federated as well.
An argument can be made that they provided an RSS reader service no one could come close to matching and basically dominated the area. Google's deep support also helped it proliferate. It is maybe underrated that RSS was likely one defense the web had against the dominance of walled gardens and social media. It allowed a lot of sites to flourish that I think would not get any traction today.
In today's hyperconcentrated digital landscape about the only thing that matters for mass market relevance are the default options on client software controlled by gatekeepers.
Mozilla too has abandoned RSS, which may or may not be correlated with them being (alas) increasingly irrelevant.
Eventually the lack of popular RSS clients led to a slow decline on the server side as well. A new and fancy website today using a "modern" stack more likely than not does not support RSS; it's fully aligned with the "follow us on XYZ" mentality.
Twitter dropped RSS completely. Facebook dropped it for pages. And then other large sites (YouTube, I think, is one, if I recall right) made it less of a public feature.
Twitter is one that's pretty annoying, because there is a lot that gets posted there first before it's published elsewhere, or that is never published anywhere else at all.
A friend of mine — in tech — asked me the other day where I was hearing a bunch of stuff and I said "my RSS feeds" and he laughed at me. That's what I mean.
Your friend sounds rather ignorant.
This is the point. Even when it existed, almost no-one knew what RSS was.
Here's the result https://mov.im/community/news.movim.eu/ArsTechnica
The awesome thing is that articles stored in XMPP are actually Atom articles, so there's very little to do.
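Concretely, XMPP microblogging (XEP-0277 style) carries each post as a pubsub item whose payload is a plain Atom entry, so converting to/from a feed is mostly repackaging. A rough illustration (all values made up):

```xml
<!-- An XMPP pubsub item wrapping a standard Atom entry; values illustrative -->
<item id="some-post-id">
  <entry xmlns="http://www.w3.org/2005/Atom">
    <title>Example article</title>
    <link href="https://example.com/article"/>
    <updated>2024-01-01T00:00:00Z</updated>
  </entry>
</item>
```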
Can self host too
It's IFTTT with an LLM integrated?
So I wouldn't feel all that much sympathy for them.
If you think MS using github code to train AI is bad, let's be pragmatic about where we're putting the blame or there's no shot we can course correct.
I personally am not looking forward to the pain of losing my job, but I would never presume my job is more important than progress. My job wouldn't even exist if we halted progress to save the jobs.
https://hachyderm.io/@joschi/113914355705581670
Second post in that thread has some demo output.
> You can discover, create, and share actions to perform any job you'd like, including CI/CD, and combine actions in a completely customized workflow.
I guess it wasn't the goal initially, but it includes so many features now that it has become a kind of _serverless_ orchestrator.
I don't use GitHub for that but my own self-hosted Gitea instance (so not quite serverless here), and I use it for this exact purpose: orchestrating containerized jobs without needing to set up something trickier. And since it's directly attached to a git repository, you don't need a second tool, so you now have everything configured and versioned in the same place.
Sure, it won't work if you need to run multiple big runners to do a job, but for small, periodic tasks like that, it's just so easy to do.
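For anyone curious what such a periodic job looks like: Gitea Actions uses GitHub-Actions-compatible workflow syntax, just under `.gitea/workflows/`. A minimal sketch (the file name and script path are hypothetical):

```yaml
# .gitea/workflows/nightly-task.yml — illustrative periodic job
name: nightly-task
on:
  schedule:
    - cron: "0 3 * * *"   # run every day at 03:00 UTC
jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the periodic script
        run: ./scripts/do-the-thing.sh   # hypothetical script in the repo
```

Commit that next to the code it operates on, and both the job and its schedule are versioned with the repository.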
tar -tf ./archive.tar
It lists all of the files in the archive. I only remember it because of the mnemonic "tar the fuck is in this archive."

journalctl -fu some-service
Also feels appropriate, because most of the time when I have to type in that command I'm thinking FU service file for not showing me by default why a service failed to start/restart.