It looks like this on Firefox, Windows: https://imgur.com/zNlEGgK
Same for all of the pointless cookie banners - they could've been UA prompts instead, putting the user in charge of setting a policy ("always trust example.com", "never trust example.net", "accept example.org for this session", etc). But building such prompts into a browser would've been a nuisance in 1997... So we ended up with that nuisance anyway, just enshrined by shortsighted laws, that target each and every website - rather than the three remaining browser engine vendors.
The web "browser" wasn't "intended" for this use case, hence the issue. This could be easily fixed though -- just like cookies.
I explicitly do not want such a thing in many of my HTML-apps, but one could add it with relative ease.
It does make forks a lot easier, though!
Another advantage is that it makes it clear what you’re saving, reducing the likelihood of errors being persisted.
Or keeping it as is. That's fine too. It just came to mind.
God forbid you have to remember to save your work!
> typing or even thinking is itself a finished product
Any specific examples where you notice the difference?
Corrective action from having lost work too many times :-)
Auto-save has been pretty common for over 20 years; one has to adapt to modern times, especially when it makes things better.
I don't have to "remember to save my work" when I write on my notepad, why should it be different on a computer?
Yes, absolutely. Saving data you don't want saved and overwriting data you want to retain are just as bad as not saving data you want to keep.
Keeping a scratch file to restore from unexpected applications exits (crash, power loss, etc.) is fine but beyond that I expect to be in control of when and where things are saved.
> one has to adapt to the modern times
I expect my computers to adapt to my requirements, not the other way around.
> especially when it makes things better.
Modern rarely equals better.
You can also use the contenteditable attribute and use no JS, so you basically have a notepad.
https://i.imgur.com/UZWhppc.png
I basically modified the CSS a bit so you can fit multiple cards in a row. Nullboard has a board import/export feature to a simple JSON file. I have a small Python script that generates the columns (way-points) and cards (bib numbers). Then I can import that JSON to start that year's race. While I would like more features (time tracking), it's a rather simple tool that can be easily operated offline and requires no resources other than a web browser.
FWIW here's a Show HN from 2019 - https://news.ycombinator.com/item?id=20077177
The gist of it, as mentioned in [4], is that you need to have a web server that implements checkStatus and saveConfig PUTs, and PUT and DELETE for saveBoard.
[1] https://github.com/apankrat/nullboard-agent
[2] https://github.com/luismedel/nbagent
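To make the server side above concrete, here's a rough sketch of the storage logic such an agent needs: PUT writes a board or config, DELETE removes a board. This models storage as a plain object rather than the filesystem purely so the logic stands alone; the real nullboard-agent has more to it (auth tokens, checkStatus responses), and the handler shape here is my guess, not its actual code.

```javascript
// Minimal sketch of agent-side storage handling. PUT stores the body
// under the given name, DELETE removes it, anything else is rejected.
function handleRequest(store, method, name, body) {
  if (method === 'PUT') {
    store[name] = body; // save board/config content
    return 200;
  }
  if (method === 'DELETE') {
    delete store[name]; // remove a board
    return 200;
  }
  return 405; // method not allowed
}
```

Wiring this into node's `http` module (or any other server) and pointing the paths at real files is then mostly plumbing.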
1. Put this script in html file and add a save/download button to trigger it
2. Set `contenteditable` on your editable elements
That's it. Now make changes on the page and click the download button to save the page with your changes. The saved copy can then be viewed without necessarily depending on JS.
The script:
<script>
  function downloadHTMLFile() {
    // Get the current HTML content
    const html = document.documentElement.outerHTML;

    // Create a temporary link element
    const link = document.createElement('a');
    link.setAttribute('download', 'example-page.html');

    // Encode the HTML content as a data URI
    const encodedContent = encodeURIComponent(html);
    link.setAttribute('href', 'data:text/html;charset=utf-8,' + encodedContent);

    // Append the link to the DOM and click it
    document.body.appendChild(link);
    link.click();

    // Remove the temporary link element
    document.body.removeChild(link);
  }
</script>
Maybe we just need an sqlite with better support for replicas? Then people have one tiny server with a bunch of sqlite’s to which the apps can sync?
If only there were a technology one could use to “serve” information from a central storage mechanism, then update this “server” in a deterministic fashion, possibly with security as part of the feature set…
I have it set up self-hosted, and as long as I have internet I can connect to it and update it. If I don't have internet, I can browse the contents as per the most recently cached version.
(I can also save it to a single HTML file and back that up in numerous off site locations).
That would mean manual busywork every time you start/end a session. If you ever forget one of those steps, your work becomes out of sync and that’s extra work to sort it out. Depending on when you notice, it may take you several hours to fix. Not having to do things manually is what computers are for.
I'll definitely be looking at the source code to see if there are any ideas I want to incorporate into my own single file tools.
I've been using something similar for a few years now hacked together from different sources, but yours is much more polished.
Maybe if I put the original Trello card ID at the bottom of each NBX "note" and synced any text back as a new comment on that card, put the list ID in the title of each list, and added any notes without a Trello card link as new cards to that list, it would be a pretty automated way to get a bunch of edits back into Trello, where I could tidy up with copy/paste.
Rock on!! Forked the repo and have my new local version pinned.
There are so many apps like this that could be simple, but for robust state saving involve setting up and maintaining a backend (e.g. with security patches, backups, performance monitoring). There are also the privacy implications of your data being stored on someone's server, and the risk of data leaks.
It's like there's a key part of the internet that's missing.
Something like this could be a browser extension? This exists?
Of course even if you kept it to the simple KV store interface like `localStorage` you'd need to define sync semantics and conflict resolution mechanics.
Then you'd have to solve all the security concerns of which pages get access to `roamingStorage` and how it determines "same app" and "same user" to avoid rogue apps exfiltrating data from other apps and users.
It would be neat to find an architecture to solve such things and see it added as a web standard.
It would be great to have this standardized. Custom two way syncing is a nightmare to implement correctly, and it doesn't make sense for apps to have to keep reinventing the wheel here.
Part of why there's always some reinventing the wheel in this space, unfortunately, is that this also seems to be one of the harder problems to generalize. There are always going to be domain specifics — of your models, your users, or your users' idea of your models — that will need some custom sync and conflict-resolution work. Sync has a lot more application specifics than most of us want.
That said, yeah, if there was a good simple building block "base line" to start with that met a nice 80/20 rule plateau, it would be great for giving apps an easy place to start and the tools to build application and domain specifics as they grow/learn their domain.
(One such 80/20 place to start, if the idea was just a simple KV store, might be a basic "Last Write Wins" approach and a simple API to read older versions when necessary. You can build a lot of the cool CRDT/OT stuff on top of a store that simple. For many starting apps, LWW is good enough for the basics. It doesn't solve all the syncing/conflict-resolution problems — you'll still need to write app-specific code at some point — but it's a place to start. It's basically the place most of the simplest NoSQL databases started.)
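A "Last Write Wins" merge for a simple KV store can be sketched in a few lines. Each entry carries a timestamp, and on conflict the newer write survives. The names here are illustrative, not from any real `roamingStorage` API:

```javascript
// LWW merge of two KV maps where each entry is { value, ts }.
// On key conflict, the entry with the larger timestamp wins.
function lwwMerge(local, remote) {
  const merged = { ...local };
  for (const [key, entry] of Object.entries(remote)) {
    if (!(key in merged) || entry.ts > merged[key].ts) {
      merged[key] = entry; // remote write is newer (or new): take it
    }
  }
  return merged;
}
```

The well-known caveat is clock skew between devices; real systems often use logical clocks or version vectors instead of wall time, but as an 80/20 baseline this is roughly what many simple NoSQL stores do.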
Do you have any views on the easiest way to do two-way syncing for a web app if you want to avoid relying on proprietary services, or a complex niche framework? This comes up for me every few years and I'm always disappointed there isn't a no-brainer way yet to get an 80% good enough solution.
https://developer.chrome.com/docs/extensions/reference/api/s...
> If syncing is enabled, the data is synced to any Chrome browser that the user is logged into. If disabled, it behaves like storage.local. Chrome stores the data locally when the browser is offline and resumes syncing when it's back online. The quota limitation is approximately 100 KB, 8 KB per item.
The README mentions that "Trello wasn't bad", but storing this type of data in the cloud wasn't desirable. Well, Planka is the answer to that.
One observation: Kanban is all about limiting the work in progress. That’s really its foundation. WIP limit is the main means for controlling and improving overall workflow effectiveness.
I would argue that boards not offering a WIP limit are not really “Kanban” boards, as they defeat the very goal of Kanban.
I use my Trello boards for mental hygiene.
Something comes in, I put it on the board to get it out of my head. Then, when appropriate, I go to the board, rearrange the top 5 or so items by priority, then start on the top item.
Many things never get done, but those turned out to be lower priority, by definition.
If I am waiting for something to happen on a current task, I put it second or third in the queue.
Keeps me sane....
That said: agree with others that sharing state between devices (either yours or others), and being able to collaborate on the same board, is sort of the canonical feature requirement of kanban boards. They can be used for 1-person projects, goal tracking, etc. - I've used e.g. Notion boards in this way - but they gain most of their value from allowing multiple people to share awareness of task status and ownership.
Plus the use of localStorage means I'd eventually blow away my board state by accident - which is kind of a showstopper IMHO; being able to trust your tools is important.
Still: nice to see people experimenting with what you can do just using web basics :)
I hope any Scrum Master who comes across this thread takes the absence of those terms as an indicator of the value of those things and all the related feature bloat that bedevils Agile project management tools.
I'm very color oriented, so my forked version adds colors to help me stay organized. (Yes, I've sent it in as a pull request.)
> Still very much in beta.
The last commit was November 2023.
Here's the full timeline to get a general sense of the development pace - http://nullboard.io/changes
Looking at the repository, maybe the author wants to work on mobile support and think about alternative storage methods. As for the last commit, maybe he's working on something else or simply living his life.
One change I think I'll need to make is prettifying the export of .nbx (JSON) so that git diffs work better. Hopefully nbagent can keep the data synced as the board is updated.
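Prettifying the exported JSON so diffs stay line-oriented is a one-liner: parse and re-serialize with indentation (the function name here is mine, not part of Nullboard):

```javascript
// Re-serialize a .nbx export with 2-space indentation and a trailing
// newline, so each field lands on its own line and git diffs stay small.
function prettifyNbx(raw) {
  return JSON.stringify(JSON.parse(raw), null, 2) + '\n';
}
```

The same thing from a shell would be something like `jq . board.nbx` or `python -m json.tool board.nbx`, possibly as a git clean/smudge filter so it happens automatically on commit.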
You just edit the text. Perfect.
I remember working with Html applications (HTA) on Windows back with JScript or VBScript.
I am thinking of tools that would make navigating long plain-text files easier, such as simple table-of-contents generation or indexes.
It is simple, nice and clean.
I immediately want things, knowing deep down that those things, if delivered, would probably take away the essence of what's good about it.
Still... I would like alternative ways to persist and share... it would be nice for managing 1:1s across the multiple teams I run :-p
I can't help but think the missing bit is portability of the data file. I wonder if simply allowing a binary or even JSON representation to be copy-pasted from the browser would work well enough.
call it whatever you want, but don't ever mention a BSD License if you've modified it.
Loaded locally and/or via the Web, are there any other file formats that work this way or is .html the only bootstrapping option browsers support?
Cool project, though - don't mean to take away anything from it.
I find the totally self-contained nature of them very appealing because it travels well through space and time, and it's incredibly accessible, both online and offline.
My current side project is actually using a WebDAV server to host a wide variety of different single HTML file apps that you can carry around on a USB drive or host on the web. The main trick to these apps is the same trick that TiddlyWiki uses, which is to construct a file in such a way that it can create an updated copy of itself and save it back to the server.
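The core of that TiddlyWiki-style trick is small: serialize the live page and PUT it back to the server it was loaded from. In a sketch like this, the fetch implementation is injectable only so the logic stands alone; in a browser you'd pass `window.fetch` and `document.location.href` (and build `html` from `document.documentElement.outerHTML`):

```javascript
// PUT the page's own serialized HTML back to the server it came from.
// Assumes the server (e.g. WebDAV) accepts PUT at the page's URL.
async function putSelf(url, html, fetchImpl) {
  const res = await fetchImpl(url, {
    method: 'PUT',
    headers: { 'Content-Type': 'text/html; charset=utf-8' },
    body: html,
  });
  if (!res.ok) throw new Error('save failed: HTTP ' + res.status);
  return res.status;
}
```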
I'm attracted to this approach because it's a way to use relatively modern technologies in a way that is independent from giant corporations that want to hoover up all my data, while also being easy to hack and modify to suit my needs on a day-to-day basis.
The two projects out in the wild that natively work with this approach are TiddlyWiki and FeatherWiki.
I see room for a lightweight version of a calendar, a world clock, and even a lightweight spreadsheet that could be useful. I also have an idea for something I call a link trap where you can rapidly just drop links in and then search and sort over them to quickly retrieve something you saw before that was interesting. Sort of like my bookmarks-outside-the-browser except more of a history-outside-the-browser.
My primary saver is a Python script that I wrote called Notedeck, but I also sometimes use a Rust webdav server called dufs.
I haven't released either of my projects I'm working on that are the client files, otherwise I would have just linked them.
Is there some way to accomplish this through GitHub? Like the single html file running on GitHub.io pages can commit the changes to its repo?
That's not to say one couldn't still do what you're describing via other headers, I'm just saying "<input name=username><input name=password>" won't get it done
Any additional info/pointers on this ?
For the benefit of the Hacker News audience that are curious, let me take a stab here.
The general strategy is to include JavaScript in the HTML document that knows how to look at various nodes in the DOM and create a new version of the document that uses an updated set of data.
Some sections of the data can be pulled verbatim. So, for example, if you have one giant script at the bottom of the doc, you can give it an ID of perhaps S, and then use that ID to retrieve the outer HTML of that script tag and insert it into the clone.
Other areas of the DOM need to be templated. So for example, I insert a script that is of type JSON, and that contains all of the data for the application. This can be pulled and stringified when creating the clone.
For a minority of attributes like the title and document settings like whether or not you're in light or dark mode, you want to avoid a flash of unstyled content and so you actually templatize those elements so they are written into the cloned copy with the updated data directly inline.
There's no real magic to it, but it can be a little bit tedious. One really interesting gotcha is that the HTML parser recognizes a closing script tag even inside string literals. So I have to break up all closing script tags that appear in any string I'm manipulating, so that the parser does not close out the script I'm writing.
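That gotcha comes down to the HTML parser ending a `<script>` element at the first `</script` it sees, regardless of JS string syntax. The usual workaround is to break the sequence with a backslash before writing script text into the clone (function name is mine, for illustration):

```javascript
// Replace "</script" with "<\/script" so the HTML parser doesn't end
// the surrounding <script> element mid-string. Inside a JS string
// literal, "\/" is just "/", so the code's meaning is unchanged.
function escapeScriptClose(js) {
  return js.replace(/<\/script/gi, '<\\/script');
}
```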
I have a very minimal app that uses this technique called Dextral. I'll be happy to host it on my site and link it here.
Edit: I sketched out a basic version in JS + Python, and it's fairly OK to use. The nice part about this approach is that the HTML files are viewable without any extra services, and the service that enables saving doesn't itself need configuration or to keep state.
The downside is that even though you know the current directory via window.location, the API is written in a way that assumes you either want a default location like "desktop" or need the user to navigate to the directory before you can do operations on it (for security reasons), even from a local context. The user needs to select the directory once per fresh page load (if you've dynamically reloaded the current content, then multiple saves need only prompt once, so long as you save the handle).
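The "save the handle" part can be sketched as follows. In a browser, `picker` would be `() => window.showDirectoryPicker()` from the File System Access API; it's a parameter here only so the caching logic stands alone:

```javascript
// Cache the directory handle after the first prompt so repeated saves
// within the same page load don't re-prompt the user.
let dirHandle = null;

async function getDirHandle(picker) {
  if (!dirHandle) dirHandle = await picker(); // the one user prompt
  return dirHandle;
}

async function saveTo(picker, name, text) {
  const dir = await getDirHandle(picker);
  const file = await dir.getFileHandle(name, { create: true });
  const writable = await file.createWritable();
  await writable.write(text);
  await writable.close();
}
```

Note the handle doesn't survive a fresh page load (though it can be stashed in IndexedDB and re-validated with a permission request), which matches the once-per-load prompt described above.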
But also that’s a lot of code in general
- Split JS out from HTML, split CSS out from HTML
- Keep files reasonably small
So if I read "Single HTML file" I'd expect around a couple hundred lines at most, possibly with some embedded CSS.
It's kind of like saying "I've solved your problem in one line of JS" but then your line of JS is 1000 characters long and is actually 50 statements separated by semicolons. Yes, technically you're not lying, but I was expecting when you said "one line of JS" that it would be roughly the size and shape of a typical line of JS found in the wild.
When I see “single HTML file” it conjures up the same expectations as when PocketBase[0] describes itself as an “Open Source backend in 1 file”.
That is that I can copy that file (and nothing else) somewhere and open/run it, and the application will function correctly. No internet connection, no external "assets" needed, and definitely no server.
This mode of distribution, along with offline and local-first software, avoiding subscriptions and third party / cloud dependencies, etc. all appeal to me very much.
So far I'm impressed, I appreciate the nice, dense and uncluttered UI out of the box and it seems to cover enough functionality to be useful. I'll definitely look out for a chance to give it a spin on something real.
[0] which I also think is great
Sorry if it's a bit direct and unrelated. I've actually got a question if you wouldn't mind.
I've been in the process of creating a local-first, non-subscription based Linux/Windows application that acts as a completely local search engine for your own documents and files. It's completely offline, utilises open source LLMs, and you just use it like a search — for example "what's my home insurance policy number", "how much did I pay for the humidifier", that kinda stuff. Things where you don't know exactly where the file is, but you know what you're looking for. You can browse through the results and go direct to the document if you want, or wait for the LLM response (which can be disabled if you just want plain search) to sift through the sources and give you a verifiable answer (checks sources for matching words, links to the source directly for you to reference, etc).
My question would be: if you wanted something like this, how much would you pay? I'm going with the complete ownership type model — you pay, you get the exe, and that's that. If you like the new features in the next major release, you pay an upgrade fee or something like that, but it's a "one and done" type affair.
$10, $30, $60, $100? I want it to be accessible to many people that don't want to feed their data to big corps but I also want to be able to make a living off improving it and bringing these features to people.
I've not really worked out any of this monetary side of things, or if it's something people even want. It's something I've developed over time for myself which I think could potentially be useful to other people.
I try to avoid proprietary software when I can, so if I wanted something like this I’d definitely look for open source options first. I’ll endure a reasonable amount of setup pain as long as the solution is what I’m after to go the open source route over a proprietary app.
For example, your idea seems to sit somewhere between Alfred (which I’ve bought every upgrade/ultimate pack/whatever for) or Raycast, and an LLM augmented search of a NAS / “personal cloud” server. So assuming I wanted it, if there was no neat and self contained open source solution, I’d probably try to cobble something together with Alfred, Ollama, Open WebUI, etc. (all of which I already run) plus some scripts/code first, too.
That said, for a good, feature-full local/self hosted solution that does exactly what I want in the absence of an open source option (or if it's just much better in significant ways), I'm generally willing to pay between $20–$100 per major release (I pay around that for e.g. Alfred and the Affinity apps). For this I suppose $30–50 if it was pretty slick and filled an important niche. (I have paid more a handful of times in my life, usually for very specific use cases or if it helps me professionally, but not very recently.)
However, if a nice (or very promising and exciting to me), well maintained open source (GPL/MIT/Apache/BSD type license) solution does [most of] what I want and it’s something I really use (and a smaller project[0]) then I donate $10–30 per month (ex.: Helix, WezTerm). I sometimes do smaller or one-off donations, etc. for stuff I use less but appreciate.
That is, I intentionally pay more for open source I care about, and would humbly suggest considering that option for your project. Though I recognise that sustaining yourself financially that way is more than likely considerably harder, even with my small personal attempt at creating incentives for the world I want to see :)
NB: I do not buy subscription software unless it comes with a genuinely value added service (storing a few MiB on the devs cloud service doesn’t count) or access to data, for instance detailed snow/weather forecasts, market data, an advanced AI model (though my admittedly relatively minimal LLM use is currently >95% local and free/“open source” models).
[0] I don’t give money to the Linux kernel devs, etc. as I don’t think it’s likely to have as much positive impact
Not heard of Alfred as I'm not in the Apple ecosystem, but yes, you've hit the nail on the head between the combination of both after doing a bit of digging.
I'll seriously think about making it open source (time to brush up on the different licenses again). I want to keep it accessible so even my grandma could use it. I'm not expecting her to go cloning a git repo and checking dependencies etc, so I'm packaging it into a standalone executable. Maybe making the source open is something for me to consider and people can just pay if they don't want to go through any setup hassle (do I put some soft donation paywall up with a $0 minimum or something - just thinking out loud).
In terms of pricing, you've landed where I was thinking, maybe more towards the $30 end. I mean I think it's pretty slick and fills a niche, but I'm conscious I may be ever so slightly biased. A lot of stuff to mull over. Thanks again, really useful.
It will greatly increase the attractiveness of your software to me if you stick to the philosophy you’ve outlined there.
One approach that I’ve seen and have absolutely no issues with (in fact I think it’s a pretty smart way of doing things) is where a fully open source project provides code and releases on GitHub and in package managers like Homebrew, but also publishes releases as paid software on app stores.
This allows users to pay for the peace of mind of a “verified” app store install that Just Works. It also provides an easy way for the more technical among us to donate. I’ve switched to paid releases like this myself for at least a couple of fully open source projects just to give a little back.
Very different than a project that was made to be a single HTML file from the start.
Most projects prefer to have a separate database, server side rendering and often even multiple layers of compilers too.
A lot of projects even require hundreds of megabytes of language runtime in addition to the browser stack.
So a single HTML file is still unusual even if it’s something nearly any web app could technically do if they wished.
And for this reason alone, I think it’s unreasonable to have expectations of a JavaScript-less, CSS-less code-golfed HTML file. This isn’t sold as a product of the demo scene (that’s another expectation entirely). This is sold as a practical self-hosting alternative for people who need a kanban quickly and painlessly. Your comment even proves that it works exactly as that kind of solution. So having inlined JS is a feature I’d expect from this rather than complain about.
You made me look at the code and I was afraid of what I was going to find.
But man, that code is pretty and well organized, just like the resulting page.
We are definitely coming at this from a different angle.
It’s one reason Mac Apps get bundled as a single “file” from the user perspective. You don’t have to “install”, you just copy one file with everything. It’s a simpler dev experience.
Sure there are tradeoffs, but that’s great! We should accept that tradeoffs mean people can chose what works best for their specific context, rather than “best practices” which are silly.
Yes, they could be. And then they would have the same superpower as this file: you can put it on a flash drive and run it anywhere with no setup or installation.
The number of clicks is an implementation detail. It depends on whether or not you're using the web file API, some browser download capability, a browser plug-in, a mobile app, desktop app, a webdav server, or something else.
For people trying it for the first time, they often have the experience you're describing. But for most anybody that actually picks this up and uses it on a day-to-day basis, they use something else that saves transparently and automatically.
All of this is orthogonal to whether or not it's in a single HTML file. I fear you took lelandbatey's original ctrl-s reference a bit more literally than intended, though if you want to be pedantic, I can confirm I use applications in this style all day as part of my daily workflow and I do press ctrl-s and it saves with no further interaction in fully patched versions of Chrome, Firefox, and Safari with no plugins whatsoever.
(And I am not the one downvoting you, in fact I couldn't even.)
As an aside: I do find these applications very interesting and am considering to make use of Nullboard myself, but also am weighing it against simply using org mode in Emacs and am looking for any advantage it might offer. Of course the ctrl+s issue plays a role there as well.
https://developer.chrome.com/docs/capabilities/web-apis/file...
Cool comment, though — don’t mean to take away anything from it.
And Merry Christmas!