I saw this and thought, if this doesn't get me to give it a go, nothing will.
Less than 45 minutes after signing up for fly.io, I have a multi-room tic tac toe game deployed.
https://tic-tac-toe-cyber.fly.dev/
I had it build the game, opting for a single room at first to see if that worked. Then I had it add multiple rooms on a different git branch in case that didn't work. It worked great.
I had learned very little about Elixir, Phoenix, or deploying to Fly.io up to this point, and I already have a nice-looking app deployed and running.
I know a lot of devs will hate that this is possible. It is up to me now to look at the steps it took to create this, which are broken down extremely simply for me, and really understand what is happening...
I will do this because I want to learn. I bet a lot of people won't bother to do that. But those people never would have had apps in the first place and now they can. If they are creating fun experiences and not banking apps, I think that is still great.
You guys have been releasing amazing things for years only to be poorly replicated in other languages years later.. but you really outdid yourselves here.
I'm blown away.
edit: is there a way to see how much of my credits were used by building this?
It gave me a selection of "styles" and I chose neon retro. I probably could have been more creative and typed in my own suggestion.
Other than that, I said absolutely nothing about how I wanted the layout.
It came up with the idea of listing all active games on the homepage, with the number of players in each, all on its own.
I went from "I want a two player tic tac toe game" to having one, and then added multiple rooms, and deployed it all in under 45 minutes, with little input other than that..
I've seen others say they went through the full $20 within 45 minutes to an hour.
They are supposed to be adding a way to monitor usage soon.
Just a clarifying question since I'm confused by the branding use of "Phoenix.new" (since I associate "Phoenix" as a web framework for Elixir apps but this seems to be a lot more than that).
- Is "Phoenix.new" an IDE?
- Is "Phoenix.new" ... AI to help you create an app using the Phoenix web framework for Elixir?
- Does "Phoenix.new" require the app to be hosted/deployed on Fly.io? If that's the case, maybe a naming like "phoenix.flyio.new" would be better and extensible to any type of service Fly.io helps deploy (Phoenix/Elixir being one).
- Is it all 3 above?
And how does this compare to Tidewave.ai (created, as you presumably know, by the creator of Elixir)?
Apologies if I'm possibly conflating topics here.
You could absolutely treat phoenix.new as your full dev IDE environment, but I think about it less as an IDE and more as a remote runtime where agents get work done, which you pop into as needed. Or another way to think about it: the agent doesn't care about or need the vscode IDE or xterm. They are purely conveniences for us meaty humans.
For me, something like this is the future of programming. Agents fiddling away and we pop in to see what's going on or work on things they aren't well suited for.
Tidewave is focused on improving your local dev experience, while we sit on the infra/remote agent/codex/devin/jules side of the fence. Tidewave also has an MCP server that runs inside your app itself, which Phoenix.new could integrate with.
Honestly, this is depressing. Pop in from what? Our factory jobs?
I do think there will be some Jevons effect going on with this, but I think it's important to recognize that software development as a resource is different than something like coal. For example, if the average iPhone-only teenager can now suddenly start cranking out apps, that may ultimately increase demand for apps and there may be more code than ever getting "written," but there won't necessarily be a need for your CS-grad software engineer anymore, so we could still be fucked. Why would you pay a high salary for a SWE when your business teams can just generate whatever app they need without having to know anything about how it actually works?
I think the arguments about "AI isn't good enough to replace senior engineers" will hold true for a few years, but not much beyond that. The Jevons paradox will probably hold true for software as a resource, but not for SWEs as a resource. In the coal scenario, imagine that coal gets super cheap to procure because we invent robots that can do it from alpha to omega. Coal demand may go up, but the job for the coal miner is toast, and unless that coal miner has an ownership stake, they will be out on their ass.
The future I’m seeing with AI is one where software (i.e. as a way to get hardware to do stuff) is basically a non-issue. The example I wanna work on soon is telling Siri I want my iPhone to work as a touchpad for my computer and have the necessary drivers for that to happen be built automatically because that’s a reasonable thing I could expect my hardware to do. That’s the sort of thing that seems pretty achievable by AI in a couple turns that would take a single dev a year or two. And the thing is, I can’t imagine a software dev that doesn’t have some set of skills that are still applicable in this future, either through general CS skills (knowing what’s within reasonable expectations of hardware, being able to effectively describe more specific behavior/choosing the right abstractions etc) or other more nebulous technical knowledge (e.g. what you want to do with hardware in the first place).
Another thing I will mention is that for things like the iPhone example from earlier, there are usually a lot of optimizations or decisions involved that are derived from the user’s experience as a human which the LLM can’t really use synthetically. As another example if I turned my phone into a second monitor the LLM might generate code that sends full resolution images to the phone when the phone’s screen is much lower, there’s no real point for it to optimize that away if it doesn’t know how eyes work and what screens are used for. So at some point it needs to involve a model of a human, at least for examples like these.
I definitely agree that there will be some jobs/roles like that, and it won't be 100% destruction of SWEs (and many other gigs that will be affected), but I can't imagine that more than a small percentage of consultants will be needed. The top 10% of engineers I think will be just fine for the reasons you've said, but at the lower levels it will be a blood bath (and realistically maybe it should as there are plenty of SWEs that probably shouldn't be writing code that matters, but that feels like a separate discussion). Your point about other skills/knowledge is good too, though I suspect most white collar jobs are on the chopping block too, just maybe shortly behind.
Your future is one that I'm dreaming about too (although I have a hard time believing Apple would allow you to do that, but on Android or some future 3rd option it might be possible). Especially as a Linux user there have been plenty of times I've thought of cool stuff that I'd love to have personally that would take me months of work to build (time I've accepted I'll never have until my kids are all out of the house at least haha). I'm also dreaming of a day when I can just ask the AI to produce more seasons of Star Trek TOS, Have Gun - Will Travel, The Lieutenant, and many other great shows that I'm hungry for more, and have it crank them out. That future would be incredible!
But that feels like the smooth side of the sword, and avoiding a deep cut from the sharp side feels increasingly important. Hopefully it will solve itself but seeing the impacts so far I'm getting worried.
I appreciate the discussion and optimism! There is too much AI doomerism out there and the upsides (like you've mentioned) don't get talked about enough I think.
"Training" is just upfront work. Why on Earth do people expect to get from a machine that processes data some novel information that did not exist before?
This whole fantasy hinges on not understanding the sheer amount of data these LLMs are being trained on, and on some magical thinking about them producing novel information ex nihilo. I will never understand how intelligent people fall into these patterns of thought.
We can only get from computers what we put into them.
It depends on how good the AI is. The advantage of an SWE is that they have a systems-thinking mindset, so they can solve some problems more efficiently. With some apps it won't matter, but with others it will.
One potential positive outcome is that we will be able to solve more and bigger problems, since our capacity for solving problems has been augmented with AI.
Oh, you sweet summer child. ;)
You will pop in from the other 9 projects you are currently popping in on, of course! While running 10 agents at once!
We're building a serfdom again.
You've literally been given an excavator when you currently have a shovel, and you're worried that other excavators will dig you out of a job. That is a literal analogy to your POV here.
In the long term. In the short term, we get to do the same work but faster.
What I mean is, it will create value. Just not for the masses. And maybe not for the small businesses. If anything, it will let the big corporations do even more: a few big players doing everything and no little players at all.
Is it possible to get that headless Chrome browser + agent working locally? With something like Cursor?
- ability to run locally somehow. I have my own IDE, tools etc. Browser IDEs are definitely not something I use willingly.
- ability to get all code, and deploy it myself, anywhere
---
Edit: forgot to add. I like that every video in the Elixir/Phoenix space is the spiritual successor to the "15-minute Rails blog" from 20 years ago. No marketing bullshit, just people actually using the stuff they build.
You could also have it use GitHub and do PRs for a codex/devin style workflows. Running phoenix.new itself locally isn't something we're planning, but opening the runtime for SSH access is high on our list. Then you could do remote ssh access with local vscode or whatever.
So no plans to open the source code?
These other details that are not “just coding” are always the biggest actual impediments to “showing your work”. Thanks for making this!! Somehow I am only just discovering it (toddler kid robbing my “learning tech by osmosis” time… a phenomenon I believe you are also currently familiar with, lol)
The LLM chat taps out but I can't find a remaining balance on the fly.io dashboard to gauge how I'm using it. I _can_ see a total value of purchased top ups, but I'm not clear how much credit was included in the subscription.
It's very addictive (because it is awesome!) but I've topped up a couple of times now on a small project. The amount of work I can get out the agent per top-up does seem to be diminishing quite quickly, presumably as the context size increases.
PS: Why can't I get IEx to have working command-line history and editing? ;-P
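For what it's worth, persistent shell history for IEx is built into Erlang/OTP (20+), it just isn't on by default; whether the Phoenix.new runtime sets this is an assumption, but locally this flag is all it takes:

```shell
# Enable persistent shell history for iex (and erl); OTP 20 or newer.
# Add this line to your ~/.bashrc or ~/.zshrc:
export ERL_AFLAGS="-kernel shell_history enabled"

# Then start iex as usual; up-arrow and Ctrl-R recall lines
# across sessions (history is stored under the user cache dir).
iex
```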
2. How do you handle 3rd party libraries? Can the agent access library docs somehow? Considering that Elixir is less popular than more mainstream languages, and hence has less training data available, this seems like an important problem to solve.
But either way, I hear you; thanks so much for taking the time to set me straight. It seems you have done some visionary things here and you should be content with your good work! This stuff does not work for me for purely circumstantial reasons (too poor), but I'm still always very curious about the stuff coming out!
Again, so sorry. Congrats on the release and hope your day is good.
I was curious what the pricing for this is? Is it normal fly pricing for an instance, and is there any AI cost or environment cost?
And can it do multiple projects on different domains?
/workspace# web https://example.com/file.pdf
Error: page.goto: net::ERR_ABORTED at https://example.com/file.pdf
Call log:
  - navigating to "https://example.com/file.pdf", waiting until "load"
    at main (/usr/local/lib/web2md/web2md.js:313:18) { name: 'Error' }
/workspace#
How do you protect the host Elixir app from the agent shell, runtime, etc
quick followup if the agent's running on a separate machine and interacting remotely, how are failure modes handled across the boundary? like if the agent crashes mid-operation or sends a malformed command, does the remote runtime treat it as an external actor or is there a strategy linking both ends for fault recovery or rollback? just trying to understand where the fault tolerance guarantees begin and end across that split.
1. Remote agent - it's a containerized environment where the agent can run loose and do whatever; it doesn't need approval for user tasks because it's in an isolated environment (though it could still accidentally do destructive actions like edit git history). I think this alone is a separate service that needs to be productionized. When I run Claude Code in my terminal, I want it to automatically spin up the agent in an isolated environment (locally or remotely) and have it go wild. Easy to run things in parallel.
2. Deep integration with fly. Everyone will be trying to embed AI deep into their product. Instead of having to talk to chatgpt and copy paste output, I should be able to directly interact with whatever product I'm using and interact with my data in the product using tools. In this case, it's deploying my web app
https://hub.docker.com/r/linuxserver/kasm
https://www.reddit.com/r/kasmweb/comments/1l7k2o8/workaround...
It does not handle any infrastructure, so no hosting. It allows me to set multiple small tasks, come back and check, confirm and move forward to see a new branch on GitHub. I open a PR, do my checks (locally if I need to) and merge.
How is this innovation?
It’s in fact one of my predictors for if they are going to be enthusiastic about agents or not.
And you wouldn’t think containerization would be a big leap but this stuff is so new and moving so fast that combining them with existing tech can surprise people.
I worked all day on a Phoenix app we’re developing for ag irrigation analysis. Of late, my “let’s see what $20/mo gets you” is Zed with its agentic offerings.
It actually writes very little Elixir code for me. Sometimes, I let it have a go, but mostly I end up rewriting that stuff. Elixir is fun, and using the programming model as intended is enlightening.
What I do direct it to write is a huge amount of the HEEx stuff for me, with an eventual pass over it to clean it up. I have not memorized all of the nuances of CSS and HTML. I do not want to. And writing them has got to be the worst syntactic experience in the history of programming. It’s like someone decided Lisp was cool, but rather than just gobs of nested parentheses, let’s double, nay triple, no quadruple down on that: we’ll bracket all our statements/elements with a PAIR of hard-to-type characters, and for funsies, we’ll spell out different words inside them. And then when it came to ways of expressing lists of things, it’s like someone said “gimme a little bit of INI, case insensitivity, etc.” And every year, we’ll publish a new spec of new stuff that preserves the old while adding the new. I digress…
I view agentic coding as an indictment of how bad programming has gotten. I’m not saying there wouldn’t be value, but a huge amount of the appeal is that web tech is like legalese, filled with what are probably hidden bugs that are swallowed by browsers in a variety of unpredictable ways. What a surprise that we’ve given up and decided to let the tools do the probabilistically right thing. It’s not like we had a better chance of being any more precisely correct on our own anyway.
As an Elixir enthusiast I've been worried that Elixir would fall behind because the LLMs don't write it as well as they write bigger languages like Python/JS. So I'm really glad to see such active effort to rectify this problem.
We're in safe hands.
It's a negative point for engineering leaders who are the decision makers on tech stacks as it relates to staffing needs: LLMs not writing it well, developers who know it typically needing higher compensation, a DIY approach to libraries when there aren't any or they were abandoned and haven't kept pace with deprecations/changes, etc.
In the problem space of needing a web framework to build a SaaS, to an engineering leader there are a lot of other better choices on tech stack that work organizationally better (i.e. not comparing tech itself or benchmarks, comparing staffing, ecosystem, etc.) to solve web SaaS business problems.
I don't know where I stand personally since I'm not at the decision maker level, just thought I'd point out the non-programmer thought process I've heard.
From time to time it tries to do something a little old-school, but nothing significant really. It's very capable at spitting out entire new features, even in liveview.
Overall the experience has been very productive, and at least on par with my recent work on similarly sized Python and Next.js applications.
I think because I'm using mostly common and well-understood packages it has a big leg up. I also made sure to initialise the Phoenix project myself to start with so it didn't try to go off in some weird direction.
Before this I did a small project and I hit the 50 free tier limit through Zed by the time I was about 90% done. It was a small file drop app where internal users could create upload links, share them with people who could use them to upload a file. The internal user could then download that file. So it was very basic, but it churned out a reasonable UI and all the S3 compatible integration, etc.
I had to intervene a bit and obviously was reviewing everything and tweaking where needed. But I was surprised at how far I got on the 50 free prompts.
It's hard to know what you really get for that prompt limit though as I probably had a much higher number of actual prompts than they were registering. It's obviously using some token calculation under the hood and it's not clear what that is. All in all I probably had about 60-70 actual prompts I ran through it.
My gut says 500/mo would feel limited if I was going full "vibe" and having the LLM do basically everything for me every day. That said, this is the first LLM product I'm considering personally paying for. The integration with Zed is what wins for me over Claude, where you'd have to pay for API credits or use Claude Code. The way they highlight code changes and stuff really is nice.
Bit of a brain dump, sorry about that!
For Claude Code, the limit resets every 5 hours, so if you hit it, you rest a bit. Not that big a deal to me. But the way it works I find much more stressful: it asks you to review just about everything it is doing, step by step. Some of it you can approve to run without permission, but it likes to run shell commands, and for obvious reasons arbitrary shell commands need your explicit yes for each run. This is probably a great flow if you want a lot of control over what it is doing, and the ability to intercede and redirect it is great. But if you want more of an "I just want to get the result and minimize my time and effort" experience, then Zed is probably better for that.
I am also experimenting with OpenAI's codex which is yet a different experience. There it runs on repos and pull requests. I have no idea what their rate/limit stuff will be. I have just started working with it.
Of the three, disregarding cost, I like Zed's experience the best. I also think they are the most transparent. Just make sure never to use burn mode; that really burns through the credits very quickly for no real discernible reason. But I think it is also limited to either small codebases or prompts that limit what the agent has to read to get up to speed, due to the context window being about 120k (it is not 200k as the UI seems to suggest).
If Phoenix.new helps solve that problem, I’m all for the effort. But otherwise, the sole focus of the community leaders of Elixir should be squarely and exactly focused on creating the incentives and dynamics to grow the base.
Compare, for example, Mastra in TypeScript or PydanticAI in Python. Elixir? Nothing.
Not here to bash. It’s more just a disappointment because otherwise I think nothing comes close.
Want a first-party client library for the service you're using? Typically the answer is "too bad, Elixir developer." And writing your own Finch or Req wrapper for their REST endpoint simply isn't a valid answer.
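To be fair, the "write your own Req wrapper" chore is often only a dozen lines, though it is still a chore compared to a maintained SDK. A sketch of the kind of thing meant here (the service name and endpoint are invented for illustration; the Req calls themselves are the library's real API):

```elixir
# Minimal hand-rolled client for a hypothetical REST service.
defmodule ExampleAPI do
  @base_url "https://api.example.com"

  # Build a base request with auth; Req merges these options per call.
  defp client do
    Req.new(
      base_url: @base_url,
      auth: {:bearer, System.fetch_env!("EXAMPLE_API_KEY")}
    )
  end

  # POST /v1/widgets with form-encoded params, returning the decoded body.
  def create_widget(params) do
    case Req.post(client(), url: "/v1/widgets", form: params) do
      {:ok, %Req.Response{status: 200, body: body}} -> {:ok, body}
      {:ok, %Req.Response{status: status, body: body}} -> {:error, {status, body}}
      {:error, exception} -> {:error, exception}
    end
  end
end
```

It works, but now you own pagination, retries, webhook signature checks, and keeping up with the provider's API changes, which is exactly the gap a first-party SDK closes.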
>For its size, Elixir is doing quite well.
I'm actually arguing the opposite. Elixir is not doing well because of its size. So how can that be influenced and changed?
Worse still, the quality of Stripe’s own docs has really degraded this decade for anyone not using a language they have an SDK for. Most of their newer docs have a drop-down toggle for backend language with a few popular languages and no option for “other”. Example: https://docs.stripe.com/billing/quickstart
None of this is a fault of anyone working on Elixir or Phoenix but it definitely has an effect of discouraging some of the fledgling entrepreneur types who Elixir would otherwise be a near perfect fit for, as Rails was in the late aughts.
I have just shipped a production service centered around OAuth and interfacing with OpenID Connect servers.
Some languages—Clojure is a good example—have packages from 10 years ago, entirely unmaintained, that still work great because no maintenance is needed.
You think just because an author bumps the version number of a library it's somehow better than a library that is considered complete?
It boggles my mind that people actually think this way.
If you want to take your website and business down, use ChatGPT-4o's code
I mean seriously, fuck everything about how the data is gathered for these things, and everything that your comment implies about them.
The models cannot infer.
The upside of my salty attitude is that hordes of vibe coders are actively doing what I just suggested -- unknowingly.
I am not sure, but the cat is out of the bag. I don't think we can do anything at this point.
My experience with software development is maybe different than yours. There's a massive amount of not-yet-built software that can improve peoples' lives, even in teeny tiny ways. Like 99.999% of what should exist, doesn't.
Building things faster with LLMs makes me more capable. It (so far) has not taken work away from the people I work with. It has made them more capable. We can all build better tools, and faster than we did 12 months ago.
Automation is disruptive to peoples' lives. I get that. It decreases the value of some hard earned skills. Developer automation, in my life at least, has also increased the value of other peoples' skills. I don't believe it's anti worker to build more tools for builders.
We agree on this completely, however you and I know there are plenty of people without jobs in the world who could be employed to do this work. You are spending your finite amount of time on earth working with services that are trying to squeeze the job market (they've said this openly) rather than spending it increasing the welfare of workers by giving them work.
> Automation is disruptive to peoples' lives.
You know the difference between automation and the goals of these companies. You know that they don't want to make looms that increase the productivity of workers, they want to replace the worker so they never have to pay wages again.
Saying the quiet part loud here.
It's really a matter of positive sum/growth mindset vs scarcity/status quo mindset.
It's more than evident that software has automated away all kinds of wage labor from the aforementioned typist pools to Hollywood special effects model-makers.
What's different now is that it is actually the software creators’ labor that is in danger of automation (I think this is easily overstated but it is obviously true to some degree).
I get that it feels different for us now that OUR ox is the one being gored. And I do think there will be no end of negative externalities from the turn towards AI. But none of that refutes the above respondent's point?
1. Typists are still around and so are special effects model-makers. 2. People who program aren't in danger of automation. 3. These services are entirely unsustainable, they will absolutely not last at their current pace.
The premise of this entire work, detailed by the creator, is to utilize a program to reduce the amount of work a programmer is required to do. They believe ultimately, like most results of improved automation, that this will result in more things we can work on because we have more time. I agree that this would likely be the case! We could also simply make more programmers, could we not? Why haven't we? Do the 18k people homeless in my city tonight not deserve a shot at learning a skill before we even think about making the work easier per person?
Finally, and more to the point, genAI is built by and designed to eliminate workers entirely. The money that goes into those services funds billionaires who seek to completely and totally annihilate the concept of the proletariat. When I make a tool that helps workers at my job do their job better I am not looking to eliminate that person from the company.
The days are numbered where humans are sitting typing out code themselves.
It's akin to the numbered days of type writer secretaries of the 20th century.
I'm sure your poor understanding of the history of improved tooling, like "type writer secretaries", will be a soft comfort in the future.
Overall I think we would all be happier if efficient machines take away the drudgery of our daily work and allow us to focus on things that really matter to us. . . as long as our basic needs are met.
Nope, I've been doing it for 16 years.
I have a question about how you manage context, and what model you use. Gemini seems the best at working with large context windows right now, but even that has its limitations. Thinking about working with Claude Code, a fair bit of my strategizing is in breaking down work and managing project state to keep context size manageable.
I'm watching the linked video and it's amazing seeing it in action, but I'm imagining continuing to work on a project and wondering if it will start losing its way so to speak. Can you have it summarize stuff, and can you start a session clean with those summaries, and have it "forget" files it won't need to use for this next feature, etc?
I've been daydreaming of an agentic framework that maximally exploits BEAM. This isn't that, but maybe jido[0] is what I'm looking for.
https://elixirforum.com/t/is-anyone-working-on-ai-agents-in-...
Which LLMs do you use that you find are best with Elixir/Phoenix?
I guess I should also note that I haven't really used LiveView much.
hard to put confidence in AI vibe hacks when the basic stuff just doesn't work.
* (Mix) The database for Myapp.Repo couldn't be created: killed
Few issues:
1. The 150 message limit is understandable, but it suddenly pops up and you lose significant work. I was working on a UI mockup and just as I had finished and was ready to move on to implementation, this limit appeared and a significant part of my work was lost.
2. After the first credit, the credits seem to exhaust pretty fast, which makes it expensive, especially when you are trying it out.
3. Also, I don't understand why, when you ask it to prototype different screens, it overwrites the same file.
4. It is not able to stop to seek user feedback but keeps trying different approaches, which kind of exhausts the credits. It would be nice if it described its approach, so the human developer could provide feedback.
5. It seems it is using OpenAI, because it is often self-congratulatory to the point of being annoying sometimes.
This is a tangential comment and should not detract from what Chris and team have created. I think closing the loop between agent and the running output is a great/critical step forward.
However, I find using AI to build traditional apps with a UI is a bit like improving the way automobile steering wheels are made, in a world that soon won't need steering wheels at all.
If the AI is so good to write the code for an App, how much longer before you won't need those Apps in the first place? And then the question is, what will fill the role that Apps play today.
A coded app is significantly more efficient to execute, and more predictable, than dealing with AI in most situations.
Feels like we're getting into a weird situation if LLM providers are publishing open source agentic coding tools and OSS web app frameworks are publishing closed source/non-BYOK agentic coding tool. I realize this may not be an official "Phoenix" project but it seems analogous to DHH releasing a closed-source/hosted "Rails.new" service.
And thinking about it made me realize that soon there will be a completely different programming language used solely by coding agents. ChatGPT gives an interesting take on this: "The fundamental shift is that such a language wouldn’t be written or read, but reasoned about and generated. It would be more like an interlingua between symbolic goals and executable semantics: verbose, unambiguous, self-modifying, auto-verifiable, evolving alongside the agents that use it."
There’s a “clone Git repo” thing in the left sidebar; use that to clone the project locally, mix deps.get, mix phx.server, and you’re up. You can deploy this anywhere you want.
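Spelled out, the local workflow is roughly this (assuming you have Elixir installed and, if the app uses Ecto, a reachable Postgres; the repo URL is a placeholder):

```shell
git clone <your-project-git-url> myapp
cd myapp
mix deps.get      # fetch Hex dependencies
mix ecto.setup    # create and migrate the dev database, if the app uses Ecto
mix phx.server    # serve the app, on http://localhost:4000 by default
```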
I find Claude to have quite a bit of problems trying to navigate changesets + forms + streams in my codebase, just wondered if you had any tips of making it understand better :)
I've been working with Phoenix a lot the last few months, and I like it a lot. But I do get the sense that the project suffers from wanting to perpetually chase the next new thing, even when that comes at the expense of the functional elegance and conceptual cohesiveness that I think is Phoenix' main strength.
LiveView is a great example. It's a very neat bit of tech, but it's shoe-horned surprisingly awkwardly into Phoenix. There's now a live view and non-live view way to do almost everything in Phoenix, and each has their own different foibles and edge cases. A lot of code needs to work with both (e.g. auth needs to happen at both levels, basically), meaning a surprising amount of code needs to have two, nearly identical variants: one with traditional Plug idioms, and then another using LiveView equivalents. Quick little view helpers end up with either convoluted 'what mode am I in?' branching, or (more likely) in view-mode-dependent wrappers around view-mode-independent abstractions. This touches even the simplest helpers (what is the current path?) and becomes more cumbersome from there. (And given the lack of static analysis for views, it can be non-trivial to even find out what is and isn't actually working where.)
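For readers who haven't hit this, the duplication shows up concretely in the router: the same "must be logged in" rule has to be declared once as a Plug pipeline for regular views and again as an on_mount hook for live views. An illustrative excerpt using phx.gen.auth-style module names (a sketch, not from any particular app):

```elixir
# Router excerpt: two parallel auth mechanisms for one rule.
scope "/", MyAppWeb do
  # Plug-side auth: guards conn-based ("dead") views.
  pipe_through [:browser, :require_authenticated_user]

  get "/settings", SettingsController, :edit

  # LiveView-side auth: the same rule, restated as an on_mount hook,
  # because the plug above doesn't cover the LiveView lifecycle.
  live_session :require_authenticated_user,
    on_mount: [{MyAppWeb.UserAuth, :ensure_authenticated}] do
    live "/dashboard", DashboardLive
  end
end
```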
Not every website should be a live view (hiking directions, for example), but that is clearly the direction of travel in Phoenix. Non-live views get the disparaging moniker 'dead views', and the old Phoenix.HTML helpers have been deprecated in favour of <.form />-style live components. The generators depend on those, plus Tailwind, Hero Icons and (soon) DaisyUI, all fetched live from various places on the Internet at build time. This tight coupling to trendy dependencies will age poorly, and it makes for bumpy on-boarding (opinionated and tightly coupled isn't necessarily a smoother experience, just a more inflexible one).
So with all of that in mind, while I'm not shocked to see Phoenix jump on the vibe coding hype train, I guess I am disappointed.
The revelation that AI is now writing PRs for Phoenix itself is not confidence inspiring. I rely on frameworks like Phoenix because I don't want to have to think about the core abstractions around getting a website to my users; I want to focus on my business logic. But implicit in that choice is the assumption that someone is thinking about those things. If it's AI pushing out Phoenix updates now, my trust level and willingness to rely on this code drops dramatically. I also do not expect Phoenix' fraying conceptual cohesiveness to get any better if that's the way we're headed.
Phoenix is still an amazing piece of tech, but I wish I felt more at ease about its future trajectory.
Having a full stack that is easy to use as a learning sandbox is incredibly helpful in that regard, so this looks amazing.
I couldn't get Tidewave working but I must try again to see if Tidewave with Claude Code would offer this level of awesome.
ps. @fly - please let me buy more credit, I just get an error!
We want things we can tinker and toy with from the inside.
Ya I really am not the target demographic for this since I don't use AI agents in my IDE anyway.
It does seem to perfectly fit fly.io in the sense that I also don't care about "edge" apps.
Does anyone know any great resources to learn how to design agents? Tool agnostic resources would be awesome.
Elixir is particularly well suited here. In Elixir this is a GenServer doing HTTP POSTs and reacting to the token stream. The LiveView chat gets messages from the GenServer agent regardless of where it is on the planet, and the agent also communicates, with regular messages, over the Phoenix Channel websocket talking to the IDE machines, again wherever they are on the planet.
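A minimal sketch of that shape -- a GenServer that reacts to a token stream and broadcasts each chunk over PubSub so any subscribed LiveView (on any node) receives it. All names are assumptions for illustration, and the HTTP call is stubbed out; this is not the actual Phoenix.new implementation:

```elixir
# Illustrative only: module names, message shapes, and the PubSub topic
# convention are invented, not taken from Phoenix.new.
defmodule Agent.Session do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(opts), do: {:ok, %{topic: Keyword.fetch!(opts, :topic)}}

  # Kick off a streaming completion; each token the task receives is sent
  # back to this process as a plain message.
  @impl true
  def handle_cast({:prompt, text}, state) do
    parent = self()
    Task.start(fn -> stream_llm(text, parent) end)
    {:noreply, state}
  end

  # React to the token stream: broadcast each chunk over PubSub. Any
  # LiveView (or Channel) subscribed to the topic gets it, wherever it runs.
  @impl true
  def handle_info({:token, chunk}, state) do
    Phoenix.PubSub.broadcast(MyApp.PubSub, state.topic, {:agent_token, chunk})
    {:noreply, state}
  end

  defp stream_llm(_text, parent) do
    # Stand-in for an HTTP POST to an LLM API with a streaming body; a real
    # implementation would forward each SSE chunk as it arrives:
    send(parent, {:token, "..."})
  end
end
```

The LiveView side just subscribes to the same topic in `mount/3` and appends chunks in `handle_info/2`; because PubSub is cluster-wide, the agent process and the browser session don't need to be on the same machine.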
I talk about this quite a bit in my ElixirConfEU talk and distill things down: https://youtu.be/ojL_VHc4gLk?si=MzQmz-vofWxWDrmo&t=1040
It's like helping LLMs use a computer; like building an interface for it.
Ok, this is enough to get me started.
How did you get VS Code embedded in your app? I'm aware of projects like Monaco, and that vscode.dev exists - so it's clearly possible - but I didn't realize it was something others could build upon?
Again, kudos!
In my vim workflow I keep splitting/unsplitting windows and I like to have a file browser I can navigate with vim bindings.
(I almost just asked you on company Slack but figured the answer would be more broadly interesting.)
What usage limits do we get with the $20 monthly price?
Thank you!
This is very exciting and I’ll check it out!
Some libraries have text-based documentation for LLMs which works great in my experience.
ty fly team
"They are not making money off AI" was the most common response to my pointing out that they were shilling AI. Feels good to be right.
Where they have this nugget about plagiarism:
> But if you’re a software developer playing this card? Cut me a little slack as I ask you to shove this concern up your ass. No profession has demonstrated more contempt for intellectual property.
So I guess if you have concerns about them using any code you upload with this tool, you can shove it up your ass.
Honestly doubt the AI stuff is going to move the needle much if you can't even have a dependable S3 client.
If you look around you'll see this kind of thing is really one of the biggest blockers for Elixir and Phoenix, especially for something as fundamental as cloud storage.
Maybe fine today but what about 5 years from now?
Can you say, with any degree of confidence, whether these libraries are going to be properly maintained in the future? No, you cannot.
https://hex.pm/packages/ex_aws https://hex.pm/packages/ex_aws_s3
I've usually not seen more than three or so official SDKs for most services, and there are a lot more programming languages than that. For example, Microsoft's Graph API doesn't have an official Ruby client; they have one that sort of works.
The official AWS CLI used to talk to the SOAP interface and used regexes instead of doing correct error handling, and it was used by so many tools, even though it used to break horribly.
It's quite a niche you're talking about: not big enough to debug open-source code, but still big enough to require an SLA-backed SDK, yet not big enough to talk Amazon into creating one. It's generated code; it's not rocket science.
What I have experienced is that the software licence, where you are sending data, where you are hosting it, and whether you have access to audit the code have usually been bigger concerns.
But then again, big organisations often have really specific concerns, so I'm not doubting your statement; it's just that I have never heard it before.
I'm not looking for anything. I'm describing my experience when evaluating Elixir/Phoenix recently.
I'm also questioning the investment into AI tooling when there are far more pressing issues that are hurting adoption.