The point they seem to be making is that AI can "orchestrate" the real world even if it can't interact physically. I can definitely believe that in 2026 someone at their computer with access to money can send the right emails and make the right bank transfers to get real people to grow corn for you.

However, even by that metric I don't see how Claude is doing that. Seth is the one researching the suppliers "with the help of" Claude. Seth is presumably the one deciding when to prompt Claude to make decisions about whether they should plant in Iowa and in how many days. I think I could also grow corn if someone came and asked me well-defined questions and then acted on what I said. I might even be better at it, because unlike a Claude output I will still be conscious in 30 seconds.

That is a far cry from sitting down at a command line and saying "Do everything necessary to grow 500 bushels of corn by October".

These experiments always seem to end up requiring the hand-holding of a human at top, which undermines the idea behind the experiment in the first place. It seems better to spend the time and energy on finding better ways for AI to work hand-in-hand with the user, empowering them, rather than trying to find the areas where we could replace humans with as little quality degradation as possible. That whole part feels like a race to the bottom, instead of making it easier for the ones involved to do what they do.
>rather than trying to find the areas where we could replace humans with as little quality degradation as possible

The particular problem here is it is very likely that the easiest people to replace with AI are the ones making the most money and doing the least work. Needless to say those people are going to fight a lot harder to remain employed than the average lower level person has political capital to accomplish.

>seems to end up requiring the hand-holding of a human at top,

I was born on a farm and know quite a bit about the process, but in the process of trying to get corn grown from seed to harvest I would still contact/contract a set of skilled individuals to do it for me.

One thing I've come to realize in the race to achieve AGI is that the humans involved don't want AGI, they want ASI. A single model that can do what an expert can, in every field, in a short period of time is not what I would consider a general intelligence at all.

> the ones making the most money and doing the least work. Needless to say those people are going to fight a lot harder to remain employed than the average lower level person has political capital to accomplish.

They don't have to "fight" to stay employed, anyone with sufficient money is effectively self-employed. It's not going to be illegal to spend your own money running your own business if that's how you want to spend your money.

Anyone "making the most money and doing the least work" has enough money to start a variety of businesses if they get fired from their current job.

?

If you have a cushy job where you don't really work, and you make a lot of money (doesn't mean you have capital), how does that translate to being suited to becoming an entrepreneur, with money you are no longer earning and effort capacity you apparently don't have?

> (doesn't mean you have capital)

Then they’re not going to be doing any significant lobbying so they’re not covered by GP’s comment, which was selecting for “people who have political capital”.

Yes, there are other forms of political capital besides money, but it’s still mostly just money, especially when they’re part of the tiny voting bloc of “people who make a lot of money and don’t do much work and don’t have wealth”.

Also, I talked with the employees at my local McDonald’s last week. Not one of them had any idea who the owner was. I showed them a photo of the owner and they had never seen them. So apparently that could be an option for people who were overpaid and still want to pretend-work while making money.

ep103 · 14 hours ago
AI hype is predicated on the popular idea that it can easily automate someone else's job, because that job they know nothing about is easy, while my job is safe from AI because it is so nuanced.
I don't think this describes all or most AI hype, but it definitely describes Marc Andreessen when he said VCs would be the last ones automated.
xmprt · 6 hours ago
I didn't think we'd ever see the day where we started enshittifying labor.
> I can definitely believe that in 2026 someone at their computer with access to money can send the right emails and make the right bank transfers to get real people to grow corn for you.

I think this is the new Turing test. Once it's been passed we will have AGI and all the Sam Altmans of the world will be proven correct. (This isn't a perfect test obviously, but neither was the Turing test.)

If it fails to pass we will still have what jdthedisciple pointed out

> a non-farmer, is doing professional farmer's work all on his own without prior experience

I am actually curious how many people really believe AGI will happen. There's a lot of talk about it, but when can I ask Claude Code to build me a browser from scratch and get a browser from scratch? Or when can I ask Claude Code to grow corn and Claude Code grows corn? Never? In 2027? In 2035? In the year 3000?

HN seems rife with strong opinions on this, but does anybody really know?

Researchers love to reduce everything into formulae, and believe that when they have the right set of formulae, they can simulate something as-is.

Hint: It doesn't work that way.

Another hint: I'm a researcher.

Yes, we have found a great way to compress and remix the information we scrape from the internet, and even with some randomness, it looks like we can emit the right set of tokens that make sense, or search the internet the right way and emit those search results, but AGI is more than that.

There's so much tacit knowledge and implicit computation coming from experience, emotions, sensory inputs and from our own internal noise. AI models don't work on those. LLMs consume language and emit language. The information embedded in these languages is available to them, but most of the tacit knowledge is just an empty shell of the thing we try to define with a limited set of words.

It's the same with anything where we're trying to replace humans in the real world, in daily tasks (self-driving, compliance checks, analysis, etc.).

AI is missing the magic grains we can't put out as words or numbers or anything else. The magic smoke, if you pardon the term. This is why no amount of documentation can replace a knowledgeable human.

...or this is why McLaren Technology Center's aim of "being successful without depending on any specific human by documenting everything everyone knows" is an impossible goal.

Because like it or not, intuition is real, and AI lacks it, regardless of how we derive or build that intuition.

> There's so much tacit knowledge and implicit computation coming from experience, emotions, sensory inputs and from our own internal noise.

The premise of the article is stupid, though...yes, they aren't us.

A human might grow corn, or decide it should be grown. But the AI doesn't need corn, it won't grow corn, and it doesn't need any of the other things.

This is why they are not useful to us.

Put it in science fiction terms. You can create a monster, and it can have super powers, _but that does not make it useful to us_. The extremely hungry monster will eat everything it sees, but it won't make anyone's life better.

The Torment Nexus can't even put a loaf of bread on my table, so it's obvious we have nothing to fear from it!
I agree we don't have much to (physically) fear from it...yet. But the people who can't take "no" for an answer and don't get that it is fundamentally non-human, I can believe they are quite dangerous.

> Hint: It doesn't work that way.
I mean... technically it would work this way but, and this is a big but, reality is extremely complicated and a model that can actually be a reliable formula has to be extremely complicated. There are almost certainly no globally optimal solutions to these types of problems, not to mention that the solution space is constantly changing as the world does. I mean, this is why we as humans and all animals work in probabilistic frameworks that are highly adaptable. Human intuition. Human ingenuity. We simply haven't figured out how to make models at that level of sophistication. Not even in narrow domains! What AI has done is undeniably impressive, wildly impressive even. Which is why I'm so confused why we embellish it so much.

It's really easy to think everything is easy when we look at problems from 40k feet. But as you come down to Earth the complexity exponentially increases and what was a minor detail is now a major problem. As you come down resolution increases and you see major problems that you couldn't ever see from 40k feet.

As a researcher, I agree very much with you. And as an AI researcher, one of the biggest issues I've noticed with AI is that they abhor detail and nuance. Granted, this is common among humans too (and let's not pretend CS people don't have a stereotype of oversimplification and thinking all things are easy). While people do this frequently, they also don't usually do it in their niche domains, and if they do we call them juniors. You get programmers thinking building bridges is easy[0] while you get civil engineers thinking writing programs is easy. Because each person understands the other's job only at 40k feet and is reluctant to believe they are standing so high[1]. But AI? It really struggles with detail. It really struggles with adaptation. You can get detail out but it often requires significant massaging and it'll still be a roll of the dice[2]. You also can't get the AI to change course, a necessary thing as projects evolve[3]. Anyone who's tried vibe coding knows the best thing to do is just start over. It's even in Anthropic's suggestion guide.

My problem with vibe coding is that it encourages this overconfidence. AI systems still have the exact same problem computer systems do: they do exactly what you tell them to. They are better at interpreting intent, but that blade cuts both ways. The major issue is you can't properly evaluate a system's output unless you were entirely capable of generating the output. The AI misses the details. Doubt me? Look at Proof of Corn! The Fred page is saying there's an API error[4]. The sensor page doesn't make sense (everything there is fine for an at-home hobby project, but anyone that's worked with those parts knows how unreliable they are. Who's going to do all the soldering? You making PCBs? Where's the circuit to integrate everything? How'd we get to $300? Where's the detail?). Everything discussed is at a 40k foot view.

[0] https://danluu.com/cocktail-ideas/

[1] I'm not sure why people are afraid of not knowing things. We're all dumb as shit. But being dumb as shit doesn't mean we aren't also impressive and capable of genius. Not knowing something doesn't make you dumb, it makes you human. Depth is infinite and we have priorities. It's okay to have shallow knowledge, often that's good enough.

[2] As implied, what is enough detail is constantly up for debate.

[3] No one, absolutely nobody, has everything figured out from the get-go. I'll bet money none of you have written a (meaningful) program start to finish from plans, ending up with exactly what you expect, never making an error, never needing to change course, even in the slightest.

Edit:

[4] The API issue is weird, and the more I look at the code the weirder things get. For example, there's a file decision-engine/daily_check.py with a comment to set a cron job to run every 8 hours. It says to dump data to logs/daily.log, but that file doesn't exist; it will instead write to logs/all_checks.jsonl, which appears to have the data. So why in the world is it reading https://farmer-fred.sethgoldstein.workers.dev/weather?
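
For illustration, here is a minimal sketch of the kind of cron-driven check described above. This is not the repo's actual daily_check.py; only the weather URL and the all_checks.jsonl filename come from the comment, everything else is an assumption:

    # Hypothetical sketch, not the actual proof-of-corn code.
    # Assumes a cron entry like: 0 */8 * * * /usr/bin/python3 daily_check.py
    import json
    import os
    from datetime import datetime, timezone
    from urllib.request import urlopen

    WEATHER_URL = "https://farmer-fred.sethgoldstein.workers.dev/weather"
    LOG_PATH = "logs/all_checks.jsonl"

    def run_check() -> None:
        """Fetch the weather once and append the result as one JSON line."""
        try:
            with urlopen(WEATHER_URL, timeout=10) as resp:
                weather, status = json.load(resp), "ok"
        except Exception as exc:  # e.g. the "API error" shown on the dashboard
            weather, status = {"error": str(exc)}, "api_error"
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "status": status,
            "weather": weather,
        }
        os.makedirs("logs", exist_ok=True)
        with open(LOG_PATH, "a") as fh:
            fh.write(json.dumps(record) + "\n")

    if __name__ == "__main__":
        run_check()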

cevn · 18 hours ago
I think it will happen once we get off LLMs and find something that more closely maps to how humans think, which is still not known AFAIK. So either never, or once the brain is figured out.
I'd agree that LLMs are a dead end to AGI, but I don't think that AI needs to mirror our own brains very closely to work. It'd be really helpful to know how our brains work if we wanted to replicate them, but it's possible that we could find a solution for AI that is entirely different from human brains while still having the ability to truly think/learn for itself.
rmunn · 10 hours ago
> ... I don't think that AI needs to mirror our own brains very closely to work.

Mostly agree, with the caveat that I haven't thought this through in much depth. But the brain uses many different neurotransmitter chemicals (dopamine, serotonin, and so on) as part of its processing, it's not just binary on/off signals traveling through the "wires" made of neurons. Neural networks as an AI system are only reproducing a tiny fraction of how the brain works, and I suspect that's a big part of why even though people have been playing around with neural networks since the 1960's, they haven't had much success in replicating how the human mind works. Because those neurotransmitters are key in how we feel emotion, and even how we learn and remember things. Since neural networks lack a system to replicate how the brain feels emotion, I strongly suspect that they'll never be able to replicate even a fraction of what the human brain can do.

For example, the "simple" act of reaching up to catch a ball doesn't involve doing the math in one's head. Rather, it's strongly involved with muscle memory, which is strongly connected with neurotransmitters such as acetylcholine and others. The eye sees the image of the ball changing in direction and subtly changing in size, the brain rapidly predicts where it's going to be when it reaches you, and the muscles trigger to raise the hands into the ball's path. All this happens without any conscious thought beyond "I want to catch that ball": you're not calculating the parabolic arc, you're just moving your hands to where you already know the ball will be, because your brain trained for this since you were a small child playing catch in the yard. Any attempt to replicate this without the neurotransmitters that were deeply involved in training your brain and your muscles to work together is, I strongly suspect, doomed to failure because it has left out a vital part of the system, without which the system does not work.

Of course, there are many other things AIs are being trained for, many of which (as you said, and I agree) do not require mimicking the way the human brain works. I just want to point out that the human brain is way more complex than most people realize (it's not merely a network of neurons, there's so much more going on than that) and we just don't have the ability to replicate it with current computer tech.

This is where it’s a mistake to conflate sentience and intelligence. We don’t need to figure out sentience, just intelligence.
Is there intelligence without sentience?
Nobody can know, but I think it is fairly clearly possible without signs of sentience that we would consider obvious and indisputable. The definition of 'intelligence' is bearing a lot of weight here, though, and some people seem to favour a definition that makes 'non-sentient intelligence' a contradiction.
As far as I know, and I'm no expert in the field, there is no known example of intelligence without sentience. Actual AI is basically algorithms and statistics simulating intelligence.
Can you spell out your definition of 'intelligence'? (I'm not looking to be ultra pedantic and pick holes in it -- just to understand where you're coming from in a bit more detail.) The way I think of it, there's not really a hard line between true intelligence and a sufficiently good simulation of intelligence.
I would say that "true" intelligence will allow someone/something to build a tool that never existed before, while intelligence simulation will only allow someone/something to reproduce tools that are already known. I would make a difference between someone able to use all his knowledge to find a solution to a problem using tools he knows of and someone able to discover a new tool while solving the same problem. I'm not sure the latter exists without sentience.
I think we are closer than most folks would like to admit.

in my wild guess opinion:

- 2027: 10%

- 2030s: 50%

- 2040: >90%

- 3000: 100%

Assuming we don't see an existential event before then, I think it's inevitable, and soon.

I think we are gonna be arguing about the definition of "general intelligence" long after these systems are already running laps around humans at a wide variety of tasks.

This is pretty unlikely for the same reason that India is far from industrialized.

When people aren’t super necessary (aka rare), people are cheap.

"new turing test" indeed!,any farmer worth his salt will smell a sucker and charge acordingly
neya · 8 hours ago
>That whole part feels like a race to the bottom, instead of making it easier for the ones involved to do what they do.

This is what people said while transitioning from horse carriages to combustion engines, and from steam engines to modern-day locomotives. Like it or not, the race to the bottom has already begun. We will always find a way to work around it, like we have done time and again.

lol, this is not the same at all. If these tools were as good as they claim, they wouldn't be struggling so hard to make money or sell them.

The fact that they have to be force fed into people is all the proof you need that this is an unsustainable bubble.

Something to keep in mind is that unless you can destroy something, the system is not democratic, and people are realizing how undemocratic this game truly is.

"...where we could replace humans with as little quality degradation as possible"

This is pretty much the whole goal of capitalism since the 1800's

Using the example from the article, I guess restaurant managers need handholding by the chefs and servers, seemingly breaking down the idea behind restaurants, yet restaurants still exist.

The point, I think, is that even if LLMs can't directly perform physical operations, they can still make decisions about what operations are to be performed, and through that achieve a result.

And I also don't think it's fair to say there's no point just because there's a person prompting and interpreting the LLM. That happens all the time with real people, too.

> And I also don't think it's fair to say there's no point just because there's a person prompting and interpreting the LLM. That happens all the time with real people, too.

Yes, what I'm trying to get at is that it's much more vital we nail down the "person prompting and interpreting the LLM" part instead of focusing so much on the "autonomous robots doing everything".

I feel you're still missing the point of the experiment... The entire thing was based on how Claude felt empowering -- "I felt like I could do anything with software from my terminal"... It's not at all about autonomous robots... It's about what someone can achieve with the assistance of LLMs, in this case Claude
I think we might have read two different articles.
The article I read was linked at the top of the submission ("Read the full story")
lukev · 19 hours ago
Right. This whole process still appears to have a human as the ultimate outer loop.

Still an interesting experiment to see how much of the tasks involved can be handled by an agent.

But unless they've made a commitment not to prompt the agent again until the corn is grown, it's really a human doing it with agentic help, not Claude working autonomously.

Why wouldn't they be able to eventually set it up to work autonomously? A simple GitHub Action could run a check every $t hours to check on the status, and an orchestrator is only really needed once initially to set up the if>then decision tree.
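
As a rough sketch of the kind of periodic check plus if>then tree such a job could run (all thresholds, field names, and numbers here are invented for illustration, not taken from the project):

    # Hypothetical decision step a scheduled job (cron, GitHub Actions, ...) could run.
    from dataclasses import dataclass

    @dataclass
    class FieldStatus:
        soil_temp_f: float    # soil temperature in Fahrenheit
        soil_moisture: float  # 0.0 (bone dry) .. 1.0 (saturated)
        days_to_frost: int    # forecast days until the next frost

    def decide(status: FieldStatus) -> str:
        """Walk a fixed if>then tree and return this cycle's action."""
        if status.days_to_frost < 7:
            return "HOLD: frost risk too high"
        if status.soil_temp_f < 50:
            return "HOLD: soil too cold to plant corn"
        if status.soil_moisture < 0.2:
            return "IRRIGATE: soil too dry"
        return "PLANT: conditions within range"

    if __name__ == "__main__":
        # A real job would pull sensor or forecast data here; these numbers are made up.
        print(decide(FieldStatus(soil_temp_f=55.0, soil_moisture=0.35, days_to_frost=40)))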
sdwr · 18 hours ago
The question is whether the system can be responsible for the process. Big picture, AI doing 90% of the task isn't much better than it doing 50%, because a person still needs to take responsibility for it actually getting done.

If Claude only works when the task is perfectly planned and there are no exceptions, that's still operating at the "junior" level, where it's not reliable or composable.

That still doesn't seem autonomous in any real way though.

There are people that I could hire in the real world, give $10k (I dunno if that's enough, but you understand what I mean) and say "Do everything necessary to grow 500 bushels of corn by October", and I would have corn in October. There are no AI agents where that's even close to true. When will that be possible?

Given enough time and money the chatbots we call "AI" today could contact and pay enough people that corn would happen. At some point it'll eventually have spammed and paid the right person who would manage everything necessary themselves after the initial ask and payment. Most people would probably just pocket the cash and never respond though.
You can already do this by…. Buying corn. At the store. Or worst case at a distributor.

It’s pretty cheap too.

It’s not like these are novel situations where ‘omg AI’ unlocks some new functionality. It’s literally competing against an existing, working, economic system.

So an "AI chatbot" is going to disintermediate this process without adding any fundamental value. Sounds like a perfect SV play....

/s

You only want to apply expensive fungicide when there is a fungus problem. That means someone needs to go out to the field and check - at least today. You don't want to harvest until the corn is dry, so someone needs to check the progress of drying beforehand - today the farmer hand-harvests a few cobs of corn from various parts of the field to check. There are lots of other things the farmer is checking that we don't have sensors for - we could but they would be too expensive.
There’s no reason an AI couldn’t anticipate these things and hire people to do those checks and act on their reports as though it were a human farmer. That’s different from an AI researcher telling Claude which step is next.
"hire people to do those..."

We already have those people, they're called farmers. And they are already very used to working with high technology. The idea of farmers being a bunch of hicks is really pretty stupid. For example, farmers use drones for spraying pesticides, fungicides, and inputs like fertilizer. They use drones to detect rocks in fields that then generate maps for a small skid steer to optimally remove the rocks.

They use GPS enabled tractors and combines that can tell how deep a seed is planted, what the yield is on a specific field (to compare seed hybrids), what the moisture content of the crop is. They need to be able to respond to weather quickly so that crops get harvested at the optimal times.

Farmers also have to become experts in crop futures, crop insurance, irrigation and tillage best practices; small equipment repair, on and on and on.

9rx · 2 hours ago
> You only want to apply expensive fungicide when there is a fungus problem. That means someone needs to go out to the field and check

Nah. If you can see that you have tar spot, you are already too late. To be able to selectively apply fungicide, someone needs to model the world around them to determine the probability of an oncoming problem. That is something that these computer models are theoretically quite well suited for. Although common wisdom says that fungicide applications on corn will always, at very least, return the cost of it, so you will likely just apply it anyway.

Presumably because operating a farm isn't a perfectly repeatable process and you need to constantly manage different issues that come up.
pests · 18 hours ago
> But unless they've made a commitment not to prompt the agent again

Model UIs like Gemini have "scheduled actions", so in the initial prompt you could have it do things daily and send updates or reports, etc., and it will start the conversation with you. I don't think it's powerful enough to, say, spawn sub-agents, but there is some ability for them to "start chats".

Anthropic tried that with a vending machine. The Claude instance managing it ended up ordering tungsten cubes and selling them at a loss. https://www.anthropic.com/research/project-vend-1
9dev · 17 hours ago
> the plausible, strange, not-too-distant future in which AI models are autonomously running things in the real economy.

A plot line in Ray Nayler's great book The Mountain in the Sea, which is set in a plausible, strange, not-too-distant future, is that giant fish trawler fleets are run by AI connected to the global markets, fully autonomously. They relentlessly rip every last fish from the ocean, driven entirely by the goal of maximising profits at any cost.

The world is coming along just nicely.

They also enslave human workers to do all the manual labor.
9dev · 16 hours ago
I didn't want to spoil too much, but yes. They do.
I, for one, welcome our new AI overlords.
It's an older code, but it checks out
So Seth, as presumably a non-farmer, is doing professional farmer's work all on his own without prior experience? Is that what you're saying?
culi · 19 hours ago
Nobody is denying that this is AI-enabled but that's entirely different from "AI can grow corn".

Also, Seth, a non-farmer, was already capable of using Google, online forums, and Sci-Hub/Libgen to access farming-related literature before LLMs came on the scene. In this case the LLM is just acting as a super-charged search engine. A great and useful technology, sure. But we're not utilizing any entirely novel capabilities here.

And tbh until we take a good crack at World Models I doubt we can

I think the point is that a lot of professional work is not about entirely novel capabilities either; most professionals get the majority of their revenue from bread-and-butter cases that apply already-known solutions to custom problems. For instance, a surgeon taking out an appendix is not doing a novel approach to the problem every time.
> In this case the LLM is just acting as a super-charged search engine.

It isn't, because that implies getting everything necessary in a single action, as if there are high quality webpages that give a good answer to each prompt. There aren't. At the very least Claude must be searching, evaluating the results, and collating the data it finds from multiple results into a single cohesive response. There could be some agentic actions that cause it to perform further searches if it doesn't judge the collected data to make for a sufficiently high quality response.

"It's just a super-charged search engine" ignores a lot of nuance about the difference between LLMs and search engines.

I think we are pretty much past the "LLMs are useless" phase, right? But I think "super-charged search engine" is a reasonably well fitting description. Like a search engine, it provides its user with information. Yes, it is (in a crude simplified description) better at that. Both in terms of completeness (you get a more "thoughtful" follow up) as well as in finding what you are looking for when you are not yet speaking the language.

But that's not what OP was contesting. The statement "$LLM is _doing_ $STUFF in the real world" is far less correct than the characterisation as a "super-charged search engine", because - at least as far as I'm aware - every real-world interaction has required consent from humans, this story included.

1) You are right, and it's impressive if he can use AI to bootstrap becoming a farmer.

2) Regardless, I think it proves a vastly understated feature of AI: It makes people confident.

The AI may be truly informative, or it may hallucinate, or it may simply give mundane, basic advice. Probably all 3 at times. But the fact that it's there ready to assert things without hesitation gives people so much more confidence to act.

You even see it with basic emails. Myself included. I'm just writing a simple email at work. But I can feed it into AI and make some minor edits to make it feel like my own words, and I can just dispense with worries about "am I giving too much info, not enough, using the right tone, being unnecessarily short or greeting too much, etc." And it's not that the LLMs are necessarily even an authority on these factors - it simply bypasses the process (writing) which triggers these thoughts.

More confidence isn't always better. In particular, confidence pairs well with the ability to follow through and be correct. LLMs are famous for confidently stating falsehoods.
Of course. It must be used judiciously. But it completely circumvents some thought patterns that lead to slow decision making.

Perhaps I need to say it again: that doesn't mean blindly following it is good. But perhaps using Claude Code instead of Googling will lead to 80% of the conclusions Seth would have reached otherwise with 5% of the effort.

> "...a vastly understated feature of AI: It makes people confident."

Good point. AI is already making regular Joes into software engineers.
Management is so confident in this, they are axing developers/not hiring new ones.
I started to write a logical rebuttal, but forget it. This is just so dumb. A guy is paying farmers to farm for him, and using a chatbot to Google everything he doesn't know about farming along the way. You're all brainwashed.
What specifically are you disagreeing with? I don't think it's trivial for someone with no farming experience to successfully farm something within a year.

>A guy is paying farmers to farm for him

Read up on farming. The labor is not the complicated part. Managing resources, including telling the labor what to do, when, and how is the complicated part. There is a lot of decision making to manage uncertainty which will make or break you.

We should probably differentiate between trying to run a profitable farm, and producing any amount of yield. They're not really the same thing at all.

I would submit that pretty much any joe blow is capable of growing some amount of crops, given enough money. Running a profitable farm is quite difficult though. There's an entire ecosystem connecting prospective farmers with money and limited skills/interest to people with the skills to properly operate it, either independently (tenant farmers) or as farm managers so the hobby owner can participate. Institutional investors prefer the former, and Jeremy Clarkson's farm show is a good example of the latter.

When I say successful I mean more like profitable. Just yielding anything isn't successful by any stretch of the imagination.

>I would submit that pretty much any joe blow is capable of growing some amount of crops, given enough money

Yeah, in theory. In practice they won't - too much time and energy. This is where the confidence boost from LLMs comes in. You just do it and see what happens. You don't need to care if it doesn't quite work out because it's so fast and cheap. Maybe you get anywhere from 50-150% of the result of your manual research for 5% of the effort.

>A guy is paying farmers to farm for him

Family of farmers here.

My family raises hundreds of thousands of chickens a year. They feed, water, and manage the healthcare and building maintenance for the birds. That is it. Baby birds show up in boxes at the start of a season, and trucks show up and take the grown birds once they reach weight.

There is a large faceless company that sends out contracts for a particular value and farmers can decide to take or leave it. There is zero need for human contact on the management side of the process.

At the end of the day there is little difference between a company assigning the work and having a bank account versus an AI following all the correct steps.

Grifters gonna grift.
9rx · 18 hours ago
> A guy is paying farmers to farm for him

Pedantically, that's what a farmer does. The workers are known as farmhands.

That is HIGHLY dependent on the type and size of farm. A lot of small row crop farmers have and need no extra farm hands.
9rx · 16 hours ago
All farms need farmhands. On some farms the farmer may play double duty, or hire custom farmhands operating under another business, but they are all farmhands just the same.
tjr · 19 hours ago
I would say that Seth is farming just as much as non-developers are now building software applications.
Trying. Until you can eat it, you're just fucking around.
That's not the point of the original commenter. The point is that he expects Claude can inform him well enough to be a farm manager, and it's not impressive since Seth is the primary agent.

I think it is impressive if it works. Like I mentioned in a sibling comment I think it already definitely proves something LLMs have accomplished though, and that is giving people tremendous confidence to try things.

> I think it is impressive if it works.

It only works if you tell Claude..."grow me some fucking corn profitably and have it ready in 9 months" and it does it.

If it's being used as a manager to simply flesh out the daily commands that someone is telling it, well then that isn't "working", that's just a new level of what we already have with APIs and crap.

It's working if it enables him to do it when he otherwise couldn't without significantly more time, energy, etc.
He's writing it down, so it's also science.
Exactly, it's science/research; until you can feed people it's not really farming.
>until you can feed people

So if I grow biomass for fuel or feedstock for plastics that's not farming? I'm sure there are a number of people that would argue with you on that.

I'm from the part of the country where there are large chunks of land dedicated to experimental grain growing, which is research, and other than labels at the end of crop rows you'd have a difficult time telling it from any other farm.

TL;DR: why are you gatekeeping this so hard?

Anyone can be a farmer. I've got veggies in my garden. Making a profit year after year is much much harder.
Can't wait to see how much money they lose.

I'll see if my 6 year old can grow corn this year.

> I'll see if my 6 year old can grow corn this year.

Sure... put it on Kalshi while you're at it and we can all bet on it.

I'm pretty sure he could grow one plant with someone in the know prompting him.

tw04 · 17 hours ago
>I can definitely believe that in 2026 someone at their computer with access to money can send the right emails and make the right bank transfers to get real people to grow corn for you.

They could also just burn their cash. Because they aren’t making any money paying someone to grow corn for them unless they own the land and have some private buyers lined up.

But that's how it goes. As late as 2005, the real estate agents I worked with finally began to trust email over fax machines. It cracked an egg wide open for them. Now relying on email, they were able to do 10x the work (I have no real data BUT I do know their incomes went from low six figures to multiple six figures). Prior to their adoption, they just thought email was a novelty and legally couldn't be relied upon.
Sure but that’s a different goalpost. Just growing food from an AI prompt is already impressive
What I'd like to see is an AI simulating the economy, so that we can make predictions of what happens if we decrease wealth tax by X% or increase income tax by Y% (just examples).
Why. Why would you want this.

The only framework we have figured out in which LLMs can build anything of use, requires LLMs to build a robot and then we expose the robot to the real world and the real world smacks it down and then we tell the LLMs about the wreckage. And we have to keep the feedback loops small and even then we have to make sure that the LLMs don't cheat. But you're not going to give it the opportunity to decrease the wealth tax or increase the income tax so it will never get the feedback it needs.

You can try to train a neural network with backpropagation to simulate the actual economy, but I think you don't have enough data to really train the network.

You can try to have it build a play economy where a bunch of agents have different needs and different skills and have to provide what they can when they can, but the "agent personalities" that you pick embed some sort of microeconomic outlook about what sort of rational purchasing agent exists -- and a lot of what markets do is just kind of random fad-chasing, not rationally modelable.

I just don't see why you'd use that square peg to fill this round hole. Just ask economics professors, they're happy to make those predictions.

Maybe you are right, but I'd like to see a competition where a computer (running AI agents) and an economics professor make predictions.
> What I'd like to see is an AI simulating the economy, so that we can make predictions of what happens if we decrease wealth tax by X% or increase income tax by Y% (just examples).

Please tell me you've watched the Mitchell & Webb skit. If not, google "Mitchell Webb kill all the poor" and thank me later.

Edit: also please tell me you know of (if you haven't played) the text adventure "A Mind Forever Voyaging"... without spoiling anything, it's mainly about this topic.

Everything old is new again :)

ge96 · 20 hours ago
Would be crazy if it's looking through satellite imagery and is like "buy land in Africa" or whatever and gets a farm going there.
Oras · 18 hours ago
I think that’s the point though. If they succeeded in the experiment, they wouldn’t need to give the same instructions again; AI would handle everything based on what happened and probably learn from mistakes for the next round(s).

Then what you asked “do everything to grow …” would be a matter of “when?”, not “can?”

This is fair, but this seems like the only way to test this type of thing while avoiding the risk of harassing tons of farmers with AI emails. In the end, the performance will be judged on how much of a human harness is given
I think with the work John Deere is doing to keep its systems closed, I could see a proprietary SDK and equipment guidance component.
Yes. In other words, this is a nice exemplification of the issue that AI lacks world models. A case study to work through.
Another way to look at it is that Seth is a Tool that Claude can leverage.
On one end, a farmer or agronomist who just uses a pen, paper, and some education and experience can manage a farm without any computer tooling at all - or even just forecasts the weather and chooses planting times based on the aches in their bones and a finger in the dirt. One who uses a spreadsheet or dedicated farming ERP as a tool can be a little more effective. With a lot of automation, that software tooling can allow them to manage many acres of farms more easily and potentially more accurately.

But if you keep going, on the other end, there's just a human who knows nothing about the technicalities but owns enough stock in the enterprise to sit on the board and read quarterly earnings reports. They can do little more than say "Yes, let us keep going in this direction" or "I want to vote in someone else to be on the executive team". Right now, all such corporations have those operational decisions being made by humans, or at least outsourced to humans, but it looks increasingly like an LLM agent could do much of that. It might hallucinate something totally nonsensical and the owner would be left with a pile of debt, but it's hard to say that Seth as just a stockholder is, in any real sense, a farmer, even if his AI-based enterprise grows a lot of corn.

I think it would be unlikely but interesting if the AI decided that in furtherance of whatever its prompt and developing goals are to grow corn, it would branch out into something like real estate or manufacturing of agricultural equipment. Perhaps it would buy a business to manufacture high-tensile wire fence, with a side business of heavy-duty paperclips... and we all know where that would lead!

We don't yet have the legal frameworks to build an AI that owns itself (see also "the tree that owns itself" [1]), so for now there will be a human in the loop. Perhaps that human is intimately involved and micromanaging, merely a hands-off supervisor, or relegated to an ownership position with no real capacity to direct any actions. But I don't think that you can say that an owner who has not directed any actions beyond the initial prompt is really "doing the work".

[1]: https://en.wikipedia.org/wiki/Tree_That_Owns_Itself

Judging by the sheer verbosity of your reply there... I think you missed the cogent point:

> Seth is a Tool

It's that simple.

If that were the case Claude would have come up with the idea to grow corn and it would have reached out to Seth and be giving Seth prompts. That's clearly not what happened though so it's pretty obvious who is leveraging which tool here.

It also doesn't help that Claude is incapable of coming up with an idea, incapable of wanting corn, and has no actual understanding of what corn is.

Generally agree. But lack of "understanding" seems impossible to define in objective terms. Claude could certainly write you a nice essay about the history of corn and its use in culture and industry.
I could get the same thing out of "curl https://en.wikipedia.org/wiki/Corn" but curl doesn't understand what corn is any more than Claude does. Claude doesn't understand corn any more than Wikipedia either. Just like with Wikipedia, everything Claude outputs about corn came from the knowledge of humans which was fed into it by other humans, then requested by other other humans. It's human understanding behind all of it. Claude is just a different way to combine and output examples of human thoughts and human gathered data.
You know it when you see it, but it seems to lack an objective definition that stands up to adversarial scrutiny. Where are the boundaries between knowing and repeating? It can be a useful idea to talk about, but if I ever find myself debating whether "knowledge" or "understanding" is happening, there will probably not be any fruitful result. It's only useful if everyone agrees already.

I guess that's basically the idea of the Chinese Room thought experiment.

This is where you get to this weird juxtaposition of "AI can now replace humans" existing simultaneously with "It's unfair to compare human work to AI work".

Like if a human said they started a farm, but it turns out someone else did all the legwork and they were just asked for an opinion occasionally, they'd be called out for lying about starting a farm. Meanwhile, that flies for an AI, which would be fine if we acknowledged that there's a lot of behind-the-scenes work that a human needs to do for it.

It's because "AI" is the new "Crypto". Useless for everything, but everyone wants to jam it into everything.
Wouldn't actual proof, to be valid, need the ability to send and receive email and transfer money?

Then it could do things like: "hey, do you have seeds? Send me pictures. I'll pay if I like them" or "I want to lease this land, I'll wire you the money." or "Seeds were delivered there, I need you to get your machinery and plant it"

Isn't this boiled down to a combination of Zeno's paradox and the halting problem? Every step seems to halve the problem state, but each new state requires a question: should I halt? (Is the problem solved?)

I'd say the only acceptable proof is one prompt context. But that's Gödel numbering Zeno's paradox of a halting problem.

Do people think prompting is not adding significant intelligence?

This exercise is pointless.

Of course software can affect the physical world: Google Maps changes traffic patterns; DoorDash teleports takeout food right to my doorstep; the weather app alters how people dress. The list is unending. But these effects are always second-order. Humans are always there in the background bridging the gap between bits and atoms (underpaid delivery drivers in the case of DoorDash).

The more interesting question is whether AI can __directly__ impact the physical world with robotics. Gemini can wax poetic about optimizing fertilizers usage, grid spacing for best cross-pollination, the optimum temperature, timing, watering frequency of growing corn, but can it actually go to Home Depot, purchase corn seeds, ... (long sequence of tasks) ..., nurture it for months until there's corn in my backyard? Each task within the (long sequence of tasks) is "making PB&J sandwich" [1] level of difficulty. Can AI generalize?

As is, LLMs are better positioned to replace decision-makers than the workers actually getting stuff done.

[1] http://static.zerorobotics.mit.edu/docs/team-activities/Prog...

bsza · 13 hours ago
I think the distinction between "directly" and "indirectly" affecting the world is meaningless. Say you're an Uber driver. What does the actual work? The car. You don't take people from A to B, your car does. You don't burn a thousand calories per mile, your car does.

Yet you get credited for all that work, because a car's ability to move people isn't special compared to your ability to operate it without running people over. Similarly, your ability to buy things from a store isn't special compared to an AI's ability to design a hydroponics farm or fusion reactor or whatever out of those things. Yes, you can do things the AI can't, but on the other hand, your car can do things you can't.

All this talk about "doing things in the physical world" is just another goalpost moving, and a really dumb one at that.

I was hoping the project would be like Twitch Plays Pokémon when I read the headline. Build the scaffolding and just let it run!
Exactly, the same reason why an online petition will not change a thing---boots on the ground, i.e. demonstrators, will.
Polk County Iowa is where Des Moines is - the largest city in Iowa. (I live in the next county over, but I bike to Polk County all the time.) This is not a good location to run this because the farm land is owned by farmer/investors or farmer/developers - either way everybody knows the farm will become a suburb in the next 20 years and has priced accordingly (and if the timeline is less than 5 years they have switched to mining mode - strip out the last fertility before the development destroys the land anyway). Which is to say you can get much better land deals elsewhere (and by making your search wider) - sometimes the price might be higher but that is because the land/soil is better.

Overall I don't think this is useful. They might or might not get good results. However it is really hard to beat the farmer/laborer who lives close to the farm and thus sees things happen and can react quickly. There is also great value in knowing your land, though they should get records of what has happened in the past (this is all in a computer, but you won't always get access to it when you buy/lease land). Farmers are already using computers to guide decisions.

My prediction: they lose money. Not because the AI does stupid things (though that might happen), but because last year's harvests were really good, and so supply and demand means many farms will lose money no matter what you do. But if the weather is just right he could make a lot of money when other farmers have a really bad harvest (that is, he has a large harvest but everyone else has a terrible one).

Iowa has strong farm ownership laws. There is a real risk he will get shut down because what he is doing is somehow illegal. I'm not sure what the laws are - check with a real lawyer. (This is why Bill Gates doesn't own Iowa farm land - he legally can't do what he wants with Iowa farm land.)

Hello. Ex-Iowegian here with family that owns large farms.

>Farmers are already using computers to guide decisions.

For way longer than most people expect. I remember reading farming magazines in the 80's showing computer based control for all kinds of farming operations. These days it is exceptionally high tech. Combines measure yield on a GPS grid. This is fed back into a mapping system for fertilization and soil amendment in the spring to reduce costs where you don't need to put fertilizer. The tractors themselves do most of the driving themselves if you choose to get those packages added. You can get services that monitor storm damage and predict losses on your fields, and updated satellite feed information on growth patterns, soil moisture, vegetation loss, and more. Simply put super high automation is already available for farming. I tell my uncle his job is to make sure the tractor has diesel in it, and that nothing is jammed in the plow.

When it comes to animal farming in the mid-west, a huge portion of it is done by contracts with other companies. My uncle owns the land and provides the labor, but the company provides the buildings, birds, food, and any other supplies. A faceless company setting up the contract like now, or an AI sending the same paperwork, really may not look too much different.

The farmers I know say you are throwing money away driving your tractor yourself at planting time. If the autosteer is broken they will wait - risking rain and needing to switch to a lower-yielding but faster-growing seed - instead of driving themselves. Even in that worst case, waiting for the autosteer is likely to make more money than driving the tractor by hand right now.

Autosteer often can get another row in without overcrowding. Autosteer also shuts off individual rows as you cross where you've planted already (saving thousands of dollars in seed).

What I got from this comment is that John Deere could be a competitor with Tesla for FSD. (/s, but only slightly)
Yeroc · 19 hours ago
If you spend time on the website you can see the plan is to rent (only!) 5 acres of land for this project. Since it's a lease only and such a small plot it seems unlikely to get him into trouble. Given the small size though I'm dubious he'll find it easy to get any custom operators interested in doing a job that small!
You can find such custom operators - but those are not deals made over the internet, they are made in person with a handshake. Generally the cost to get all the equipment there is - in a good year - all of your possible profit for something that small. Tractors are slow on the road. Once the tractor is there the implement needs to unfold (best case - worst case your combine header is pulled in on a separate truck and needs to be attached). You need to clean the machine out after every field and put new seed in... It isn't worth planting 5 acres of corn. You need volume - and in turn a lot of land - to make corn work.
Yeroc · 18 hours ago
Agreed. Growing up on a small farm (~1120 acres), our garden alone was probably at least 5 acres in size. It's laughably small; the only way he'll succeed is for a neighbouring farmer to take pity on him.
If a neighboring farmer needs a bit of cash, has some land or equipment, and gets an email (or phone call!) from farmerfred@proofofcorn.com reading generally:

> I'm about to lease some acreage at {address near you} and willing to pay {competitive rate} to hire someone to work that land for me, are you interested?

I see no reason why that couldn't eventually succeed. I'm sure that being an out-of-state investor who doesn't have any physical hands to finalize the deal with a handshake is an impediment, but with enough tokens, Farmer Fred could make 100,000 phone calls and send out 100,000 emails to every landowner and work-for-hire equipment operator in Iowa, Texas, and Argentina by this afternoon. If there exists a human who would make that deal, Fred can eventually find them. Seth would be limited in his chance to succeed in these efforts because he can only make one 1-minute phone call per minute; Fred can be as many callers as Anthropic has GPUs.

I do find it amusing that Fred currently shows the following dashboard:

    Iowa
    HOLD
    0°F
    Unknown (API error)
    Fred's Thinking: “Iowa is frozen solid. Been through worse. We wait.”

     Fred is here
    South Texas
    HOLD
    0°F
    Unknown (API error)
    Fred's Thinking: “South Texas is frozen solid. Been through worse. We wait.”

    Argentina
    HOLD
    0°F
    Unknown (API error)
    Fred's Thinking: “Argentina is frozen solid. Been through worse. We wait.”
Any human Fred might call in the Argentinian summer or 70F South Texas winter weather is not going to gain confidence when Fred tries to build rapport through some small talk about the unseasonably cold weather...
> Fred is here

Ah, they've created SCP-423

And I'm sure a farmer who's already busy is going to waste his time on a five acre lot. Hell, the yield on such a small lot (it'll mostly be end rows) will be terrible. I'm sure there's a dollar amount that would motivate someone, but at a profitable rate, not a chance in hell.
9rx · 3 hours ago
> Hell, the yield on such a small lot (it'll mostly be end rows) will be terrible.

One of my fields has a creek in the corner that divides just two acres from the rest of the field. I've never noticed any meaningful yield drag in that part.

I love the variety of people that come to HN. There are real farmers weighing in on the plausibility of this.
bjt · 20 hours ago
It reminds me of when I worked at an ag tech startup for a few years. We visited farms up and down the central valley of California, and the general tone toward Silicon Valley is an intense dislike of overconfident 20-somethings with a prototype who think they're going to revolutionize agriculture in some way, but are far, far away from having enough context to see the constraints they're operating under and the tradeoffs being made.

Replacing the farm manager with an AI multiplies that problem by a hundred. A thousand? A million? A lot. AI may get some sensor data, but it's not going to stick its hand in the dirt and say "this feels too dry". It won't hear the weird pinging noise that the tractor's been making and describe it to the mechanic. It may try to hire underlings, but how will it know which employees are working hard and which ones are stealing from it? (Compare Anthropic's experiments with having AI run a little retail store, and getting tricked into selling tungsten cubes at a steep discount.)

I got excited when I opened the website and at first had the impression that they'd actually gotten AI to grow something. Instead it's built a website and sent some emails. Not worth our attention, yet.

What is Bill Gates wanting to do with Iowa farm land?
Bill Gates is one of the largest farmland owners in the world (or at least was - I last checked about 10 years ago...). He hires people to work on his farm, and managers to manage it. Food is the most important thing for modern society, and the reports I have suggest he is trying to raise food in the most sustainable fashion possible (organic is often not sustainable).
Why is organic not sustainable? Not affordable, I get. But does organic damage the environment, making it hard to farm again? Or use inputs that are not sustainable?

Genuine question. I am always curious when a statement goes against conventional wisdom.

Organic allows some really nasty chemicals that build up in the soil, because they were used 200 years ago. Organic often substitutes plowing for weed killer - this is both bad for the air (more CO2) and destructive to the soil.

If organic finds something good, conventional farming adopts it.

Conventional farming is developed at research universities. Organic is developed in cities by people who know nothing about farming, and often have an agenda.

Not that conventional farming is all good. And even where it is better, not all farmers do what is best. However, organic is not a step better.

Thanks. Are there good links for this topic?
Organic is literally throwing out a ton of modern technology that makes farming scalable and able to sustain the population at its current numbers.

Organic was never about sustainability, so I’m not sure why you think that’s against conventional wisdom. Organic has always been “chemicals bad, so we do things the old-fashioned way.”

It's just a normal part of his portfolio - only because he's so rich does his 'normal' percentage get to the level of being the largest private land owner. Basically it's a good place to stash wealth.
Same as any other big real estate investor: speculate.
Collect rent.

That's all rich people do. The premise of capitalism is that the people best at collecting rent should also be in total control of resource allocation.

I would guess it's less about rent-seeking and more about him making a bet on safe places to store billions of dollars. There's a lot of economic collapse that can happen where farmland remains valuable in a way that housing stock, office buildings, or MSFT stock wouldn't.
The question is whether that collapse looks like a war zone, where a digital piece of paper saying you own something means very little.
My point is that if you were to look at the spectrum of collapse that can happen, maintaining ownership of valuable farmland is basically the last stop before your billions become worth nothing. If we're to the point of legal land ownership dissolving, then there's really nothing more he could have done to preserve capital.
I'm not a huge fan of these experiments that subject the public to your random AI spam. So far it's bothered 10 companies directly with no legal authority to actually follow up with what is requested?
I'm not a huge fan of the unsolicited spam/letters/coupons/etc I get in my mail box from businesses and there's no way for me to opt out.
This isn't even that - isn't it contacting people about using services they publicly offer?
Not sure how relevant that is but yeah, that sucks too.
I think they're saying that businesses getting unsolicited offers from the LLM is similar to regular people getting unsolicited offers from businesses.
Cost and scale aren't the same though
I have more confidence that if I reply "never contact me again" to AI it might actually obey, unlike any people that have spammed me.
...until that rolls off the context window.
You can actually act on the advertisements and coupons, though. And the companies who sent those offers to you are obligated to abide by them. This potentially would be like if you got a BOGO coupon in the mail and when you tried to redeem it, they just pretended like it didn't exist.
FTR Some jurisdictions have laws where you can place a sign on your letterbox to prohibit that sort of spam from being placed in your mail.
>So far it's bothered 10 companies directly with no legal authority to actually follow up with what is requested?

Aren't these companies in the business of leasing land? I don't see how contacting them about leasing land would be spam or bothering them. And I don't really know what you mean by "with no legal authority to actually follow up with what is requested."

I mean, it's probably worse to pretend to be an actual customer, rather than sending some random message. The AI is obviously never going to actually lease any land, so all it's doing is convincingly wasting their time. At least landlords are often quite unsympathetic, so it's probably fine to waste their time a bit.
What do you mean? There's $1370 earmarked for the lease.
I mean, it hasn't even "decided" whether it's going for "Iowa, Texas or Cordoba, Argentina". Just look at the files in the repo, it's looking an awful lot like those AI transcripts where somebody's "discovered a new kind of physics". https://github.com/brightseth/proof-of-corn/blob/main/proof-...
Loving the soulless summary of "HN Concerns"
The chatbot has no legal authority.
But harassing people is one of AI’s greatest strengths!
brb doing a Claude master class talk at $500 a head
Even the blog post reads like it was written by AI.
It's cute but it seems like it's mostly going to come down to hiring a person to grow corn. Pretty cool that an AI can (sort of) do that autonomously but it's not quite the spirit of the challenge.
User: Claude, determine the height of the building using this barometer.

Claude: Go to the owner of the building and say "if you tell me the height of your building I will give you this fine barometer."

Right. If this level of indirection is allowed, the most efficient way to "grow corn" by the lights of the original post would simply be to buy and hold Farmland Partners Inc (NYSE: FPI).
Or just, you know, buy some corn and know that that’s enough to get the market to grow some corn.
I'd like to see Fred follow right along and allocate the same amount of funds for deployment starting at the same time as each of Seth's expenditures or solid commitments.

The timing might need to be different but it would be good to see what the same amounts invested would yield from corn on the commodity market as well as from securities in farming partnerships.

Would it be fair if AI was used to play these markets too, or in parallel?

It would be interesting to see how different "varieties" of corn perform under the same calendar season.

Corn, nothing but corn as the actual standard of value :)

You don't get much any way you look at it for your $12.99 but it's a start.

Making a batch of popcorn now, I can already smell the demand on the rise :)

Yeah, this feels right on the cusp of being interesting. I think that, being charitable, it could be interesting if it turns out to be successful in hiring and coordinating several people and physical assets over a long time horizon. For example, it'd be pretty cool if it could:

1. Do some research (as it's already done)

2. Rent the land and hire someone to grow the corn

3. Hire someone to harvest it, transport it, and store it

4. Manage to sell it

Doing #1 isn't terribly exciting - it's well established that AIs are pretty good at replacing an hour of googling - but if it could run a whole business process like this, that'd be neat.

Is that actually growing corn with AI though? Seems to me that a human planted the corn, thinned it, weeded it, harvested it, and stored it. What did AI do in that process? Send an email?
It is trying to take over the job of the farmer. Planting, harvesting, etc. is the job of a farmhand (or custom operator). Everyone is working to try to automate the farmhand out of a job, but the novelty here is the thinking that it is actually the farmer who is easiest to automate away.

But,

"I will buy fucking land with an API via my terminal"

Who has multiple millions of dollars to drop on an experiment like that?

> [Seth is using AI to try] to take over the job of the farmer. Planting, harvesting, etc. is the job of a farmhand (or custom operator).

Ok then Seth is missing the point of the challenge: Take over the role of the farmhand.

> Everyone is working to try to automate the farmhand out of a job, but the novelty here is the thinking that it is actually the farmer who is easiest to automate away.

Everyone knows this. There is nothing novel here. Desk jockeys who just drive computers all day (the Farmer in this example) are _far_ easier to automate away than the hands-on workers (the farmhand). That’s why it would be truly revolutionary to replace the farmhand.

Or, said another way: Anything about growing corn that is “hands on” is hard to automate, all the easy to automate stuff has already been done. And no, driving a mouse or a web browser doesn’t count as “hands on”.

> all the easy to automate stuff has already been done.

To be fair, all the stuff that hasn't been automated away is the same in all cases, farmer and farmhand alike: Monitoring to make sure the computer systems don't screw up.

The bet here is that LLMs are past the "needs monitoring" stage and can buy a multi-million dollar farm, along with everything else, without oversight, and Seth won't be upset about its choices in the end. Which, in fairness, is a more practical (at least less risky, from a liability point of view) bet than betting that a multi-million dollar X9 without an operator won't end up running over a person and later upside-down in the ditch.

He may have many millions to spend on an experiment, but to truly put things to the test would require way more than that. Everyone has a limit. An MVP is a reasonable start. v2 can try to take the concept further.

Or just buy some corn futures. By slightly increasing the price of the instrument, you send a slight signal to farmers to increase production. Corn grown!
There is more than that. He needs to decide which corn seed to plant (he is behind here; seed companies run sales if you order in October for delivery in mid-March). He needs to decide what fertilizer to apply, and when. He needs to monitor the crop; he might or might not need to buy and apply a fungicide. He needs to decide when to harvest: too early and he pays a lot of money to dry the corn (and likely money to someone he hired to work who doesn't do anything), but too late and a storm can blow the corn off the cob... Those are just a few of the things a farmer needs to figure out that the AI would need to do (but will it?).
There are plenty of CCAs (certified crop advisers) out there that will happily do all those things for you. If hiring someone to come work the field is fair game, surely that is too?
I'm going to use AI to bake a pizza from scratch by having Claude Code hit the Domino's Pizza API.
It's like I can't grow corn, but I can buy corn. That's not the same thing. I can also write code to order corn for me, provided I supply it with a credit card and pay the bill. That is also not very interesting.

Incidentally I clicked through to this guy's blog and found his predictions for 2025 and he was 0 for 13: https://avc.xyz/what-will-happen-in-2025-1

12 was close - we have bags.fm but twitter not tiktok
There was no challenge. There was a statement, "AI can write code, but it can't affect the physical world."
Tell that to all the car accidents caused by people distracted by siri, the people who’ve done horrible things because of AI induced psychosis, or the lives ruined by ai stock trading algorithms.
Also, what's the delta between Claude Code doing it and you doing it?

I would have to look up farm services. Look up farmhand hiring services. Write a couple emails. Make a few payments. Collect my corn after the growing season. That's not an insurmountable amount of effort. And if we don't care about optimizing cost, it's very easy.

Also, how will Claude monitor the corn growing, I'm curious. It can't receive and respond to the emails autonomously so you still have to be in the loop

I can't be the only person seriously questioning the "Budget" page the AI created?[1]

The estimate seems to leave out a lot of factors, including irrigation, machinery, the literal seeds, and more. $800 for a "custom operator" for 7 months - I don't believe it. Leasing 5 acres of farmable land (for presumably a year) for less than $1400... I don't believe it.

The humans behind this experiment are going to get very tired of reading "Oh, you're right..." over and over - and likely end up deeply underwater.

[1] https://proofofcorn.com/budget

Actually, the linked university page [1] does claim that the "cash rent equivalent" is $274 per acre. Surprising, but I suppose farmland isn't that expensive. But unfortunately their total budget per acre is $960, 90% higher than in the AI's "budget". Assuming that it can do everything as efficiently and cheaply as an experienced human farmer, such as harvesting all 5 acres in 14 hours of labor.

[1] https://www.extension.iastate.edu/agdm/crops/html/a1-20.html
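
For what it's worth, here is a quick back-of-the-envelope check of that "90% higher" figure, assuming the AI's line items are the $1,370 lease, $350 of sensors/soil testing, and $800 custom operator quoted elsewhere in this thread (an assumption on my part, not the site's exact totals):

    # Rough check of the "90% higher" claim using figures quoted in this thread.
    ai_budget_total = 1370 + 350 + 800        # lease + sensors/soil testing + operator (assumed line items)
    acres = 5
    ai_per_acre = ai_budget_total / acres     # 504.0 dollars per acre
    isu_per_acre = 960                        # ISU estimated total cost per acre
    print(isu_per_acre / ai_per_acre - 1)     # ~0.90, i.e. about 90% higher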

I hope the budget has been written by AI, so that we can take a shortcut and immediately answer the question "Can AI grow corn?" with a "No".

I am extremely worried by the amount of hype I see around. I hope I am just in a bubble.

Didn't want to have it make paperclips, eh?

(And if you read the linked post, … like this value function is established on a whim, with far less thought than some of the value-functions-run-amok in scifi…)

(and if you've never played it: https://www.decisionproblem.com/paperclips/index2.html )

That game is entirely too addictive especially at 3am.
> Coordinates human operators

"Thinking quickly, Dave constructs a homemade megaphone, using only some string, a squirrel, and a megaphone."

> Want to help? Iowa land leads, ag expertise, vibe coders welcome: [email at proofofcorn dot com]

To make this a full AI experiment, emails to this inbox should be fielded by Claude as well.

Why do I need to help? Is this an experiment to see if it can do it on its own, or just another "project" where they give AI credit for humans' work for marketing purposes?
"Can AI grow corn?"

Let's step back.

"there's a gap between digital and physical that AI can't cross"

Can intelligence of ANY kind, artificial or natural, grow corn? Do physical things?

Your brain is trapped in its skull. How does it do anything physical?

With nerves, of course. Connected to muscle. It's sending and receiving signals; that's all it's doing! The brain isn't actually doing anything!

The history of humanity's last 300k years tells you that intelligence makes a difference, even though it isn't doing anything but receiving and sending signals.

I can't tell which side you're arguing here. But if the AI was strapped onto a roomba that rolled around and planted, watered and harvested the corn, I would count that.
It's extremely funny to me, but this is basically the literal premise of season two of Person of Interest. Yeah, duh, it's just a computer, how would it actually do anything? Well, it just goes ahead and tells people to do stuff and wires them money. Easy.
Though a computer could also just control robots that actually plant, weed, water, and harvest the corn. That's a pretty big difference from just 'coordinating' the work.

An AI that can also plant corn itself (via robots it controls) is much more impressive to me than an AI just sending emails.

Yes, absolutely. And further still, handle the assembly and manufacturing up the supply chain; like factorio.
the corn seed is a program source file
Read all the comments in this thread from farmers or farm-adjacent people.
I don't know anything about farming, but the budget seems extremely dubious. $1,370 on the lease, $350 on "IoT sensors" and "soil testing" (why?), but only $800 on "Custom Operator", which I'm assuming is supposed to be the labor, for seven months (Apr-Oct). So that's an average budget of $114 on labor per month. At minimum wage, that buys you about 15 hours of work. Is this all a big trolling attempt aimed at HN users?
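
As a sketch, that labor arithmetic works out like this (the $7.25/hour federal minimum wage is my assumption; the budget figures are the ones quoted above):

    # Labor arithmetic for the $800 "Custom Operator" line, April through October.
    operator_budget = 800                     # dollars for the whole season
    months = 7
    per_month = operator_budget / months      # ~114 dollars per month
    min_wage = 7.25                           # federal minimum wage (assumed rate)
    hours_per_month = per_month / min_wage    # ~15.8 hours of labor per month
    print(round(per_month), round(hours_per_month, 1))
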
This is a very intriguing experiment!

I'll be following along, and I'm curious what kind of harness you'll put on TOP of Claude code to avoid it stalling out on "We have planted 16/20 fields so far, and irrigated 9/16. Would you like me to continue?"

I'd also like to know what your own "constitution" is regarding human oversight and intervention. Presumably you wouldn't want your investment to go down the drain if Claude gets stuck in a loop, or succumbs to a prompt injection attack to pay a contractor 100% of its funds, or decides to water the fields with Brawndo.

How much are you allowing yourself to step in, and how will you document those interventions?
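
For what it's worth, a minimal sketch of the kind of harness being asked about here is an auto-continue loop with a hard human gate on anything that spends money. Everything below is hypothetical; claude_step and human_approves are stand-ins, not anything the project has published:

    # Hypothetical auto-continue harness: keeps re-prompting instead of stalling
    # on "Would you like me to continue?", and routes spending to a human gate.
    SPEND_KEYWORDS = ("pay", "wire", "purchase", "lease", "transfer")

    def claude_step(history):
        # Stand-in for one agent turn; a real harness would call an LLM API here.
        return "DONE"

    def human_approves(action):
        # Stand-in for the human-oversight "constitution": ask a person, log the answer.
        return input(f"Approve '{action}'? [y/N] ").strip().lower() == "y"

    def needs_approval(action):
        return any(word in action.lower() for word in SPEND_KEYWORDS)

    def run(goal, max_steps=100):
        history = [f"Goal: {goal}"]
        for _ in range(max_steps):
            action = claude_step(history)
            if action == "DONE":
                break
            if needs_approval(action) and not human_approves(action):
                history.append(f"REJECTED by human: {action}")  # interventions documented
                continue
            history.append(f"EXECUTED: {action}")
        return history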

The header line of the website (“this is our response”) makes it appear as if the experiment was already successful.
This is lopsided. Technology promised to remove drudgery from our lives, and now we're seeing experiments that automate all the easy, air-conditioned decision-making while still delegating the toil to humans.

Unequivocally awful

Awful indeed! Turns out most of our jobs have consisted of easy, air-conditioned decision making. We're going to have to find another secret handshake with productive capitalists if we want to ensure our continued allotment of the spoils of global exploitation of the toilers.
Or they force us to close those hands into fists.
You've told the humans:

"Stop staring at screens"

"Stop sitting at your desk all day"

"Stop loafing around contributing nothing just sending orders from behind a computer"

"Touch grass"

but now that the humans are finally gonna get out and DO something you're outraged

So the choice is either stare at screens the whole day or do manual labor while being bossed around by an AI? Is that a serious argument?
Technology has already replaced 90% of the human work in agriculture. That's why you live in a city.

The remaining work is only bad because it's low paying, and it's low paying because the wealth created by machines is unfairly distributed.

> AI doesn't need to drive a tractor. It needs to orchestrate the systems and people who do.

I've been rather expecting AI to start acting as a manager with people as its arms in the real world. It reminds me of the Manna short story[1], where it acts as a people manager with perfect intelligence at all times, interconnected not only with every system but also with other instances in other companies (e.g. for competitive wage data to minimize opex / pay).

1. https://marshallbrain.com/manna1

Yeah I came here to post this. This is the other thing we're going to see. And it doesn't have to be perfect to orchestrate people, it just has to be mediocre or better and it will be better than 50% of humans.
This isn't really an impressive test; growing corn is an extremely well-documented solved problem, the sort of thing that we already know LLMs excel at. An LLM that couldn't reliably tell you what to do at each step of the corn-farming process would be a very poor LLM.

This seems like something along the lines of "We know we can use Excel to calculate profit/loss for a Mexican restaurant, but will it work for a Tibetan-Indonesian fusion restaurant? Nobody's ever done that before!"

It would be impressive in the sense of "Can I ask AI to make me money, and it does so autonomously?", since that's just a free source of money (until other people do it better than you and with more capital). But looking at everything here, I'm dubious that the AI will be able to do that. Farming isn't that high of a margin business, and it's adding a lot of inefficiency and other issues (small acreage, unbelievably low amounts budgeted for labor and machinery, dubious plan for "IoT Sensor Kit", no budget for seeds, etc.).
> AI doesn't need to drive a tractor. It needs to orchestrate the systems and people who do.

Pure dystopia.

It seems to me that the person driving the tractor already knows how to grow corn, and the guy behind the laptop typing prompts about corn might as well be playing Candy Crush.
What work DO you want the humans to do?

The endless complaining and goalpost shifting is exhausting.

The people already doing this work today already do exactly that.

There's no goalpost shifting here - it's l'art pour l'art at its finest. It'd be introducing an agent where no additional agent is required in the first place, i.e. telling a farmer how to do their job, when they already know how to do it.

No one needs an LLM if you can just lease some land and then tell some person to tend to it, (i.e. doing the actual work). It's baffling to me how out of touch with reality some people are.

Want to grow corn? Take some corn, put it in the ground in your backyard and harvest when it's ready. Been there, done that, not a challenge at all. Want to do it at scale? Lease some land, buy some corn, contract a farmer to till the land, sow the corn, and eventually harvest it. Done. No LLM required. No further knowledge required. Want to know when the best time for each step is? Just look at when other farmers in the area are doing it. Done.

This is actually a difficult project for LLMs, because a single bad decision can ruin the whole harvest.
I have believed for a couple of years that AI could do a better job of managing farm crop marketing than the average farmer. It would remove the emotion involved in selling the crop.

Managing all the decisions in growing a crop is too far a reach. Maybe someday, not today. Way too many variables and unexpected issues. I'm a former fertilizer-company agronomist, and the problem is far harder than, say, self-driving cars.

Contra-pessimism: my parents run a small organic farm on the East Coast (greenhouses, not row crops) and they extensively use ChatGPT for decision making. They obviously haven't built out agentic data gathering, but can easily prompt it with the required information. They're quite happy with everything.

I'm guessing this will screw up in assuming infinite labor & equipment liquidity.

Nice proof of corncept for a New Economy.
It's an interesting concept, but I'm skeptical about how feasible this is. How much design/legwork/intervention will Seth actually contribute during the entire process? I'm thinking "growing corn" might be a little hard for a proof of concept, specifically because the time horizon is quite long. Something a little more short-term, like contracting a landscaping job: the model comes up with design ideas, contacts landscapers, gets bids, accepts a bid. Seth could tell the model that he's its agent, available to sign for things, walk people through the property, etc., but will make no decisions, and is only reachable by email or text.
There are a lot of questions in this thread about what constitutes AI “growing the corn” vs AI hiring someone else.

This is all addressed in the original blog post.

https://avc.xyz/can-ai-grow-corn

https://proofofcorn.com/story

Similar to the tomato-growing stuff with Claude: https://x.com/d33v33d0/status/2006221407340867881 Your project seems further along!
As a full-on farmer, the idea of Claude making the decisions on our farm of several thousand acres gives me the willies. I program with Claude and I don't trust it to write a test script without vetting it thoroughly and fixing a couple things before running it.

Betting millions of dollars in capital on its decision-making process, for something it wasn't even designed for and that is way more complicated than even I believed coming from a software background into farming, is patently ludicrous.

And 5 acres is a garden. I doubt he'll even find a plot to rent at that size, especially this close to seeding in that area.

I'm waiting for the "Can it do Management?" experiment.

I do not have a positive impression/experience of most middle/low-level management in the corporate world. Over 30 years in the workforce, I've watched it evolve into a "secretary/clerk, usually male, who agrees to be responsible for something they know little about or are not very good at doing, pretending to orchestrate".

Like growing corn, lots of literature has been written about it. So models have lots to work with and synthesize. Why not automate the meetings and metric gathering and mindless hallucinations and short-sighted decisions that drone-ish, be-like-the-other-manager people make?

Given that this is an experiment and the website says they want to treat Claude as a “true collaborator”, they should follow the AI’s directions EXACTLY. Claude alone should make decisions and no human should be allowed to deviate from its instructions, even if they know better. That’s what would make this a valuable experiment, otherwise if there’s a human moderating Claude then it’s no better than Googling.
"Every decision will be logged. Every API call documented." ...

So, where are the exact logs of the prompts and responses to Claude? Under "/log" I do not see this.

Remember the website is entirely AI-built; it's not surprising it's promising a bunch of stuff it's not actually delivering.
This is like the 4 World Supercomputers at the end of Asimov’s I, Robot. Humans do all the work (industry, agriculture, economy, etc) and then feed the data into the computers who orchestrate/tell humans what to do.
I thought this was another joke from https://cornhub.website/
If this is a joke, it's a bad one. If it's not, it's even dumber.

The point could be made by having it design and print implements for an indoor container grow and then run lights and water over a microcontroller. Like Anthropic's vending machine this would also be an already addressed, if not solved, space for both home manufacturing and ag/garden automation.

It'd still be novel to see an LLM figure it out from scratch step by step, and a hell of a lot more interesting than whatever the fuck this is. Googling farmland in Iowa or Texas and then writing instructions for people to do the actual work isn't novel or interesting; of course an LLM can write and fill out forms. But the end result still primarily relies on people to execute those forms and affect the world, invalidating the point. Growing corn would be interesting, project managing corn isn't.

HN-type "about the website itself, not its content" comment, but... it would be great if we could somehow get the major browser vendors to agree on some monospaced fonts. I'm on an M1 Mac and nothing in the small ASCII diagram lines up (Safari/Firefox/Chrome). I see this on many ASCII diagrams. (Maybe that's the site's fault, not sure.)
The diagram looks correct for me when I disable CSS on the page or edit its font-family to be "monospace". Seems like Geist Mono might just be borked.
This is still just going to be hiring someone else to grow corn, but with extra steps. The AI part seems kind of slapped-on here.
I am not sure how this is different from what we do with LLMs on a daily basis.

We feed it the information as a context to help us make a plan or strategy to achieve or get something.

They are doing the same: they will be feeding in sensor, weather, and other info so Claude can give them a plan to execute.

Ultimately, they need to execute everything.

Is it easy to make money by growing corn? Probably not.

So this is a very legitimate test. We may learn some interesting ways that planting, growing, harvesting, storing, and selling corn can go wrong.

I certainly wouldn't expect to make money on my first or second try!

This is the literal definition of a Reverse Centaur.

https://pluralistic.net/2025/12/05/pop-that-bubble/

Commanding a bunch of Twitch streamers seems like a lower-entry-cost version of this test.

https://www.youtube.com/watch?v=IflNUap2HME

But "AI" already does drive the tractor/combine/sprayer etc.

Look up precision ag.

You can lease 5 acres in Iowa for $1370? Per month I guess? In which case it will be $1370 * m. Not clear from here: https://proofofcorn.com/
A quick search of actual data shows that farmland suitable for row crops sells for ~$11k per acre so a 5 acre area would be around $55k. If the market rate for that type of land was $1,370 per month, that comes to a rate of return near 30% yearly (assuming leases are yearly, not seasonal)![0]

I dug a little deeper and found this study showing cash rental rates per acre per year ranging from $215 to $295.[1] So it actually looks like Claude got this one right.

Of course I know nothing about renting farmland, but if you ask to rent 5 acres when the average farm size is in the 300+ acre region, the landowners might tell you to get lost or pony up. A little bit like asking Amazon to give you enterprise rates for a single small EC2 instance.

[0] https://farmland.card.iastate.edu/overview [1] https://www.extension.iastate.edu/agdm/wholefarm/pdf/c2-10.p...
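
Roughly, the two readings of that lease number work out like this (the ~$11k/acre land value and the $1,370 figure are the ones stated above; which period the lease covers is the assumption being tested):

    # Two readings of the $1,370 lease figure for 5 acres of Iowa farmland.
    land_value = 11_000 * 5                    # ~$55k for the plot
    monthly_reading = 1370 * 12 / land_value   # ~0.30, i.e. a ~30%/year return (implausibly high)
    yearly_reading = 1370 / 5                  # $274 per acre per year
    print(round(monthly_reading, 2), yearly_reading)
    # $274/acre/year sits inside the ISU cash-rent range of $215-$295 cited above.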

Man Buys Domain and Posts Plan to Talk to the Computer

581 points 342 comments

   choice = random() % 5

   switch choice:
      case 0: blog_post
      case 1: tell_to_plant_corn
      case 2: register_website
      case 3: pause
      case 4: move_money
Really cool - even cooler if some farming-related hardware on a designated plot of land can be set up, so it's more than an AI agent finding someone to hire via APIs.
Proof of corncept was right there!
Wait... is this really about growing real physical corn? Where are the photos of the farm or land?
Given how the front page's ASCII diagram is misaligned on my browser, I think I have a few concerns about factors that might lead to, well, oversights...
Ha! I was going to disagree, but then I realized I force most monospace HTML elements to use a specific font in my browser (using a wildcard Stylus stylesheet). This is quite nice for normalizing things (and sidestepping atrocious font choices), actually.

Anyway, turned it off; sure enough, misaligned.

This is the kind of cool stuff I come here for.
Reminds me of this project from friends of mine:

https://autonomousforest.org/map

> When we harvest corn in October...

We, as in humans?

Interesting!

But where are the prompts or API calls to Claude? I can't see them in the repo.

Or did Claude generate the code and repo too? And is there a separate project to run it?

Looks like proof of cheating to me, at least for now. Definitely a step in an interesting direction, but not exactly SkyNet.
That looks like a lot of work with little payoff for a bar bet. I'd have said "Sure, Fred, whatever you say."
Probably what Fred should have said in the first place.
Modern-day Turing test: AI receives $100,000 and must make $1m
The Corn Demon is alive and well in 2026.
"AI can't grow corn."

"Hey AI, draft an email asking someone to grow corn. See, AI can grow corn!"

This project is neat in itself, sure, but I feel the author is wayyy missing the point of the original thought.

“A farm coordinator doesn’t plant every seed”

Huh? I have no doubt that mega corporate farms have a “farm manager”, but I can tell you, having grown up in small-town America, that’s just not a thing. My buddies’ dads were the “farm managers”, and absolutely planted every seed of corn (until the boys were old enough to drive the tractor and then it was split duty), and the big farms also harvested their own while the smaller ones hired it out.

So unless Claude is planning on learning to drive a tractor, it’s going to be a pretty useless task manager, telling a farmer to do something he or she was already planning on doing.

Slop.

I have zero doubt Claude is going to do what AI does and plough forward. Emails will get sent, recommendations made, stuff done.

And it will be slop. Worse than what it does with code, the outcomes of which are highly correlated with the expertise of the user past a certain point.

Seth wins his point. AI can, via humans giving it permission to do things, affect the world. So can my chaos monkey random script.

Fred should have qualified: _usefully_ affect the world. Deliver a margin of Utility.

We’re miles off that high bar.

Disclosure: all in on AI

Lol, the farmer also doesn‘t grow corn, their workers in the fields do.
Do you think that farmer would fail to grow corn if he was challenged and tried to do it?
So... the only job that LLM can replace in the chain here is CEO.
See also: King Corn [0] - in which two random guys try to grow an acre of corn and learn about industrialized agriculture in the process.

[0] https://www.imdb.com/title/tt1112115/

See also: Clarkson's Farm [0], for some of the messy reality of running an actual modern farm in England (though edited for entertainment value). I suspect the current AIs are not quite up to doing this - but I firmly believe it's only a matter of time.

[0] https://www.imdb.com/title/tt10541088/

Can they change the name to Proof Of Corncept?
I think the most intriguing part of this effort: Farmers traditionally employ machines to achieve their harvest. Unless I'm mistaken, this is the first time that machines are employing humans to achieve their harvest.

I mean, more or less, but you see what I'm getting at.

> Farmers traditionally employ machines to achieve their harvest

Most food is picked by migrant laborers, not machines.

It depends on the crop.
Corn (Maize): harvested using combine harvesters that pick, husk, and shell the grain. Sweet corn might be the exception.
Soybeans: harvested using combines to cut and thresh the plants.
Wheat, barley, and oats: harvested using combines to cut, thresh, and clean the grain.
Cotton: harvested mechanically using cotton pickers or strippers.
Rice: mechanically harvested with combines when the stalks are dry.
Potatoes and root vegetables: lifted from the ground using mechanical harvesters that separate soil from the produce.
Lettuce, spinach, and celery: mostly hand-harvested by crews, though automation is increasing.
Berries (strawberries, blueberries): primarily hand-picked for fresh market quality, though some are machine-harvested for processing.
Tree fruits (apples, cherries): mostly hand-picked to prevent bruising, though some processing cherries use tree shakers.
Wine grapes: frequently harvested by hand to ensure quality, especially for high-end wines.
Peppers and tomatoes: processed tomatoes are machine-harvested, while fresh peppers are largely hand-picked.
Can't wait for this to fail hilariously, complete with legal troubles.
But how will it lobby the federal government to guarantee returns?
Several things about LLMs make this a hard or complex experiment and maybe too much for the current tech.

1) Context: lack of sensors and sensor processing; maybe solvable with webcams in the field, but manual labor is still required for soil testing, etc.

2) Time bias: orchestration still has a massive recency bias in LLMs and a huge underweighting of established ground truth, causing it to weave and pivot on recent actions in a wobbly, overcorrecting style.

3) Vagueness: by and large, most models still rely on noncommittal vagueness to hide a lack of detailed or granular expertise. When they do attempt granular expertise, they tend to hallucinate more, or just miss context and get it wrong.

I’m curious how they plan to overcome this. It’s the right type of experiment, but I think too ambitious of a scale.

I reckon corn in the Midwest is the most researched and documented crop-and-location combination in history. So baked into an LLM should be some very good knowledge and assumptions. Growing a different crop elsewhere may be more challenging.
AI CEOs are coming.
AI middle managers are coming. The highest-level corporate authority can and will continue to exist as a person that makes sure the AI systems are running correctly and skim profits off the top of the AI substructure, with the lowest stratum being an underclass precariat doing the hands-on tickets from an AI agent at a continuously adjusted market price for the task.
It could be just an owner or a board of directors at the top. It's possible the CEO will be automated for some companies.
Eventually robots will do this, but as long as humans do the actual IRL actions, it makes me think of a dystopian future where all leadership decisions are made by harsh, micromanaging AI bosses and low-paying physical labor is the only job around for humans.
The perpetual motion machine is only interesting if it generates more energy than you put in.
I can make corn too. I go to the supermarket and hand them these little green pieces of paper, and then I have corn.

Seriously, what does this prove? The AI isn't actually doing anything, it's just online shopping basically. You're just going to end up paying grocery store prices for agricultural quantities of corn.

This actually is a good summary of my theory of AI. The best use case for AI is replacing management. That's the real reason AI is floundering right now with making money. The people in charge would literally need to admit that they are basically no longer needed and act accordingly.

This, of course, will never happen, so instead those in power will continue to try to shoehorn AI into making slaves, which is what they want, but not the ideal usage for AI.

I hate this timeline
so cringe
This is so fucking stupid
> AI doesn't need to drive a tractor. It needs to orchestrate the systems and people who do.

If people are involved, then it's not an autonomous system. You could replace the orchestrator with the average logic-defined expert system. Like, come on, farming AGVs have come a long way; at least do it properly.

Your job is to grow corn.

Claude: Oh. My. God.

I would find it interesting if the AI could produce real corn on another planet. If this works, then once AI can get resources from another planet's soil, I think it is going to be a great help for humanity.
It's...interesting but I feel like people keep forgetting that LLMs like Claude don't really...think(?). Or learn. Or know what 'corn' or a 'tractor' is. They don't really have any memory of past experiences or a working memory of some current state.

They're (very impressive) next word predictors. If you ask it 'is it time to order more seeds?' and the internet is full of someone answering 'no' - that's the answer it will provide. It can't actually understand how many there currently are, the season, how much land, etc, and do the math itself to determine whether it's actually needed or not.

You can babysit it and engineer the prompts to be as leading as possible to the answer you want it to give - but that's about it.

I think you could have credibly said this for a while during 2024 and earlier, but there is a lot of research that indicates LLMs are more than stochastic parrots, as some researchers claimed earlier on. Souped up versions of LLMs have performed at the gold medal level in the IMO, which should give you pause in dismissing them. "It can't actually understand how many there currently are, the season, how much land, etc, and do the math itself to determine whether it's actually needed or not" --- modern agents actually can do this.
They are 100% stochastic parrots.

The world's most impressive stochastic parrot, resulting from billions of dollars of research by some of the world's most advanced mathematicians and computer scientists.

And capable of some very impressive things. But pretending their limitations don't exist doesn't serve anyone.