I have a dozen VR headsets, most of them doing passthrough, plus AR glasses and even a monocle. I prototype in the field. The technology is amazing... but you don't need it to get value. Bring your tablet to the kitchen, add a webcam if you want to try getting visual feedback, display the recipe and... just cook.

I think there is a big gap between:

- how messy the world is (imagine a "normal" kitchen, not a prop for a demo)

- how good existing low-tech solutions are (e.g. just a book or a printed page for a recipe)

meaning that to provide marginally more value the tech has to be excellent at handling a messy world. Sure, the tech is radically improving, but we are vastly underestimating, as we have for robotics for decades now, how challenging seemingly menial tasks can be and how hard it is to do them reliably.

I always think of my Roomba in this kind of situation, namely when one tries to imagine a future where tech works outside of the computer. Basically, my little vacuum robot mostly sits idling in the corner because... a basic broom or a normal vacuum cleaner is radically more efficient, even if it means I have to do the work myself.

So... yes, I'm a techno-optimist, but the timescale is IMHO not as close as we hope.

PS: AR setups (glasses or not) do NOT have to be connected to be useful. Do not get fooled by Meta's slick ads into thinking that only they can make glasses; heck, I taped a pair together 5 years ago with a Raspberry Pi Zero. It wasn't slick https://twitter-archive.benetou.fr/utopiah/status/1449023602... but it worked.

I agree with the general vibe of your post, but let me go off on a tangent:

> I always think of my Roomba in this kind of situation, namely when one tries to imagine a future where tech works outside of the computer. Basically, my little vacuum robot mostly sits idling in the corner because... a basic broom or a normal vacuum cleaner is radically more efficient, even if it means I have to do the work myself.

For me the whole value of a robot vacuum is that... I don't vacuum, the robot does, and it does so when I'm not at home. It periodically reduces the dust by 75% so that the place is neater; I'll do a deep clean every few months or so.

(Maybe what you're saying is that the cost of vacuuming yourself is low enough that it's similar to launching the robot vacuum; in that case I understand you!)

I worked on some next-next-gen AR glasses; I saw and used both Orion and the prototype version of the Ray-Ban Meta Display glasses.

Orion suffered from the problems that plagued it throughout its production: shifting goals and poor product management. The display team promised and then didn't deliver, twice.

One of my friends showed me a demo on the Meta Display glasses (the production ones) that allowed you to take a YouTube recipe video, feed it into the dipshit machine, and it'd spit out a step-by-step guide for you to follow. That was really nice to use.
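
I don't know what that demo actually used under the hood, but the shape of the pipeline is roughly this; a sketch assuming the youtube-transcript-api and openai Python packages, with an illustrative model name:

    # Sketch: turn a YouTube recipe video into numbered steps.
    # Assumes youtube-transcript-api and openai are installed; the exact
    # transcript API varies by package version.
    from youtube_transcript_api import YouTubeTranscriptApi
    from openai import OpenAI

    def recipe_steps(video_id: str) -> str:
        # Pull the spoken transcript for the video.
        transcript = YouTubeTranscriptApi.get_transcript(video_id)
        text = " ".join(chunk["text"] for chunk in transcript)

        # Ask an LLM to reshape it into a numbered, follow-along guide.
        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[
                {"role": "system",
                 "content": "Turn this cooking transcript into a numbered step-by-step recipe."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content

    print(recipe_steps("VIDEO_ID"))  # any recipe video id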

The demo I worked on took some research glasses and attached SLAM, always-on audio, and basic VLM descriptions of what you are looking at, and dumped it all into a database. You'd be surprised at how useful that context was. At the end of the day you could say "In the meeting I had with x, what were the actions?" Sometimes it even worked. Because everything that you saw or did was geocoded (i.e. it knew which room you were in) and linked to a calendar, even though facial and voice recognition weren't allowed, you could get something akin to a decent executive assistant.
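
The log itself is less exotic than it sounds. A toy version of the record shape, with names and schema that are mine rather than the actual demo's, looks roughly like:

    # Toy context log: one row per moment, tied to a room and (optionally)
    # a calendar event. Schema and field names are illustrative.
    import sqlite3

    db = sqlite3.connect("context.db")
    db.execute("""
        CREATE TABLE IF NOT EXISTS moments (
            ts          REAL,   -- unix timestamp
            room        TEXT,   -- from SLAM / geocoding ("meeting room 3")
            transcript  TEXT,   -- always-on audio, speech-to-text
            vlm_caption TEXT,   -- "a whiteboard with a diagram of ..."
            event_id    TEXT    -- calendar event this moment overlaps, if any
        )
    """)

    def log_moment(ts, room, transcript, vlm_caption, event_id=None):
        db.execute("INSERT INTO moments VALUES (?, ?, ?, ?, ?)",
                   (ts, room, transcript, vlm_caption, event_id))
        db.commit()

    def context_for_meeting(event_id):
        # Everything seen/heard during that calendar event, ready to hand to
        # an LLM with a question like "what were the actions?".
        rows = db.execute(
            "SELECT ts, room, transcript, vlm_caption FROM moments "
            "WHERE event_id = ? ORDER BY ts", (event_id,)).fetchall()
        return "\n".join(f"[{r[1]}] {r[2]} | {r[3]}" for r in rows)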

Making it secure and private is a big fucking challenge. I put in face and screen blurring, but that's not really enough. There is OCR based on eye gaze, which is hard to censor. The problem is that there is more than enough context stored to infer lots of things.
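
The face blurring itself is the easy part. Something like this (a generic OpenCV Haar-cascade pass, not what we actually shipped) handles the obvious cases; everything it misses is where the real problem lives:

    # Minimal face-blurring pass over a frame (OpenCV Haar cascade).
    # Not the shipped pipeline -- just the general idea.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def blur_faces(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
            roi = frame[y:y + h, x:x + w]
            # Heavy Gaussian blur over each detected face region.
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
        return frame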

The problem is not just securing the storage, it's finding a decent sharing mechanism that allows you to perform normal actions without asking a million questions or oversharing.
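
The closest I got was coarse scopes over the log, roughly of this shape (entirely illustrative; the real issue is that no scope ever turns out to be the right granularity):

    # One coarse-grained idea: share slices of the log by scope, not by row.
    # Scope names, rooms and fields here are made up for illustration.
    SCOPES = {
        "assistant":  {"rooms": "*",         "fields": {"transcript", "vlm_caption", "event_id"}},
        "work_tools": {"rooms": {"office"},  "fields": {"transcript", "event_id"}},
        "home_apps":  {"rooms": {"kitchen"}, "fields": {"vlm_caption"}},
    }

    def allowed(requester, room, field):
        scope = SCOPES.get(requester)
        if scope is None:
            return False
        room_ok = scope["rooms"] == "*" or room in scope["rooms"]
        return room_ok and field in scope["fields"]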

    Like for example: I don't know, sometimes I'm cooking for my partner. Do they hear a one-sided conversation between me and the "Live AI" about what to do first, how to combine the ingredients? Without the staged demo's video-casting, this would have been the demo: a celebrity chef talking to himself like a lunatic, asking how to start.

I think this is one of those things that society just adapts to. Some people will be in the kitchen talking to "themselves" but that's okay, people understand why. My Mom would often talk to herself when cooking anyway. She was a verbal processor. You just get used to it and eventually it doesn't seem weird anymore.

That’s fine. I cook. I’ve got my AirPods in listening to music and talk to an iPad with my recipe on it to set timers and the like. And sometimes I talk to my Dutch oven because it has feelings.

Voice control makes a lot of sense in cooking because my hands are messy. I’m just not sure what the glasses add that could be fun or helpful while cooking. The demo video is… amusing but also wrong, because you’d need to have the recipe before you set up your ingredients.

The glasses could present a recipe as actions to be taken in your field of view, instead of as a bullet-point list in a book. And the glasses could use their video stream to improve the help an AI may give during cooking.
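
Concretely, I'd imagine matching whatever the camera currently recognizes against what a step needs, and only overlaying the step that's actionable right now. A pure sketch, not tied to any real glasses SDK:

    # Sketch: pick which recipe step to overlay, given what the camera sees.
    # `visible` would come from object detection on the video stream.
    RECIPE = [
        {"step": "Dice the onion",       "needs": {"onion", "knife", "cutting board"}},
        {"step": "Heat oil in the pan",  "needs": {"pan", "oil"}},
        {"step": "Fry onion until soft", "needs": {"pan", "onion"}},
    ]

    def next_overlay(visible, done):
        for i, step in enumerate(RECIPE):
            if i not in done and step["needs"] <= visible:
                return step["step"]  # anchor this text near the relevant objects
        return None

    print(next_overlay({"onion", "knife", "cutting board", "pan"}, done=set()))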

Then again, I don't see myself using them. I also cook and I'd rather just internalize processes and recipes, and something like this would make it way too easy to just rely on the glasses to "know" everything.

I think it's pretty obvious that something like this product is going to be the future, but the technology is still pretty raw. Eventually, humans will have some kind of personal assistant baked into their field of view. Maybe we're on the cusp, or maybe we're 50 years out. Hard to know.