Show HN: OpenAI/reflect – Physical AI Assistant that illuminates your life
I have been working on making WebRTC + embedded devices easier for a few years. This is a hackathon project that pulled some of that together. I hope others build on it, or that it inspires them to play with hardware. I worked on it with two other people, and I had a lot of fun with some of the ideas that came out of it.

* Extendable/hackable - I tried to keep the code as simple as possible so others can fork/modify easily.

* Communicate with light. Via function calling it changes the light bulb, so the color can match your mood or feelings (see the sketch after this list).

* Populate info from clients you control. I wanted to experiment with having it guide you through yesterday/today.

* Phone as control. Setting up new devices can be frustrating. I liked that this didn't require any WiFi setup; it just routed everything through your phone. Also cool that the device doesn't actually hold any sensitive data.
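
To make the function-calling part concrete, here's a rough sketch of what a handler for a tool call like set_light_color could look like on the device. This is my illustration, not the repo's actual code: the tool name, the JSON argument shape, and light_set_rgb are placeholders (cJSON ships with ESP-IDF).

    #include "cJSON.h"

    /* Placeholder for whatever actually drives the light (PWM LED, smart-bulb API, ...). */
    extern void light_set_rgb(int r, int g, int b);

    /* args_json is the arguments string delivered with the model's tool call,
       e.g. {"r":255,"g":170,"b":60} for a warm, cozy mood. */
    static void handle_set_light_color(const char *args_json)
    {
        cJSON *args = cJSON_Parse(args_json);
        if (args == NULL) {
            return; /* malformed arguments, ignore the call */
        }

        const cJSON *r = cJSON_GetObjectItem(args, "r");
        const cJSON *g = cJSON_GetObjectItem(args, "g");
        const cJSON *b = cJSON_GetObjectItem(args, "b");

        if (cJSON_IsNumber(r) && cJSON_IsNumber(g) && cJSON_IsNumber(b)) {
            light_set_rgb(r->valueint, g->valueint, b->valueint);
        }

        cJSON_Delete(args);
    }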

Somewhere in here there's a joke about how many tokens it takes to turn on a lightbulb.
It deserves a minor rewrite of the Black Mirror episode Fifteen Million Merits, where people do menial labor like folding laundry and washing dishes to earn tokens so that their LLM will dispense their toothpaste and send Studio Ghibli-stylized birthday cards to their friends.
inb4: When sama and co talk about UBI, they mean a variation of it based around a memecoin tethered/indexed on (tik)tokens.
  • a2128 · 4 hours ago
Probably 1,000 for the system prompt, 400 for the audio speech-to-text, 8 for the query, 180 for the thinking, 12 for the tool call, 33 for the response with a useless follow-up question
This project isn’t tightly coupled with anything. Any service that supports WebRTC should work!

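To give a sense of what I mean by "not tightly coupled": the only service-specific piece is really the SDP offer/answer exchange. Here's a rough sketch of doing that over HTTPS with ESP-IDF's esp_http_client; it's an illustration rather than code from the repo, and SIGNALING_URL / API_TOKEN are placeholders for whichever WebRTC service you point it at.

    #include <string.h>
    #include "esp_http_client.h"

    #define SIGNALING_URL "https://example.com/webrtc/session" /* placeholder endpoint   */
    #define API_TOKEN     "replace-me"                          /* placeholder credential */

    /* POST the local SDP offer and read back the remote SDP answer.
       Everything after signaling is plain WebRTC. */
    static esp_err_t exchange_sdp(const char *offer, char *answer, size_t answer_len)
    {
        esp_http_client_config_t cfg = {
            .url = SIGNALING_URL,
            .method = HTTP_METHOD_POST,
        };
        esp_http_client_handle_t client = esp_http_client_init(&cfg);
        if (client == NULL) {
            return ESP_FAIL;
        }

        esp_http_client_set_header(client, "Content-Type", "application/sdp");
        esp_http_client_set_header(client, "Authorization", "Bearer " API_TOKEN);

        esp_err_t err = esp_http_client_open(client, strlen(offer));
        if (err == ESP_OK) {
            esp_http_client_write(client, offer, strlen(offer));
            esp_http_client_fetch_headers(client);
            int len = esp_http_client_read_response(client, answer, answer_len - 1);
            answer[len > 0 ? len : 0] = '\0'; /* remote SDP answer as a C string */
            esp_http_client_close(client);
        }

        esp_http_client_cleanup(client);
        return err;
    }
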
Also, I was hoping to push people toward an RTOS. It's a better experience than a Raspberry Pi: I can cycle power and be back way quicker. It's also cheaper and more power efficient.

I also think I unfairly like ESP because it’s an excuse to write C :)

I also have been working with Daily on https://github.com/pipecat-ai/pipecat-esp32

I see so much potential if I can make hardware hacking + WebRTC easy. Not just for AI assistants, but for security cameras + robotics too. If anyone has questions/ideas/feedback, I'm here to help :)

  • joshu · 7 hours ago
what is Daily?
https://www.daily.co/

You can use it to build lots of different real-time communication projects: conferencing, sending your audio/video to GPU servers for AI, broadcasting, and lots more.

It’s a super fun space to be in

  • su · 4 hours ago
Are there any cool demos that use Daily that I can explore?
https://www.linkedin.com/posts/thorwebdev_esp32-webrtc-activ...

https://m.youtube.com/watch?v=HbO18Elw9WY

Those are two that I know of. Try them out; if you hit any roadblocks, @ me on the Pipecat Discord and I'd love to help.

  • baxtr · 8 hours ago
If you want to know what this is about, here’s the video they provided:

https://www.youtube.com/watch?v=G5OUnpPAyCg

I love seeing that hackathons are encouraged inside OpenAI and, most importantly, that their outcomes are also shared :)
Is it my browser, or does the video in the readme not have sound?
No sound! The YouTube video in the README does.

I was tempted to put Erik Satie in the README video, but I didn't want to risk copyright issues.

Philips Hue is about to start a riot
I get that this is as-is, but I wonder whether so many ultra-alpha products dilute the OpenAI brand and create redundancy in the product line. It feels like the opposite of Apple's well-thought-out product design and product line.

Let's see if it pays off.

This is just a hackathon project. Not a product in any way.

My primary work is on backend WebRTC servers. This was just a fun outlet to do client and embedded work. I love writing C and doing microcontroller work; I just can't seem to find a way to do it full time :(

For a developer platform, having examples is useful as a starting point for new projects.

Also, I'm not sure if it's similar at OpenAI, but when I was at Google it was much easier to get approval to put an open source project under the Google GitHub org than under my personal account.

They're selling shares at a $500B valuation. The market is telling them everything they are doing is amazing.
Is it possible to separate the feedback from the initial success of ChatGPT from whatever came after it?

It's possible those investments are just the OpenAI owners selling their 2023 ChatGPT success and its profit share.
