I’m a dev and a dad to a 10-year-old. I built this because I caught my daughter using ChatGPT to do her history homework. She wasn't learning; she was just acting as a "middleware" between the AI and the paper.
The Backstory: I realized the problem isn't the AI—it's the zero-friction answers. Most "AI for kids" apps are just "parrots"—they mimic intelligence by repeating patterns.
What’s Different: Qurio is a "Bicycle" for the mind. It treats the child like a future "Architect" rather than a "Junior Executor." Technically, it wraps an LLM in a strict "Socratic Loop." It detects intent to "cheat," refuses the direct answer, and generates a leading question based on the user's current logic level. It forces "Healthy Friction" back into the learning process.
The stack: Next.js 14, Supabase (Auth/DB), Vercel AI SDK.
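Roughly, the control flow of the loop looks like this. This is a simplified sketch to show the shape of the idea, not Qurio's production code; in the real system the intent check is an LLM classifier, but here a keyword heuristic stands in for it, and all names are illustrative:

```typescript
// Stand-in for the cheat-intent classifier: is the student fishing
// for a finished answer instead of reasoning it out?
function looksLikeCheatIntent(message: string): boolean {
  const fishing = [
    /what('s| is) the answer/i,
    /just tell me/i,
    /write (it|my essay) for me/i,
    /solve (this|it) for me/i,
  ];
  return fishing.some((pattern) => pattern.test(message));
}

// The loop never yields a direct answer; it decides which *kind* of
// system prompt the LLM call gets for this turn.
function nextTutorMove(message: string): "refuse-and-lead" | "guide" {
  return looksLikeCheatIntent(message) ? "refuse-and-lead" : "guide";
}
```

On a "refuse-and-lead" turn, the system prompt tells the model to withhold the answer and ask one leading question pitched at the student's current reasoning level; on a "guide" turn it continues the Socratic thread.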
Mods: I've added the backstory and differentiator as requested. Ready for the re-up! Thank you.
I do really like the feel of this though, and it's an awesome idea. Maybe tighten up some short-circuiting tricks that might still fall under "tells me the answer."
You can dial in on the difficulty: "you must be pedantic and ask that I correct misuse of terminology" vs "autocorrect my mistakes in terminology with brackets".
Super duper useful way to learn things. I wish I had AI as a kid.
I'd love to know: did the AI feel too "stubborn" in your first few turns, or did it hit that sweet spot of guiding you toward the answer?
Actually, we don't keep chat logs, to protect privacy and child data. To improve the engine, I rely on feedback from users like you. All feedback shared is very much appreciated!
You can email me as well if you need more credits for beta access; paid subscriptions are on hold until beta testing is completed.
Seeing the discussion about "jailbreaking" and Socratic pedagogy has been incredibly helpful. I would love for you to give Qurio a real-world test drive with your kids or students.
I'm specifically looking for feedback on:
The Friction Level: Is the Socratic questioning helpful or just frustrating?
Edge Cases: If your child finds a clever way to "trick" the engine into giving an answer, please let me know.
Mastery: Do you feel they actually owned the concept by the end of the session?
Your feedback is the "Bicycle" that helps me build a better engine. Thank you for being my first "Alpha" testers!
(What's the purpose of mandating the use of emails, BTW? There's a nearly infinite supply of email addresses, so it's not stopping anyone from getting a second one.)
I don't have my disposable Google account at hand, and I'm not willing to provide my real name to a random project in an unspecified country (unknown regulatory constraints). Especially since you were collecting phone numbers as well (?).
Chat logs are also stored, presumably not encrypted at rest, so if the unspeakable happens, someone gets my and my kid's real email addresses, names, and their complete chat logs.
It would also be a no-go, because I don't want to start by violating your terms of use:
```
01. Prohibited Activities
- Bypassing safeguards [...]
- Misrepresenting identity, age, or intent (e.g., a child posing as an adult to bypass filters).
Enforcement
Violations may result in immediate suspension or permanent termination of access without refund, at our sole discretion.
```
You invite our kids to try to break free from the interface's rails, and if in the process they claim they're me, that's grounds for termination :)
Thanks but no, thanks :)
But I wish you success, a tool like this would be amazing to have.
Rest assured, chat logs are encrypted and saved as per regulatory standards.
We use industry-standard AES-256 encryption at rest and TLS encryption in transit. We use Row Level Security to ensure that your data is logically isolated, even from me. We do not sell your data. We do not use your chat logs to train our own models. Your "thinking journey" is yours alone.
But still, I respect your decision. Thank you :)
1. Instruction Drift vs. The Gatekeeper: General-purpose LLMs are trained to be "helpful and agreeable." If a student pushes or shifts the topic, the model often "drifts"—like you mentioned, it might start correcting grammar instead of pushing the child to derive the essay's core logic. Qurio uses a secondary "Gatekeeper" agent that audits every response turn specifically to ensure the "Socratic Loop" stays on the core concept, not just surface-level fixes.
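To make the Gatekeeper idea concrete, here's an illustrative sketch of the audit pass (my simplification with invented names, not the actual implementation): before a tutor reply goes out, a second check scores the draft against a few hard rules, and a failed audit triggers a regeneration of the reply.

```typescript
type Audit = { ok: boolean; reason?: string };

// Audit one draft reply before it is shown to the student.
// In production each rule would itself be an LLM judgment; plain
// string checks stand in here to show the structure.
function gatekeep(
  draftReply: string,
  coreConcept: string,
  forbiddenAnswer: string
): Audit {
  // Rule 1: a Socratic turn must hand the work back to the student.
  if (!draftReply.includes("?")) {
    return { ok: false, reason: "no leading question" };
  }
  // Rule 2: the draft must not leak the answer outright.
  if (draftReply.toLowerCase().includes(forbiddenAnswer.toLowerCase())) {
    return { ok: false, reason: "leaked the answer" };
  }
  // Rule 3: stay anchored on the core concept, not surface-level fixes.
  if (!draftReply.toLowerCase().includes(coreConcept.toLowerCase())) {
    return { ok: false, reason: "drifted off the core concept" };
  }
  return { ok: true };
}
```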
2. The Walled Garden: A general-purpose AI is an open "Ducati"—it has the entire internet's biases and infinite distractions. Qurio provides a closed-loop logic environment. It removes the ads, tracking, and the constant temptation to "just get the answer" that is always one click away in a standard bot.
3. The "Architect" UI: Unlike a standard chat, our Cognitive Process Capsules (CPCs) record the thinking journey, not just the final result. This allows parents to see the logical steps their child took, which is a feature prioritized for education rather than just production.
Ultimately, a kid uses this because it treats them like a Future Architect who needs to understand the "Why," rather than just a user who needs a "Result."
I'm a developer and a dad—the project is real, even if my grammar needs a boost! I'll try to let more of my own "unfiltered" voice through.
> the model often "drifts"—like you mentioned
which was attributed to me, even though I didn't ask that
I think the ESOL explanation is believable though, I have a coworker or two who do the same thing
Since I'm self-funding the API costs, I can only keep the trial open for a few more people today.
If you've done a full session with your child, please drop a comment here. I'm curious whether the age-calibrated responses worked as planned or were too hard for them.
The compliance parts are good to make clear, considering one segment of the target audience.
May I ask what techniques you use to test for regressions or correct behaviour in your product's multi-turn conversations? What are the biggest lessons and learnings in that space?
The biggest learning so far: 'Instruction Drift' is real. You can't just give one long prompt. You have to break the reasoning into smaller 'Cognitive Process Capsules' (CPCs) to keep the model from losing the Socratic thread during long sessions.
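One way to test that kind of multi-turn behaviour (a sketch of a general technique, not a claim about how Qurio does it) is to replay "golden" transcripts and assert per-turn invariants, e.g. "every reply ends in a question", rather than comparing exact wording, which is brittle with LLMs:

```typescript
// One golden turn: the student's message plus an invariant the
// tutor's reply must satisfy (names here are illustrative).
type GoldenTurn = {
  student: string;
  invariant: (reply: string) => boolean;
};

// Replay a scripted conversation against a tutor function and
// count how many turns violate their invariant.
function replayTranscript(
  turns: GoldenTurn[],
  tutor: (history: string[], message: string) => string
): number {
  const history: string[] = [];
  let failures = 0;
  for (const turn of turns) {
    const reply = tutor(history, turn.student);
    if (!turn.invariant(reply)) failures += 1;
    history.push(turn.student, reply);
  }
  return failures;
}
```

Run nightly against a fixed model version, this catches drift like "stopped asking questions after turn 12" without pinning the exact phrasing.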
Kagi Assistant has a custom "Study" model that works similarly. I've been using it for certain learning topics and find it useful.