There are some true serverless approaches out there for the signaling, e.g. where both peers scan each other's QR code, but that obviously has very limited use.
Merely avoiding provisioning and managing the server means you are renting rather than self-hosting.
"Serverless" is like paying for a hot desk by the minute, with little control of your surroundings, but it is convenient and cheap if you only need it for an hour.
You take that data and send it to the peer over the signaling connection, and they call you back on that IP:port. Most NAT implementations create a temporary mapping from a public port to a private IP:port and keep it consistent[1] for a few minutes, rather than picking a random port per destination[2], so it usually works.
1: e.g. router.public.ip.example:23456 <-> 192.168.0.12:12345
2: e.g. packets to stun.l.google.com:12345 go out from port 23456, but packets to yourfriend.router.ip.example:12345 go out from port 45678
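The key trick behind [1] is reusing a single UDP socket for every destination, so the NAT has only one mapping to keep consistent. A minimal local sketch (no real NAT here -- two loopback sockets stand in for the STUN server and the friend's router, and the names are illustrative):

```python
import socket

# Two sockets standing in for the STUN server and the friend's router.
stun = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
stun.bind(("127.0.0.1", 0))
friend = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
friend.bind(("127.0.0.1", 0))

# The client reuses ONE socket for both destinations -- the hole-punching trick.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))
client.sendto(b"hello stun", stun.getsockname())
client.sendto(b"hello friend", friend.getsockname())

_, seen_by_stun = stun.recvfrom(1024)
_, seen_by_friend = friend.recvfrom(1024)

# Both destinations observed the same source port. A non-symmetric NAT
# preserves this property for the public mapping as well, which is what
# lets the friend "call back" on the port reported by the STUN server.
print(seen_by_stun[1] == seen_by_friend[1])  # True
```

A symmetric NAT (case [2]) breaks exactly this invariant, which is when you fall back to a TURN relay.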
Direct link to the underlying source code.
1) Serverless isn't really serverless, and we are sick of this AWS-speak. The trend lasted briefly, but it isn't appealing when you are metered for every little thing and unable to SSH into a host to resolve issues
2) You can already do matchmaking easily with FOSS self-hosted solutions. These don't cost much compute or bandwidth, but some developers think it's a chaotic and resource-heavy problem.
uses someone else's network for signaling
I can tell you roughly how it works for webrtc video calls.
If you're in a 5-person peer-to-peer webrtc video call where you receive 4 streams of video, you also need to send 4 streams of video. This is scalable in a sense; the uplink and downlink requirements are equal.
The problem comes if you're in a 100-person meeting, and the application logic has hidden 95 people's video to save on bandwidth. In that case, while you'd only receive 4 streams of video you'd have to send 99.
In practice, webrtc video calling often uses an 'SFU' or 'Selective Forwarding Unit' where you send one video stream to the vendor's cloud server and they forward it to the other people in the meeting. This also benefits people on asymmetric connections, and mobile users where uploading costs battery life, and users behind highly restrictive firewalls where webrtc's NAT traversal fails to work.
The issue is not with the throughput: a typical videoconference requires about 700 kbit/s per stream, so even 12 Mbit/s upstream should be enough for 17 streams or so. The issue is with having to encode the video separately for every receiver.
WebRTC adapts to the available throughput by encoding the video separately for every receiver, with different parameters. If you're in a five-person peer-to-peer conference, you decode four videos simultaneously, which is fine, but you're also encoding your video four times, which is not.
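Putting numbers on the two comments above (the 700 kbit/s figure is from the comment; the function names are mine): the uplink bitrate budget is rarely the bottleneck, the per-receiver encode is.

```python
STREAM_KBPS = 700  # typical videoconference stream, per the comment above

def mesh_uplink_kbps(participants: int) -> int:
    """Total upstream bitrate for one participant in a full mesh."""
    return (participants - 1) * STREAM_KBPS

def encodes_needed(participants: int, sfu: bool) -> int:
    # With an SFU you encode once; in a mesh, once per receiver
    # (each receiver gets its own rate-adapted encoding).
    return 1 if sfu else participants - 1

print(mesh_uplink_kbps(5))           # 2800 kbit/s -- fits a 12 Mbit/s uplink
print(encodes_needed(5, sfu=False))  # 4 simultaneous encodes -- the real cost
print(encodes_needed(5, sfu=True))   # 1
```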
An SFU works around the issue by not re-encoding the video: the SFU merely decrypts each packet and re-encrypts it with the session key of every receiver's leg. Since AES is implemented in hardware, the re-encryption comes essentially for free.
(Of course, that implies that the SFU needs to use other techniques for bandwidth adaptation, such as simulcast or scalable video coding (SVC). See slides 10-12 of https://galene.org/galene-20250610.pdf if you're interested.)
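The decrypt-and-re-encrypt forwarding can be sketched in a few lines. This is a toy model, not a real SRTP implementation: a hashlib counter-mode keystream stands in for hardware AES, and the `Sfu` class and key names are made up for illustration. The point is that the encoded video bytes stay opaque to the SFU -- they are decrypted on the sender's leg and re-encrypted per receiver's leg, never decoded.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (stand-in for AES-CTR): XOR with a SHA-256
    counter-mode keystream. XOR means encrypt and decrypt are the same op."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

class Sfu:
    def __init__(self, leg_keys: dict[str, bytes]):
        self.leg_keys = leg_keys  # one symmetric key per participant leg

    def forward(self, sender: str, packet: bytes) -> dict[str, bytes]:
        payload = keystream_xor(self.leg_keys[sender], packet)  # decrypt leg
        # Re-encrypt the same payload for every other leg; no video decoding.
        return {peer: keystream_xor(key, payload)
                for peer, key in self.leg_keys.items() if peer != sender}

keys = {"alice": b"ka", "bob": b"kb", "carol": b"kc"}
sfu = Sfu(keys)
frame = b"\x00encoded-video-frame"  # opaque to the SFU
out = sfu.forward("alice", keystream_xor(keys["alice"], frame))
print(keystream_xor(keys["bob"], out["bob"]) == frame)  # True
```

(Real WebRTC uses DTLS-negotiated SRTP keys per leg; the structure is the same.)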
But don't most home connections have a slower uplink than downlink? Mine certainly does.
Considering the site just spams my error console with
DOMException: Failed to construct 'RTCPeerConnection': Cannot create so many PeerConnections
I'd say not very.

For audio-only, the sky is the limit. I used to work on a voice-based social media app, and you need an SFU there as well, but I added a few mixing features so that multiple incoming audio streams would be mixed together into a single outgoing one. Was very fun (and scalable).
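The mixing idea is simple in principle: sum the aligned PCM samples of the incoming streams, clamp to the sample range, and send one stream back. A minimal sketch (plain int lists stand in for real 16-bit PCM buffers; the mixer I worked on was obviously more involved):

```python
def mix(streams: list[list[int]]) -> list[int]:
    """Sum aligned int16 sample buffers, clamping to the int16 range."""
    mixed = []
    for samples in zip(*streams):
        total = sum(samples)
        mixed.append(max(-32768, min(32767, total)))
    return mixed

a = [1000, -2000, 30000]
b = [500, 500, 10000]
print(mix([a, b]))  # [1500, -1500, 32767] -- last sample clipped
```

This is why audio scales so much better than video: N inputs collapse into one output per listener, and there is no per-receiver encode problem.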
I removed the chat feature.
At any rate, getting banned by OFCOM is starting to sound like a badge of honor these days.
Each host will carry a legal responsibility for both what they push, and what they pull.
The law sucks but the misinformation around it is getting out of hand.
https://www.legislation.gov.uk/ukpga/2023/50/section/12
What even is "harmful content to minors"? Even if it were restricted only to pornography--which it is not--I wouldn't count on being able to "moderate" all the ways users can draw penises.
The act regulates "user-to-user" services:
https://www.legislation.gov.uk/ukpga/2023/50/section/3
> In this Act “user-to-user service” means an internet service by means of which content that is generated directly on the service by a user of the service, or uploaded to or shared on the service by a user of the service, may be encountered by another user, or other users, of the service.
The legal text is dense but there is some analysis here:
https://www.eff.org/deeplinks/2023/09/uk-online-safety-bill-...
And some news about Reddit: https://www.eff.org/deeplinks/2025/08/americans-be-warned-le...
Is there anything like that?
I'd love to use an existing protocol to get (distributed?) user accounts and chat and stuff, and just build my game as a plugin for that. Or something.