I've found using these and similar tools that the amount of prompts and iteration required to create my vision (image or video in my mind) is very large and often is not able to create what I had originally wanted. A way to test this is to take a piece of footage or an image which is the ground truth, and test how much prompting and editing it takes to get the same or similar ground truth starting from scratch. It is basically not possible with the current tech and finite amounts of time and iterations.
  • jerf · 2 weeks ago
It just plain isn't possible if you mean a prompt the size of what most people have been using lately, in the couple-hundred-character range. By sheer information theory, the number of possible interpretations of "a zoom in on a happy dog catching a frisbee" means that you cannot match a particular clip out of the set with just that much text. You will need vastly more content: information about the breed, information about the frisbee, information about the background, information about timing, information about framing, information about lighting, and so on and so forth. Right now the AIs can't handle that, which is to say, even if you sit there and type a prompt containing all that information, the model is going to be forced to ignore most of it. Under the hood, with the way the text is turned into vector embeddings, it's fairly questionable whether you'd agree that it can even represent such a thing.
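
The information gap is easy to put rough numbers on. A back-of-envelope sketch (the 1 bit/character figure is Shannon's classic estimate for English text; the 5 Mbit/s clip bitrate is an illustrative assumption, not a measurement):

```python
# Shannon's classic estimate: written English carries roughly 1 bit of
# information per character, so a ~200-character prompt can distinguish
# at most about 2^200 outcomes.
prompt_chars = 200
prompt_bits = prompt_chars * 1.0  # bits

# A 5-second clip at a typical streaming bitrate of ~5 Mbit/s (an
# illustrative assumption) still carries millions of bits even after
# compression has squeezed out most redundancy.
clip_seconds = 5
clip_bits = clip_seconds * 5_000_000

# The shortfall is information the model must invent on its own.
missing_bits = clip_bits - prompt_bits
print(f"prompt: {prompt_bits:.0f} bits, clip: {clip_bits:,} bits")
print(f"underdetermined by ~{missing_bits:,.0f} bits")
```

Whatever exact bitrates you assume, the prompt pins down a vanishingly small fraction of the clip; everything else is the model's guess.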

This isn't a matter of human-level AI or superhuman-level AI; it's just straight up impossible. If you want the information to match, it has to be provided. If it isn't there, an AI can fill in the gaps with "something" that will make the scene work, but expecting it to fill in the gaps the way you "want" even though you gave it no indication of what that is is expecting literal magic.

Long term, you'll never get a coherent movie by stringing together a series of textual snippets because, again, that's just impossible. Some sort of long-form "write me a horror movie starring a precocious 22-year-old elf in a far-future Ganymede colony with a message about the importance of friendship" AI that generates a coherent movie of many scenes will have to do a lot of internal communication in an internal language to hold the result together between scenes, because what it takes to keep things coherent between scenes is an amount of English text not entirely dissimilar in size from the underlying representation itself. You might as well skip the English middleman and go straight to an embedding not constrained by a human-language mapping.

  • LASR · 2 weeks ago
What you are saying is totally correct.

And this applies to language / code outputs as well.

The number of times I’ve had engineers at my company type out 5 sentences and then expect a complete React webapp.

But what I’ve found in practice is using LLMs to generate the prompt with low-effort human input (eg: thumbs up/down, multiple-choice etc) is quite useful. It generates walls of text, but with metaprompting, that’s kind of the point. With this, I’ve definitely been able to get high ROI out of LLMs. I suspect the same would work for vision output.
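
A minimal sketch of that metaprompting loop, with the model call stubbed out (the `ask_model` function and its canned options are hypothetical placeholders for a real LLM API):

```python
def ask_model(base_prompt: str, aspect: str) -> list[str]:
    # Placeholder for a real LLM call that proposes options for one
    # underspecified aspect of the prompt.
    canned = {
        "breed": ["border collie", "golden retriever", "corgi"],
        "lighting": ["golden hour", "overcast", "studio"],
        "framing": ["wide shot", "close-up", "tracking shot"],
    }
    return canned[aspect]

def metaprompt(base: str, choices: dict[str, int]) -> str:
    # The human only answers multiple-choice questions; the wall of
    # text is assembled automatically from the picked options.
    details = []
    for aspect, picked in choices.items():
        options = ask_model(base, aspect)
        details.append(f"{aspect}: {options[picked]}")
    return base + " (" + ", ".join(details) + ")"

prompt = metaprompt(
    "a happy dog catching a frisbee",
    {"breed": 1, "lighting": 0, "framing": 2},
)
print(prompt)
# -> a happy dog catching a frisbee (breed: golden retriever, lighting: golden hour, framing: tracking shot)
```

A real version would loop: generate, show the human thumbs up/down on each aspect, and regenerate only the rejected parts.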

I'm not sure, but I think you're saying what I'm thinking.

Stick the video you want to replicate into o1 and ask for a descriptive prompt to generate a video with the same style and content. Take that prompt and put it into Sora. Iterate with human- and o1-generated critical responses.

I suspect you can get close pretty quickly, but I don't know the cost. I'm also suspicious that they might have put in "safeguards" to prevent some high profile/embarrassing rip-offs.

> Long term, you'll never have a coherent movie produced by stringing together a series of textual snippets because, again, that's just impossible.

Why snippets? Submit a whole script the way a writer delivers a movie to a director. The (automated) director/DP/editor could maintain internal visual coherence, while the script drives the story coherence.

This almost certainly won’t work. Feel free to feed in any of the hundreds of existing film scripts and test how coherent the models can be. My guess is: not at all.

The clips on the Sora site today would have been utterly astonishing ten years ago. Long term progress can be surprising.

> The clips on the Sora site today would have been utterly astonishing ten years ago.

Yeah, and Apollo 11 would have been utterly astonishing a decade before it occurred. And, yet, if you tried to project out from it to what further frontiers manned spaceflight would reach in the following decades, you’d…probably grossly overestimate what actually occurred.

> Long term progress can be surprising.

Sure, it can be surprising for optimists as well as naysayers; as a good rule of thumb, every curve that looks exponential in an early phase ends up being at best logistic.

In the long run we are all dead. Saying that technology will be better in the future is almost eye-roll worthy. The real task is predicting what future technology will be, and when it will arrive.

Ask anyone with a chronic illness about the future and they'll tell you we're about 5 years off a cure. They've been saying that for decades. Who knows where the future advancements will be.

  • 2 weeks ago
This will almost certainly be in theaters within 5 years, probably first as a small experimental project (think Blair Witch).

The Blair Witch Project was a (surprise) creative masterpiece. It worked with very limited technology to create a very clever plot, paired with an amazing marketing campaign; the world hadn’t seen the combination before. It took some creative geniuses to piece The Blair Witch Project together.

Generative AI will never produce an experience like that. I know never is a long time, but I’m still gonna call it. You simply can’t produce such a fresh idea by gathering a bunch of data and interpolating.

Maybe someday AI will be good enough to create shorter or longer videos with some dialog and even a coherent story (though I doubt it), but it won't be fresh or creative. And we humans will at best enjoy it for its stupidity or sloppiness, not for its cleverness or artistry.

Why does the idea need to be generated by AI? Let people generate the ideas; the AI will help execute. I think soon (3-5 years) a determined person with no video skills will be able to put together a compelling movie (maybe a short). And that is massive. AI doesn’t have to do everything. Like all tech, it’s a productivity tool.

> Why does the idea need to be generated by AI?

This is the at-first-fun-but-now-frustrating infinite goal move. "AI (a stand in for literally anything) will do (anything) soon." -> "It won't do (thing), it's too complex." -> "Who said AI will do (thing)?"

AI will self-drive cars in San Francisco.
  • Breza · 1 week ago
I'm suspicious of most claims of AI growth, but I think screenwriting is an area where there's real potential. There are many screenplays out there, many movie plots are very similar to each other, and human raters could help with training. And it's worth noting that the top four highest grossing movies right now are all sequels or film adaptations. It's not a huge leap to imagine an LLM in the future that's been trained on movie writing being able to create a movie script when given the Wicked musical. https://www.imdb.com/chart/boxoffice/

The 2023 Writers Guild of America strike was in part about preventing screenplays from being written entirely by generative AI.

So no, I don’t think this will happen either. Authors may use AI themselves as one tool in their toolbox as they write their scripts, but we will not see entire production screenplays written by generative AI set for theatrical release. The industry will simply not allow that to happen. At most you can have AI write a screenplay for your own amusement, not for publication.

I'm thinking more of a Gibsonian 'Garage Kubrick': a solitary auteur (or small team) that produces the film alone, perhaps without even touching a camera, generating all the footage using AI (in the novel the auteur creates all the footage through photo/found-footage manipulation, or at least that's all we see in the text). The script will probably be human-written; I'm not talking about an AI producing a film from scratch, rather a film produced using AI to create all the visuals and audio.

That is a far more reasonable prediction, but I don’t even see this future. This kind of “film making” will at best be something generated for the amusement of the creator (think: give me a specific episode of Star Trek where Picard ...) or as a prototype or concept for something yet to be filmed with actual actors. And it certainly won’t be in theaters, not in 5 years, or ever.

Generative AI will not be able to approach the artistry of your average actor (not even a bad actor); it won't be able to match the lighting or the score to the mood (unless you carefully craft that in your prompt). It won't get creative with the camera angles (again, unless you specifically prompt for a specific angle) or the cuts. And it probably won't stay consistent with any of these, or know how to break the consistency at the right moments, like an artist could.

If you manage to prompt the generative AI into a full feature film with excellent acting, the correct lighting for the mood, a consistent tone with editing to match, etc., you have probably spent much more time and money crafting the prompt than would otherwise have gone into simply hiring a crew to make your movie. The AI movie will certainly contain slop and be so visibly bad that it is guaranteed not to reach theaters.

Now if you hired that crew to make the movie instead, that crew might use AI as a tool to enhance their artistry, but you still need your specialized artists to use that tool correctly. That movie might make it to the theaters.

The Blair Witch Project looked like shit, 'the cinematography doesn't approach a true director of photography', the actors were shit... etc. Given the right script and concept it can be amazing, and the imperfection of AI can become part of the aesthetic.

It was still a creative stroke of genius. The shit acting along with the shit cinematography was preceded by a brilliant marketing campaign that set you up to expect this lack of skill from the film makers.

In music you also have plenty of artists that have no clue how to play their instruments, or progress their songs, but the music is nonetheless amazing.

Skill is not the only quality of art. A brilliant artist works with their limitations to produce work which is better than the sum of its parts. It will take AI the luck of ten billion universes before it produces anything like that.

It's a tool. The cleverness and artistry comes from the humans, not from the tools they use.

The AI isn't creating the fresh ideas. People are.

So what you are saying is some aspects of movie making will use AI as parts of their jobs. That is very realistic and probably already happening.

Saying that large video models will be in theaters sounds like a completely different and much more ambitious prediction. I interpreted it as: large video models will produce whole movies on their own from a script of prompts, so that there will be a single film maker with only a large video model and some prompts making the movie. Such films will never be in the theater, unless by some grifter, and then they are certain to flop.

You should watch how movies are made sometime. How a script is developed. How changes to it are made. How storyboards are created. How actors are screened for roles. How locations are scouted, booked, and changed. How the gazillion of different departments end up affecting how a movie looks, is produced, made, and in which direction it goes (the wardrobe alone, and its availability and deadlines will have a huge impact on the movie).

What does "EXT. NIGHT" mean in a script? Is it cloudy? Rainy? Well lit? What are camera locations? Is the scene important for the context of the movie? What are characters wearing? What are they looking at?

What do actors actually do? How do they actually behave?

Here are a few examples of script vs. screen.

Here's a well described script of Whiplash. Tell me the one hundred million things happening on screen that are not in the script: https://www.youtube.com/watch?v=kunUvYIJtHM

Or here's the Joker interrogation from The Dark Knight. Same million different things, including actors (or the director) ignoring instructions in the script: https://www.youtube.com/watch?v=rqQdEh0hUsc

Here's A Few Good Men: https://www.youtube.com/watch?v=6hv7U7XhDdI&list=PLxtbRuSKCC...

and so on

---

Edit. Here's Annie Atkins on visual design in movies, including Grand Budapest Hotel: https://www.youtube.com/watch?v=SzGvEYSzHf4. And here's a small article summarizing some of it: https://www.itsnicethat.com/articles/annie-atkins-grand-buda...

Good luck finding any of these details in any of the scripts. See minute 14:16 where she goes through the script

Edit 2: do watch The Kerning chapter at 22:35 to see what it actually takes to create something :)

I can't upvote this enough. This topic in the media space has generated a huge amount of naive speculation that amounts to "how hard could it be to do <thing i know nothing about>?"

> "how hard could it be to do <thing i know nothing about>?"

This is most Hacker News comments summarized, lmao. It's kinda my favorite thing about this place: just open any thread and you immediately see so many people rushing to say "well, just do X or Y" or "actually it's X or Y and not Z like the experts claim". Love it.

In this case, it’s movies and TV, which most people enjoy. So there’s a superficial accessibility to the problem which encourages this attitude.

Of course, HN being the place that it is, the same type of comments are made about quantum entanglement and solar panel efficiency.

I agree with you.

At the same time I am curious in the "that person has too many fingers" sense at what a system trained on tens of thousands of movies plus scripts plus subtitles plus metadata etc. would generate.

I thought about it for a bit and I would want to watch a computer generated Sharknado 7 or Hallmark Christmas movie.

Of course normally other people contribute to a movie after the writer. My comment mentioned three of the important roles. This whole thread is about tech that automates away those roles. That's the whole point.

I think you've misunderstood the objection.

Let's pick something concrete. It's a medieval script; it opens with two knights fighting. OK, so later in the script we learn their characters, historic counterparts, etc. So your LLM can match "nefarious villain" to some kind of embedding, and has doubtless trained on countless images of a knight.

But the result is not naively going to understand the level of reality the script is going for: how closely to stick to historic parallels, how fantastical to go with the depiction, the way we light and shoot the fight and how it coheres with the themes of the scene, the way we're supposed to understand the characters in the context of the scene and the overall story, the references the scene may be making to the genre or even to specific other films, etc.

This is just barely scraping the surface of the beginnings of thinking about mise en scène, blocking, framing, etc. You can't skip these parts, and they're just as much of a challenge as temporal coherence, performance generation, or any of the other hard 'technical issues' that these models have shown no capacity to solve. They're decisions that have to be made for a film to be coherent at all, never mind good or tasteful or creative.

Put another way: you'd need AGI to comprehend a script at the level of depth required to do the job of any HOD on any film. Such a thing is doubtless possible, but it won't be shortcut naively the way generating an image is, because it requires understanding in context, which is precisely what LLMs lack.

> but the result is not naively going to understand the level of reality the script is going for…

We can already get detailed style guidance into picture generation. Declaring you want Picasso cubism, a Warner Brothers cartoon, or hyperrealism works today. So do lighting instructions, color palettes, and so on.

These future models will not be large language models, they will be multi-modal. Large movie models if you like. They will have tons of context about how scenes within movies cohere, just as LLMs do within documents today.

So, we went from "just hand off the movie script to an automated director/DP/editor", and now we're rapidly approaching:

- you have to provide correct detailed instructions on lighting

- you have to provide correct detailed instructions on props

- you have to provide correct detailed instructions on clothing

- you have to provide correct detailed instructions on camera position and movement

- you have to provide correct detailed instructions on blocking

- you have to provide correct detailed instructions on editing

- you have to provide correct detailed instructions on music

- you have to provide correct detailed instructions on sound effects

- you have to provide correct detailed instructions on...

- ...

- repeat that for literally every single scene in the movie (up to 200 in extreme cases)

There's a reason I provided a few links for you to look at. I highly recommend the talk by Annie Atkins. Watch it, then open any movie script, and try to find any of the things she is talking about there (you can find actual movie scripts here: https://imsdb.com)

There are two reasons to be hopeful about it, though. AI/LLMs are very good at filling in all those little details, so humans can cherry-pick the parts they like. I think that's where the real value is for the masses: once these models can generate coherent scenes, people can start using them to explore the creative space and figure out what they like, sort of like SegmentAnything and masking in inpainting, but for the rest of the scene assembly. The other reason is that the models can probably be architected to figure out environmental/character/light/etc. embeddings and use those to build up other coherent scenes, the way we use language embeddings for semantic similarity.
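
The embedding idea in that second point can be sketched with toy vectors (the scenes and numbers below are made up; real scene embeddings would be high-dimensional and come from the model itself):

```python
import math

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# Toy stand-ins for learned scene embeddings.
scene_a = [0.9, 0.1, 0.3]   # knight duel at dusk
scene_b = [0.8, 0.2, 0.4]   # the same duel, next shot
scene_c = [0.1, 0.9, 0.8]   # neon-lit city street

print(round(cosine(scene_a, scene_b), 3))  # high: shots belong together
print(round(cosine(scene_a, scene_c), 3))  # low: a cut to somewhere else
```

A generator could then be penalized (or steered) whenever the next shot's embedding drifts too far from the scene it is supposed to continue, which is essentially semantic similarity applied to visual continuity.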

That's how I've been using the image generators - lots of experimentation and throwing out the stuff that doesn't work. Then once I've got enough good generated images collected out of the tons of garbage, I fine tune a model and create a workflow that more consistently gives me those styles.

Now the models and UX to do this at a cinematic quality are probably 5-10 years away for video (and the studios are probably the only ones with the data to do it), but I'm relatively bullish on AI in cinema. I don't think AI will be doing everything end to end, but it might be a shortcut for people who can write a script and figure out the UX to execute the rest of the creative process by trial and error.

> AI/LLMs are very good at filling in all those little details so humans can cherry pick the parts that they like.

Where did you find AI/ML that is good at filling in actual required and consistent details?

I beg of you to watch Annie Atkins' presentation I linked: https://www.youtube.com/watch?v=SzGvEYSzHf4 and tell me how much intervention would AI/ML need to create all that, and be consistent throughout the movie?

> once these models can generate coherent scenes, people can start using them to explore the creative space and figure out what they like.

Define "coherent scene" and "explore". A scene must be both coherent and consistent, and conform to the overall style of the movie and...

Even such a simple thing as shot/reverse shot requires about a million various details and can be shot in a million different ways. Here's an exploration of just shot/reverse shot: https://www.youtube.com/watch?v=5UE3jz_O_EM

All those are coherent scenes, but the coherence comes from a million decisions: from lighting, camera position, lens choice, wardrobe, what surrounds the characters, what's happening in the background, makeup... There's no coherence without all these choices made beforehand.

Around the 4:00 mark: "Think about how well you know this woman just from her clothes, and workspace". Now watch that scene. And then read its description in the script https://imsdb.com/scripts/No-Country-for-Old-Men.html:

--- start quote ---

    Chigurh enters. Old plywood paneling, gunmetal desk, litter
    of papers. A window air-conditioner works hard.
    A fifty-year-old woman with a cast-iron hairdo sits behind
    the desk.

--- end quote ---

And right after that there's a section on the rhythm of editing. Another piece in the puzzle of coherence in a scene.

> Then once I've got enough good generated images collected out of the tons of garbage, I fine tune a model and create a workflow that more consistently gives me those styles.

So, literally what I wrote here: https://news.ycombinator.com/item?id=42375280 :)

That’s the same thing with digital art; even with the most effortless kind (matte painting), there’s a plethora of decisions to make and techniques to use to get a coherent result. There’s a reason people go to school or train themselves for years to get the needed expertise. If it were just data, someone would have written a guide that others could mindlessly follow.

Not sure why you jumped there. I was thinking more like ‘make it look like Blade Runner if Kurosawa directed it, with a score like Zimmer.’

You’re really failing to let go of the idea that you need to prescribe every little thing. Like Midjourney today, you’ll be able to give general guidance.

Now, I don’t expect we’ll get the best movies this way. But paint by numbers stuff like many movies already are? A Hallmark Channel weepy? I bet we will.

> Not sure why you jumped there.

No jump.

Your original claim: "Submit a whole script the way a writer delivers a movie to a director. The (automated) director/DP/editor could maintain internal visual coherence, while the script drives the story coherence."

Two comments later it's this: "We can already get detailed style guidance into picture generation. Declaring you want Picasso cubist, Warner brothers cartoon, or hyper realistic works today. So does lighting instructions, color palettes, on and on."

I just re-wrote this with respect to movies.

> I was thinking more like ‘make it look like Blade Runner if Kurosawa directed it, with a score like Zimmer.’

Because, as we all know, every single movie by Kurosawa is the same, as is every single score by Hans Zimmer, so it's ridiculously easy to recreate any movie in that style, with that music.

> You’re really failing to let go of the idea that you need to prescribe every little thing. Like Midjourney today, you’ll be able to give general guidance.

Yes, and Midjourney today really sucks at:

- being consistent

- creating proper consistent details

A general prompt will give you a general result that is usually very far from what you actually have in mind.

And yes, you will have to prescribe a lot of small things if you want your movie to be consistent. And for your movie to make any sense.

Again, tell me how exactly your amazing magical AI director will know which wardrobe to choose, which camera angles to set up, which typography to use, and which sound effects to make, just from the script you hand in?

You can start with a very simple scene I referenced in my original reply: two people talking at a table in Whiplash.

> But paint by numbers stuff like many movies already are? A Hallmark Channel weepy? I bet we will.

Even those movies have more detail and more care in them than you can get out of AIs (now, or in the foreseeable future).

> Again, tell me how exactly your amazing magical AI director will know which wardrobe to chose, which camera angles to setup, which typography to use, which sound effects to make just from the script you hand in?

I think you're still assuming I always want to choose those things. That's why we're talking past each other. A good movie making model would choose for me unless I give explicit directions. Today we don't see long-range coherence in the results of movie (or game engine) models, but the range is increasing, and I'm willing to bet we will see movie-length coherence in the next decade or so.

By the way, I also bet that if I pasted exactly the No Country for Old Men script scene description from up this thread into Midjourney today it would produce at least some compelling images with decent choices of wardrobe, lighting, set dressing, camera angle, exposure, etc etc. That's what these models do, because they're extrapolating and interpolating between the billion images they've seen that contained these human choices.

AFAIK Midjourney produces single images, so the relevant scope of consistency is inside the single image only. Not between images. A movie model needs coherence across ~160,000 images, which is beyond the state of the art today but I don't see why it's impossible or unreasonable in the long run.
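
The ~160,000 figure checks out, assuming a typical ~110-minute feature at the standard 24 fps:

```python
minutes = 110   # a typical feature runtime (assumption)
fps = 24        # standard cinema frame rate
frames = minutes * 60 * fps
print(frames)   # 158400
```

So a movie model has to hold characters, wardrobe, sets, and lighting consistent across roughly 160,000 images, versus the one image Midjourney has to keep consistent with itself.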

> A general prompt will give you a general result that is usually very far from what you actually have in mind.

Which is only a problem if I have something in mind. Alternatively I can give no guidance, or loose guidance, make half a dozen variations, pick the one I like best. Maybe iterate a couple of times into that variation tree. Just like the image generators do.

This is such an incredibly confident comment. I'm in awe.

Cool, since you know: at what point in the process do you swap out all the white ppl? Thanks in advance!

Shane Carruth (Primer) released interesting scripts for "A Topiary" and "The Modern Ocean" which now have no hope of being filmed. I hope AI can bring them to life someday. If we get tools like ControlNet for video, maybe Carruth could even "direct" them himself.
This exists already, actually: Kling AI 1.5. I saw the demo on Twitter two days ago; it shows a photo-to-video transformation on an image of three women standing on a beach, and the video simulates the camera rotating, with the women moving naturally. It just involves a segment-anything-style selection of the women and drawing a basic movement vector.

https://x.com/minchoi/status/1862975323433795726

ControlNet for video is just ControlNet run frame by frame, which amounts to AI rotoscoping.
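
A sketch of why the naive per-frame approach flickers; the preprocessor and diffusion pass below are placeholder stubs, not real model calls:

```python
def control_map(frame):
    # Stand-in for a real ControlNet preprocessor (e.g. Canny edges
    # or a pose skeleton extracted from the source frame).
    return [p // 2 for p in frame]

def diffuse(condition, prompt):
    # Stand-in for one image-diffusion pass conditioned on the map.
    return [p * 2 for p in condition]

def video_controlnet(frames, prompt):
    # Naive per-frame application: every frame is generated
    # independently, so nothing enforces temporal consistency,
    # which is why the result flickers like rotoscoping.
    return [diffuse(control_map(f), prompt) for f in frames]

clip = [[10, 20], [12, 22], [14, 24]]   # three tiny fake "frames"
out = video_controlnet(clip, "oil-painting style")
print(len(out))   # 3, one generated frame per source frame
```

Real video models instead condition each frame on its neighbors (or denoise the whole clip jointly), which is what per-frame ControlNet lacks.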
Brilliant take from Ben Affleck on AI in movies:

"movies will be one of the last things to be replaced by ai"

https://www.youtube.com/watch?v=ypURoMU3P3U

including this quote: "being a craftsman is knowing how to work, art is knowing when to stop"

It is absolutely true that LLMs do not know when to stop.

An adequate prompter (human at the prompt) knows when to stop.
  • jerf · 2 weeks ago
That's what I describe at the end, albeit quickly and in lingo: the internal coherence is maintained in internal embeddings that are never related to English at all. A top-level AI could orchestrate component AIs through embedded vectors, but you'll never do it with a human trying to type out descriptions.
  • 2 weeks ago
> Under the hood, with the way the text is turned into vector embeddings, it's fairly questionable whether you'd agree that it can even represent such a thing.

The text encoder may not be able to know complex relationships, but the generative image/video models that are conditioned on said text embeddings absolutely can.

Flux, for example, uses the very old T5 model for text encoding, but image generations from it can (loosely) adhere to all rules and nuances in a multi-paragraph prompt: https://x.com/minimaxir/status/1820512770351411268

> but image generations from it can (loosely) adhere to all rules and nuances in a multi-paragraph prompt

Flux certainly does not do so consistently across an arbitrary collection of multi-paragraph prompts, as anyone who's run more than a few long prompts past it would recognize. The tweet is wrong in the other direction as well: longer language-model-preprocessed prompts for models that use CLIP (like various SD1.5 and SDXL derivatives) are, in fact, a common and useful technique. (You'd think the fact that the generated prompt here is significantly longer than the 256-token window of T5 would be a clue that the 77-token limit of CLIP might not be as big a constraint as the tweet was selling it as, too.)
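
The token windows mentioned here (77 for CLIP, 256 for the T5 configuration Flux uses) can be sanity-checked against even with a crude word count; real BPE/SentencePiece tokenizers emit somewhat more tokens than whitespace words, so this approximation undercounts:

```python
CLIP_TOKEN_LIMIT = 77   # window of CLIP text encoders
T5_TOKEN_LIMIT = 256    # window Flux uses for its T5 encoder

def rough_token_count(prompt: str) -> int:
    # Crude whitespace approximation of a tokenizer.
    return len(prompt.split())

def fits(prompt: str, limit: int) -> bool:
    return rough_token_count(prompt) <= limit

short = "a zoom in on a happy dog catching a frisbee"
long_prompt = " ".join(["detail"] * 300)  # multi-paragraph scale

print(fits(short, CLIP_TOKEN_LIMIT))       # True
print(fits(long_prompt, CLIP_TOKEN_LIMIT)) # False: CLIP would truncate
print(fits(long_prompt, T5_TOKEN_LIMIT))   # False: past T5's window too
```

Anything past the window is silently dropped by the encoder, which is one reason "walls of text" prompts underdeliver regardless of how good the model is.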

  • lmm · 2 weeks ago
> You might as well skip the English middleman and go straight to an embedding not constrained by a human language mapping.

How would you ever tweak or debug it in that case? It doesn't strictly have to be English, but some kind of human-readable representation of the intermediate stages will be vital.

Can't you just give it a photo of a dog, and then say "use this dog in this or that scene"?

Yes, the idea works and was explored with DreamBooth/textual inversion for image diffusion models.

https://dreambooth.github.io/ https://textual-inversion.github.io/

Both of those are of course out of date and require significant training instead of just feeding it a single image.

InstantID (https://replicate.com/zsxkib/instant-id) fixes that issue.

Dreambooth style training is in no way out of date.

If you just want a face, InstantID/PuLID work, but the result is not going to be very varied. Doing actual training means you can get any perspective, lighting, style, expression, etc., and have the whole body be accurate.

How would that even work? A dog has physical features (legs, nose, eyes, ears, etc.) that it uses to interact with the world around it (ground, tree, grass, sounds, etc.). And each one of those things has physical structures that compose senses (nervous system, optic nerves, etc.). There are layers upon layers of intricate complexity that took eons to develop, and a single photo cannot encapsulate that level of complexity and density of information. Even a 3D scan can't capture that level of information. There is an implicit understanding of the physical world that helps us make sense of images. For example, a dog with all four paws standing on grass is within the bounds of possibility; a dog with six paws, two of which are on its head, is outside the bounds of possibility. An image generator doesn't understand that obvious delineation and just approximates likelihood.
A single photo doesn't have to capture all that complexity. It's carried by all those countless dog photos and videos in the model's training set.

Actually, it does have to capture all of that complexity, because a photo is a photon-based analysis of reality. You cannot take a photo without doing that.

This is correct, and even image generation models aren't really trained for comprehension of image composition yet.

Even the models based off danbooru and E621 still aren't the best at that. And us furries like to tag art in detail.

The best we can really do at the moment is regional prompting, perhaps they need something similar for video.

For those not in this space, Sora is essentially dead on arrival.

Sora performs worse than closed source Kling and Hailuo, but more importantly, it's already trumped by open source too.

Tencent is releasing a fully open source Hunyuan model [1] that is better than all of the SOTA closed source models. Lightricks has their open source LTX model and Genmo is pushing Mochi as open source. Black Forest Labs is working on video too.

Sora will fall into the same pit that Dall-E did. SaaS doesn't work for artists, and open source always trumps closed source models.

Artists want to fine tune their models, add them to ComfyUI workflows, and use ControlNets to precision control the outputs.

Images are now almost 100% Flux and Stable Diffusion, and video will soon be 100% Hunyuan and LTX.

Sora doesn't have much market apart from name recognition at this point. It's just another inflexible closed source model like Runway or Pika. Open source has caught up with state of the art and is pushing past it.

[1] https://github.com/Tencent/HunyuanVideo

Their online version is all in Chinese (or at least some Chinese-looking script I don't understand) ... and they recommend an 80GB GPU to run the thing, which costs ~€15-18k. Yikes, guess I won't be doing this at home anytime soon
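Some back-of-envelope arithmetic on why the recommendation is 80 GB. The ~13B parameter count below is an assumption (it's the size commonly reported for HunyuanVideo); the point is how quickly weights plus activations outgrow consumer cards:

```python
# Rough VRAM estimate for running a large video model locally.
params = 13e9                # assumed parameter count (~13B)
bytes_per_param = 2          # bf16/fp16 weights
weights_gb = params * bytes_per_param / 1e9
print(f"weights alone: ~{weights_gb:.0f} GB")

# Video latents, attention buffers, the text encoder, and the VAE typically
# add a multiple of the weight footprint, which is how an 80 GB card fills up.
# A 24 GB consumer card can't even hold the weights without quantization.
```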
Something like a white paper with a mood board, color scheme, and concept art as the input might work. This could be sent into an LLM "expander" that increases the word count and specificity. Then multiple review passes to nudge things in the right direction.
I expect this kind of thing is actually how it's going to work longer term, where AI is a copilot to a human artist. The human artist does storyboarding, sketching in backdrops and character poses in keyframes, and then the AI steps in and "paints" the details over top of it, perhaps based on some pre-training about what the characters and settings are so that there's consistency throughout a given work.

The real trick is that the AI needs to be able to participate in iteration cycles, where the human can say "okay this is all mostly good, but I've circled some areas that don't look quite right and described what needs to be different about them." As far as I've played with it, current AIs aren't very good at revisiting their own work— you're basically just tweaking the original inputs and otherwise starting over from scratch each time.

We will shortly have much better tweaking tools, which work not only on images and video but also on concepts like what aspects a character should exhibit. See for example the presentation from Shapeshift Labs.

https://www.shapeshift.ink/

  • 3form · 2 weeks ago
And I think this realistically is going to be the shape of the tools to come in the foreseeable future.
You should see what people are building with Open Source video models like HunYuan [1] and ComfyUI + Control Nets. It blows Sora out of the water.

Check out the Banodoco Discord community [2]. These are the people pioneering steerable AI video, and it's all being built on top of open source.

[1] https://github.com/Tencent/HunyuanVideo

[2] https://banodoco.ai/

The whole point of AI generation is not to produce exactly what you have in mind, but something matching what you describe. Same with text, code, images, video...
Sounds like we achieved 50% of AI then. The artificial part is there; now we need the intelligence part.
  • baq · 2 weeks ago
Sora should be evaluated on xkcd strips as inputs.
The adage "a picture is worth a thousand words" has the nice corollary "A thousand words isn't enough to be precise about an image".

Now expand that to movies and games and you can get why this whole generative-AI bubble is going to pop.
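Illustrative arithmetic behind that corollary; the per-word entropy and compression figures below are loose, deliberately generous assumptions:

```python
# A thousand words vs. a short clip, in rough information-theoretic terms.
prompt_bits = 1000 * 10        # ~10 bits of entropy per English word (generous)
pixels = 1920 * 1080 * 24 * 5  # 5 seconds of 1080p at 24 fps
clip_bits = pixels * 1         # even at just 1 bit per pixel after compression

print(f"prompt: ~{prompt_bits:,} bits")
print(f"clip:   ~{clip_bits:,} bits ({clip_bits // prompt_bits:,}x the prompt)")
```

However generous you are to the prompt, the model has to invent orders of magnitude more information than the text specifies.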

> Now expand that to movies and games and you can get why this whole generative-AI bubble is going to pop.

What will save it is that, no matter how picky you are as a creator, your audience will never know what exactly it was that you dreamed up, so any half-decent approximation will work.

In other words, a corollary to your corollary is, "Fortunately, you don't need them to be, because no one cares about low-order bits".

Or, as we say in Poland, "What the eye doesn't see, the heart doesn't mourn."

> What will save it is that, no matter how picky you are as a creator, your audience will never know what exactly it was that you dreamed up, so any half-decent approximation will work.

Part of the problem is the "half decent approximations" tend towards a clichéd average, the audience won't know that the cool cyberpunk cityscape you generated isn't exactly what you had in mind, but they will know that it looks like every other AI generated cyberpunk cityscape and mentally file your creation in the slop folder.

I think the pursuit of fidelity has made the models less creative over time, they make fewer glaring mistakes like giving people six fingers but their output is ever more homogenized and interchangeable.

Empirically, we've passed the point where that's true, for someone not being lazy about it.

https://www.astralcodexten.com/p/how-did-you-do-on-the-ai-ar...

In other words, someone willing to tweak the prompt and press the button enough times to say "yeah, that one, that's really good" is going to have a result which cannot in fact be reliably binned as AI-generated.

  • lmm · 2 weeks ago
I mean, no? None of the AI-generated images managed to be indistinguishable. Some people were much better than others at spotting the differences. He even quotes, at length, an artist giving a detailed breakdown of what's wrong with one of the images he thought was good.
Did you read the article? Respondents performed barely better than chance. Sure, no one was actually 100% wrong[0]. Just almost always wrong, with a noticeable bias towards liking AI art more.

The detailed breakdown you mention? Maybe it's accurate to that artist's thought process, maybe it's more of a rationalization; either way, it's not a general rule they, or anyone, could apply to any of the other AI images. Most of those in the article don't exhibit those "telltale signs", and the one that does - the Victorian Megaship - was actually made by human artist with no AI in the mix.

EDIT:

Another image that stands out to me is Riverside Cafe. I, like apparently a lot of other people going by the article's comments, assumed it was human-made, because we vaguely remembered Van Gogh painted something like it. He did, it's called Café Terrace at Night - and yet, despite immediately evoking the association, Riverside Cafe was made by AI, and is actually nothing like Café Terrace at Night at any level.

(I find it fascinating how this work looks like a copy of Van Gogh at first glance, for no obvious reason, but nothing alike once you pause to look closer. It's like... they have similar low-frequency spectra or something?)

EDIT2:

Played around with the two images in https://ejectamenta.com/imaging-experiments/fourifier/. There are some similarities in the spectra, I can't put my finger on them exactly. But it's probably not the whole answer. I'll try to do some more detailed experimentation later.

--

[0] - Nor should you expect it - it would mean either perfect calibration, or the equivalent of flipping a coin and getting heads 30 times in a row; not impossible, but not something you should expect to see unless you're interviewing on the order of a billion people.

  • lmm · 2 weeks ago
Yes, I read the article. Did you?

> The average participant scored 60%, but people who hated AI art scored 64%, professional artists scored 66%, and people who were both professional artists and hated AI art scored 68%.

> The highest score was 98% (49/50), which 5 out of 11,000 people achieved. Even with 11,000 people, getting scores this high by luck alone is near-impossible.

This accurately boils down to "cannot reliably be binned as AI-generated". Your objection amounts to a vanishingly small number of people, who knew they were being tested, being able to do a pretty good job at it.

If 0.045% of people who are specifically judging art as AI or not AI, in a test which presumably attracts people who would like to be able to do that thing, can do a 98% accurate job, and the average is around 60%: that isn't reliable.

If that doesn't work for you, I encourage you to take the test. Obviously since you've read the article there are some spoilers, but there's still plenty of chances to get it right or wrong. I think you'll discover that you, too, cannot do this reliably. Let us know what happens.
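For anyone who wants to sanity-check the "near-impossible" line, a quick binomial tail computation (assuming the 50 questions are independent and a guesser is right with probability p):

```python
from math import comb

def p_at_least(k, n, p):
    """Probability of at least k successes in n independent trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, k, respondents = 50, 49, 11_000
for p in (0.5, 0.6):
    tail = p_at_least(k, n, p)
    print(f"p={p}: P(>=49/50) = {tail:.2e}, "
          f"expected such scorers among {respondents:,}: {tail * respondents:.2e}")
```

Even granting everyone the 60% average skill, you'd expect essentially zero 49/50 scores among 11,000 people by luck, so the handful who managed it were genuinely discriminating.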

  • lmm · 2 weeks ago
I can't do it reliably and I don't want to - I learnt to spot certain popular video compression artifacts in my youth, and that has not enhanced my life. But any distinction that random people taking a casual internet survey get right 60% of the time is absolutely one that you can make reliably if you put in the effort. Look at something like chicken sexing.
a somewhat counterintuitive argument is this: AI models will make the overall creative landscape more diverse and interesting, ie, less "average"!

Imagine the space of ideas as a circle, with stuff in the middle being easier to reach (the "cliched average"). Previously, traversing the circle was incredibly hard - we had to use tools like DeviantArt, Instagram, etc. to agglomerate the diverse tastes of artists, hoping to find or create the style we're looking for. Recreating a particular art style meant hiring the artist. As a result, on average, what you see is the result of huge amounts of human curation, effort, and branding teams.

Now reduce the effort 1000x, and all of a sudden it's incredibly easy to reach the edge of the circle (or at least get closer to it). Sure, we might still miss some things at the very outer edge, but it's equivalent to building roads: motorists appear, and people with no time to sit down and spend 10,000 hours learning to master a particular style can simply remix art and create things wildly beyond their manual capabilities. As a result, the amount of content in the infosphere skyrockets, the tastemaking velocity accelerates, and you end up with a more interesting infosphere than you're used to.

To extend the analogy, imagine the circle as a probability distribution; for simplicity, imagine it's a bivariate normal joint distribution (aka. Gaussian in 3D) + some noise, and you're above it and looking down.

When you're commissioning an artist to make you some art, you're basically sampling from the entire distribution. Stuff in the middle is, as you say, easiest to reach, so that's what you'll most likely get. Generative models let more people do art, meaning there's more sampling happening, so the stuff further from the centre will be visited more often, too.

However, AI tools also make another thing easier: moving and narrowing the sampling area. Much like with a very good human artist, you can find some work that's "out there", and ask for variations of it. However, there are only so many good artists to go around. AI making this process much easier and more accessible means more exploration of the circle's edges will happen. Not just "more like this weird thing", but also combinations of 2, 3, 4, N distinct weird things. So in a way, I feel that AI tools will surface creative art disproportionally more than it'll boost the common case.
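The sampling analogy can be made concrete with a toy simulation: baseline commissioning as draws from the centre of "idea space", versus shifted, narrowed sampling around a weird reference found near the edge (all numbers are arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Baseline: most samples land near the cliched centre of the distribution.
commissioned = rng.normal(loc=0.0, scale=1.0, size=(10_000, 2))

# AI-assisted exploration: move and narrow the sampling area around a
# reference point near the edge ("more like this weird thing").
reference = np.array([2.5, -2.5])
explored = rng.normal(loc=reference, scale=0.3, size=(10_000, 2))

def frac_far(x, radius=3.0):
    """Fraction of samples further than `radius` from the centre."""
    return float(np.mean(np.linalg.norm(x, axis=1) > radius))

print(f"baseline sampling: {frac_far(commissioned):.3f}")
print(f"shifted sampling:  {frac_far(explored):.3f}")
```

Narrowing and shifting the sampling area makes "edge" results the norm rather than a rare draw, which is the claim about AI surfacing non-average work.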

Well, except for the fly in the ointment that's the advertising industry (aka. the cancer on modern society). Unfortunately, by far most of the creative output of humanity today is done for advertising purposes, and that goal favors the common, as it maximizes the audience (and is least off-putting). A deluge of AI slop is unavoidable, because slop is how the digital world makes money, and generative AI models make it cheaper than the generative protein models that did it so far. Don't blame AI research for that, blame advertising.

A small technical point:

Tastes are almost never normally distributed along a spectrum, but multi-modal. So the more dimensions you explore in, the more you end up with “islands of taste” on the surface of a hyper sphere and nothing like the normal distribution at all. This phenomenon is deeply tied to why “design by committee” (eg, in movies) always makes financial estimates happy but flops with audiences — there is almost no customer for average anything.

I agree with your conclusion.

"Design by committee" is also how most hit movies are made. Hit songs too
Do you have an example?

My experience with customer surveys indicates the opposite — that customers prefer you have an opinion.

An example of a hit movie or song that was created by committee?

Inside Out 2 had the largest box office of any movie in 2024. Check out the "research and writing" section in its Wikipedia article https://en.wikipedia.org/wiki/Inside_Out_2#Research_and_writ... ... psychological consultants, a feedback loop with a group of teenagers, test screenings.

Or how about "Die with a smile" - currently number 1 in the global top 50 on Spotify. 5 songwriters

Or "APT." - currently number 2 in the global top 50 on Spotify. 11 songwriters

You don't have to look very hard

Inside Out 2 has a single writer, who also worked on the first.

Consulting with SMEs, testing with audiences, etc isn’t “design by committee”.

Similarly, “Die With a Smile” seems to have been the work of two people with developed styles with support — again, not a committee:

> The collaboration was a result of Mars inviting Gaga to his studio where he had been working on new music. He presented the track in progress to her and the duo finished writing and recording the song the same day.

"APT." seems to have started with a single person goofing around, then was pitched as a collaboration, and the expanded team entered at that point.

  • etiam · 2 weeks ago
I like the picture, but I'd be more impressed with the exploration argument if we were collectively actually doing a good job giving recognition to original and substantial works that already exist. It'd be of greater service in that regard to create a high-quality artificial stand-in for that limited-quantity "attention" and "engagement" all the bloodsuckers seem so keen on harvesting.

(And I do blame the advertisers, but frankly anyone handing them new amplifiers, with entirely predictable consequences, is also not blameless.)

  • js8 · 2 weeks ago
I read this argument/analogy and the "AI slop will win" idea reminds me of the idea that "fake news will win".

That is based on the perception that it is easier than ever to create fake content, but it fails to account for the fact that creating real content (for example, simply taking a video) is even easier. So while there is more fake content, there is also a lot more real content, and so manipulation of reality (for example, denying a genocide) is much harder today than ever.

Anyway, "the AI slop will win" is based on a similar misconception, that total creative output will not increase. But like with fake news, it probably will not be the case, and so the actual amount of good art will increase, too.

I think we are OK as long as normal humans prefer to create real news rather than fake news, and create innovative art rather than cliched art.

> I think we are OK as long as normal humans prefer to create real news rather than fake news, and create innovative art rather than cliched art.

So we're not OK.

I think I need to state my assumptions/beliefs here more explicitly.

First of all, "AI slop" is just the newest iteration on human-produced slop, which we're already drowning in. Not because people prefer to create slop, but because they're paid to do it, because most content is created by marketers and advertisers to sell you shit, and they don't want it to be better than strictly necessary for purpose.

It's the same with fake news, really. Fake news isn't new. Almost all news is fake news; what we call "fake news" is a particular flavor of bullshit that got popular as it got easier for random humans to publish stories competing with established media operations.

In both cases, AI is exacerbating the problem, but it did not create it - we were already drowning in slop.

Which leads me to related point:

> Anyway, "the AI slop will win" is based on a similar misconception, that total creative output will not increase.

It will. But don't forget Sturgeon's law - "ninety percent of everything is crap"[0]. Again, for the past couple decades, we've been drowning in "creative output". It's not a new problem, it's just increasingly noticeable in the past years, because the Web makes it very easy for everyone to create more "creative output" (most of which is, again, advertising), and it finally started overwhelming our ability to filter out the crap and curate the gems.

Adding AI to the mix means more output, which per Sturgeon's law, means disproportionately more crap. That's not AI's fault, that's ours; it's still the same problem we had before.

--

[0] - https://en.wikipedia.org/wiki/Sturgeon%27s_law

It's just like when Bootstrap came out. Terrible-looking websites stopped appearing, but so did beautiful websites.
And as AI oversaturates the cliched average, creators will have to get further and further away from the average to differentiate themselves. If you pour a lot of work into your creation you want to make it clear that it isn't some cliched AI drivel.
You will basically have to provide a video showcasing your workflow.
I promise you that the artists can outlive the VC money.
> I think the pursuit of fidelity has made the models less creative over time, they make fewer glaring mistakes like giving people six fingers but their output is ever more homogenized and interchangeable.

That may be true of any one model (though I don’t think it really is, either, I think newer image gen models are individually capable of a much wider array of styles than earlier models), but it is pretty clearly not true of the whole range of available models, even if you look at a single model “family” like “SDXL derivatives”.

> I think the pursuit of fidelity has made the models less creative over time (...) their output is ever more homogenized and interchangeable.

Ironically, we're long past that point with human creators, at least when it comes to movies and games.

Take sci-fi movies, compare modern ones to the ones from the tail end of the 20th century. Year by year, VFX gets more and more detailed (and expensive) - more and better lights, finer details on every material, more stuff moving and emitting light, etc. But all that effort arguably killed immersion and believability, by making scenes incomprehensible. There's way too much visual noise in action scenes in particular - bullets and lightning bolts zip around, and all that detail just blurs together. Contrast the 20th century productions - textures weren't as refined, but you could at least tell who's shooting who and when.

Or take video games, where all that graphics works makes everything look the same. Especially games that go for realistic style, they're all homogenous these days, and it's all cheap plastic.

(Seriously, what the fuck went wrong here? All that talk, and research, and work into "physically based rendering", yet in the end, all PBR materials end up looking like painted plastic. Raytracing seems to help a bit when it comes to liquids, but it still can't seem to make metals look like metals and not Fisher-Price toys repainted gray.)

So I guess in this way, more precision just makes the audience give up entirely.

> they will know that it looks like every other AI generated cyberpunk cityscape and mentally file your creation in the slop folder.

The answer here is the same as with human-produced slop: don't. People are good at spotting patterns, so keep adding those low-order bits until it's no longer obvious you're doing the same thing everyone else is.

EDIT: Also, obligatory reminder that generative models don't give you the average of the training data with some noise mixed in; they sample from a learned distribution. The law of large numbers applies, but that just means that to get more creative output, you need to bias the sampling.

Video games (the much larger of the two industries, by revenue) seem to be closer to understanding this. AAA games dominate advertising and news cycles, but on any best-seller list AAA games are on par with indie and B games (I think they call them AA now?). For every successful $60M PBR-rendered Unreal 5 title there is an equally successful game with low-fidelity graphics but exceptional art direction, story or gameplay.

Western movie studios may discover the same thing soon, with the number of high-budget productions tanking lately.

I agree. The one shining hope I have is the incredible art and animation style of Fortiche[0]'s Arcane[1] series. Watch that, and then watch any recent (and identikit) Pixar movie, and they are just streets ahead. It's just brilliant.

[0] https://en.wikipedia.org/wiki/Fortiche

[1] https://en.wikipedia.org/wiki/Arcane_(TV_series)

I was just going to say this. If you have an artistic vision that you simply must create to the minutest detail, then like any artist, you're in for a lot of manual work.

If you are not beholden to a precise vision or maybe just want to create something that sells, these tools will likely be significant productivity multipliers.

  • whstl · 2 weeks ago
Exactly.

So far ChatGPT is not for writing books, but is great for SEO-spam blogposts. It is already killing the content marketing industry.

So far Dall-E is not for making master paintings, but it's great for stock images. It might kill most of the clipart and stock image industry.

So far Udio and other song generators are not able to make symphonies, but they're great for quiet background music. They might kill most of the generic royalty-free-music industry.

Half-decent approximations work a lot better for generating the equivalent of a stock illustration for a PowerPoint slide.

Actual long form art like a movie works because it includes many well informed choices that work together as a whole.

There seems to be a large gap between generating a few seconds of video vaguely like one's notion, and trying to create 90 minutes that are related and meaningful.

Which doesn't mean that you can't build more robust tools from this starting place. But this is a large, hard amount of work, which certainly calls into question optimistic projections from people who don't even seem to notice that any work is needed at all.

That's just sad, and it's why people have a derogatory stance towards generative AI: the "half-decent" approximation removes all personality from the output, leading to a bunch of slop on the internet.
It does indeed, but then many of those people don't notice they're already consuming half-decent, personality-less slop, because that's what human artists make too, when churning out commercial art for peanuts and on tight deadlines.

It's less obvious because people project personality onto the content they see, because they implicitly assume the artist cared, and had some vision in mind. Cheap shit doesn't look like cheap shit in isolation. Except when you know it's AI-generated, because this removes the artist from the equation, and with it, your assumptions that there's any personality involved.

I'm not so sure, one of the primary complaints about IP farming slop that major studios have produced recently is a lack of firm creative vision, and clear evidence of design by committee over artist direction.

People can generally see the lack of artistic intent when consuming entertainment.

That's true. Then again, complaints about "lack of firm creative vision, and clear evidence of design by committee over artist direction" is something I've seen levied against Disney for several years now; importantly, they started before generative AI found its way into major productions.

So, while GenAI tools make it easier to create superficially decent work that lacks creative intent, the studios managed to do it just fine with human intelligence only, suggesting the problem isn't AI, but the studios and their modern management policies.

It’s like how there are two types of movie directors (or creative directors in general), the dictatorial “100 takes until I get it exactly how I envision it” type, and the “I hired you to act, so you bring the character to life for me and what will be will be” type

Right now AI is more the latter, but many people want it to be the former

AI is neither.

A director letting actors "just be" knows exactly what he/she wants, and chooses actors accordingly. So do the directors that demand the most minute detail.

Clint Eastwood tries to do at most one take of a scene. David Fincher is infamous for his dozens of takes.

AI is neither Fincher nor Eastwood.

Do artists really have a fully formed vision in their head? I suspect the creative process is much more iterative than one-directional.
No one can have a fully formed vision. But intent, yes. Then you use techniques to materialize it. Words are a poor substitute for that intent, which is why there are so many sketches in a visual project.
And why physical execution frequently significantly departs from sketches and concept art. The amount of intent that doesn't get translated is pretty staggering in both physical and digital pipelines in many projects.
Your eye sees just about every frame of a film…

People may not think they care, but obviously they do. That's why Marvel movies do better than DC ones.

People absolutely care about details in their media.

Fair point, particularly given the example. My conclusion wrt. Marvel vs. DC is that DC productions care much less about details, in exactly the way I find off-putting.

Not all details matter, some do. And, it's better to not show the details at all, than to be inconsistent in them.

Like, idk., don't identify a bomb as a specific type of existing air-fuel ordnance and then act about it as if it was a goddamn tactical nuke. Something along these lines was what made me stop watching Arrow series.

> Not all details matter, some do

This is a key observation, unfortunately generally solving for what details matter is extremely difficult.

I don’t think video generation models help with that problem, since you have even less control of details than you do with film.

At least before post.

The visuals are the absolute bottom of why DC movies have performed worse over the years.

The movies have just had much worse audience and critical reception.

“A frame is worth a billion rays”

The last production I worked on averaged 16 hours per frame for the final rendering. The amount of information encoded in lighting, models, texture maps, etc. is insane.

What were you working on? It took a month to render 2 seconds of video?
VFX heavy feature for a Disney subsidiary. Each frame is rendered independently of each other - it’s not like video encoding where each frame depends on the previous one, they all have their own scene assembly that can be sent to a server to parallelize rendering. With enough compute, the entire film can be rendered in a few days. (It’s a little more complicated than that but works to a first order approximation)

I don’t remember how long the final rendering took but it was nearly two months and the final compute budget was 7 or 8 figures. I think we had close to 100k cores running at peak from three different render farms during crunch time, but don’t take my word for it I wasn’t producing the picture.
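A back-of-envelope check on those numbers, assuming a 100-minute feature at 24 fps and the 16 hours per frame mentioned upthread:

```python
# Idealized single-pass render of a feature film on a large farm.
frames = 100 * 60 * 24       # 100 minutes at 24 fps
cpu_hours = frames * 16      # 16 CPU-hours per frame
cores = 100_000

print(f"{frames:,} frames, {cpu_hours / 1e6:.2f}M CPU-hours")
print(f"ideal wall-clock on {cores:,} cores: ~{cpu_hours / cores:.0f} hours")
```

One perfectly parallel pass is only about a day of wall-clock; the months come from shots being re-rendered many times as they iterate, and the farm being shared across shows.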

Are they still using CPUs and not GPUs for rendering?

Haven't the rendering algos been ported to CUDA yet?

GPU renderers exist but they have pretty hard scaling limits, so the highest end productions still use CPU renderers almost exclusively.

The 3D you see in things like commercials is usually done on GPUs though because at their smaller scale it's much faster.

There's plenty of GPU renderers but they face the same challenge as large language models: GPU memory is much more expensive and limited that CPU memory.

A friend recently told me about a complex scene (I think it was a Marvel or Star Wars flick) where they had so much going on in the scene with smoke, fire, and other special effects that they had to wait for a specialized server with 2TB of RAM to be assembled. They only had one such machine so by the time the rest of the movie was done rendering, that one scene still had a month to go.

I'm not sure how well suited GPUs are to the workload. They're also rather memory constrained. The Moana dataset is from 2016 so it's not exactly cutting edge but good luck loading it into vram.

https://www.disneyanimation.com/data-sets/?drawer=/resources...

https://datasets.disneyanimation.com/moanaislandscene/island...

> When everything is fully instantiated the scene contains more than 15 billion primitives.
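Some rough arithmetic on why a scene like that can't live in GPU memory; 32 bytes per primitive is a deliberately optimistic guess (real data carries normals, UVs, shading attributes, etc.):

```python
# Geometry footprint of a fully instantiated production scene vs. GPU memory.
primitives = 15e9            # "more than 15 billion primitives"
bytes_per_primitive = 32     # optimistic assumption
scene_gb = primitives * bytes_per_primitive / 1e9
print(f"~{scene_gb:.0f} GB of geometry vs. 80 GB on a top-end GPU")
```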

Most VFX productions take over 2 CPU-hours a frame for the final render, and have for a very long time. It takes less than a month because this gets parallelized on large render farms.
I would guess there is more than one computer :)

Pixar's stuff famously takes days per frame.

> Pixar's stuff famously takes days per frame.

Do you have a citation for this? My guess would be much closer to a couple of hours per frame.

The point is not to be precise. It's to be "good enough".

Trust me, even if you work with human artists, you'll keep saying "it's not quite what I initially envisioned, but we don't have budget/time for another revision, so it's good enough for now" all the time.

Corollary: I couldn't create an original visual piece of art to save my life, so prompting is infinitely better than what I could do myself (or am willing to invest time in building skills). The gen-AI bubble isn't going to burst. Pareto always wins.
If you can build a system that can generate engaging games and movies, from an economic (bubble popping or not popping) point of view it's largely irrelevant whether they conform to fine-grained specifications by a human or not.
In other words:

If you find a silver bullet then everything else is largely irrelevant.

Idk if you noticed but that “if” is carrying an insane amount of weight.

Text generation is the most mature form of genAI and even that isn't even remotely close to producing infinite engaging stories. Adding the visual aspect to make that story into a movie or the interactive element to turn it into a game is only uphill from there.
  • 2 weeks ago
Maybe your AI bubble! If you define AI to be something like just another programming language, yes, you will be sadly disappointed. You see it as an employee with its own intuitions and ways of doing things that you're trying to micromanage.

I have a bad feeling that you'd be a horrible manager if you ever were one.

(2020) https://arxiv.org/abs/2010.11929 : an image is worth 16x16 words transformers for image recognition at scale

(2021) https://arxiv.org/abs/2103.13915 : An Image is Worth 16x16 Words, What is a Video Worth?

(2024) https://arxiv.org/abs/2406.07550 : An Image is Worth 32 Tokens for Reconstruction and Generation

Those are indeed 3 papers.
Yes, in a nutshell they explain that you can express a picture or a video with relatively little discrete information.

The first paper is the most famous and prompted a lot of research into using text-generation tools in the image-generation domain: 256 "words" for an image. The second paper is 24 reference images per minute of video. The third paper is a refinement of the first, saying you only need 32 "tokens". I'll let you multiply the numbers.

In kind of the same way as a game of Guess Who, where you can identify any human on earth with ~33 bits of information.

The corollary being that, contrary to what the parent is saying, there is no theoretical obstacle to obtaining a video from a textual description.
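To make the token counts concrete, here is a minimal ViT-style "patchify" in NumPy: a 224x224 image becomes 196 patch "words" (a 256x256 image would give the 256 mentioned above):

```python
import numpy as np

def patchify(image, patch=16):
    """Split an (H, W, C) image into flat 16x16 patch 'tokens', ViT-style."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0
    t = image.reshape(h // patch, patch, w // patch, patch, c)
    t = t.transpose(0, 2, 1, 3, 4)            # group the two patch-grid axes
    return t.reshape(-1, patch * patch * c)   # one row per patch

img = np.zeros((224, 224, 3))
tokens = patchify(img)
print(tokens.shape)  # (196, 768): 14x14 patches, each a 16*16*3 vector
```

In the actual papers each patch is then linearly projected to an embedding (and in later work, quantized), but the "image as a short sequence" framing starts here.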

I think something is getting lost in translation.

These papers, from my quick skim (tho I did read the first one fully years ago), seem to show that some images, and to an extent video, can be generated from discrete tokens, but they don't show that exact images, or that any image, can be.

For instance, what combination of tokens must I put in to get _exactly_ Mona Lisa or starry night? (Tho these might be very well represented in the data set. Maybe a lesser known image would be a better example)

As I understand, OC was saying that they can’t produce what they want with any degree of precision since there’s no way to encode that information in discrete tokens.

If you want to know which tokens yield _exactly_ the Mona Lisa, or any other image, you take the image and put it through your image tokenizer, aka encode it; once you have the sequence of tokens, you can decode it back to an image.

VQ-VAE (Vector Quantised-Variational AutoEncoder), (2017) https://arxiv.org/abs/1711.00937

The whole encoding-decoding process is reversible, and you only lose some imperceptible "details"; the process can be trained with either an L2 loss or a perceptual loss, depending on what you value.

The point being that images which occur naturally are not really information-rich and can be compressed a lot by neural networks of a few GB that have seen billions of pictures. With that strong prior, aka common knowledge, we can indeed paint with words.
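
A minimal sketch of the quantization step at the heart of VQ-VAE: each continuous encoder output is snapped to the nearest entry of a codebook, and the image is then represented by the sequence of codebook indices. (Toy numpy version; the codebook here is random stand-in data, whereas a real VQ-VAE learns the encoder, decoder, and codebook jointly.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A "learned" codebook of K embedding vectors of dimension D.
K, D = 512, 8
codebook = rng.normal(size=(K, D))

def quantize(z):
    """Map each D-dim encoder output to its nearest codebook index."""
    # z: (N, D) continuous latents -> (N,) discrete token ids
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

def dequantize(ids):
    """Decode token ids back to (approximate) continuous latents."""
    return codebook[ids]

# Round trip: a 16x16 grid of latents becomes 256 discrete tokens.
z = rng.normal(size=(256, D))
ids = quantize(z)
z_hat = dequantize(ids)
print(ids.shape)                             # (256,) -> the "256 words" per image
print(np.array_equal(quantize(z_hat), ids))  # True: re-encoding is stable
```

The lossy part is exactly the snap-to-nearest step: `z_hat` is close to `z` but not equal to it, which is the "imperceptible details" trade-off mentioned above.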

Maybe I’m not able to articulate my thought well enough.

Taking an existing image, reversing the process to get the tokens that led to it, and then redoing that doesn't seem the same as supplying tokens to get a precise novel image.

Especially since, as you said, we'd lose some details; that suggests not all images can be perfectly described and recreated.

I suppose I’ll need to play around with some of those techniques.

After encoding, the models are usually cascaded with either an LLM or a diffusion model.

Natural image -> sequence of tokens: but not every possible sequence of tokens will be reachable, just as plenty of letters put together form nonsensical words.

Sequence of tokens -> natural image: if the initial sequence of tokens is nonsensical, the resulting image will be garbage.

So usually you then model the sequences of tokens so that you only produce sensible ones, like you would with an LLM, and you use the LLM to generate more tokens. It also gives you a natural interface to control the generation: you can express in words what modifications to make to the image. That allows you to find the golden sequence of tokens corresponding to the Mona Lisa by dialoguing with the LLM, which has been trained to translate from English to visual-word sequences.

Alternatively, instead of an LLM you can use a diffusion model; there the visual words are usually continuous, but you can displace them iteratively with text using things like ControlNet (Stable Diffusion).

You are half right. It's funny, because I use the same saying. Mine is: "A picture is worth a thousand words. That's why it takes 1000 words to describe the exact image that you want! Much better to just use image-to-image instead."

That's my full quote on this topic, and I think it stands. Sure, people won't describe a picture; instead, they will take an existing picture or video and modify it using AI. That is much, much simpler and more useful if you can film a scene and then animate it later with AI.

  • ben_w · 2 weeks ago
> Now expand that to movies and games and you can get why this whole generative-AI bubble is going to pop.

The prior sentence does not imply the conclusion.

A picture is worth a thousand words.

A word is worth a thousand pictures. (e.g. "Love")

It is abstraction all the way.

It is all information, to be precise.

Actually, I've gotten some great results with image2text2image with less than a thousand words. Maybe not enough for a video, but for some not-too-crazy images, it is enough!

Sure it's going to pop. But when is the important question.

Being too early about this and being wrong are the same.

The comment was probably more about the 360-degree turning heads, etc.

I agree that people who want any meaningful precision in their visual results will inevitably be disappointed.
And another thing that irks me: none of these video generators get motion right...

Especially anything involving fluid/smoke dynamics, or fast dynamic movements of humans and animals, suffers from the same weird motion artifacts. I can't describe it other than that the fluidity of the movements is completely off.

And as all the genAI video tools I've used suffer from the same problem, I wonder if this is somehow inherent to the approach and unsolvable with the current model architectures.

I think one of the biggest problems is that the models are trained on 2D sequences and don't have any understanding of what they're actually seeing. They see some structure of pixels shift in a frame and learn that some 2D structures should shift in a frame over time. They don't actually understand that the images are 2D captures of an event that occurred in four dimensions, and that the thing that's been imaged is under the influence of unimaged forces.

I saw a Santa dancing video today and the suspension of disbelief was almost instantly dispelled when the cuffs of his jacket moved erratically. The GenAI was trying to get them to sway with arm movements but because it didn't understand why they would sway it just generated a statistical approximation of swaying.

GenAI also definitely doesn't understand 3D structures easily demonstrated by completely incorrect morphological features. Even my dogs understand gravity, if I drop an object they're tracking (food) they know it should hit the ground. They also understand 3D space, if they stand on their back legs they can see over things or get a better perspective.

I've yet to see any GenAI that demonstrates even my dogs' level of understanding the physical world. This leaves their output in the uncanny valley.

They don't even get basic details right. The ship in the 8th video changes with every camera change and birds appear out of nowhere.

As far as I can tell it's a problem with CGI in general. Whether you're using precise physics models or learned embeddings from watching videos, reproducing certain physical events is computationally very hard, whereas recording them just requires a camera (and of course setting up the physical world to produce what you're filming, or getting very lucky).

The behind-the-scenes from House of the Dragon has a very good discussion of this from the art directors. After a decade and a half of specializing in it, they have yet to find any convincing way to create fire other than to actually create fire and film it. This isn't a limitation of AI and it has nothing to do with intelligence. A human can't convincingly animate fire, either.

It seems to me that discussions like this from the optimist side always miss this distinction, and it's part of why I think Ben Affleck was absolutely correct that AI can't replace filmmaking. Regardless of the underlying approach, computationally reproducing what the world gives you for free is simply very hard, maybe impossible. The best rendering systems out there come nowhere close to true photorealism over arbitrary scenarios and probably never will.

What's the point of poking holes in new technology and nitpicking like this? Are you blind to the immense breakthroughs made today, that you focus on what irks you about some tiny detail that might go away after a couple of versions?

At this phase of the game a lot of people are pretty accustomed to the pace of technological innovation in this space, and I think it's reasonable for people to have a sense of what will and won't go away in a few versions. Some of Sora's issues may just require more training; others are intrinsic to the approach and will not be solvable with the current method.

To that end, it is actually extremely important to nit-pick this stuff. For those of us using these tools, we need to be able to talk shop about which ones are keeping up, which work like shit in practice, and which ones work but only in certain situations, and which situations those are.

Neural networks use smooth manifolds as their underlying inductive bias, so in theory it should be possible to incorporate smooth kinematic and Hamiltonian constraints, but I am certain no one at OpenAI actually understands enough of the theory to figure out how to do that.
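
For what it's worth, the general recipe being alluded to does exist in the literature (physics-informed losses, Hamiltonian Neural Networks and the like). A minimal numpy sketch of an energy-conservation penalty that could, in principle, be added to a video model's training loss; the harmonic oscillator and the penalty form are illustrative assumptions, not anyone's actual training setup:

```python
import numpy as np

def hamiltonian(q, p, m=1.0, k=1.0):
    """Total energy of a harmonic oscillator: H = p^2/(2m) + k*q^2/2."""
    return p**2 / (2 * m) + k * q**2 / 2

def energy_drift_penalty(q_traj, p_traj):
    """Mean squared drift of total energy from the first frame.

    Added to a generative model's loss, a term like this discourages
    "phantom" mass/energy appearing or disappearing between frames.
    """
    energies = hamiltonian(q_traj, p_traj)
    return float(np.mean((energies - energies[0]) ** 2))

t = np.linspace(0, 10, 200)

# An exact oscillator trajectory conserves energy: penalty ~ 0.
good = energy_drift_penalty(np.cos(t), -np.sin(t))

# A trajectory whose amplitude grows over time gains phantom energy.
bad = energy_drift_penalty(np.exp(0.1 * t) * np.cos(t),
                           -np.exp(0.1 * t) * np.sin(t))

print(good < 1e-12, bad > 0.1)  # True True
```

Whether such a penalty can be made to work at the scale of a pixel-space video model, rather than on hand-chosen coordinates like these, is exactly the hard, open part.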
> I am certain no one at OpenAI actually understands enough of the theory to figure out how to do that

We would love to learn more about the origin of your certainty.

I don't work there, but I'm certain there is no one with enough knowledge to make it work with Hamiltonian constraints: the idea is very obvious, yet they haven't done it, because they don't have the wherewithal to do so. In other words, no one at OpenAI understands enough basic physics to incorporate conservation principles into the generative network so that objects with random masses don't appear and disappear on the "video" manifold as it evolves in time.
> the idea is very obvious but they haven't done it because they don't have the wherewithal to do so

Fascinating! I wish I had the knowledge and wherewithal to do that and become rich instead of wasting my time on HN.

No one is perfect but you should try to do better and waste less time on HN now that you're aware and can act on that knowledge.
Nah, I'm good. HN can be a very amusing place at times. Thanks, though.
How does your conclusion follow from your statement?

Neural networks are largely black box piles of linear algebra which are massaged to minimize a loss function.

How would you incorporate smooth kinematic motion in such an environment?

The fact that you discount the knowledge of literally every single employee at OpenAI is a big signal that you have no idea what you’re talking about.

I don’t even really like OpenAI and I can see that.

  • 2 weeks ago
I've seen the quality of OpenAI engineers on Twitter and it's easy enough to extrapolate. Moreover, neural networks are not black boxes; you're just parroting whatever you've heard on social media. The underlying theory is very simple.
Do not make assumptions about people you do not know in an attempt to discredit them. You seem to be a big fan of that.

I have been working with NLP and neural networks since 2017.

They aren’t just black boxes, they are _largely_ black boxes.

When training an NN, you don't have great control over which parts of the model do what, or how.

Now instead of trying to discredit me, would you mind answering my question? Especially since, as you say, the theory is so simple.

How would you incorporate smooth kinematic motion in such an environment?

Why would I give away the idea for free? How much do you want to pay for the implementation?
Cop out... according to you, the idea is so obvious that it wouldn't be worth anything.
[flagged]
lol. Ok dude you have a good one.
You too but if you do want to learn the basics then here's one good reference: https://www.amazon.com/Hamiltonian-Dynamics-Gaetano-Vilasi/d.... If you already know the basics then this is a good followup: https://www.amazon.com/Integrable-Hamiltonian-Systems-Geomet.... The books are much cheaper than paying someone like me to do the implementation.
Seriously... the ability to identify which physics/math theories the AI should apply and the ability to make the AI actually apply them are very different things. And you don't seem to understand that distinction.
Unless you have $500k to pay for the actual implementation of a Hamiltonian video generator then I don't think you're in a position to tell me what I know and don't know.
lolz, I doubt very much anyone would want to pay you $500k to perform magic. Basically, I think you are coming across as someone who is trying to sound clever rather than being clever.
My price is very cheap in terms of what it would enable and allow OpenAI to charge their customers. Hamiltonian video generation with conservation principles which do not have phantom masses appearing and disappearing out of nowhere is a billion dollar industry so my asking price is basically giving away the entire industry for free.
Sure, but I imagine the reason you haven't started your own company to do it is you need 10s of millions in compute, so the price would be 500k + 10s of millions... Or you can't actually do it and are just talking shit on the internet.
I guess we'll never know.
Yeah I mean I would never pay you for anything.

You’ve convinced me that you’re small and know very little about the subject matter.

You don’t need to reply to this. I’m done with this convo.

Ok, have a good one dude.
There are physicists at OpenAI. You can verify with a quick search. So someone there clearly knows these things.
I'd be embarrassed if I were a physicist and my name was associated with software that had phantom masses appearing and disappearing into the void.
Why don't you write a paper or start a company to show them the right way to do it?
I don't think there is any real value in making videos other than useless entertainment. The real inspired use of computation and AI is to cure cancer, that would be the right way to show the world that this technology is worthwhile and useful. The techniques involved would be the same because one would need to include real physical constraints like conservation of mass and energy instead of figuring out the best way to flash lights on the screen with no regard for any foundational physical principles.

Do you know anyone or any companies working on that?

AI isn't trying to sell to you, a precise artist with real vision in your brain. It is selling to managers who want to shit out something in an evening that approximates anything, that writes ads no one wants to see anyway, that produces surface-level examples of how you can pay employees less because "their job is so easy".
  • spuz · 2 weeks ago
Yes and the thing is, even for those tasks, it's incredibly difficult to achieve even the low bar that a typical advertising manager expects. Try it yourself for any real world task and you will see.
Counterpoint: our CEO spent 25 minutes shitting out a bunch of AI ads because he was frustrated with the pace of our advertising creative team. They hated the ads that he created, for the reasons you mention, but we tested them anyways and the best performing ones beat all of our "expert" team's best ads by a healthy margin (on all the metrics we care about, from CTR to IPM and downstream stuff like retention and RoAS).

Maybe we're in a honeymoon period where your average user hasn't gotten annoyed by all the slop out there and they will soon, but at least for now, there is real value here. Yes, out of 20 ads maybe only 2 outperform the manually created ones, but if I can create those 20 with a couple hundred bucks in GenAI credits and maybe an hour or two of video editing that process wipes the floor with the competition, which is several thousand dollars per ad, most of which are terrible and end up thrown away, too. With the way the platforms function now, ad creative is quickly becoming a volume-driven "throw it at the wall and see what sticks" game, and AI is great for that.

> Maybe we're in a honeymoon period where your average user hasn't gotten annoyed by all the slop out there and they will soon

It’s this. A video ad with a person morphing into a bird that takes off like a rocket with fire coming out of its ass, sure it might perform well because we aren’t saturated with that yet.

You’d probably get a similar result by giving a camera to a 5 year old.

But you also have to ask what that’s doing long term to your brand.

> Counterpoint: our CEO spent 25 minutes shitting out a bunch of AI ads because he was frustrated with the pace of our advertising creative team. They hated the ads that he created, for the reasons you mention, but we tested them anyways and the best performing ones beat all of our "expert" team's best ads by a healthy margin (on all the metrics we care about, from CTR to IPM and downstream stuff like retention and RoAS).

My guess is that the criticism of AI not being that good is correct, but many people don't realize that most humans also aren't that good, and that it's quite possible that the AI performs better than mediocre humans.

This shouldn't be much of a surprise, we've seen automation replace low skilled labor in a lot of industries. People seem uncomfortable with the possibility that there's actually a lot of low skilled labor in the creative industry that could also be easily replaced.

A/B/C/D testing is the perfect grounds for that. You can keep automatically generating and iterating quickly while A/B tests are constantly being run. This data on CTR can later be used to train the model better as well.

You seem to speak from experience of being that manager... I'm not going to ask what you shit out in your evenings.
Way back in the days of GPT-2, there was an expectation that you'd need to cherry-pick at least 10% of your output to get something usable/coherent. GPT-3 and ChatGPT greatly reduced the need to cherry-pick, for better or for worse.

All the generated video startups seem to generate videos with much lower than 10% usable output, without significant human-guided edits. Given the massive amount of compute needed to generate a video relative to hyperoptimized LLMs, the quality issue will handicap gen video for the foreseeable future.

Plus editing text or an image is practical. Video editors typically are used to cut and paste video streams - a video editor can't fix a stream of video that gets motion or anatomy wrong.
Right, but you're thinking as someone who has a vision for the image/video. Think of someone who needs an image/video and would normally hire a creative person for it; they might be able to get away with AI instead.

The same "prompt" they'd give the creative person they hired... Say, "I want an ad for my burgers that make it look really good, I'm thinking Christmas vibes, it should emphasize our high quality meat, make it cheerful, and remember to hint at our brand where we always have smiling cows."

Now that creative person would go make you that advert. You might check it, give a little feedback for some minor tweaks, and at some point, take what you got.

You can do the same here. The difference right now is that it'll output a lot of junk that a creative person would never have dared show you, so that initial quality filtering is missing. But on the flip side, it costs you a lot less, can generate like 100 of them quickly, and you just pick one that seems good enough.

Real artists struggle to match vague descriptions of what is in your head too. This is at least quicker?
Real artists take comic book scripts and turn them into actual comic books every month. They may not match exactly what the writer had in mind, but they are fit for purpose.
> They may not match exactly what the writer had in mind, but they are fit for purpose.

That's what GenAI is doing, too. After all, the audience only sees the final product; they never get to know what the writer had in mind.

I haven't used SORA, but none of the GenAI I'm aware of could produce a competent comic book. When a human artist draws a character in a house in panel 1, they'll draw the same house in panel 2, not a procedurally generated different house for each image.

If a 60 year old grizzled detective is introduced in page 1, a human artist will draw the same grizzled detective in page 2, 3 and so on, not procedurally generate a new grizzled detective each time.

A human artist keeps state :). They keep it between drawing sessions, and more importantly, they keep very detailed state - their imagination or interpretation of what the thing (house, grizzled detective, etc.) is.

Most models people currently use don't keep state between invocations, and whatever interpretation they make from provided context (e.g. reference image, previous frame) is surface level and doesn't translate well to output. This is akin to giving each panel in a comic to a different artist, and also telling them to sketch it out by their gut, without any deep analysis of prior work. It's a big limitation, alright, but researchers and practitioners are actively working to overcome it.

(Same applies to LLMs, too.)

Btw there's a way to match characters in a batch in the Forge WebUI which guarantees that all images in the batch have the same figure in them. It would be trivial to implement this in all other image generators. This critique is baseless.
So prove it. If you are arguing in good faith that an AI, via automation, can draw a comic script with consistent figures, please tell an AI to draw the images in the first 3 pages of this script I pulled from the comic book script archive:

https://www.comicsexperience.com/wp-content/uploads/2018/09/...

Or if you can't do this, explain why the feature you mentioned cannot do this, and what it is good for?

As long as you're not asking for a zero-shot solution with a single model run three times in a row, this should be entirely doable, though I imagine guaranteeing the result would require a complex pipeline consisting of:

- An LLM to inflate descriptions in the script to very detailed prompts (equivalent to artist thinking up how characters will look, how the scene is organized);

- A step to generate a representative drawing of every character via txt2img - or more likely, multiple ones, with a multimodal LLM rating adherence to the prompt;

- A step to generate a lot of variations of every character in different poses, using e.g. ControlNet or whatever is currently the SOTA solution used by the Stable Diffusion community to create consistent variations of a character;

- A step to bake all those character variations into a LoRA;

- Finally, scenes would be generated by another call to txt2img, with prompts computed in step 1, and appropriate LoRAs active (this can be handled through prompt too).

Then iterate on that, e.g. maybe additional img2img to force comic book style (with a different SD derivative, most likely), etc.

Point being, every subproblem of the task has many different solutions already developed, with new ones appearing every month - all that's left to have an "AI artist" capable of solving your challenge is to wire the building blocks up. For that, you need just a trivial bit of Python code using existing libraries (e.g. hooking up to ComfyUI), and guess what, GPT-4 and Claude 3.5 Sonnet are quite good at Python.

EDIT: I asked Claude to generate "pseudocode" diagram of the solution from our two comments:

http://www.plantuml.com/plantuml/img/dLLDQnin4BthLmpn9JaafOR...

Each of the nodes here would be like 3-5 real ComfyUI nodes in practice.

I appreciate the detailed response. I had a feeling the answer was some variation of "well I could get an AI to draw that but I'd have to hack at it for a few hours...". If a human has to work at it for hours, it's more like using Blender than "having an AI draw it" in my mind.

I suspect if someone went to the trouble to implement your above solution they'd find the end result isn't as good as they'd hoped. In practice you'd probably find one or more steps don't work correctly: for example, maybe today's multimodal LLMs can't evaluate prompt adherence acceptably. If the technology were ready, the evidence would be pretty clear; I'd expect to see some very good, very quickly made comic books shown off by AI enthusiasts on Reddit rather than the clearly limited, not-very-good comic book experiments which have been demonstrated so far.

> If a human has to work at it for hours, it's more like using Blender than "having an AI draw it" in my mind.

A human has to work at it too; more than a few hours when doing more than a few quick sketches (memory has its limits; there's a reason artists keep reference drawings around), and obviously they already put years into learning their skills before that. But fair: the human artist already knows how to do things that any given model doesn't yet[0]; we kind of have to assemble the overall flow ourselves for now[1].

Then again, you only need to assemble it once, putting those hours of work up front - and if it's done, and it works, it becomes fair to say that AI can, in fact, generate self-consistent comic books.

> I suspect if someone went to the trouble to implement your above solution they'd find the end result isn't as good as they'd hoped. In practice you'd probably find one or more steps don't work correctly- for example, maybe today's multimodal LLM's can't evaluate prompt adherence acceptably.

I agree. I obviously didn't try this myself either (yet, I'm very tempted to try it, to satisfy my own curiosity). However, between my own experience with LLMs and Stable Diffusion, and occasionally browsing Stable Diffusion subreddits, I'm convinced all individual steps work well (and have multiple working alternatives), except for the one you flagged, i.e. evaluating prompt adherence using multimodal LLM - that last one I only feel should work, but I don't know for sure. However, see [1] for alternative approach :).

My point thus is, all individual steps are possible, and wiring them together seems pretty straightforward, therefore the whole thing should work if someone bothers to do it.

> If the technology was ready the evidence would be pretty clear- I'd expect to see some very good, very quickly made comic books shown off by AI enthusiast on reddit rather then the clearly limited/ not very good comic book experiments which have been demonstrated so far.

I think the biggest concentration of enthusiasm is to be found in NSFW uses of SD :). On the one hand, you're right; we probably should've seen it done already. On the other hand, my impression is that most people doing advanced SD magic are perfectly satisfied with partially manual workflows. And it kind of makes sense: manual steps allow for flexibility and experimentation, and some things are much simpler to wire by hand or patch up with some tactical photoshopping than to try to automate fully. In particular, judging the quality of output is both easy for humans and hard to automate.

Still, I've recently seen ads for various AI apps claiming to do complex work (such as animating characters in photos) end-to-end automatically - exactly the kind of work that's typically done in a partially manual process. So I suspect fully-automated solutions are being built on a case-by-case basis, driven by businesses making apps for the general population; a process that lags some months behind what image-gen communities figure out in the open.

--

[0] - Though arguably, LLMs contain the procedural knowledge of how a task should be done; just ask it to ELI5 or explain in WikiHow style.

[1] - In fact, I just asked Claude to solve this problem in detail, without giving it my own solution to look at (but hinting at the required complexity level); see this: https://cloud.typingmind.com/share/db36fc29-6229-4127-8336-b... (and excuse the weird errors; Claude is overloaded at the moment, so some responses had to be regenerated; also styling on the shared conversation sucks, so be sure to use the "pop out" button on diagrams to see them in detail).

At very high level, it's the same as mine, but one level below, it uses different tools and approaches, some of which I never knew about - like keeping memory in embedding space instead of text space, and using various other models I didn't know exist.

EDIT: I did some quick web search for some of the ideas Claude proposed, and discovered even more techniques and models I never heard of. Even my own awareness of the image generation space is only scratching the surface of what people are doing.

I work with professional artists all the time and this is not the case. They're generally quite good at extrapolating from a couple paragraphs into something fantastic, often exactly what I had in mind.

In comparison I've messed around with prompting image generator models quite a bit and it's not possible to get remotely close to the quality level of even rough paid concept work by a professional, and the credits to run these models aren't particularly cheap.

With real art you can start from somewhere and keep building on that foundation. Say you pick an angle to shoot from and test different actors and scenes from that angle. With AI you’re re-rolling the dice for every iteration. If you’re happy that it looks 80% correct then sure it’s maybe passable.

I think people are getting out over their skis here. Even in 2D I can't, for example, generate inventory images for weapons and items for a game yet, which is an order of magnitude simpler test case than video. They all come out in slightly different styles. If I don't care that they all look different in strange ways then it's useful, but any consumer will think it looks like crap.

The point is if you are the artist and have something in your head. It’s the same problem with image editing. I am sure you have experienced this.
There is no problem unless you insist on reflecting exactly what you had in mind. That needs minute controls, but no matter the medium and tools you use, unless you're on your own quest for artistic perfection, economic constraints will make you stop short of your idea. There's always a point past which further refinement will not make a difference to the audience (which doesn't have access to the thing in your head to use as reference), and the cost of continuing will exceed any value (monetary or otherwise) you expect to get from the work.

AI or not, no one but you cares about the lower order bits of your idea.

Nobody else really cares about the lower order bits of the idea, but they do care that those lower order bits are consistent. The simplest example is color grading: most viewers are generally ignorant of artistic choices in color palettes unless it's noticeable, like the Netflix blue tint, but a movie where the scenes haven't been consistently color graded is obviously jarring, and even an expensive production can come off amateur.

GenAI is great at filling in those lower order bits but until stuff like ControlNet gets much better precision and UX, I think genAI will be stuck in the uncanny valley because they’re inconsistent between scenes, frames, etc.

Yup, 100% agreed on that, and mentioned this caveat elsewhere. As you say - people don't pay attention to details (or lack of it), as long as the details are consistent. Inconsistencies stand out like sore thumbs. Which is why IMO it's best to have less details than to be inconsistent with them.
>There is no problem unless you insist on reflecting what you had in mind exactly.

Not disagreeing, just noting: this is not how [most?] people's minds work {I don't think you're holding to that opinion particularly, I'm just reflecting on this point}. We have vague ideas until an implementation is shown, then we examine it and latch on to a detail and decide if it matches our idea or not. For me, if I'm imagining "a superhero planting vegetables in his garden" I've no idea what they're actually wearing, but when an artist or genAI shows me it's a brown coat then I'll say "no something more marvel". Then when ultimately they show me something that matches the idea I had _and_ matches my current conception of the idea I had... then I'll point out the fingernails are too long, when in the idea I hadn't even perceived the person had fingers, never mind too-long fingernails!

I'd warrant any actualised artistic work has some delta with the artist's current perception of the work, and a larger delta with their initial perception of it.

I disagree. Even without exactness, adding any reasonable constraints is impossible. Ask it to generate a realistic circuit diagram or chess board or any other thing where precision matters. Good luck going back and forth getting it right.

These are situations with relatively simple logical constraints, but an infinite number of valid solutions.

Keep in mind that we are not requiring any particular configuration of circuit diagram, just any diagram that makes sense. There are an infinite number of valid ones.

That's using the wrong tool for a job :). Asking diffusion models to give you a valid circuit diagram is like asking a painter to paint you pixel-perfect 300DPI image on a regular canvas, using their standard paintbrush. It ain't gonna work.

That doesn't mean it can't work with AI - it's that you may need to add something extra to the generative pipeline, something that can do circuit diagrams, and make the diffusion model supply style and extra noise (er, beautifying elements).

> Keep in mind that we are not requiring any particular configuration of circuit diagram, just any diagram that makes sense. There are an infinite number of valid ones.

On that note. I'm the kind of person that loves to freeze-frame movies to look at markings, labels, and computer screens, and one thing I learned is that humans fail at this task too. Most of the time the problems are big and obvious, ruining my suspension of disbelief, and importantly, they could be trivially solved if the producers grabbed a random STEM-interested intern and asked for advice. Alas, it seems they don't care.

This is just a specific instance of the general problem of "whatever you work with or are interested in, you'll see movies keep getting it wrong". Most of the time it's somewhat defensible - e.g. most movies get guns wrong, but in ways people are used to, and in ways that make the scenes more streamlined and entertaining. But with labels, markings and computer screens, doing it right isn't any more expensive, nor would it make the movie any less entertaining. It seems the people responsible either don't know better or don't care.

Let's keep that in mind when comparing AI output to the "real deal", so as not to set an impossible standard that human productions don't meet, and never did.

The issue isn’t any particular constraint. The issue is the inability to add any constraints at all.

In particular, internal consistency is one of the important constraints which viewers will immediately notice. If you’re just using sora for 5 second unrelated videos it may be less of an issue but if you want to do anything interesting you’ll need the clips to tie together which requires internal consistency.

So what I'm getting is a use-case for a brain-computer interface.
When I first started learning Photoshop as a teenager I often knew what I wanted my final image to look like, but no matter how hard I tried I could never get there. It wasn't that it was impossible; my skills just weren't there yet. I needed a lot more practice before I got good enough to create what I could see in my imagination.

Sora is obviously not Photoshop, but given that you can write basically anything you can think of I reckon it's going to take a long time to get good at expressing your vision in words that a model like Sora will understand.

Free text is just fundamentally the wrong input for precision work like this. That it's wrong for this doesn't mean it has NO purpose - it's still useful and impressive for what it is.

FWIW I too have been quite frustrated iterating with AI to produce a vision that is clear in my head. Past changing the broad strokes, once you start “asking” for specifics, it all goes to shit.

Still, it’s good enough at those broad strokes. If you want your vision to become reality, you either need to learn how to paint (or whatever the medium), or hire a professional, both being tough-but-fair IMO.

I don't think it'll be long before GUI tools catch up for editing video.

Things like rearranging things in the scene with drag'n'drop sound implementable (although incredibly GPU heavy)

If you have a specific vision, you will have to express the detailed information of that vision into the digital realm somehow. You can use (more) direct tools like premiere if you are fluent enough in their "language". Or you can use natural language to express the vision using AI. Either way you have to get the same amount of information into a digital format.

Also, AI sucks at understanding detail expressed in symbolic communication, because it doesn't understand symbols the way linguistic communication expects the receiver to understand them.

My own experience is that all the AI tools are great for shortcutting the first 70-80% or so. But the last 20% goes up an exponential curve of required detail which is easier and easier to express directly using tooling and my human brain.

Consider the analogy to a contract worker building or painting something for you. If all you have is a vague description, they'll make a good guess and you'll just have to live with that. But the more time you spend with them communicating (through description, mood boards rough sketches etc) the more accurate to your detailed version it will get. But you only REALLY get exactly what you want if you do it yourself, or sit beside them as they work and direct almost every step. And that last option is almost impossible if they can't understand symbolic meaning in language.

Agreed. It’s still much better than what I could do myself without it, though.

(Talking about visual generative AI in general)

Yeah, but if I handed you a Maxfield Parrish it would be better than either of us can do — but not what I asked for.

I find generative AI frustrating because I know what I want. To this point I have been trying but then ultimately sitting it out — waiting for the one that really works the way I want.

For me even if I know what I want, if I’m using gen AI I’m happy to compromise and get good enough (which again, is so much better than I could do otherwise).

If you want higher quality/precision, you’ll likely want to ask a professional, and I don’t expect that to change in the near future.

  • adamc
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
That limits its value for industries like Hollywood, though, doesn't it? And without that, who exactly is going to pay for this?
To me, currently, visual generative ai is an evolution and improvement of stock images, and has effectively the same purpose.

People pay for stock images.

  • adamc
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Yeah, maybe for some purposes. In business, people sometimes pay for stock images but often don't have the expertise or patience to really spend a lot of time coaxing a video into fruition. Maybe for advertising or other contexts where more effort is worth it (not just powerpoints), but it feels like a slim audience.
With tools like Apple Intelligence and its genmoji (emoji generation) and playground (general diffusion image generation) I expect it to also take on some of the current entertainment and social use-cases of stickers and GIFs.

But that’s probably something you don’t pay for directly, instead paying for e.g. a phone that has those features.

  • jddj
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Advertisers, I guess. Same folks who paid for everything else around here
  • adamc
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Yeah, I just question if there are enough customers to make this work.
The thing about Hollywood is that movies aren't made by a producer or director creating a description and an army of actors, technicians, etc. executing exactly that.

What happens is that a description becomes a longer specification or script that's still good and hangs together on its own, and then further iterations involve professionals who can't do "exactly what the director wants" but instead do something that's good in itself and close enough to what the director wants.

Also, a team of experts and professionals that knows better than the director how specific things work.
  • diob
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
I believe it. I was just using AI to help out with some mandatory end of year writing exercises at work.

Eventually, it starts to muck with the earlier work that it did well on, when I'm just asking it to add onto it.

I was still happy with what I got in the end, but it took trial and error and then a lot of piecemeal coaxing with verification that it didn't do more than I asked along the way.

I can imagine the same for video or images. You have to examine each step post prompt to verify it didn't go back and muck with the already good parts.

  • planb
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Iterations are the missing link.

With ChatGPT, you can iteratively improve text (e.g., "make it shorter," "mention xyz"). However, for pictures (and video), this functionality is not yet available. If you could prompt iteratively (e.g., "generate a red car in the sunset," "make it a muscle car," "place it on a hill," "show it from the side so the sun shines through the windshield"), the tools would become exponentially more useful.
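A minimal sketch of what such an iterative loop could look like. Here `edit_image` is a purely hypothetical stub (no real image-editing API is assumed) that just records instructions; the point is only that each instruction refines a carried-forward state rather than regenerating from scratch:

```python
# Hypothetical sketch of iterative image refinement. `edit_image` is a
# stand-in stub -- it accumulates the edit history, which is the state a
# real iterative tool would have to carry between prompts.
def edit_image(state, instruction):
    return state + [instruction]

def refine(instructions):
    state = []  # in a real tool, this would be the latent/image being edited
    for instruction in instructions:
        state = edit_image(state, instruction)
    return state

history = refine([
    "generate a red car in the sunset",
    "make it a muscle car",
    "place it on a hill",
    "show it from the side so the sun shines through the windshield",
])
```

The hard part, of course, is making the real `edit_image` apply only the requested change without disturbing everything else.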

If you use it in a utilitarian way it'll give you a run for your money; if you use it for expression, such as art, and learn to embrace some serendipity, it makes good stuff.
As only a cursory user of said tools (but strong opinions) I felt the immediate desire to get an editable (2D) scene that I could rearrange. For example I often have a specific vantage point or composition in mind, which is fine to start from, but to tweak it and the elements, I'd like to edit it afterwards. To foray into 3D, I'd be wanting to rearrange the characters and direct them, as well as change the vantage point. Can it do that yet?
This is the conundrum of AI generated art. It will lower the barrier to entry for new artists to produce audiovisual content, but it will not lower the amount of effort required to make good art. If anything it will increase the effort, as it has to be excellent in order to get past the slop of base level drudge that is bound to fill up every single distribution channel.
Still three or four orders of magnitude cheaper and easier than producing said video through traditional methods.
  • nomel
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
I think inpainting and "draw the labeled scene" type interfaces are the obvious future. Never thought I'd miss GauGAN [1].

https://www.youtube.com/watch?v=uNv7XBngmLY&t=25

> A way to test this is to take a piece of footage or an image which is the ground truth, and test how much prompting and editing it takes to get the same or similar ground truth starting from scratch.

Sure, if you then do the same in reverse.

  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Not too far in the future you will be able to drag and drop the position of the characters as well as the position of the camera, among other refinement tools.
For those scenarios a draft generation mode would be helpful: 16 colors, 320x200...
Yeah, it almost feels like gambling - 'you're very close, just spend 20 more credits and you might get it right this time!'
Sounds like another way of saying a picture is worth a thousand words.
[dead]
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
For those curious (and still locked out) here’s a direct comparison of Sora vs. the open-source leaders (HunyuanVideo, Mochi and LTX):

https://app.checkbin.dev/snapshots/1f0f3ce3-6a30-4c1a-870e-2...

Pros:

- Some of the Sora results are absolutely stunning. Check out the detail on the lion, for example!
- The landscapes and aerial shots are absolutely incredible.
- Quality is much better than Mochi & LTX out of the box. Mochi/LTX seem to require specifically optimized workflows (I've seen great img2vid LTX results on Reddit that start with Flux image generations, for example). Hunyuan seems comparable to Sora!

Cons:

- Still nearly impossible to access Sora despite the “launch”. My generations today were in the 2000s, implying that it’s only open to a very small number of people. There’s no API yet, so it’s not an option for developers.
- Sora struggles with physical interactions. Watch the dancers moonwalk, or the ball go through the dog. HunyuanVideo seems to be a bit better in this regard.
- Can't run it locally (obviously).
- I haven't tested this, but I think it's safe to assume Sora will be censored extensively. HunyuanVideo is surprisingly open (I've seen NSFW generations!)
- I’m getting weird camera angles from Sora, but that could likely be solved with better prompting.

Overall, I’d say it’s the best model I've played with, though I haven’t spent much time on other non-open-source ones. Hunyuan gives it a run for its money, though!

I can't speak to any of those videos in a technical sense but personally, I don't feel like any of them are good?

The vibe they give me is similar to the iPhone photography commercials where yes, in theory, a picnic in the park could look exactly like this except for all the parts that seem movie perfect.

I guess it's really more of a colour grading question where most of the Sora colour grading triggers that part of my brain that says "I'm watching a movie and this isn't real" without quite realising why.

A few of the Hunyuan videos in contrast seem a bit more believable even though they have some obvious glitches at times.

The other thing I think Sora has is that thing in commercials where no one else except the protagonist exists and nothing is ever inconvenient. The video of the teacher in a classroom with no students reminds me of that as well as the picnic in the park where there's wide open space with no one around.

I suppose it depends if the goal is to generate believable video and how you define believable.

Hunyuan was more realistic but lower quality than Sora, shorter videos with lower resolution or bitrate. The downside to Sora's sharpness is that it makes mistakes more apparent. Also funny that Sora didn't understand the rolling dunes metaphor.
Based on this it really seems like Hunyuan is a significantly better model. In nearly every example I preferred its output.
  • pen2l
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Every day that passes I grow fonder of Google's decision to delay or otherwise keep a lot of this under wraps.

The other day I was scrolling down on YouTube shorts and a couple videos invoked an uncanny valley response from me (I think it was a clip of an unrealistically large snake covering some hut) which was somehow fascinating and strange and captivating, and then scrolling down a few more, again I saw something kind of "unbelievable"... I saw a comment or two saying it's fake, and upon closer inspection: yeah, there were enough AI'esque artifacts that one could confidently conclude it's fake.

We'd known about AI slop permeating Facebook -- usually a Jesus figure made out of unlikely set of things (like shrimp!) and we'd known that it grips eyeballs. And I don't even know in which box to categorize this, in my mind it conjures the image of those people on slot machines, mechanically and soullessly pulling levers because they are addicted. It's just so strange.

I can imagine now some of the conversations that might have happened at Google when they chose to keep a lot of their genAI innovations under wraps (I'm being charitable here about their motives), and I can't help but agree.

And I can't help but be saddened by OpenAI's decision to unload a lot of this before reckoning with the consequences of unleashing it on humanity, because I'm almost certain it'll be used more for bad things than good; I'm certain its application to bad things will secure more eyeballs than its application to good ones.

I saw my first AI video that completely fooled commenters: https://imgur.com/a/cbjVKMU

This was not marked as AI-generated and commenters were in awe at this fuzzy train, missing the "AIGC" signs.

I'm quite nervous for the future.

I know there are people acting like it's obvious that this is AI, but I get why people wouldn't catch it, even if they know that AI is capable of creating a video like this.

A) Most of the giveaways are pretty subtle and not what viewers are focused on. Sure, if you look closely the fur blends in with the pavement in some places, but I'm not going to spend 5 minutes investigating every video I see for hints of AI.

B) Even if I did notice something like that, I'm much more likely to write it off as a video filter glitch, a weird video perspective, or just low quality video. For example, when they show the inside of the car, the vertical handrails seem to bend in a weird way as the train moves, but I've seen similar things from real videos with wide angle lenses. Similar thoughts on one of the bystander's faces going blurry.

I think we just have to get people comfortable with the idea that you shouldn't trust a single unknown entity as the source of truth on things, because everything can be faked. For insignificant things like this it doesn't matter, but for big things you need multiple independent sources. That's definitely an uphill battle and who knows if we can do it, but that's the only way we're going to get out the other side of this in one piece.

I agree. Also, tangentially related: I use a black and white filter on my phone, and it is way harder to distinguish fake and real media without the color channels to help. I couldn't immediately find anything in the subway clip which gave it away.
I've definitely seen skin blurring filters that everyone already uses to make it really hard to know
Hijacking this top comment to say that I found the AI video creator: https://www.instagram.com/bugugugugu_aigc/
  • n1b0m
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
I agree. Apart from the text appearing backwards it all looked pretty real to me.
My assumption was the uploader wanted to make the creator's "AIGC" less obvious. It definitely did that to me.
Yeah, that's a weird one. I doubt the video was generated that way. I assume someone flipped the video for "artistic" purposes.
Reversing text is a known loophole to getting around copyright guardrails in image-generation models.
How does that work? Would you prompt the model to write "hello Kitty but in reverse" on the train so the resulting image isn't flagged?
Much more likely they just flipped the video in an editor after it was generated. It's common enough to see flipped video with backwards text on social media; most people wouldn't give it a second thought.
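For what it's worth, a horizontal flip mirrors any text baked into the frame, which is why a flipped upload cheaply hides a watermark. A rough one-dimensional analogue is just reversing the character order:

```python
# A horizontally flipped frame mirrors embedded text; reversing the string
# is a crude stand-in for what the flip does to a baked-in label.
label = "AIGC"
mirrored = label[::-1]
print(mirrored)  # CGIA
```

(In the actual video the glyphs themselves are mirrored too, which is the detail most casual viewers never check.)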
I'm beginning to write off most images as AI. I actually think that's where this is all headed.
  • thih9
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
There are projects like https://contentcredentials.org/ . If we want, with some effort we could distinguish between real and AI generated content. If.
No individual actor - human or corporate - stands to benefit enough because "trust in reality" is neither easily measured nor financialized.
  • thih9
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Some do care, e.g. some camera manufacturers or some news agencies. Surprisingly some social media platforms[1] want clear labels for AI generated content.

[1]: e.g. tiktok https://newsroom.tiktok.com/en-us/partnering-with-our-indust...

That's the easiest position, imo: it's AI unless proven otherwise. No one has the time to give this much detailed scrutiny to a random video when its purpose is just entertainment. What this might lead to, though, is people losing (or never learning) the skills needed to separate real content from AI generated content.
And even if it isn't AI, it is quite possibly deceptively edited. Content provenance will be important in the future.
A precondition is likely that one has mainly watched CGI-heavy movies for most of one's life. Compared to old school analog movies or fairly raw photography, it looks as fake as the Coca-Cola Santa. There's a rather obvious lack of detail that real photography would have caught.
> A precondition is likely that one has mainly watched CGI-heavy movies for most of one's life.

Indeed, a great (if counterintuitive) example of this is The Wolf of Wall Street. I bet a lot of people would be surprised at just how much CGI is used in that just for set/location.

The OG film for that was Forrest Gump. It is often lauded as one of the first movies to use CGI heavily but in completely, and intentionally, unnoticeable ways...
True, but in that case you knew it had to be CGI because Kennedy didn't talk to Tom Hanks in any capacity.
Sure, it's like a weird dream where sometimes shadows don't come from the sun and the scenery has this absurd, acutely unreal polish.
A) It's also true that many people don't put much thought into anything at all. They'd never consider actively thinking about whether a video is fake or not. These are the targets of short form content.
B) is / will be huge; the largest amount of "mindless" content is consumed on phones, with half attention, often with other distractions going on and in between doing other stuff, and can be watched on older / lower fidelity devices, slower internet connections, etc. AI content needs high resolution / big screens and focused attention to "discover".

The truth is... most people will simply not care. Raised eyebrow, hm, cute, next. Critical watching is reserved for critics like the crowd on HN and the like, but they represent only a small percentage of the target audience and revenue stream.

You can see the perspective/angle of the objects changing slightly as the camera moves, in a way that makes it pretty obvious they're CG, AI or otherwise. That's always been a problem with AI generated imagery in video/animation; it changes too much frame to frame. If researchers figure out how to address that, yeah, we've got a problem. Until then, this looks worse than traditional CG.

Then there's the usual giveways for CG - sharpness, noise, lighting, color temperature, saturation - none of them match. There's also no diffuse reflection of the intense pink color.

Yes. The lack of diffuse reflection from the pink train is the clearest giveaway, and AI videos in general have problems with getting shadows and radiosity right. There's also the existence of the real-world Hello Kitty Shinkansen and the APM Cat Bus in Japan that makes this image more plausible.
That last point is also important; if it's not surprising, people will just accept it without being too critical about it. And since these AI tools are trained with real / existing content, creating realistic-enough content will be the norm. I think the first big AI generators - dall-e and co - had their model trained on more fantastical / artistic sources, and used that primarily as their model, also because realistic generation (like humans) wasn't yet good enough, or too uncanny. But uncanny and art work well together.
Also consider that one of the reasons AI generated video has CG-like artifacts is that it is trained on CG video. Better CG generation, and more real video for training, will reduce these over time.
Honestly, stuff like that could also be because of compression. We're all used to see low quality videos online.
  • dagmx
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Most people have terrible eyes for distinguishing content.

I’ve worked in CG for many years and despite the online nerd fests that decry CG imagery in films, 99% of those people can’t tell what’s CG or not unless it’s incredibly obvious.

It’s the same for GenAI, though I think there are more tells. Still, most people cannot tell reality from fiction. If you just tell them it’s real, they’ll most likely believe it.

> I’ve worked in CG for many years and despite the online nerd fests that decry CG imagery in films, 99% of those people can’t tell what’s CG or not unless it’s incredibly obvious.

I've noticed people assume things are CG that turn out to be practical effects, or 90% practical with just a bit of CG to add detail.

  • dagmx
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Yep I’ve had that happen many times , where people assume my work is real and the practical is CG.

Worse, directors often lie about what’s practical and we’ll have replaced it with CG. So people online will cheer the “practicals” as being better visually, while not knowing what they’re even looking at.

I’ve seen interviews with actors even where they talk about how they look in a given shot or have done something, and not realize they’re not even really in the shot anymore.

People just have terrible eyes once you can convince them something is a certain way.

But films without CG are clearly superior and it’s not even in contention.

Lawrence of Arabia or Cleopatra alone have incredible fully live-shot special effects which cannot be easily replicated with CG and have aged like fine wine, unlike the trash early CG of the 80s and 90s which ruined otherwise great films like The Last Starfighter.

  • dagmx
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
I’m sorry, but you make an absurd argument.

You’re taking the best films of an era and comparing them to an arbitrary list of movies you don’t like? Adding to that, you’re comparing it to films in the infancy of a technology?

This is peak confusion of causality and correlation. There are tons of great films in that time frame with CG. Unless you’re going to argue that Jurassic Park is bad.

Jurassic Park isn't just a good example of CG, it also a good example of making the right choices on practical vs CG (in the context of technology of the time) and using a reasonable budget. You can have great CG and crappy CG by cutting corners. Plenty of people that decry CG don't actually know how much there is, even in non-sci-fi movies like romcoms, just for post-editing. But when it is done well nobody notices, the complaints only come when it looks like crap. Great use of technology to achieve the artistic vision will stand the test of time.
It's also directed by one of the best directors in history.
The worst bit about working in CG, or film-making in general, is finding it harder to enjoy films because you are hypersensitized to bad work.
  • dagmx
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Yeah, totally. It’s not even just bad work, but I’m constantly breaking down shots as I’m watching them.

Especially because I’ve done both on set and virtual production, it’s hard to suspend disbelief in a lot of films.

> Still, most people cannot tell reality from fiction. If you just tell them it’s real, they’ll most likely believe it.

This goes for conversation too! My neighbour recently told me about a mutual neighbour who walks 200 miles per day working on his farm. When I explained that this is impossible he said "I'll have to disagree with you there"

Maybe not strictly impossible, just slightly better than an ultramarathon world record pace?

https://www.reddit.com/r/Ultramarathon/comments/xhbs4d/sorok...

https://en.wikipedia.org/wiki/Aleksandr_Sorokin

So, not very convenient for a non-world-champion runner to do (let alone while doing farm work) (let alone on more than one occasion).
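The arithmetic above can be made explicit; the record figure is Sorokin's 24-hour mark from the linked page (~319.6 km):

```python
# Back-of-envelope check on the "200 miles per day" claim.
KM_PER_MILE = 1.609344

claimed_miles_per_day = 200
claimed_km = claimed_miles_per_day * KM_PER_MILE  # ~321.9 km

# Aleksandr Sorokin's 24-hour running world record (2022), in km.
record_24h_km = 319.614

# The farm-walking claim slightly exceeds the 24-hour world record.
print(claimed_km, record_24h_km, claimed_km > record_24h_km)
```

So the neighbour's claim isn't just implausible; it would edge out the best 24-hour performance ever recorded, done daily, on a farm.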

That's a cultural issue that seems to have developed in the past years (decades? idk), where people take their own opinion (or what they think is their own opinion) as unchallengeable gospel.

In my opinion anyway, I'm gonna have to disagree with any counterpoints in advance.

  • dagmx
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
This is partially the result of being taught that every opinion is valid. What was taught as a nicety (don’t dismiss other people’s opinions was the intention) has evolved into all opinions are equal.

If all opinions are equal, and we’ve reinforced that you can find anything to strengthen an opinion, then facts don’t actually matter.

But I don’t think it’s actually all that recent. History is full of people saying that facts or logic don’t matter. The Americas were “discovered” by such a phenomenon.

What's weird is the projection you get when you challenge someone's opinion in any way. All of a sudden, you're the arrogant one who thinks they're always right, no matter how diplomatic (or undeniably correct) about the issue you are. Or is that just me?
  • 5040
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
>Most people have terrible eyes for distinguishing content

A related phenomenon is not being able to hear the difference between 128kbps and 320kbps. I find the notion astonishing, and yet lots of people cannot tell the difference.

> Most people have terrible eyes for distinguishing content.

But also in the case of the fluffy train there's nothing to compare it against. The reason CGI humans look the most fake is because we're trained from birth to read a human face. Someone that looks at trains on a regular basis will probably discern this as being fake quicker than most.

  • krick
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Looks dope though. But what impressed me recently was some crypto-scam video featuring "a clip" from the Lex Fridman Podcast where Elon Musk "reveals" his new crypto or whatever (sadly, the one I saw has since been deleted). It didn't really look good - they were talking with weird pauses and intonations, and as awkward as these two normally are, here they were even more unnatural. There was so much audacity to it I laughed out loud.

But what I was thinking while enjoying the show was: people wouldn't do that, if it didn't work.

This is the point. There is no such thing as "completely fools commenters". I mean, it didn't fool you, apparently. (But don't be sad, I bet you were fooled by something else: you just don't know it, obviously.) But some of it always fools somebody.

I really liked how Thiel mentioned on some podcast that ChatGPT successfully passed the Turing test, which was implicitly assumed to be "the holy grail of AI", and nobody really noticed. This is completely true. We don't really think about ChatGPT as something that passes the Turing test; we think about how the stupid, useless thing misled you with some mistake in a calculation you decided to delegate to it. But realistically, if it doesn't pass, it's only because it is specifically trained to avoid passing.

I wish you were right that there is no way to completely fool viewers, but I know you are not. I was fooled! Note that I call out "AIGC." If that wasn't there (I only noticed it on repeat views), I would have simply had no way to tell. These are early, primitive AI generated videos, and I'm already unable to differentiate. Many in this thread talk about movie CG; there are countless movie scenes that fool all viewers.
If someone were to train a model on Joe Rogan podcasts whole run, I’m sure it would spit out extremely impressive fake results already
> people wouldn't do that, if it didn't work.

You can't assume that with scams. Quite often, scams are themselves sold as a get-rich-quick scheme, which like all GRQ schemes, they wouldn't be if they worked well.

  • peab
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Think about this: you very well may have already seen AI videos that fooled you - you wouldn't know if you did.
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
One of the clearest signs in the current generation is that the typography still looks bad.
People are smart enough to know that what you see in movies isn't real. It will just take a little time for people to realize that now applies to all videos and images.
The frequency is so high, and I am getting so burned out on checking comments to gauge how much everything is changing, that I've nearly given up subconsciously. Pretty close to just ignoring all images I see.
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
This is definitely something the Japanese would do, but it is not a real train unless a thousand salarymen are crammed into it.
The bigger problem is that people think something this ridiculous could happen.
  • marci
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Weirder things have been created. I could definitely see one being made for a movie.
> I'm quite nervous for the future.

Videos like these were already achievable through VFX.

The only difference here is a reduction in costs. That does mean that more people will produce misinformation, but the problem is one that we have had time to tackle, and which gave rise to Snopes and many others.

I mean the only real tell for me is how expensive this stunt would be. I personally think this is a really cool use of genAI. But the consequences will be far reaching.
Some of the comments were like, "come on guys, if this was real it would be way dirtier"
The face of the girl on the left at the start in the first second should have been a giveaway.
My intuition went for a video compression artifact rather than an AI modeling problem. There is even a moment directly before the cut that can be interpreted as the next key frame clearing up the face. To be honest, the whole video could have fooled me. There is definitely an aspect of discerning these videos that can be trained just by watching more of them with a critical eye, so try to be kind to those who haven't concerned themselves with generative AI as much as you have.
Yeah, it's unfortunate that video compression already introduces artifacts into real videos, so minor genAI artifacts don't stand out.

It also took me a while to find any truly unambiguous signs of AI generation. For example, the reflection on the inside of the windows is wonky, but in real life warped glass can also produce weird reflections. I finally found a dark rectangle inside the door window, which at first stays fixed like a sign on the glass. However it then begins to move like part of the reflection, which really broke the illusion for me.

No one is looking at her face though, they're looking at the giant hello kitty train. And you were only looking at her face because you were told it's an AI-generated video. I agree with superfrank that extreme skepticism of everything seen online is going to have to be the default, unfortunately.
Hard to not discount that as a compression artifact.
Just like all the obvious signs[1] the moon landings were faked.

[1]: https://web.archive.org/web/20120829004513/http://stuffucanu...

Just wanted to say I really enjoyed this!
One thing that's not intuitive to spot but is actually completely wrong is that in the second clip we're apparently inside the train, yet the train is still rolling under us.
  • lmm
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Or, y'know, the camera's moving smoothly backwards through the train? Would be a bit of an odd choice (and high-effort to make it that smooth versus someone just carrying it) but not impossible by any means.
Also "HELLO KITTY" being backwards is odd - writing on trains doesn't normally come out like that, e.g. https://www.groupe-sncf.com/medias-publics/styles/crop_1_1/p...
All the text is mirrored. It's not unusual to do this to avoid copyright filters, which only adds to the suspicion.
The whole video was probably mirrored before being posted. Doesn't seem to be related to being AI generated.
On the other hand, because tools like this are being made available before their output is perfected, you and many others are being trained in AI discernment; being able to detect fake things will be a helpful skill to have for some time: another form of critical thinking.

It would be FAR worse if a privately held advanced AI's outputs were unleashed without the population being at least somewhat cautious of everything. The real danger imho comes from private silos of advanced general intelligence that aren't shared and used to gain power, control, and money.

I think these things will get bigger and better much faster than we can learn to discern them.
With zero doubt. Faster than we expect. And yet, it's nice that we are learning to distrust what we see before the "real real" stuff comes out.
Open source has already caught up with SOTA:

https://www.reddit.com/r/StableDiffusion/comments/1hav4z3/op...

These are even unfair comparisons because they're leveraging text-to-video instead of the more powerful image-to-video. In the latter case, the results are indistinguishable.

Video generation is about to be everywhere, and we're about to have the "Stable Diffusion" moment for video.

Look at the comments: people are already fawning over open source being uncensored.

Cat's out of the bag.

Very convenient for those who are waiting for the waters to get muddier.
I'm wondering that as well but I also wonder if it's a bit like CGI where it's somewhat hit a limit on realness. I'm not saying CGI doesn't get better but is a 2024 Gollum that much more realistic than 2004 Gollum? Maybe I'm wrong but I wonder if that plastic feel to AI lessens but still sticks around.
>you and many others are being trained in AI discernment

HN is a hyper-specialized group of people. The average person cannot do this and, as we've seen, devours misinformation with no second thought.

On one hand, I like to think that society is getting trained to recognize AI and distrust it. But at the same time my retired boomer parents are over for the holidays and I catch them watching YouTube videos completely oblivious to the fact that it's an AI voice just reading an LLM-generated script with B roll for eye candy. Oftentimes it's just stolen auto-generated captions from larger creators, regurgitated by an AI voice. I'll point it out and they don't believe me that the voice is fake.
AI voices have gotten scarily good. They are easy to recognize because most creators use the same voices with the same intonations and don't care to cut out the mistakes. But if you don't recognize the voice it takes a couple sentences to discern that it's AI even with an ear trained on the difference.

But it is funny to see how much stuff gets uploaded with zero quality control and still gets traction. These models really don't deal well with "innocent" letter substitutions, Iike using I instead of l.

I've heard enough slop using the ElevenLabs voices that I can recognize them almost immediately now. But you're right. Higher end models with less familiar voices are harder to notice. One consistent failing is that they are always too perfect. No mistakes or signs of cuts to edit out where a human VA would have made a mistake. It's all very smooth and perfect, as if they nailed it in the first take. Once the cheap/free models manage to fix that, then we are in real trouble. Also, some really lazy slop creators don't bother to fix issues with pronunciation. But that's not the fault of the model really.
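The I-for-l substitution mentioned above is easy to demonstrate: the two glyphs look nearly identical in many fonts, but they are distinct code points, so a TTS or text model sees a different word entirely. A minimal Python illustration (the word "Iike" is the example from the comment, spelled with a capital I):

```python
# "Iike" uses a capital I (U+0049) where "like" uses a lowercase l (U+006C).
# To the eye the glyphs are near-identical; to a model they are different tokens.
word = "Iike"
codepoints = [hex(ord(c)) for c in word]

print(codepoints)               # ['0x49', '0x69', '0x6b', '0x65']
print(word.lower() == "like")   # False: 'I'.lower() is 'i', not 'l'
```

A simple dictionary or code-point check catches what the eye misses, which is why this slips through zero-quality-control pipelines.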
"More human than human" is our motto. https://youtu.be/ZbgmYhqFO-4?t=30
And yet, OP referred to a thread where the reality of the shorts were being questioned by "average" people. Imagine a world where OpenAI were the first out the gates with this and just started producing their own videos without telling anyone about their technology or letting creators play with it. They'd make loads of money, probably could topple governments... I'm glad these tools are being made generally available versus the alternative.
It saddens me. Innovations in AI 'art' generation (music, audio, photo) have been a net negative to society and are already actively harming the Internet and our media sphere.

Like I said in another comment, LLMs are cool and useful, but who in the hell asked for AI art? It's good enough to fool people and break the fragile trust relationship we had with online content, but is also extremely shit and carries no meaning or depth whatsoever.

  • anxoo
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
>who in the hell asked for AI art?

everyone who has ever used stock photography, custom illustrators, and image editing. as AI improves, it will come after all of those industries.

that said, it is not OpenAI's goal to beat Shutterstock, nor is it the goal of Anthropic or Google or Meta. their goal is to make god: https://ia.samaltman.com/ . visual perception (and generation) is the near-term step on that path. every discussion of AI that doesn't acknowledge this goal, what all of these billions of dollars are aiming for, is myopic and naive.

  • rurp
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
There was a recent discussion in another HN thread that I think summed it up well. Good art rewards a careful viewer; the more you look at and think about good art, the more you get out of it. AI art does the opposite and punishes thoughtful consumers. There's no logical underpinning to the various details, it's just stuff mashed together in a superficially nice looking way.
I think AI "art" can be as useful as the text generators, i.e. only within certain limits of dull and stupid stuff that needs to exist but has little to no value.

For example, you need to generate a landing page for your boring company: text, images, videos and the overall design (as well as code!) can be and should be generated because... who cares about your boring company's landing page, right?

One could ask why the boring company landing page exists in the first place though. If it's not providing value to humans to warrant actual attention being paid to it...
The world is in need of soap. Not the fancy beautiful artistic kind, but the kind that comes in containers and gets put in bathrooms. This objectively saves lives and is one of those boring things I can imagine.
Then you don't understand the purpose of a landing page. If the boring company hires somebody to make the landing page who actually understands their job, the landing page will have great importance.
> the landing page will have great importance.

Most companies don't need this. They need a page that has their contact info and some general information about services they provide so they can have a bare minimum internet presence and show up on google maps.

Absolutely, if your company doesn't want to make sales, or if you want to be bothered all the time by people calling and mailing only for them to find out your product isn't a fit for them. Or if you want third-party sellers to take over most of your business, like Booking.com, AirBnB, DoorDash or Amazon.

Companies who understand the importance of a customer friendly and functional web presence get a great return on their investment. And it's much better for the customer.

I have an ice cream shop by me that doesn't even have a website. They're mobbed every day, because good ice cream is fairly self explanatory, and doesn't need a web presence
You’re conflating “website” with “landing page”.

Your ice cream shop doesn’t need a landing page because of word of mouth and foot traffic.

Some project management platform for plumbers needs a highly tuned webpage because they’re competing with 20 other such systems, and there’s no line to walk past and assume it’s there because the software is good.

Believing that if you build great plumbing SaaS software, paying customers will magically appear is naive.

A great product can sell itself. But that doesn’t mean that marketing and sales aren’t necessary in order to get the product in front of people, assuage their concerns, reassure them that it solves their problems, show social proof from others using it, and close the deal. A good landing page will do all of this ;)

> Like I said in another comment, LLMs are cool and useful, but who in the hell asked for AI art?

I did. I started messing around with computer graphics on DOS with QBASIC and consider AI art to be just an extension of that.

On the other hand I don't care all that much for LLMs most of the time. They're sometimes useful, but while I find AI art I enjoy very regularly, using a LLM for something is more a once every couple weeks event for me.

How do you know they are a net negative? What's your source?
My opinion ;-)

That's what HN is for

It's quite well-supported on here, that's for sure.

Somewhere there's a site for "hackers" where it isn't, and I hope I stumble across that site at some point.

Do add "in my opinion" or prefix with "I think", because your definite wording implied you were stating a verifiable fact. Telling opinions like they are facts and then backtracking with "oh but it was just my opinion" is a big problem in (online?) society / discourse, and has led to a lot of misinformation and anti-scientific takes spreading.

"The earth is flat" - "Can you prove it?" - "Oh it's just my opinion". It's dishonest.

I agree with the first part. For me, AI art is the chance to have a somewhat creative outlet that I wouldn't have otherwise, because I'm much worse at painting than I can stand. Drawing by prompts helps me be creative and work through some stuff - for that it's also nice and interesting to see that the result differs from my mental image. I will tweak the prompt to some extent and to some extent go with some unintended elements of the drawing. I keep the drawing on my phone in the notes app with a title and the prompt.

To get back to the beginning: I really do agree that the societal impact on the whole appears to be negative. But there are some positives and I wanted to share my example of that.

That describes most art. At least ai art can be pretty and doesn’t have the same political message.
Go on civil.AI, it’s primarily used for hardcore waifu porn.
You mean civitai.com? There's a lot more on there than just that...
[dead]
  • lmm
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Much of the time I don't want "meaning or depth", I just want a pretty picture of whatever it was. AI art is great, it's just that the people it most benefits are the people you don't see or hear much from (and, rude as this is to say, people who write less convincingly).
They should have kept this amazing tech under wraps because you have a bad feeling about it? Hate to break it to you, but there have been fake videos on the internet ever since it has existed. There are more ways to fake videos than GenAI. If you haven't been consuming everything on the internet with a high-alert BS sensor, then that's an issue of its own. You shouldn't trust things on the internet anyway unless there is overwhelming evidence.
  • callc
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Amazing tech != socially good

Of course, as knowledgeable people in tech we can look at the last few years of AI improvements as technically remarkable. pen2l is talking about social impact.

I hope our trade can collectively become adults at the big table of Real Engineers. Consider the impact on humanity of your work. If you don’t care, then you are either recklessly irresponsible, don’t know any better, or are intentionally causing harm at scale.

Very well put. There's always been this Silicon Valley instinct that all technological advance is always good for humanity, and it's just not that simple.
  • callc
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Thanks OskarS

Tech is a very powerful tool that can automate the most mundane tasks and also automate harm like mass surveillance and erosion of ownership rights of your devices. The sheer ability to create new markets and replace inefficient non-automated markets leads to huge $$$ making opportunities which people may mistake as being good in itself (good for economy / GDP = good for humanity)

Cannot even qualify "It has always been shit, so no problem at it becoming even shittier" as a hot take.
> If you haven't been consuming everything on the internet with a high alert bs sensor, then that's an issue of its own

"just be privileged as I was to get all the necessary education to be able to not be fooled by this tech". Yeah, very realistic and compassionate.

  • cma
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
With a heavy dose of "if masses of people are fooled by this, it can't affect me as long as I can see through it. No possible repercussions of mass people believing completely made up stuff that could affect laws, etc."
  • JPKab
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
This entire thread reeks of "I'm smart enough to know that videos can be faked, but Jethro in the trailer park isn't because he's just a plumber, and therefore this tech needs to be censored or else Jethro might believe stuff that makes him vote in a way I don't like".

While the average person overestimates their own intelligence, the average techy dramatically underestimates the intelligence of the average member of the public. The weirdos that latch onto every fake video and silly conspiracy theory are dramatically overrepresented in every online comments thread, but supposed geniuses in the tech/NGO/academic community forget this and assume a broad swath of the public believes in stuff like "Pizza gate" because nuanced thinking is a skill only the enlightened few possess.

Some people aren't very skeptical at baseline, it doesn't mean that those concerned about the ability of others to recognize AI are disparaging people based on intelligence.

For example, some people can be very intelligent, yet not be discerning of information that resonates with prior biases. You see this in those who are devoutly religious, politically polarized, etc.

There is reason to believe that such biases will lead to ontological misinterpretations of algorithmically generated information.

You can see mistakes in interpretation on a day to day basis by the population at large. There are swaths of widely held beliefs that aren't based in truth. Pretty much anyone is likely to believe at least some stereotype, folklore, urban legend, or myth.

It isn’t about being smart (you assumed this is what ‘education’ was pointing at). Most people aren’t even aware of what’s happening besides extremely superficial things that they get here and there on the news. Can’t you honestly see the real potential for massive damage coming out of all this?
With respect to the American public, the majority can and do utilize nuanced thinking as a survival skill. The problem of the modern American era is not that our public is low in average intelligence. Rather, it is that, on average, we have been miseducated to seek the eradication of discomfort, uncertainty, inconveniences, and unknowns.
  • cma
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
That radio station in Hotel Rwanda could be a bad thing for you and people you cared about even if you personally could discern the lies, so it wasn't fooling you.
Actually you overestimate the general public's ability to discern what's real or not. On top of that, most people don't even care if it's real. This is exactly why Trump won.

Example: if a gen ai vid of a politician doing some crazy crime came out. Even if it were proven fake, people would start questioning everything and still act as if the politician were guilty

  • JPKab
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
"This is exactly why Trump won"

See the part of my comment you are replying to where I specifically stated that the motivation for all of this is that "Jethro doesn't vote the way I want him to". You've proven my point.

The censorious attitudes on HN were non-existent before Trump won in 2016. I know this for a fact. I've had my account on here since 2012, after 2 years of being just a reader.

Meanwhile, you overestimate how immune to misinformation and lies the average HN techy is. Just a few years ago, the majority of people on here believed, with utter conviction, that the bat-borne coronavirus lab in Wuhan had absolutely no connection with the bat-borne coronavirus epidemic that started in Wuhan and that only bigots and ignoramuses could draw such a conclusion. I experienced this whenever I brought up the blatantly obvious, common sense connection in these same comment threads in late 2020 or into mid 2021. The absolutely absurd denial of common sense by otherwise intelligent people was reminiscent of trying to talk to a religious fundamentalist about evolution while pointing at dinosaur fossils and having them continue to deny what was staring them in the face.

  • sekai
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
> "just be privileged as I was to get all the necessary education to be able to not be fooled by this tech". Yeah, very realistic and compassionate.

This has nothing to do with privilege, a person in Indian slums on his 2005 PC with internet access can have better internet BS radar than an Ivy League student.

I think that would be an exception rather than the rule, to be honest.

I think though, that if you are in the position of doing serious critical reflection about this stuff, which is in my opinion necessary for being in a position of discernment wrt this stuff, then you are privileged. This is the idea I wanted to convey.

  • JPKab
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
What education do you specifically think is necessary for people with average IQs all over the world to not be fooled by this, given that they are aware that videos can easily be faked in 2024? A high school degree? A bachelor's?
>>given that they are aware that videos can easily be faked in 2024?

That's a ridiculous assumption. In my experience no one outside of tech circles is even remotely aware that this kind of thing is possible already.

With all due respect, I think you may be out of touch.
  • JPKab
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
You think that the average member of the public isn't aware that videos can be faked with AI, or non-AI special effects, and your source of data for this is "your own experience"? Really?

My family is mostly working class in an economically depressed part of the Virginia/West Virginia coal country, and every single one of them is aware of this. None of them work in tech, obviously. None have college degrees.

I maintain that the attitude driving this paternalistic, censorious attitude is arrogance and condescension.

A prime example of how broadly aware the public all over the world is of AI faked videos was the reaction in the Arab world to the October 7th videos posted by Hamas. A shocking (and depressing) percentage of Arabs, as well as non-Arab Muslims all over the world, believed the videos and pictures were fakes produced with AI. I don't remember the exact number, but the polling I saw in November showed it was over 50% who believed they were fakes in countries as disparate as Egypt and Indonesia.

>>isn't aware that videos can be faked with AI, or non-AI special effects

These two are very different things. My family believes all kinds of videos on the internet are fake. None of them have any idea what a tool like Sora can do. The gap between "oh this was probably special effects" to "you have to notice pixels shimmering around someone's hand to tell" is enormous.

>>My family is mostly working class in an economically depressed part of the Virginia/West Virginia coal country, and every single one of them is aware of this.

Your working class family has time to keep up with the advancements in generative AI for video? They have more free time than I do then. If we're sharing anecdotes about families then my family is from Polish coal country and their idea of AI is talking to your car and it responding poorly.

>>I maintain that the attitude driving this paternalistic, censorious attitude is arrogance and condescension.

I'm confused - who is displaying this "censorious" attitude here?

>> and your source of data for this is "your own experience"? Really?

Yes, really. I mean do you have anything else? You are also quoting things from your own experience.

I’m not (exclusively) talking about formal education. There are lots of people (I would dare say the majority of the planet) that don’t have the ‘digital literacy’ required to handle what’s happening right now. Being from a developed country I am very much worried about this.
Fooled by what? Some of it looks real but is implausible enough that it should set off your BS sensor. Other stuff is/will be more subtle and we will have no way of knowing.
Too charitable indeed. Google was simply unprepared and has inferior alternatives.

My prediction is that next year they will catch up a bit and will not be shy about releasing new technology. They will remain behind in LLMs but at least will more deeply envelop their own existing products, thus creating a narrative of improved innovation and profit potential. They will publicly acknowledge perceived risks and say they have teams ensuring it will be okay.

  • tziki
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
>They will remain behind in LLMs

The latest Gemini version (1206) is at least tied for the best LLM, if not the best outright.

I wish Google would allow me to remove the AI stuff from search results.

99% of the times it's either useless or wrong.

Strong plus one here. Not only that, but it uses gobs of energy in total. Google has reneged on all of its carbon promises to stay in the running for AI domination and to head off disruption to search ads business. Since I've unconsciously trained my brain to not look at the top search results anymore because they long ago turned into impossible-to-distinguish ads, I've quickly learned to just ignore the stupid AI summary. So it's an absurd waste of computational power to generate something wrong that I don't even want to see, and I can't even tell them to stop when they're wasting their own money to do so.
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
It’s often wrong anyway. Much like you, the thing that annoys me most about it though is all the power they must be using having it run on every single search by anyone.
  • Lcchy
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
I have been using Kagi for a year now and it's been liberating. It's an ad/SEO-free search engine.

https://kagi.com/

Sorry for the name-dropping, I have no affiliation and am just a very happy user, so I wanted to share it as it felt appropriate.

Add a -ai to the end of your Google search query. There are also browser extensions that stop the AI content from displaying. I use the one for Chrome called "Remove Google Search Generative AI".
Great tip! But it only removes Google's terrible AI summary, not AI-generated content from showing up in searches, which is what the OP wishes for. A combination of -ai and before:2022-01-01 is probably the closest we can get to that.
This is vaporware/false advertising.

We don't have tech to correctly "detect ai" in 2024, which is why education has broken down over the last few years with serial cheating in every institution.

Every company so far that claimed to detect AI-generated slop has failed.

Nobody has any clue which stuff is AI these days. Apart from the obvious ones, no one can tell generative AI apart from 3D-rendered stuff or low-res photos. Put image compression on top and it's definitely impossible.
I meant the bit of AI that Google adds on top the actual search results.
Ah sure, that stuff is just annyoing. I don't need a - probably wrong - summary of the top hit either.
I wonder what the outcome will be when new models are trained on AI-generated data. These companies are already running out of quality training data. So when most of the data on the internet is synthetic, will they find ways of separating the signal from the noise, or will all the noise lead to a convergence of performance across all models to something that is much inferior than what we have today?

This tech will make the internet even more unbearable to use, without mentioning its huge potential for abuse. This is far worse than whatever positives it might have, which are still unclear. What a shitshow.

Udm?
This is all inevitable. At worst it's pulling the issues forward by a few months or years, and I don't think anyone will meaningfully address the problem until it's staring us in the face.

I believe the internet needs a distributed trust and reputation layer. I haven't fully thought through all the details, but:

- Some way to subscribe to fact checking providers of your choice.

- Some way to tie individuals' reputation to the things they post.

- Overlay those trust and reputation layers.

I want to see a score for every webpage, and be able to drill into what factored into that score, and any additional context people have provided (e.g. Community Notes).

There's a huge bootstrapping and incentive problem though. I think all the big players would need to work together to build this. Social media, legacy media companies, browsers, etc.

This also presupposes people actually care about the truth, which unfortunately doesn't always seem like the case.
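The overlay idea sketched in the bullets above can be caricatured in a few lines: subscribe to providers, weight each by the trust you assign it, and combine their per-page scores. A deliberately minimal Python sketch (all provider names, weights, and scores here are hypothetical, and a real system would need signatures, identity, and incentives this toy ignores):

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    trust: float  # user-assigned weight in [0, 1]

# Hypothetical fact-checking providers the user subscribes to.
providers = [Provider("snopes-like", 0.9), Provider("community-notes", 0.7)]

# Each provider's score for a given page URL: 0 (false) .. 1 (trustworthy).
scores = {"snopes-like": 0.2, "community-notes": 0.4}

def page_score(providers, scores):
    """Trust-weighted average of subscribed providers' scores for one page."""
    rated = [p for p in providers if p.name in scores]
    total = sum(p.trust for p in rated)
    if total == 0:
        return None  # no data: show "unrated" rather than a fake score
    return sum(p.trust * scores[p.name] for p in rated) / total

print(page_score(providers, scores))
```

The `None` case matters for the bootstrapping problem mentioned below: most pages will start out unrated, and the UI has to be honest about that rather than defaulting to a score.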

  • bko
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
I don't think Google delayed or kept this under wraps for any noble reasons. I think they were just disorganized as evidenced by their recent scrambling to compete in this space.
I don't even know if this will be possible, or how it would work, but it seems like the next iteration of social media will be based on some verification that the user is not a bot or using AI. Currently they are all incentivized to not stop bot activity because it increases user counts, ad revenue, etc.

Maybe the model is you have to pay per account to use it, or maybe the model will be something else.

I doubt this will make everyone just go back to primarily communicating in person/via voice servers but that is a possibility.

Twitter Blue is paid and yet every single bot account has it in order to boost views.
> Maybe the model is you have to pay per account to use it

Spammers can afford more money per bot for their operations than the average user can justify to spend on social media.

  • mnau
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
So Musk was right?
No, because Musk encourages AI slop if people are willing to pay.

What we probably need (this is going to sound crazy, but I don’t have a better suggestion), is some kind of networked trust system.

Like Community Notes? It's actually a darn good system.
exactly one lab has passed the test of morals vs profit at this point, and that's DeepMind, and they were thoroughly punished for it.

Every value OpenAI has claimed to have hasn't lasted a millisecond longer than there was a profit motive to break it, and even Anthropic is doing military tech now.

  • dmix
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
LLMs aren’t AGI
> the image of those people on slot machines, mechanically and soullessly pulling levers because they are addicted. It's just so strange.

Worse, the audience is our parents and grandparents. They have little context to be able to sort out reality from this stuff

Shorts are designed to trade your valuable attention for trite, low-effort content. Most decent shorts are just clips of longer-form content.

Do yourself a favor and avoid that kind of content, opting instead for long-form consumption. The discovery patterns are different, but you're less inclined to encounter fake content if you develop a trust network of good channels.

This is also my strategy. AI content makes me focus even harder on the source of the content instead of the apparent quality, because the current set of GenAI techniques are best at imitating surface-level quality features.
What are some good channels you recommend?
The way AI goes, it will actually raise the cost of valid services: the cost of bullshit and spam is going down, which raises the cost for valid, non-AI-powered services to rise above the noise or be able to filter it out. There is only negative value to what "open"-ai is adding to the world right now. By playing the long-term AI safety card, of the hypothetical scenario of some AI supposedly getting conscious in the future, they try to pass themselves off as clean and innocent in all the damage they cause to society.

I just hope the online, social media space gets enshittified to such a degree that it stops playing a major role in society, though sadly that is not how things usually seem to work.

On the other hand by making public what technologies capabilities are - doesn't it stop the problem of people having this tech in secret and using it before anybody is aware it's even possible?

ie a company developing this tech, keeping under wraps and say only using for special government programmes....

Pandora’s box is open, not releasing models and tools is just going to result in someone else doing it.
They didn’t keep it under wraps, it’s just that the team considered the paper the thing being shipped, not the product. They still shipped the papers that decentralized the knowledge.

Could even argue shipping the product and not the paper would have done more for AI safety; at least it would have been controlled.

The best part is that eventually, over time, the AI slop will feed into training data more and more. I suspect it will be like the Kessler Syndrome of AI models.
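The "Kessler Syndrome" worry above can be caricatured in a few lines: if each generation trains only on the previous generation's output and rare patterns fall below the learning threshold, diversity ratchets downward. A deliberately crude toy, not a claim about real training dynamics:

```python
from collections import Counter

# Toy "model collapse": each generation trains only on the previous
# generation's output; tokens rarer than the average frequency fall
# below the learning threshold and vanish for good.
corpus = list("abcdefg") * 10 + list("abc") * 30  # skewed token mix

def retrain(tokens):
    counts = Counter(tokens)
    threshold = sum(counts.values()) / len(counts)  # mean frequency
    kept = {t for t, n in counts.items() if n >= threshold}
    return [t for t in tokens if t in kept]

for _ in range(3):
    corpus = retrain(corpus)

print(sorted(set(corpus)))  # rare tokens d..g have vanished
```

Real training data curation is of course far more nuanced, but the ratchet is the point: once rare material is gone from the pool, later generations cannot recover it.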
The ability to make strange videos as a consumer... it's not inherently good or bad, it'll just be... weird
It doesn't take AI to fool people. They have been propagandised and lied to on a major scale since mass media.

They also lie themselves: they cannot detect overt bias or reflect on themselves and be aware of their hidden motives, resentments and wishful thinking. Including me and you.

Most people hold important beliefs about the world that are comically inaccurate.

AI changes absolutely nothing about how many true or false beliefs the average Joe holds.

> And I can't help but be saddened about OpenAI's decisions to unload a lot of this before recognizing the results of unleashing this to humanity

Yeah, and it's especially hypocritical coming from them, who said they'd refuse to disclose anything about GPT-3 because they said it was dangerous. And then a few years later: “Hey, remember this thing we told you was too dangerous? Now we have a monetization strategy, so we're giving access to everyone, today.”

> there were enough AI'esque artifacts that one could confidently conclude it's fake.

And yet, you would not have known how to recognize those artifacts without "OpenAI's decisions to unload a lot of this before recognizing the results of unleashing this to humanity".

You could have said the same thing about Photoshop... Some people will learn to spot BS and think critically even if they can't quite put their finger on it and the video is very good (What, Trump fought a T-Rex, AND WON?), some people could be fooled by anything, and there is a lot in between.
[dead]
[flagged]
So is yours! Mine isn't, however. I am a hard-nosed real boy now.
Write something that an LLM could never write.

(This is my latest favorite prompt and interview/conversation question)

If you’re not actively publishing at top conferences (e.g. NeurIPS), then this is a trash question and shows the lack of knowledge that many who are now entering the field will have.

Anything that you or others can answer to this which isn’t some stupid “gotcha” puzzle shit (lol, it's video because LLMs aren't video models, amirite?) will be wrong because of things like structured decoding and the fact that ultra-high temperature works with better samplers like min_p.

https://openreview.net/forum?id=FBkpCyujtS&noteId=mY7FMnuuC9

3e4a3ad9f05fdfb609dda6e5f512e52506f4c1053962e21bfd93f1ed81582d16ca0fef9574fb07ab62f8f5b1373b4ddd541804c0d176f4a557d900b05047e853

(This is the hash of a string randomly popped in my mind. An LLM will write this with almost 0 probability --- until this is crawled into the training sets)
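The trick generalizes: hash any private string and the output is text an LLM cannot plausibly reproduce without the preimage (128 hex characters, as above, is consistent with SHA-512). A minimal sketch; the secret below is a hypothetical stand-in, since the commenter's original string was never shared:

```python
import hashlib

# Hypothetical stand-in for a string only you know.
secret = "a string that randomly popped into my mind"

digest = hashlib.sha512(secret.encode("utf-8")).hexdigest()
print(len(digest))  # 128 hex characters
print(digest)       # infeasible for any model to emit without the preimage
```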

  • Kiro · 2 weeks ago
You go first.
[dead]
Considering google image search is polluted by AI-generated images at this moment, perhaps google is afraid of making the search even worse?
What I desperately need is a model that generates perfectly made PowerPoint slides. I have to create many presentations for management, and it’s a very time consuming task. It’s easy to outline my train of thoughts and let an LLM write the full text, but then to create a convincing presentation slide by slide takes days.

I know there is Beautiful.ai or Copilot for PowerPoint, but none of the existing tools really work for me because the results and the user flow aren’t convincing.

Have you checked out Marp? https://marp.app/

Basically it generates slides from markdown, which is great even without LLMs. But you can get LLMs to output in markdown/Marp format and then use Marp to generate the slides.

I haven't looked into complicated slides, but works well for text-based ones.
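For concreteness, a Marp deck is just markdown with `marp: true` front matter and `---` slide separators (the `gaia` theme below is one of Marp's built-ins):

```markdown
---
marp: true
theme: gaia
---

# Quarterly update

- Status: on track
- Biggest risk: timeline slip

---

# Next steps

1. Ship v2
2. Review metrics
```

With the Marp CLI, something like `npx @marp-team/marp-cli slides.md -o slides.pptx` should render it straight to PowerPoint (the output format is inferred from the extension).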

Looks interesting. I am on the hunt for clean tools for producing presentations. I really like Powerpoint, mainly because of their animation and vector editing features. However, I don't want to keep using a proprietary tool.
You could also try Hyperdeck which uses Markdown for slides as well, but supports most of the animation features of Powerpoint as well as MathML and stuff like that (no vector editing though)

http://hyperdeck.io

If you're the coder/hacker type you might like Typst. It's a typesetting tool but it can create presentations too and I like it better than Powerpoint where I have to manually edit and align everything on every page to ensure a consistent style.
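A minimal sketch of Typst slides without any extra packages, assuming `presentation-16-9` is available as a built-in paper size (it is in recent Typst versions):

```typst
// Every slide inherits the same rules, set once.
#set page(paper: "presentation-16-9")
#set text(size: 24pt)

= Why Typst for slides
- One consistent style, defined globally
- No manual aligning on every page

#pagebreak()

= Second slide
Content here follows the same rules as the first.
```

Community packages (e.g. polylux) layer incremental reveals and slide helpers on top of this.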
Fascinating. I did not know that you can create presentations with Typst
just use a canva template
Thanks, I did not know that you could use Canva for presentations. However, it is still proprietary software. (Worse, it lives on the web)
  • jcims · 2 weeks ago
Work at a bank? That’s my reaction to 95% of the cool stuff I see every week.
No, I do not work at a bank. What do you mean?
Banks need to be secure and be able to uphold regulatory standards.
Same for Mermaid diagrams.
I want something that can take my ugly line drawing and make it a cool looking line drawing without distorting the main idea
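That cleanup loop is roughly what Mermaid already enables for diagrams: you (or an LLM) write a terse text description, and the renderer handles layout and styling. A minimal flowchart, for illustration:

```mermaid
flowchart LR
    sketch[Rough hand drawing] --> llm{LLM transcription}
    llm --> src[Mermaid source]
    src --> render[Cleanly rendered diagram]
```

Because the source is plain text, an LLM can also rewrite or restructure the diagram without distorting the underlying idea.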
I don't really understand how this would work. Writing long paragraphs to prompt the AI is much more tedious than writing a few bullet points for the slides.

If you need the AI to help you brainstorm a good narrative, that is a different story

Never used it but seen it mentioned in that space: https://gamma.app/
there is a YC company that does that I think: https://www.rollstack.com/ i've never used them but I think they have many satisfied customers, maybe worth a shot!
Wow this is bad. And by bad i mean worse than leading open source and existing alternatives.

Is it me or does it seem like OpenAI revolutionized with both chatGPT and Sora, but they've completely hit the ceiling?

Honestly a bit surprised it happened so fast!

I think we're in the snapdragon age of AI for the next little bit, if you were around for early smartphones.

Each company would either rush to get a phone out with the new Snapdragon chip, or take their time to polish a release and have a better phone late in the cycle. But the real improvements were just the chip.

Nvidia chips/larger data centers are the chips; the models are the plethora of Android phones each generation.

That kept going until progress stabilized. Then the best user experience & vertical integration won over chasing chip performance (apple).

Same goes for DALL-E. It was cool to try it the first week or so, but now the output is so much worse than Midjourney and Stable Diffusion. For me it can’t even generate straight lines and everything looks comic-ish.
DALL-E 3 image quality has always been subpar, but its prompt adherence is on par with FLUX. Midjourney has some of the worst prompt adherence, but some of the best image quality.
DALL-E 3 image quality was absolutely amazing... for about 3 days. Then they must have panicked, because after that, everything it emitted included that ridiculous telltale orange/blue tint.
To me this is just a simple artifact of size & attention.

Another example of this is stuff like Bluesky. There's a lot of reasons to hate Twitter/X, but people going "Wow, Bluesky is so amazing, there's no ads and it's so much less toxic!" aren't complimenting Bluesky, they're just noting that it's smaller, has less attention, and so they don't have ads or the toxic masses YET.

GenAI image generation is an obvious vector for all sorts of problems, from copyrighted material, to real life people, to porn, and so on. OpenAI and Google have to be extraordinarily strict about this due to all the attention on them, and so end up locking down artistic expression dramatically.

Midjourney and Stable Diffusion may have equal stature amongst tech people, but in the public sphere they're unknowns. So they can get away with more risk.

>OpenAI and Google have to be extraordinarily strict

Why? Did the inventors of VHS tapes "have to be extraordinarily strict" and bake in safeguards because people might violate copyright laws, make porn, or tape something illegal?

Enforcing laws is the responsibility of the legal system. It sets a concerning precedent when companies like OAI would rather lobotomize their flagship products than risk them generating any Wrongthink.

If you're going to say something like this, you need to back it up with specific alternatives that provide a better result.

Besides just citing your sources, I'm genuinely curious what the best ones are for this so I can see the competition :)

HunYuan released by Tencent [1] is much better than Sora. It's 100% open source, is compatible with fine tuning, ComfyUI, control nets, and is receiving lots of active development.

That's not the only open video model, either. Lightricks' LTX, Genmo's Mochi, and Black Forest Labs' upcoming models will all be open source video foundation models.

Sora is commoditized like Dall-E at this point.

Video will be dominated by players like Flux and Stable Diffusion.

[1] https://github.com/Tencent/HunyuanVideo/

Something being available as OSS is very different from a turnkey product solution, not to mention that Tencent's 60 GiB VRAM requirement means a setup with at least 3-4 GPUs, which is quite rare and fairly expensive vs something time-shared like Sora where you pay a relatively small amount per video.

I think the important thing is task quality and I haven't seen any evaluations of that yet.

> Something being available as OSS is very different from a turnkey product solution, not to mention that Tencent's 60 GiB VRAM requirement means a setup with at least 3-4 GPUs, which is quite rare and fairly expensive vs something time-shared like Sora where you pay a relatively small amount per video.

It took two weeks to go from Mochi running on 8xH100s to running on 3090s. I don't think you appreciate the rapidity at which open source moves in this space.

HunYuan landed less than one week ago with just one modality (text-to-video), and it's already got LoRA training and fine tuning code, Comfy nodes, and control nets. Their roadmap is technically impressive and has many more control levers in scope.

I don't think you realize how "commodity" these models are and how closed off "turn key solutions" quickly get out-innovated by the wider ecosystem: nobody talks about or uses Dall-E to any extent anymore. It's all about open models like Flux and Stable Diffusion.

{Text/Image/Video}-to-Video is an inadequate modality for creative work anyway, and OpenAI is already behind on pairing other types of input with their models. This is something that the open ecosystem is excelling at. We have perfect syncing to dance choreography, music reactive textures, and character consistency. Sora has none of that and will likely never have those things.

> something time-sharing like Sora where you pay a relatively small amount per video.

Creators would prefer to run all of this on their own machines rather than pay for hosted SaaS that costs them thousands of dollars.

And for those that do prefer SaaS, there are abundant solutions for running hosted Comfy and a constellation of other models as on-demand.

If you've got a 4090 and ComfyUI can you run HunYuan?
There are already Hunyuan fp8 examples running on a 4090 on r/stablediffusion.
RunwayML too but not sure they also won't get commoditized by OSS video generation.
What are the leading alternatives? (Open source or otherwise)
You have to be specific. What's more important to you?

- uncensored output (SD + LoRa)

- Overall speed of generation (midjourney)

- Image quality (probably midjourney, or an SDXL checkpoint + upscaler)

- Prompt adherence (flux, DALL-E 3)

EDIT: This is strictly around image generation. The main video competitors are Kling, Hailuo, and Runway.

SD does not generate video, does it?
It does as of recently.
  • amrrs · 2 weeks ago
Minimax (from China) and Kling 1.5 from China. Recently Tencent launched its own.

You can see more model samples here: https://youtu.be/bCAV_9O1ioc

Those look... far worse? What am I missing.
  • amrrs · 2 weeks ago
Exactly, I don't know how people are saying Sora is bad. I know there are restrictions with humans, but with the storyboard and other customisations, it's definitely up there!
FLUX
MidJourney (commercial), Standard Diffusion XL
> Standard Diffusion XL

you probably meant Stable Diffusion XL. (autocorrect victim)

Sora was not really that big of a revolution, it was just catching up with competitors. I would even say in gen video they are behind right now.
Sora had some sweet cherry picked initial hype videos. That was more impressive than anything we could do at the time. Now, yea, it's questionable if it's on-par let alone better.
Wasn't just cherry picked. The balloon kid video had a VFX team cleaning up the output. They've said that now.
What is the best model in your opinion right now?
There are a lot of them, but Runway seems to have good controls and they are aligned with people who will actually use it - filmmakers and content creators.

In terms of image quality. Runway, Luma, and a few of the Chinese models all give "ok" results. I haven't seen anything from Sora to convince me they have done any kind of significant leap.

The issue there is alignment. It's cheap for Runway or Luma to continue in this path since it's their only path to profitability, they do nothing else.

But for OpenAI, I don't think this is near their top list of priorities. I doubt that they will be able to keep adding features like their competitors. Seems to me like this is the equivalent of a side project for them.

UPDATE: After watching direct comparison videos between prompts, I do think now that Sora is ahead. It's not a huge leap but it seems much better at keeping fine details roughly aligned.

For anyone who is curious where to find tons of SORA videos, go to reddit r/aivideo

HunYuan by Tencent. It's 100% open source too.
RunwayML
Bad also in the sense once you get over the "boy, it's amazing they can do that", you immediately think "boy, they really shouldn't do that".
My working theory is that OpenAI is the 'moonshot' kind of company, full of super smart researchers who like tackling hard problems but have no time or energy for things like 'how do we create a UX people actually want to use', which requires a ton of painful back-and-forth and thoughtful design work.

This is not a problem as long as they do the ChatGPT thing and sell an API, letting others figure out how to build a UX around it, but here they seem to be gunning for creating a boxed product.

Yeah… they have defined the UX that everyone else is copying thus far. So I feel like you are pretty far off the mark.
No doubt. I was waiting so long for Sora but Runway already burned me out on AI video.

It was fun for a few days but far more limited than I would have ever expected.

Maybe Sora 5.0 will be something special. Right now though all these video models are basically shit.

What are some of the open source video models?
  • wslh · 2 weeks ago
Could it be that text sources are plentiful, and denser to train on, compared to videos and images?
Their example videos: https://openai.com/sora/, of the doors opening, are hilarious.

1. The first set of doors doesn't have any doorknobs or handles. https://ibb.co/PwqfzBq

2. The second set of doors has handles, and some very large/random hinges on the left door. https://ibb.co/JkDtc6r

3. The third set doesn't have any handles, but I can forgive that, because we're in a spaceship now. The problem is that the inside of the doors seem to have windows, but the outside of the doors, doesn't have any windows. https://ibb.co/nwpXmtq & https://ibb.co/wr6v2g1

4. The best/most hilarious part for me. The doors have handles, but they are on the hinge side of the door. No idea how this would work. https://ibb.co/gWXDcfr

There are more examples of its limitations.

The video with dogs shows three taxis transforming into one, and the number of people under the tree changing: https://player.vimeo.com/video/1037090356?h=07432076b5&loop=...

An example from the HunyuanVideo is terrible as well. Look at that awful tongue: https://hunyuanvideoai.com/part-1-3.mp4

And what we see in that marketing is probably the best they could generate. And I suppose it took a lot of prompt tweaking and regenerations.

The internet is already full of junk shorts and useless videos and soon there will be even more junk content everywhere. :(

I think they trained on one too many closet bifold doors [1].

If you look at the edge of the doors as they swing open, it seems their movement resembles bifold door movement (there's a wiggle to it common to bifold doors that normal doors never have). Plus they seem to magically reveal an inner fold that wasn't there before.

[1]: https://duckduckgo.com/?t=h_&q=interior+bifold+closet+doors&...

I feel like there is a sweet spot for AI generation of images and videos that I would describe as "charmingly bad", like the stuff we got from the old CLIP+VQGAN models. I feel like Sora has jumped past that into the valley of "unappealingly bad".
I think that's why humor and memes are such good targets for this type of stuff. If you look up videos like "luma memes compilation," it takes well-known memes and distorts them in uncanny, freaky, and bizarre ways. Yet the fact the original subject is a meme somehow bypasses the uncanny valley repulsion. We seem to accept that much more readily, for whatever reason.
Technically it's amazing that this is possible at all. Yet I don't see how the world is better off for it on net. Aside from eliminating jobs in FX/filming/acting/set design/etc, what do we really gain? Amateur filmmakers can be more powerful? How about we put the same money into a fund for filmmakers to access. The negatives are plentiful, from the mundane reduction of our media to monolithic simulacra to putting the nail in the coffin for truth to exist unchallenged, let alone the 'fine tunes' that will continue to come for deepfakes that are literal (sexual) harassment.

Humans are not built for this power to be in the hands of everyone with low friction.

> Amateur filmmakers can be more powerful?

YouTube turned everyone into broadcasters. Sora could help bring countless untold stories to life, straight from the imagination.

> Humans are not built for this power to be in the hands of everyone with low friction.

Why is having power concentrated in few hands better?

> Why is having power concentrated in few hands better?

Because most people are dangerous morons. I don't think most people should be allowed to operate a car, let alone the most powerful tool for misinformation that has ever existed

I mean, you're clearly not wrong, but how do you propose implementing your worldview without doing even more harm to humanity?

The only thing worse than a powerful, dangerous tool in the hands of the masses is a powerful, dangerous tool controlled exclusively by powerful, dangerous people. (Cue the usual moronic analogies involving thermonuclear weapons...)

The problem of distributing access to dangerous things like AI and weapons has been a problem that humans have faced for a few thousand years. Governments are instituted among men, deriving their just powers from the consent of the governed. If there was a formula for good governance, we'd have fewer problems, but generally I believe in democracy, transparency, and liberalism.
I've taken to calling the digital artists I work with "the old masters" in light of the flood of inexpert, low-effort AI-generated content. And they do use generative AI, pretty liberally, for concept work and reference, but they know what they're doing and can turn it into great things.

I thought we lost a lot in the transition from analog to digital media, but that doesn't mean there's not a peak to any modern craft, just that there hasn't been a unified or named movement highlighting the best and worst, outside of social media algorithms.

> Humans are not built for this power to be in the hands of everyone with low friction

Now we have this with social media, everyone is their own little FSB propaganda machine…yay

  • JPKab · 2 weeks ago
[flagged]
The Ancient Egyptian god Thoth invented writing and presented it to the King:

"This invention, O king," said Thoth, "will make the Egyptians wiser and will improve their memories; for it is an elixir of memory and wisdom that I have discovered."

The king replied:

"This invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them.

"You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."

Not available in France yet. I'd be interested to know whether it's a matter of progressive rollout, or some form of legislation (EU or otherwise?) making OpenAI cautious? Something like the EU AI Act [1]?

In a sane world, any video produced by Sora would be required to have a form of watermarking that's on par with what intellectual property owners require.

We've put people in jail for sharing copyrighted movies, and I don't see why we would refrain from mandating that AI-generated videos carry some caption that says, I don't know, "This video was generated with AI"?

People would not respect the mandate, and we would consider that illegal, and use the monopoly on force to take money out of their bank account.

I know, it sounds mad and soooo 20th century - maybe that's why OpenAI overlords are not deeming peasants in France worthy of "a cat in a suit drinking coffee in an office" and "you'll never believe what the other candidate is doing to your kids".

[1] https://www.imatag.com/blog/ai-act-legal-requirement-to-labe...

EDIT: apparently some form of watermarking is built in (but it's not obvious in the examples, for some reason.)

> While imperfect, we’ve added safeguards like visible watermarks by default, and built an internal search tool that uses technical attributes of generations to help verify if content came from Sora.

[2] https://openai.com/index/sora-is-here/
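OpenAI doesn't document how its internal attribution search works; as a toy illustration of how an invisible mark can ride along with pixel data, here is a least-significant-bit sketch (real provenance systems, such as C2PA metadata or frequency-domain watermarks, are far more robust to re-encoding):

```python
def embed(pixels: list[int], tag: bytes) -> list[int]:
    """Hide `tag` in the low bits of 8-bit pixel values (changes each by at most 1)."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the least-significant bit
    return out

def extract(pixels: list[int], n: int) -> bytes:
    """Read back an n-byte tag from the low bits."""
    bits = [p & 1 for p in pixels[: n * 8]]
    return bytes(sum(bits[b * 8 + i] << i for i in range(8)) for b in range(n))

frame = [128] * 64         # stand-in for one row of grayscale pixels
marked = embed(frame, b"sora")
print(extract(marked, 4))  # b'sora'
```

A scheme like this survives lossless copies but not a single JPEG re-encode, which is exactly why visible watermarks plus server-side lookup are the pragmatic default.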

>any video produced by Sora would be required to have a form of watermarking that's on par with what intellectual property owners require

It's a completely different thing. IP owners want watermarks on their IP so they can prosecute people who use their IP without giving credit, nobody's forcing them to watermark it.

I agree that's why they do it.

I happen to think that some states will want to prosecute people who publish realistic-looking AI generated images without making it explicit that they're generated. I'm wondering if watermarking could be an effective tool for that.

(If I was in a bad mood, I would say that we should also make it explicit when images are too heavily photoshopped; but that's another debate, because tools like Sora make manufacturing lies several orders of magnitude cheaper.)

> People would not respect the mandate, and we would consider that illegal, and use the monopoly on force to take money out of their bank account.

Imagine a culture that would harness their frustration at being left out in the direction of innovating on their own.

Defining the status quo on things like watermarks by leading the field and then demonstrating how to act from the front.

Seems like they'd be more effective than one that settles for derision and calling for taxes and rules from the back of the pack, so they can presumably profit off the terrible evil things being built.

That's going to sound luddite and backwards, but to be completely honest, I'm not 100% "frustrated" about being "left out" from "far west"-style AI image generation.

At this point, really, I can think of exactly two use cases:

* cheaply producing ads

* cheaply producing fake news

And it's terrifying, and the people jumping in the bandwagon are scaring me.

There is this quote in "13 days" [1] where people are discussing the Cuban missile crises, and, while everyone is gladly / obliviously preparing for the upcoming nuclear holocaust, one gray-haired diplomat raises his hand and says "One of us in the room should be a coward" before asking for a more prudent option.

Maybe it's the age old tension between the "new world" racing forward and the "old world" hitting the brakes. Not necessarily a bad dynamic in the long run. [2]

Feel free to call me, and the whole block I live in, "coward" on this front.

[1] https://en.wikipedia.org/wiki/Thirteen_Days_(film)

[2] https://en.wikisource.org/wiki/French_address_on_Iraq_at_the...

I’ve used AI art generation to make birthday cards for all of my nieces and nephews, to entertain my friends (making them into crappy superheroes, anime girls, etc.), to quickly “brainstorm” logos, make assets for an app I’m making…

You know, stuff you’d use any images for.

You clearly are very frustrated that you're being left out of their AI, to the point you're wishing you could use violence to take their money over it.

The problem is you seem to think your involvement in the advancements should be orthogonal to your involvement in regulation.

That doesn't work in a world with sovereign nations: as cartoonish as comparing this to nuclear holocaust is, who do you think had more of a role in disarmament, the nuclear-weapon states, or the non-nuclear powers signing treaties with other non-nuclear powers?

If France had their own OpenAI releasing their own Sora with all the regulations you can dream of there'd be more of a discussion to be had over how a SOTA model should be rolled out, with actual counterfactuals to the approach the US and China have taken.

(Of course, Mistral is mostly American money... so I wouldn't bank on them taking a different road.)

Not gonna happen in the EU anytime soon, imo. It still works with a VPN, so the fence is not that well guarded.
  • andai · 2 weeks ago
Each of Google's AI things were delayed by months, I assumed due to GDPR, but I haven't seen such delays from OpenAI yet.
MKBHD's review of the new Sora release:

https://www.youtube.com/watch?v=OY2x0TyKzIQ

Love the callout of them definitely training on his own videos
...which they shouldn't have been able to get? I had thought that it was against the YouTube ToS? (my personal understanding, unrelated to my employer)
  • Havoc · 2 weeks ago
AI companies don’t give a shit about ToS. Hell, most of the big players actively ignored copyright entirely, in bulk. See the thousands upon thousands of pirated books in the Pile dataset.

And right after that news broke, they “fixed” the problem by no longer disclosing training data sources. That’s why early model papers (e.g. Llama 1’s) listed this and nobody does now. It’s just an unspoken yet open secret.

How did they get access to pirated books?
  • leobg · 1 week ago
Anna’s Archive has files prepared specifically for training LLMs. But I’d guess the big players secured their share beforehand, by scraping those sites. I have zero proof; it’s just a guess.
The companies are fairly brazen, at least internally, about just scraping whatever, wherever and not caring about ToS of any website. All they really care about is blocking "bad" data that might make the models racist or sexual, etc.
  • paxys · 2 weeks ago
If AI companies respected ToS there would be no AI
Interesting to see how bad the physics/object permanence is. I wonder if combining this with a Genie 2-type model (Google's new "world model") would be the next step in refining its capabilities.
This feels like computer graphics and the 'screen space' techniques that got introduced in the Xbox 360 generation - reflection, shadows etc. all suffered from the inability to work with off screen information and gave wildly bad answers once off screen info was required.

The solution was simple: just maintain the information in world space, and sample from that. But simple does not mean cheap, and it led to a ton of redundant (as in invisible in the final image) information having to be kept track of.

Until these models can figure out physics, it seems to me they will be an interesting toy
They can figure out a fair bit of physics. It's not a "no physics" vs "physics" thing. Rather it's a "flawed and unreliable physics" thing.

It's similar to the LLM hallucination problem. LLMs produce nonsense and untruths - but they are still useful in many domains.

It's a pretty binary thing in the sense that "bad physics" pretty quickly decoheres into no physics.

I saw one of these models doing a Minecraft-like simulation and it looked sort of okay, but then water started to end up in impossible places, and once it was there it kept spreading until you ended up in some Lovecraftian horror dimension. Any useful physics simulation at least needs boundary conditions to hold, and these models have no boundary conditions because they have no clear categories of anything.

But they don't, they just understand pixel relationships (right?)
You can model a lot of basic physics through observing 1,000,000 videos
Here’s an idea - what if the fact that we have a body that has weight and consequence helps us understand physics? What if just visual data won’t get there because visual data lacks the sense of self? Could be interesting
Not consistently though. I think some model of understanding of physics is emergent but it doesn’t seem emergent enough. The model doesn’t understand object permanence either.
I quit watching this guy after he filmed himself speeding a 100 mph through residential. Just another privileged YouTuber.
Link should be annoucement post: https://openai.com/index/sora-is-here/
  • dang · 2 weeks ago
Ok, we changed the URL to that from https://sora.com/ above.
  • bnrdr · 2 weeks ago
The bear video is quite funny - two bear cubs merge into one and another cub appears out of the adult bear’s leg.
Yeah, I had to stare at that one for a while. I thought there were four bears (I guess there technically were).
> We’re introducing our video generation technology now to give society time to explore its possibilities and co-develop norms and safeguards that ensure it’s used responsibly as the field advances.

That's an interesting way of saying "we're probably gonna miss some stuff in our safety tools, so hopefully society picks up the slack for us". :)

Flashbacks to when they were cagey about releasing the GPT models because they could so easily be used for spam, and then just pretended not to see all the spam their model was making when they did release it.

If you happen to notice a Twitter spam bot claiming to be "an AI language model created by OpenAI", know that we have conducted an investigation and concluded that no you didn't. Mission accomplished!

> not to see all the spam their model was making when they did release it.

All replaced by open source LLMs at this point.

Most AI video will be produced by Hunyuan [1], LTX [2], and Mochi [3] in short order. These are the Flux / Stable Diffusion models for generative video. These can all be fine tuned to produce incredible results, and work with the Comfy ecosystem for wildly creative and controllable workflows.

I don't think it'll be possible for a closed source tool to compete with the open image/video ecosystem. Dall-E certainly didn't stay competitive for long. It's a totally different game.

[1] https://github.com/Tencent/HunyuanVideo

[2] https://huggingface.co/Lightricks/LTX-Video

[3] https://github.com/genmoai/mochi

> I don't think it'll be possible for a closed source tool to compete with the open image/video ecosystem.

And I don't think the current status quo of open source models being entirely subsidised by startups and corporations is sustainable, they're all hemorrhaging money and their investors will only have so much patience before they expect returns. Enjoy it while it lasts.

It's game theory. If you don't have market share for your closed model, you release it as open source and let a community build upon it.

Mochi is better positioned to build tools on top of their community model. They're already thinking about control.

Weights are a commodity. Products have value.

You said yourself that you don't think proprietary tools can compete with the open source stack, so which is it? If Comfy is as good as or better than any paid frontend that Mochi themselves can come up with then there's absolutely no reason for anyone to give Mochi any money under their current license model.

Stability was supposed to be doing a similar "give away the models but sell products built on them" strategy and it doesn't seem to be working for them, by all accounts they're barely able to keep the lights on.

Users, not tools, should be judged.

It is unlikely anyone is going to perform an act of terrorism with this, or any kind of deep fakes that buy Eastern European elections. The worst outcome is likely teens having a laugh.

Funny how all the negative uses to which something like this might be put are regulated or criminalized already - if you try to scam someone, commit libel or defamation, attempt widespread fraud, or any of a million nefarious uses, you'll get fined, sued, or go to jail.

Would you want Microsoft to claim they're responsible for the "safety" of what you write with Word? For the legality of the numbers you're punching into an Excel spreadsheet? Would you want Verizon keeping tabs on every word you say, to make sure it's in line with their corporate ethos?

This idea that AI is somehow special, that they absolutely must monitor and censor and curtail usage, that they claim total responsibility for the behavior of their users - Anthropic and OpenAI don't seem to realize that they're the bad guys.

If you build tools of totalitarian dystopian tyranny, dystopian tyrants will take those tools from you and use them. Or worse yet, force your compliance and you'll become nothing more than the big stick used to keep people cowed.

We have laws and norms and culture about what's ok and what's not ok to write, produce, and publish. We don't need corporate morality police, thanks.

Censorship of tools is ethically wrong. If someone wants to publish things that are horrific or illegal, let that person be responsible for their own actions. There is absolutely no reason for AI companies to be involved.

> Would you want Microsoft to claim they're responsible for the "safety" of what you write with Word? For the legality of the numbers you're punching into an Excel spreadsheet? Would you want Verizon keeping tabs on every word you say, to make sure it's in line with their corporate ethos?

Would you want DuPont to check the toxicity of Teflon effluents they're releasing in your neighbourhood? That's insane. It's people's responsibility to make sure that they drink harmless water. New tech is always amazing.

Yes, because we know a.) that the toxicity exists and b.) how to test for it.

There is no definition of a "safe" model without significant controversy nor is there any standardized test for it. There are other reasons why that is a terrible analogy, but this is probably the most important.

What's politically acceptable is called the Overton window. Unlike toxicity, it is fully subjective.

https://en.m.wikipedia.org/wiki/Overton_window

  • ·
  • 2 weeks ago
  • ·
  • [ - ]
I don't see how that analogy works, especially so as in your attempt to make a point you have DuPont as the explicit actor in the direct harm, and the people drinking the water aren't even involved... like, I do not think anyone disagrees that DuPont is responsible in that one.

I also, to draw a loose parallel, think that Microsoft should be responsible for the security and correctness of their products, with potentially even criminal liability for egregiously negligent bugs that lead to harm for their users: it isn't ever OK to "move fast and break things" with my personal data or bank account. But like, that isn't what we are talking about constantly with limiting the use cases of these AI products.

I mean, do I think OpenAI should be responsible if their AI causes me to poison myself by confidently giving me bad cooking instructions? Yes. Do I think OpenAI should be responsible if their website leaks my information to third parties? Of course. Depending on the magnitude of the issue, I could even see these as criminal offenses for not only the officers of the company but also the engineers who built it.

But, I do not at all believe that, if DuPont sells me something known to be toxic, that it is DuPont's responsibility to go out of their way to technologically prevent me from using it in a way which harms other people: down that road lies dystopian madness. If I buy a baseball bat and choose to go out clubbing for the night, that one's on me. And like, if I become DuPont and make a factory to produce Teflon, and poison the local water with the effluent, the responsibility is with me, not the people who sold me the equipment or the raw materials.

And, likewise, if OpenAI builds an AI which empowers me to knowingly choose to do something bad for the world, that is not their problem: that's mine. They have no responsibility to somehow prevent me from egregiously misusing their product in such a way; and, in fact, I will claim it would be immoral of them to try to do so, as the result requires (conveniently for their bottom line) a centralized dystopian surveillance state.

  • r00f
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Well, humans do understand such a thing as scale.

C4 and a nuke are both just explosives, and there are laws in place that prohibit exploding them in the middle of the city. But the laws that regulate storage and access to nukes and to C4 are different, and there is a very strong reason for that.

Censorship is bad, everyone agrees on that. But regulating access to technology that has already proven that it can trick people into sending millions to fraudsters is a must, IMO. And it'd better be regulated before it overthrows some governments, not after.

  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Is this a "guns don't kill" argument ?

Microsoft Word and Excel aren't generative tools. If Excel added a new headline feature to scan your financial sheets and auto-adjust the numbers to match what's expected when audited, you bet there would be backlash.

And regarding scrutiny, morphine is an immensely useful tool and its use is surely extremely monitored.

On the general point, our society values intent. Tools can just be tools when their primary purpose is in line with our values and they only behave according to the user's intent. AI will have to prove a lot to match both criteria.

> And regarding scrutiny, morphine is an immensely useful tool and its use is surely extremely monitored.

I went to high school in a fairly affluent area and I promise you this is not true. If you have money and know how to talk to your doctor, you can get whatever you want. No questions asked.

You can even get prescription methamphetamine - and Walgreens will stock generic for it!

Definitely not if you're a white male under 60 years old. They won't even give you opioids after surgery now because you are "high risk".

If you're really rich it may be a different story, but any of the "middle class" good luck. And if you do find a doctor with some compassion, they are probably about to retire.

All I can say is that I am speaking from life experience. It sounds like our experiences have been different.

> If you have money and know how to talk to your doctor

That's a decently high bar, I think?

Imagine what you can do if you have money and know how to talk to your local police...

  • boznz
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
> If Excel added a new headline feature to scan your financial sheets and auto-adjust the numbers to match what's expected when audited

- Sounds like what my accountant already does.

  • lmm
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Right, but accountants have qualifications and, more importantly, have to sign their name and accept liability for the accounts they're submitting. That's the part that's missing when "computer says ok".

Your accountant's cooking of the books is handmade and a work of art, passed down through generations of accountants before them, and they'll proudly stand in front of any auditor to claim their prowess at their craft.

I disagree. An analogous situation would be how we have very limited regulations on guns, but you can’t just have a tank, fighter jet, or ICBM.

Some tools are a lot more powerful than others and we have to take special care with them.

  • oblio
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
> Analogous would be how we have very limited regulations on guns

This is strictly limited to the US. In most advanced democracies you need a stack of papers to get even a small handgun.

Right but a gun can be had and presumably a nuclear warhead can’t, so even in countries who call the wrong sport “football” the law takes into account that some tools need to be regulated more than others.

There are private citizens that own and operate all of those things.

Pray, what private citizens operate ICBMs?

> but you can’t just have a tank, fighter jet, or ICBM.

?? You 100% can in the USA it just costs a lot of money.

  • oblio
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Operational military ones? Tanks that are basically very dense semis don't really count for this point.

No he’s just doing an aaaaaactually comment. Wouldn’t be HN if someone didn’t.

You cannot own tanks or jets capable of using military ordnance in the US (and I’d wager nearly any country that has anything resembling rule of law). You can own decommissioned ones that are rendered militarily useless.

I can write erotic fiction about your husband or wife or son or daughter in Microsoft Word, but it's a little different if I scrape their profiles and turn it into hardcore porn and distribute it to their classmates and coworkers, isn't it?

  • lmm
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
But you can do that without using AI, and we have laws (harassment etc.) that apply. So where does AI come into the equation?

You are posting this under a pseudonym. If you did publish something horrific or illegal, it would be the responsibility of this web site to either censor your content or identify you when asked by authorities. Which do you prefer?

> when asked by authorities

Key point right here.

You let people post what they will, and if the authorities get involved, cooperate with them. HN should not be preemptively monitoring all comments and making corporate moralistic judgments on what you wrote and censoring people who mention Mickey Mouse or post song lyrics or talk about hotwiring a car.

Why shouldn't OpenAI do the same?

  • 9rx
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
It seems reasonable to work with law enforcement if information provides details about a crime that took place in the real world. I am not sure what purpose censoring as a responsibility would serve? Who cares if someone writes a fictional horrific story? A site like this may choose to remove noise to keep the quality of the signal high, but preference and responsibility are not the same.

This website is not a tool - not really.

Your keyboard is.

Censoring AI generation itself is very much like censoring your keyboard or text editor or IDE.

Edit: Of course, "literally everything is a tool", yada yada. You get what I mean. There is a meaningful difference between tools that translate our thoughts to a digital medium (keyboards) and tools that share those thoughts with others.

A website is almost certainly a tool. It has servers and distributes information typed on thousands of keyboards to millions of screens.

HN is the one doing the distribution, not the user. The latter is free to type whatever they want, but they are not entitled to have HN distribute their words. Just like a publisher does not have to publish a book they don't want to.

When someone posts on FB, they don't consider that FB is publishing their content for them.

Maybe you should talk with image editor developers, copier/scanner manufacturers and governments about the safeguards they shall implement to prevent counterfeiting money.

Because, at the end of the day, counterfeiting money is already illegal.

...and we should not censor tools, and judge people, not the tools they use.

  • rixed
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Interestingly, you must know that any printing equipment that is good enough to output realistic banknotes is regulated to embed a protection preventing this use case.

Even more interestingly, and maybe this could help one understand that even for the most principled argument there should be a limit: molecular 3D printers able to reproduce proteins (yes, this is a thing) are regulated to recognise designs from a database of dangerous pathogens and refuse to print them.

Gimp doesn't have the secret binary blob to "prevent counterfeiting", and there is no flood of forged money.

https://www.reddit.com/r/GIMP/comments/3c7i55/does_gimp_have...

  • jpc0
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Gimp makes printers now?

So guns are ok? How about bombs?

  • 8note
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
That works for locally hosted models, but if it's offered as a service, OpenAI is publishing those verboten works to you, the person who requested them.

Even if it is a local model, if you trained a model to spew Nazi propaganda, you're still publishing Nazi propaganda to the people who then go use it to make propaganda. It's just very summarized propaganda.

Does this apply to the spell checker in Office 365 or Google Docs?

Are hunting knives regulated the same way as rocket launchers? Both can be used to kill but at much different intensity levels.

Censorship of tools...

Then let parents choose when teenagers can start driving.

Also let's legalize ALL drugs.

Weapons should all be available to public.

Etc. Etc.

----

It's very naive to think that we shouldn't regulate "tools"; or that we shouldn't regulate software.

I do agree that on many cases the bad actors who misuse tools should be the ones punished, but we should always check the risk of putting something out there that can be used for evil.

"Teens having a laugh" can escalate quickly to, "... at someone else's expense," and this distinction is EXACTLY the sort of subtlety an algorithm can't filter.

This does not need to become a thread about bullying and self harm, but it should be recognized that this example is not benign or victimless.

This genie is out of the bottle, let us hope that laws about users are enough when the tools evolve faster than legislative response.

[edit:spelling]

> It is unlikely anyone is going to perform an act of terrorism with this, or any kind of deep fakes that buy Eastern European elections. The worst outcome is likely teens having a laugh.

And the teens are having a laugh by... creating deepfake nudes of their classmates? The tools are bad, and the toolmakers should feel deep guilt and shame for what they released on the world. Do you not know the story of Nobel and dynamite? Technology must be paired with morality.

I am sure a school has a way to deal with pupils sharing such images, as the recent cases have proven. Deep fakes or real pictures. It is a social problem with an existing framework of decades of proven history, and it should be dealt with as such.

I can assure you that right now teens are sharing real nudes of their classmates. Do you want to restrict cameras and high speed internet too?

Technology is paired with morality. It’s just not the one you want.

Is it? It seems to me to be paired with shareholders' interests, and nothing more.

  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Exactly. You can make anything you want in Photoshop, Word, Excel, Blender, etc. The company isn't held accountable for what the User makes with it.

Yes and one could kill a hundred people with their fists, but we regulate super powerful weapons more than fists.

I think the degree of power matters.

> Users, not tools, should be judged.

You can argue that that’s how it should be, but that isn’t how it is. And we don’t know what a world that adhered to that principle would look like, it’s possible it would be a disaster. There are a lot of bad things people can do where it’s difficult to catch someone after they’ve done it, and prevention at the tool level is the only way to really effectively stop people.

I’m not saying I like the idea of any of these methods when it comes to AI, but it feels naive to act like there isn’t precedent for stuff like this.

> It is unlikely anyone is going to perform an act of terrorism with this, or any kind of deep fakes that buy Eastern European elections. The worst outcome is likely teens having a laugh.

Citation needed bigtime. Sure, people doing organized disinformation campaigns won’t log into OpenAI’s website and use Sora, they’ll probably be running Hunyuan Video with an on-prem or cloud-based GPU cluster, but this feels like as good a time as any to discuss the implications of video generation tools as they stand in December 2024.

There are certain tools for which we heavily restrict which users have access to the entire supply chain. That's still about users, I suppose, but it's also about tools.
In China, the whole Internet is heavily restricted. Bad tools.

> anyone is going to perform an act of terrorism with this

Especially a certain someone that’s worth a billion dollars, is 100 years old, and whose name ends with Inc.

> or any kind of deep fakes that buy Eastern European elections

Finally people do not label Slovakia as Eastern Europe...

The problem isn't whether we should regulate AI. It's whether it's even possible to regulate it without causing significant turmoil and damage to society.

It's not hyperbole. Hunyuan was released before Sora. So regulating Sora does absolutely nothing unless you can regulate Hunyuan, which is 1) open source and 2) made by a Chinese company.

How do we expect the US govt to regulate that? Threatening to sanction China unless they stop doing AI research???

  • ssl-3
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Easy-peasy. Just require all software to be cryptographically signed, with a trusted chain that leads to a government-vetted author, and make that author responsible for the wrongdoings of that software's users.

We're most of the way there with "our" locked-down, walled-garden pocket supercomputers. Just extend that breadth and bring it to the rest of computing using the force of law.
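To make concrete how heavy-handed that proposal is, here is a toy sketch of the trust chain it implies: a government-vetted root key vouches for a vendor key, which vouches for a software binary. This uses the third-party `cryptography` package; all key names and the binary are hypothetical.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Hypothetical keys: in the scheme above, the root would be government-vetted.
root_key = ed25519.Ed25519PrivateKey.generate()
vendor_key = ed25519.Ed25519PrivateKey.generate()

# Root signs the vendor's public key; the vendor signs the software binary.
vendor_pub = vendor_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
vendor_cert = root_key.sign(vendor_pub)
binary = b"\x7fELF...perfectly-innocent-software"
binary_sig = vendor_key.sign(binary)


def is_trusted(root_pub, vendor_pub, vendor_cert, binary, binary_sig):
    """Accept the binary only if every link in the chain verifies."""
    try:
        root_pub.verify(vendor_cert, vendor_pub)  # root vouches for vendor
        ed25519.Ed25519PublicKey.from_public_bytes(vendor_pub).verify(
            binary_sig, binary
        )  # vendor vouches for binary
        return True
    except InvalidSignature:
        return False
```

The crypto is the easy part; the regime it enables (nothing unsigned ever runs) is the point being satirized.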

---

Can I hear someone saying something like "That will never work!"?

Perhaps we should meditate upon that before we leap into any new age of regulation.

This is well on its way thanks to Microsoft's aggressive push to put a TPM in every Windows 11 PC.

  • ssl-3
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
That's exactly the kind of logical conclusion I had hoped for someone here to reach in this bizarre sea of emotional pleas.

After over two decades of careful preparation, we're the stroke of a legislative pen away from having all of the software on our computers regulated by our friends in the government.

It's not even a slippery slope argument. In order to be effective, "We must regulate AI!" means the same thing as "We must regulate computer software!"

The two things are so identical that they're not even so different as two sides of the same coin are.

(Be careful what you wish for; you might just get it.)

"to give society time to explore its possibilities and co-develop norms and safeguards"

Or, "this safety stuff is harder than we thought, we're just going to call 'tag you're it' on society"

Or,

-Oppenheimer : "man, this nuclear safety stuff is hard, I'm just going to put it all out there and let society explore developing norms and safeguards".

-Society : Bombs Japan

-Oppenheimer : "No, not like that, oops".

  • usrnm
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Oppenheimer was making a bomb from day 1, he knew exactly what he was doing and how it would be used. There aren't so many different use cases for a bomb, after all. It was a nice movie, but it does not absolve him.

  • Arnt
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Aren't you kind of saying that you don't have any answers so therefore OpenAI should have provided the answers?

  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Eh, society did a pretty good job overall.

The bomb was the end of conventional warfare between nuclear nations. MAD has created an era of peace unlike anything our species has ever seen before.

  • rurp
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Well it works great, until it doesn't. We're perpetually a few bad decisions by a few possibly deranged actors away from obliterating all of those gains and then some.
Right, and in the meantime nuclear-armed countries mostly get to avoid the horrible, endless churn of death and war and teenagers being sent off to the meat grinder to push some border here or some border there.

We have eliminated warfare between nuclear countries, conflicts have been reduced to nuclear/non-nuclear or proxy warfare, and that's a very solid reduction in suffering.

"Climate Change is likely to mean more fires in the future, so we've lit a small fire at everyone's house to give society time to co-develop norms and safeguards."

Especially since they were originally supposed to be a non-profit focused on AI safety, and Sam Altman single-handedly pivoted it to a for-profit after taking all the donations and partnering with probably the single most evil corporation that has ever existed, Microsoft.

Microsoft is more evil than Enron? Than the company that faked blood tests? This is some pretty extreme hyperbole. I’d pick Google over Microsoft for one.
"We're releasing this like rats on a remote island, in hopes of seeing how the ecosystem is going to respond".

The onus will be on the rest of society to defend itself from all the grift that will result from this.

  • pesus
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
If the worst we ultimately get from this kind of tech is grifting, I will consider that a very positive outcome.

text, image, video, and audio editing tools have no 'safety' and 'alignment' whatsoever, and skilled humans are far more capable of creating 'unsafe' and 'unethical' media than generative AI will ever be.

somehow, society has survived just fine.

the notion that generative AI tools should be 'safe' and 'aligned' is as absurd as the notion that tools like Notepad, Photoshop, Premiere and Audacity should exist only in the cloud, monitored by kommissars to ensure that proles aren't doing something 'unsafe' with them.

The irony is that users want more freedom and fewer safeguards.

But these companies are rightfully worried about regulators and legislatures, often led by pearl-clutching journalists, so we can't have nice things.

Recent events (many events in many places) show "users" don't think too hard before acting. And sometimes they act with inadequate or inaccurate information. If we want better outcomes, it behooves us to hire people to do the thinking that ordinary users see no point in doing for themselves. We call the people doing the hard thinking scientists, regulators, and journalists. The regulators, when empowered to do so by the government, can stop things from happening. The scientists and journalists can just issue warnings.

Giving people what they want when they want it doesn't always lead to happy outcomes. The people themselves, through their representatives, have created the institutions that sometimes put a brake on their worst impulses.

Do we not want new stuff? If the answer is "Sure, but only if whoever invents the stuff does all the work and finds all rough edges" then the answer is actually just "No, thanks".

Oh, I have no problem with them doing it this way. I just thought it was a funny way to do it.

It's a little disingenuous to jump to "we don't want new stuff" when people voice criticism of deepfake generators or AI models trained on stolen content.

'when civilization collapses because all photo, audio and video evidence is 100% suspect, i mean, how could you blame us'
A bit off-topic, but how much does a 4-letter (or less) .com go for these days? I wonder if they bought this via an intermediary so that the seller wouldn't see "OpenAI" and tack on a few zeros.

edit: previously, this thread pointed to sora.com

Pretty off-topic, but yes, domains and land are often bought via shell companies for this reason. They probably didn't settle upon the name Sora until they had already secured the .com. That's a famous piece of YC advice: if you can't get the .com, then rename. But for domains that everyone wants, like chat.com, OpenAI paid 8 figures for that one.

Huh, didn't realize chat.com redirected to ChatGPT.

His review video is so much better than the announcement video at explaining what has been released.

  • gzer0
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
For the $20/month subscription: you get 50 generations a month. So it is included in your subscription already! Nice.

For the Pro $200/month subscription: you get unlimited generations a month (on a slower queue).

I wonder what I will be doing with 20 garbage videos. And this probably includes revisions too. It takes 10 attempts to get something remotely useful as an image (and that's just for a blog post).

Who is the audience for this product? A lot of people like video because it's a way of experiencing something they currently cannot for one reason or another. People don't want to see arbitrary fake worlds or places on earth that aren't real. Unless it's a video game or something. But I see this product being used primarily to trick Facebook users.

I guess the CGI industry implications are interesting, but look at the waves behind the AI-generated man. They don't break so much as dissolve into each other. There's always a tell. These aren't GPU-generated versions of reality with thought behind the effects.

> People don't want to see arbitrary fake worlds or places on earth that aren't real.

Isn't there a multi-billion dollar industry in California somewhere that caters exactly to that demand?

> Unless it's video game or something.

The "or something" pretty much covers the gotcha you're trying to use. OP is acknowledging that fantasy media is a thing before going on to their actual point.

> Who is the audience for this product?

Infants, people just coming out of anesthesia, the concussed, the hypoxic, the mortally febrile and so on

To me this is what all AI feels like. People want "hard to make things" because they feel special and unordinary. If anybody with a prompt can do it, it ain't gonna sell

"People don't want to see arbitrary fake worlds or places on earth that aren't real."

What? This is 90% of the Instagram/TikTok experience, and has been for years. No one cares if something is real. They care how it makes them feel.

The audience for this is every "creator" or "influencer". No one cares if the content is fake. They'll sell you a vacation package to a destination that doesn't exist and people will still rate it 3/5 stars for a $15 Starbucks gift card.

I know a bunch of marketing people that have fully incorporated these tools into their workflow. So that's one group.

Also seen GenAI replace more and more stock media in many facets of business/professional services.

> primarily to trick Facebook users

You say it like that's not the majority of the web.

Anyone who wants to waste your time/attention/money(!!) for cheap. Think of all the bullshit useless jobs, aka marketers, scammers, identity thieves.

Other than that, it's also so people can spam every single website with millions of hours of AI generated spam and earn 7 cents off of the 5000 people the algorithm randomly decides to show it to.

Legitimate uses outside of that kinda shit? I fail to see one.

Raises billions of dollars, makes claims of AGI by 2025, cannot handle new user sign-up traffic.

I don't even get why I have to "sign up." I'm already a paying customer with an existing account.

A billion is table stakes; OpenAI has raised over 6 billion dollars this year alone.

Why has “table stakes” become such a popular phrase in the last 12-24 months?

Has it?

It is a gambling term; most VC-funded startups are gambles, AI ones particularly so, so it felt apt.

Perhaps it correlates with a rise in investing that is no longer based on sound fundamentals, in both traditional and new-age assets (like crypto), which makes people identify more with gambling.

I was neither criticizing your comment nor choice of words - I actually agree with both. Was just curious about the phrase. Your guess seems like a good one!

This is by design, they want the news articles saying "this is so popular it crashed their website!"

Strange way to advertise, no?

That’s because in this case scaling to big traffic needs more hardware, which is very expensive, and even if you have money the manufacturers may not have the capacity you need.

They can always queue the traffic for actual video creation, but anyone can manage a simple traffic overload to a website. They are not even launching globally.

  • ·
  • 1 week ago
  • ·
  • [ - ]
“Sora is here”

No it’s not. I’ve been trying to access all day: “Sora account creation is temporarily unavailable We're currently experiencing heavy traffic and have temporarily disabled Sora account creation. If you've never logged into Sora before, please check back again soon.”

I finally got access. Generated roughly 15 videos. Quality ranged from okay to very very bad.

I wouldn't get your hopes up - it's not at all as good as they've hyped it.

A little worried about how young children watching these videos may develop inaccurate impressions of physics in nature.

For instance, that ladybug looks pretty natural, but there's a little glitch in there that an unwitting observer, who's never seen a ladybug move before, may mistake as being normal. And maybe it is! And maybe it isn't?

The sailing ship - are those water movements correct?

The sinking of the elephant into snow - how deep is too deep? Should there be snow on the elephant or would it have melted from body heat? Should some of the snow fall off during movement or is it maybe packed down too tightly already?

There's no way to know because they aren't actual recordings, and if you don't know that, and this tech improves leaps and bounds (as we know it will), it will eventually become published and will be taken at face value by many.

Hopefully I'm just overthinking it.

> For instance, that ladybug looks pretty natural, but there's a little glitch in there that an unwitting observer, who's never seen a ladybug move before, may mistake as being normal. And maybe it is! And maybe it isn't?

Well, none of the existing animation movies follow exact laws of physics.

Animation doesn't follow exact laws of physics, but the specific ways they don't follow physics have very deliberate intent behind them. There's a pretty clear difference between the coyote running off a cliff and taking 2 seconds to start falling, and a character awkwardly floating over the ground because an AI model got confused.

> but the specific ways they don't follow physics have very deliberate intent behind them.

That is only true for well-crafted things. There's plenty of stuff that's just wrong for no reason beyond ease of creation or lack of care about the output.

It is a good point…

Although, plenty of kids have tied a blanket around their necks and jumped off some furniture or a low roof, right? Breaking a leg or twisting an ankle in their attempt to imitate their favorite animated superhero.

oh yes, Suipercideman

Clearly you haven't seen any Bollywood movies: https://youtu.be/PdvRwe39NCs

  • cj
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Take the example to the extreme: In 10 years, I prompt my photo album app "Generate photorealistic video of my mother playing with a ladybug".

The juxtaposition of something that looks extremely real (your mother) and something that never happened (ladybug) is something that's hard for the mind to reconcile.

The presence of a real thing inadvertently and subconsciously gives confidence to the fake thing also being real.

I think this hooks in quite well to the existing dialogue about movies in particular. Take an action movie. It looks real but is entirely fabricated.

It is indeed something that society has to shift to deal with.

Personally, I'm not sure that it's the photoreal aspect that poses the biggest challenge. I think that we are mentally prepared to handle that as long as it's not out of control (malicious deep-fakes used to personally target and harass people, etc.) I think the biggest challenge has already been identified, namely, passing off fake media as being real. If we know something is fake, we can put a mental filter in place, like a movie. If there is no way to know what is real and what is fake, then our perception of reality itself starts to break down. That would be a major new shift, and certainly not one that I think would be positive.

I'm still waiting on the future waves of PTSD from hyper-realistic horror games. I can't think of a worse thing to do than hand a kid a VR headset (or game system) and have them play a game that is designed to activate every single fight-or-flight nerve in the body on a level that is almost indistinguishable from reality. 20 years ago that would have been the plot of a torture porn flick.

Even worse than that is when people get USED to it and no longer have a natural aversion to horrific scenes taking place in the real world.

This AI stuff accelerates that process of illusion but in every possible direction at once.

As much as people don't want to believe it, by beholding we are indeed changed.

That argument could be, and probably was, pointed at movies with color, movies with audio before that, comics, movies without audio, books, etc.

I don’t think that slippery slope holds up.

IIRC there’s pretty solid research showing that even children beyond the age of 8 can tell the difference between fiction and reality.

Distinguishing reality from fiction is useful, but it doesn’t shape our desires or define our values. As a culture, we’ve grown colder and more detached. Think of the first Dracula film—audiences were so shaken by a simple eerie face that some reportedly lost control in the theater. Compare that visceral reaction to the apathy we feel toward far more shocking imagery today.

If media didn’t profoundly affect us, how could exposure therapy rewire fears? Why would billions be spent on advertising if it didn’t work? Why would propaganda or education exist if ideas couldn’t be planted and nurtured through storytelling?

Is there any meaningful difference between a sermon from the pulpit and a feature film in the theater? Both are designed to influence, persuade, and reshape our worldview.

As Alan Moore aptly put it: "Art is, like magic, the science of manipulating symbols, words, or images to achieve changes in consciousness."

In my opinion the old adage holds true, you are what you eat. And we will soon be eating unimaginable mountains of artificial content cooked up by dream engines tuned to our every desire and whim.

  • lmm · 2 weeks ago
> Distinguishing reality from fiction is useful, but it doesn’t shape our desires or define our values. As a culture, we’ve grown colder and more detached. Think of the first Dracula film—audiences were so shaken by a simple eerie face that some reportedly lost control in the theater. Compare that visceral reaction to the apathy we feel toward far more shocking imagery today.

Huh? The first half of this contradicts the second. We haven't "grown colder and more detached", we've adapted to the fact that images are no longer reliable indicators of reality. What we do and don't value in the real world hasn't changed.

> And we will soon be eating unimaginable mountains of artificial content cooked up by dream engines tuned to our every desire and whim.

Always has been. Multi-channel TV was already that, and attracted the same kind of doomerism.

I looked at the Sora videos, and the subjects' "weight" and "heft" are all off, in much the same way that Anya Taylor-Joy's jump at the end of the trailer for the new movie The Gorge looked not much better than Spider-Man swinging on a rope years ago.
Wouldn’t this same concern apply to historical fiction in general?
Feels like you're looking for a strawman argument, and may have found one.

I would retort that animation and real-life-looking video do different things to our psyche. As an uneducated wanna-be intellectual, I would lean toward thinking real-looking objects more directly influence our perception of life than animations.

Animation can look real though, e.g. sci-fi VFX. But maybe you're concerned about how prolific it may be? I could see that. Personally I think it'll be fine. It's just that disruptive tools create uncertainty. Or maybe I'm overcompensating to avoid being the "old man yelling at cloud" dude.
Now you're intentionally mixing VFX and animation. Animation, at least as I meant it, refers more to cartoons.
Well, do any of the existing animation movies claim to be anything other than animation?

You just know there’ll be people making content within the week for social media that will be trying to pass itself off as real imagery.

Gravity acts immediately; you don't hover in the air for a few seconds before falling.
then how will I have time to flash my sign to the audience that says "uh-oh"?
  • sdf4j · 2 weeks ago
I grew up watching Looney Tunes' interpretation of physics and turned out just fine.
There's a big difference between cartoonishly incorrect and uncanny-valley plausibly correct.
There's a huge amount of such stuff in movies.

Special effects, weapons physics, unrealistic vehicles and planes, or the classic 'hacking'.

  • ics · 2 weeks ago
There's also a huge difference in what people, even children, expect when sitting down to watch a movie versus seeing a clip of some funny cat/seal hybrid playing football while I'm looking for the Bluey episode we left off on. My daughter is almost five and cautiously asks "is that real?" about a lot of things now. It definitely makes me work harder when trying to explain the things that don't look real but actually are; one could definitely feel like it takes some of the magic away from those moments. I feel alright about my ability to handle it (it's my responsibility to try), but it isn't as simple as the Looney Tunes argument or, I believe, dramatic effects in movies and TV.
Yet, in a movie setting it's clear that something is a special effect or the like, which is not the case for GenAI. Massive underestimation of the potential impact in this thread; scary.
Maybe. Or maybe some people massively underestimate our ability to cope with fiction and new media types.

I am sure that there were people decrying radio for all these same reasons (“how will the children know that the voices aren’t people in the same room?”)

Not a bad point; those representations have, in some cases, caused widespread misunderstandings among people who learn about those concepts from movies... and this is all while simultaneously knowing "it's just a movie".
Yes, but a movie is a movie, whereas these AI-generated videos will likely be used to replace stock footage in other (documentary, promotional, etc.) contexts.
  • ssl-3 · 2 weeks ago
If the producer wants to publish bad physics, they get bad physics.

If the producer wants to publish good physics, they get good physics.

It doesn't matter if it is AI, CGI, live action, stop motion, pen-and-ink animation, or anything else.

The output is whatever the production team wants it to be, just as has been the case for as long as we've had cinema (or advertising or documentaries or TikToks or whatevers).

Nothing has changed.

You don't have full control over AI-generated images though, or not to the same extent producers have with CGI.

There's a video on sora.com at the very bottom, with tennis players on the roof, notice how one player just walks "through" the net. I don't think you can fix this other than by just cutting the video earlier.

There are already techniques for controlling AI-generated images: ControlNet for Stable Diffusion, for example, and techniques for taking existing footage and style-morphing it with AI. For larger-budget productions I would anticipate video production tooling to arise where directors and animators have fine-grained influence and control over the wireframes within a 3D scene, to directly prevent or fix issues like clipping, volumetric changes, visual consistency, text generation, gravity, etc. Or even recording and producing their video in a lower-budget format and then having it re-rendered with AI to set the style or mood while adhering to scene layout, perspective, timing, cuts, etc. Not just for mitigating AI errors but also for controlling their vision of the final product.

Or they could simply brute-force it: clip the scene at the problem point and re-render from there, iterating until the result is no longer problematic. Or do the bulk of the work with AI and use video inpainting to fix small areas, reserving human CGI artists for whatever problems can be fixed without a full re-render (whichever ends up less expensive).
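The brute-force loop described above can be sketched roughly like this; `render_clip` and its artifact flag are hypothetical stand-ins for a real video-generation API and quality check, not anything Sora actually exposes:

```python
import random

def render_clip(prompt: str, start_frame: int, seed: int) -> dict:
    """Hypothetical stand-in for a video-model call. Here we merely
    simulate that some renders come back with artifacts (clipping,
    floating objects, etc.), depending on the seed."""
    rng = random.Random(seed)
    return {"start": start_frame, "has_artifact": rng.random() < 0.7}

def render_until_clean(prompt: str, start_frame: int, max_tries: int = 10):
    """Re-render the problem segment with a fresh seed each time,
    stopping as soon as no artifact is detected."""
    for seed in range(max_tries):
        clip = render_clip(prompt, start_frame, seed)
        if not clip["has_artifact"]:
            return clip
    return None  # give up and hand the segment to a human CGI artist
```

In practice the artifact check would be a human reviewer or a learned quality classifier, and the expected cost of the loop grows with how often the model fails, which is why inpainting small regions can end up cheaper than full re-renders.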

Plus with what we've recently seen with world models that have been released in the last week or so, AI will soon get better at having a full and accurate representation of the world it creates and future generations of this technology beyond what Sora is doing simply won't make these mistakes.

  • ssl-3 · 2 weeks ago
> You don't have full control over AI-generated images though,

So the AI just publishes stuff on my behalf now?

No, comrade.

People don't watch The Matrix expecting a documentary on how we all got plugged in. If someone generated the referenced ladybug movie for use in a science classroom, that's a problem.
I agree. The issue is in using it for teaching science though, not in generating it.

Similar to how it's fine to create fiction, but not to claim it to be true.

  • lmm · 2 weeks ago
And it's already harmful in some cases. E.g. people drag people out of a crashed car because they think it's going to explode, sometimes seriously injuring them.
Did you see the movie Battleship? Or a good percentage of recent and not-so-recent action movies? At least with The Matrix it could be argued that it was about a virtual reality.
  • ma2t · 2 weeks ago
"A body at rest remains at rest until it looks down and realizes it has stepped off of a cliff."
these will be a lot less violent too ;-) for a little while at least.
Between omnipresent cgi in movies and tv, animation, and video game physics (all of which are human-coded approximations of real physics, often intentionally distorted for various reasons), that ship has long since sailed.
No one is shooting blockbuster-grade CGI for stock footage, though; the casualness of this is what will be most impactful.
People use CGI to generate the background in romcoms because it's cheaper than getting permits for location shooting.

The ship is across the ocean...

> The sinking of the elephant into snow - how deep is too deep? Should there be snow on the elephant or would it have melted from body heat? Should some of the snow fall off during movement or is it maybe packed down too tightly already?

Should there be an elephant in the snow? The layers of possible confusion, and subtle incorrect understandings go much deeper.

Yes, they were used to traverse mountain paths.
With the same reasoning, do reindeer actually fly and pull a sleigh carrying a 200-pound man along with tons of gifts? I believe you're underestimating human intelligence and our ability to apply logic and reasoning.
  • anonu · 2 weeks ago
> inaccurate impressions of physics

Or just inaccurate impressions of the physical world.

My young kids and I happened to see a video of some very cute baby seals jumping onto a boat. It was not immediately clear it was AI-generated, but after a few runs I noticed it was a bit too good to be true. The kids would never have known otherwise.

YouTube Shorts are full of AI animal videos with distorted proportions, animals living in the wrong habitat, and so on. They popped up on my son's account and I hate them for the reasons you outline. They aren't cartoonish enough to explain away, nor realistic enough to be educational.
  • jonpo · 2 weeks ago
And have you watched the brain rot that is TikTok?
I’d be more worried about the inevitable “we’re under nuclear attack, head for shelter” CNN deepfakes.
  • evan_ · 2 weeks ago
Or “go kill every member of $marginalized_group you can find”
I don't think you are overthinking it.

Facebook seems full of older people interacting with AI generated visual content who don't seem to understand that it is fake.

Our society already had a problem with people (not) participating in consensus reality. This is going to pour gasoline on the fire.

  • Terr_ · 2 weeks ago
> A little worried how young children watching these videos may develop inaccurate impressions of physics in nature.

I'm less concerned with physics for children--assuming they get enough time outdoors--and more about adulthood biases and media-literacy.

In particular, a turbocharged version of a problem we already have: People grow up watching movies and become subconsciously taught that flaws of the creation pipeline (e.g. lens flare, depth of field) are signs of "realism" in a general sense.

That manifests in things such as video-games where your human character somehow sees the world with crappy video-cameras for eyes. (Excepting a cyberpunk context, where that would actually make sense.)

Fair! I watched a lot of Superman as a kid and I killed myself jumping off a building
Don't be an asshole. When learning to fly, learn by starting on the ground first, not from a tall building. --Bill Hicks
Yes, entertainment spreads lots of myths. But bad physics from AI movies is only a tiny part of the problem. This is similar to worries about the misconceptions people might get from playing too many video games, reading too many novels, watching too much TV, or participating too much in social media.

It helps somewhat that people are fairly aware that entertainment is fake and usually don’t take it too seriously.

> A little worried how young children watching these videos may develop inaccurate impressions of physics in nature.

And why don't we worry this about CGI?

CGI is not always made with a full physical simulation, and is not always intended to accurately represent real-world physics.

Me too. While I'm generally optimistic about generative art, at this point the models still have this dreamlike quality; things look OK at first glance, but you often get the feeling something is off. Because it is. Texture, geometry, lights, shadows, effects of gravity, etc. are more or less inconsistent.

I do worry that, as we get exposed more and more to such art, we'll become less sensitive to this feeling, which effectively means we'll become less calibrated to actual reality. I worry this will screw with people's "system 1" intuitions long-term (but then I can't say exactly how; I guess we'll find out soon enough).

Here's the obligatory AI enthusiast answer:

What is physics besides next token/frame prediction? I'm not sure these videos deserve the label "inaccurate", as who's to judge which way of generating the next tokens/frames is better? Even if you judge the "physical" world to be "better", I think it's much more harmful to teach young children to be skeptical of AI, as their futures will depend on integrating it into their lives. Also, with enough data, such models will not only match but probably exceed "real-physics" models in quality, fidelity, and speed.

  • 8note · 2 weeks ago
I wouldn't expect young children to learn how to walk by watching people walk on a screen, regardless of whether it's a real person walking or an AI animation.

The real world gives way more stimulus.

Watching the animations might help them play video games, but I again imagine that the feedback is what will do the real job.

Even for the real ladybug video, who says the behaviour on screen is similar to what a typical ladybug does? If it's on video, the ladybug was probably doing something weird and unexpected.

Sure, this is problematic for society, although I'm not concerned about what you are mentioning. I remember as a kid noticing how in Looney Tunes Wile E. Coyote could run off the cliff a few steps, and thinking maybe there's a way to do that. Or kids arguing about whether it was possible to perform a sonic boom like in Street Fighter, or jumping off the playground with an umbrella, etc.
Don't be; misinterpretations of the laws of physics are very quick to correct with a reality check. I'm more worried for kids who have to learn how the world works through a screen. Just let them play outside and interact with other kids and nature. Let them fall and cry, and scratch and itch; it will make them stronger and healthier adults.
> Hopefully I'm just overthinking it.

I think it's unnecessary to worry about obviously bad stuff in nascent and rapidly developing technology. The people who spent most time with it (the developers) are aware of the obviously bad stuff and will work to improve it.

A little worried how young children watching these videos may develop inaccurate impressions of physics in nature.

Pretty sure cartoons and actions movies do that already, until youtube videos of attempted stunts show what reality looks like.

The young generation that grows up with these tools will have a completely different approach to anything virtual. Remember how people thought that the camera stole part of their soul when they saw themselves copied in a picture?
Video games and movies have existed for a long time. I think children today will end up being more discerning than us because they will grow up sifting through AI generated content.
That could be nice. If you think that rabbits crawl like on the sora.com homepage, but then you see one hopping in real life, you might have more of a sense of wonder about the world.
AI physics isn't worth worrying about compared to other inaccurate things kids see in movies. It doesn't seem to hurt them.

If you really want something to worry about, consider that movies regularly show pint-sized women successfully drop kicking men significantly bigger than themselves in ways that look highly plausible but aren't. It's not AI but it violates basic laws of biology and physics anyway. Teaching girls they can physically fight off several men at once when they aren't strong enough to do that seems like it could have pretty dangerous consequences, but in practice it doesn't seem to cause problems. People realize pretty quick that movie physics isn't real.

You are not overthinking it; moreover, text LLMs have the same problem in that they are almost good. Almost. Which is what gives me the creeps.
I share your concern as well and at times worry about what I'm seeing too.

I suppose the reminder here is that seeing does not warrant believing.

I am not sure if you have kids or not, but you are in for a big surprise if you don't. Watching videos != real life.
I know this sounds judgmental, but this reminds me of the idiom “touch grass”. Children should be outdoors observing real life and not be consuming AI slop. You are not overthinking this, this will most likely be bad for children and everyone in the long run.
Also, I guess it's just normal for a car lane to merge seamlessly into a pedestrian zone.
  • 2 weeks ago
Yes, Bugs Bunny and Wile E. Coyote harmed our physics.
Don’t worry, you are.
Kids are fine with fiction.
Many people say:

> these things will get bigger and better much faster than we can learn to discern

I would like to ask “Why?”

Clearly, these models are just one case of “NN can learn to map anything from one domain to another” and with enough training/overfitting they can approximate reality to a high degree.

But, why would it get better to any significant extent?

Because we can collect an infinite amount of video? Because we can train models to the point where they become generative video compression algorithms that have seen it all?

> But, why would it get better to any significant extent?

Two years ago, the very best closed-source image model was unable to represent anything remotely realistic. Today, there's hundreds of open source models that can generate images that are literally indistinguishable from reality (like Flux). Not only that, there's an entire collection of tools and techniques around style transfer, facial reconstruction, pose control, etc. It's mindblowing, and every week there's a new paper making it even better. Some of that could have been more training data. Most of it wasn't.

I guess it's fair to extrapolate that same trend to video, since it's the arc text, audio and images have taken? No reason it would be different.

I get that. But let's say you have a glass: you fill it to one third, then to half, then to three quarters, then to full. Can you expect to fill it beyond full? Not every process has an infinite ramp.

It seems frontier labs have been throwing all the compute and all the data they could get their hands on at model training for at least the past 2 years. Is that glass a third full or is it nearly full already?

Is the process of filling that particular glass linear or does the top 20% of the glass require X times as much water to fill as the bottom 20%?
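The question of whether the top of the glass costs more to fill can be made concrete with a toy power-law scaling curve; the constant and exponent below are invented for illustration, not measured values from any lab:

```python
def loss(compute: float, a: float = 1.0, alpha: float = 0.3) -> float:
    # Toy scaling law: loss falls as a power of compute.
    return a * compute ** -alpha

def compute_for(target_loss: float, a: float = 1.0, alpha: float = 0.3) -> float:
    # Invert the power law: compute needed to reach a target loss.
    return (a / target_loss) ** (1 / alpha)

# Each halving of the loss costs a constant *multiple* of compute
# (2 ** (1 / 0.3), roughly 10x), so equal steps in quality get
# exponentially more expensive: the glass fills ever more slowly.
steps = [compute_for(0.5 ** k) for k in range(1, 4)]
```

Under that (assumed) curve the glass never quite fills, but each visible improvement costs an order of magnitude more than the last, which is one way both sides of this exchange can be partly right.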

I don’t see how that analogy makes any sense. We’re not talking about containers of a known and fixed size here, nor a single technique, nor a single method. Stuff like LLMs using Transformer architectures might have reached a plateau, for instance. But there’s tons of techniques _around_ those models that keep making them more capable (o1, etc), and also other architectures.
  • 2 weeks ago
As there was no mention of an API for either Sora or o1 Pro, I think this launch further marks OpenAI’s transition from an infrastructure company to a product company.
It seems like they're going in that direction, especially the way they set up the Sora interface; it feels like it's nearing a video editing product.
“Right before the TikTok ban goes into effect” is incredible market timing for the release of a tool that is useless for anything other than terrible TikTok spam videos
Hey now, no need to downplay the product here, it's also useful for spamming other video sharing platforms! Think Facebook timelines, which are already full of AI image barf, Twitter feeds, which mostly consist of AI text barf, and Youtube Shorts, which is full of existing AI animation barf!

Soon, lots of people can pay a modest sum to make the internet just a bit worse for everyone in exchange for a chance to make their money back!

Who legitimately asked for, or wants, this? It's cool on its face, sure.

What legitimate problem does it solve? Isn't AI supposed to make our lives easier, or is that just "not what it's supposed to be bro", or whatever. I've lost track at this point with all the hallucinations and poor/bad/really fucking bad responses. It's not 100% of the time, but that's the point of companies like OpenAI releasing stuff like this to the public... to be helpful and believable.

Deep fakes were bad enough. Shit like this is not helpful when given to the largely ignorant public. It's not going to be used for anything helpful, conducive, or otherwise beneficial.

It's impressive. Sure. I just fail to see what it's the solution to.

It's not a solution to anything, it's simply just money. If they don't release a for-profit video generator, someone else will
Not available in

> the United Kingdom, Switzerland and the European Economic Area. We are working to expand access further in the coming months

Excellent to announce this lack of access after the launch of Pro. At least I have no business need for Sora, so it's not much of a loss for me, but it's annoying nonetheless.

Here's something I find interesting: We have multiple paid accounts with OpenAI. In other words, we are paying customers. I have yet to see a single announcement or new development that we learn about through email. In most cases we learn these things when they get covered by some online outfit, posted on HN, etc.

OpenAI isn't the only company that seems to act in this manner. I find this to be interesting. Your paying customers actively want to know what you are doing and, more than likely, would love to get a heads-up before the word goes out to the world. Hearing about things from third parties can make you feel like a company takes your business for granted or does not deem it important enough to feed you news when it happens.

Another example of this is Kickstarter, although, their problem is different. I have only ever backed technology projects on KS. That's all I am interested in. And yet, every single email they send is full of projects that don't even begin to approach my profile (built over dozens of backed projects). As a result of this, KS emails have become spam to be deleted without even reading them. This also means I have not backed projects I would have seriously considered and I don't frequent the site as much as I used to.

Getting back on topic: It will be interesting to see how Sora usage evolves.

I'd be interested to know how many people around here pay for ChatGPT. Is it common? I would have thought technical people would be choosing AI tools other than ChatGPT. Do you use it mainly for coding? Or are you chatting with it and doing all those "fun" things?
We are developing agents and tools with these technologies, so full immersion is a necessity. There's so much going on that it is almost impossible to stay on top of it.

Regarding paying for access, for me it is about a combination of reasons. I want to support their efforts and that of others, so we have paid accounts where possible. Beyond that, it is about being up to date on the state of the art. Some of it is paid, and some is FOSS.

Yes, they do seem to take your business for granted. In my case, I will keep paying for it. I need OpenAI's tools more than OpenAI needs my business, and probably yours too.
“I've come up with a set of rules that describe our reactions to technologies:

1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.

2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

3. Anything invented after you're thirty-five is against the natural order of things.”

― Douglas Adams, The Salmon of Doubt: Hitchhiking the Galaxy One Last Time

It will not be available in the EU for now. I always feel disadvantaged when I read that sentence.
I'm not in the EU, but when I see something that is US-only, I tend to assume it's doing something with privacy/user data/otherwise that is restricted in the EU.

Which means I generally avoid things that are not available in the EU even if they are available to me. It's not 100%, but it's a fairly decent measure of how much companies care about users: whether they meet EU privacy laws from the start, versus providing some limited or delayed version to the EU.

I wonder how all those European companies are doing it. They ship everything all the time, avoid the $billions fines, yet make mistakes like everybody else.

> how much the EU slowed down innovation

You say this all the time, yet we're doing fine. How come?

> I wonder how all those European companies are doing it.

Carefully crafted/gerrymandered laws that only rent seek from American big tech.

> You say this all the time, yet we're doing fine. How come?

You're not doing fine. I don't know how you can look back at the stagnation of the past two decades in the EU and think you're "doing fine." One of our companies is worth more than your entire tech industry. Your engineers get paid a fifth of what they could make here, so they often move here. In tech, you've fallen so far behind the other superpowers that it's not even funny, and you're gleefully positioning yourself to fall even further behind. Your relative share of global GDP is dropping.

You think you're doing fine, but if the EU doesn't plan on amending the regulatory-industrial complex that has caused its undeniable stagnation, it will eventually fall into irrelevancy, and be on the losing side of the rising global wealth inequality.

> Carefully crafted/gerrymandered laws that only rent seek from American big tech.

Alright then; who else should have been covered with the DMA in your opinion? Which other companies created unfair tax arrangements that have avoided scrutiny for decades?

Oh, nobody as large as Apple? Huh. Sounds like they're not targeting American companies at all, but instead prioritizing the biggest violators.

  • rtsil · 2 weeks ago
Maybe if a lot of you in big tech hadn't misused our personal data and sold it to the highest bidders, or hadn't stifled small tech innovations through monopoly, the EU wouldn't need to regulate you so hard.
Bingo. People on this website love Apple products so much that they can't see past their own materialism to admit Apple is a bad business. It's fine to like Jony Ive's designs; fact of the matter is that Tim Cook is preventing innovation with his business decisions. Apple users are being segregated from novel and useful software because the first-party distributor gets cold feet thinking about it.

I guess they'll get their rude awakening someday. If xvector's comments here are any indication, it seems like they're starting to get out of the proverbial bed at least.

  • bcye · 2 weeks ago
This is quite an exaggeration. AFAIK there has been only a single GDPR fine over €1 billion (Meta), and for some reason Apple seems to manage just fine (with GDPR).
> for some reason Apple seems to manage just fine (with GDPR).

Just fine?

Like the EU forcing Apple to pay $14B in back taxes after voiding a legal and consensual tax agreement between Apple and Ireland? [1]

Or the DMA resulting in an absurd $2B fine related to music streaming, in a transparent attempt to prop up Spotify (the dominant market leader in this space)?

Both of these in the last couple of months alone? It's just rent-seeking with a pretend "we're doing it for the good of the people" facade.

[1]: https://en.wikipedia.org/wiki/Apple%27s_EU_tax_dispute

[2]: https://www.reuters.com/technology/apple-set-face-fine-under...

> Like the EU forcing Apple to pay $14B in back taxes after voiding a legal and consensual tax agreement between Apple and Ireland?

They're back taxes. The EU did right by every single law-abiding business when they forced Apple to remediate their unnatural and unfair arrangement. Not a single naturally competitive business suffered as a result of either action. The EU does not suffer economically by weeding out businesses that exploit it to avoid paying taxes, only Apple does.

A company and a country made a legal, consensual agreement (that the Irish public was also in favor of) and the EU stepped in and "re-interpreted" it to rent-seek.

This is transparent and obvious to everyone outside of the EU. Rent-seeking behavior is the reason companies are less interested in going to the EU.

> The EU does not suffer economically [...]

The EU suffers economically when it falls behind technologically.

The company was trying to rent-seek by profiting off access to markets and infrastructure supported by the public without paying their fair share of it, and achieve unfair competitive advantage against equivalent companies by violating EU regulations.

>The EU suffers economically when it falls behind technologically.

Is moving faster better? Certainly to generate wealth for a subset of the population but rarely for the general public.

This view that the US is doing better because a small group of rich people is increasing its share of the wealth, while most of the country is at best treading water or at worst seeing its economic power decrease, and while the average person in the EU is actually better off, is myopic at best and malicious at worst.

> and the EU stepped in and "re-interpreted" it to rent-seek.

No, they overrode the Irish decision because it was illegally anticompetitive. Please stop using Hacker News if your intention is to solely be butthurt over unfair rulings when they get corrected. Everyone on this website knows that Apple wields illegal anticompetitive power, nobody here should be surprised when Apple is forced to remediate tax fraud and deliberate DMA violations.

> The EU suffers economically when it falls behind technologically.

Well then it's a good thing Apple isn't leading the industry.

"Noooooo! Think of how many Vision Pro sales that Apple would miss out on by pulling out of Europe!" ...said nobody ever.

>> for some reason Apple seems to manage just fine (with GDPR).

Nice bait and switch since your examples have nothing to do with GDPR.

Still Apple is doing just fine despite your examples.

If this slop is what you consider a revolution, I dread to see what else the SV visionaries come up with next. Perhaps some iris scans in exchange for fake digital currency? Oh wait, Mr Worldcoin Altman already got us covered there!
And in the UK and Switzerland unfortunately

https://help.openai.com/en/articles/10250692-sora-supported-...

  • andz
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
is it a regulatory issue?
I suspect so, OpenAI is subject to the EU AI Act [0]. Last time they released the Advanced Voice Mode it also took some time before it became available in the EU. Not sure why UK and Switzerland are delayed as well, they are not in the European Union.

[0] https://openai.com/global-affairs/a-primer-on-the-eu-ai-act/

  • mhh__
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Are you saying you're not glad that the EU has chosen for you?

I would ask an AI to generate a riff on a "I am the very model of a modern major general" but for some EU bureaucrat but I'll spare you the spam.

From "12 Days of OpenAI: Day 3"

https://www.youtube.com/watch?v=2jKVx2vyZOY (live as of this comment)

  • bbor
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Over now, and pretty short/light on info AFAICT. That said, knowing what we know now about Altman, I was physically unable to watch while he engages in sustained eye contact with the camera, so I may have missed something while skimming! On the upside, I'm so glad we have three billionaires cultivating three different cinema-supervillain vibes (Musk, Altman, & Zuckerberg). Much fresher than the usual "oil baron" aesthetic that we know from the gilded age
In the not-so-distant future we may need some sort of regulation that forces uploaders (or content creators) to declare whether a video was generated with AI, and depending on the content, such a declaration might carry legal consequences. On the other side, hosting platforms should clearly display whether content was declared AI-generated. Right now I can't see a simpler, good-enough solution for mitigating the spread of malicious content.
  • i5heu
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Or we will have some cryptographic scheme tied to your ID to prove that you are indeed human.

Trust on a societal level is a different beast of a problem.

My hunch is that it should be easy to tell whether it's an AI video based on how the frames transition.
This is actually a different version from what they had before. What they released today is Sora Turbo.
Account creation currently unavailable
Yeah, just got that too. Did anyone get in, or did they get overwhelmed?
Doesn't look like you will.

"We’re currently experiencing heavy traffic and have temporarily disabled sign ups. We’re working to get them back up shortly so check back soon."

There’s an ongoing related livestream[0].

[0]: https://youtu.be/2jKVx2vyZOY

Curious, what kinds of things are you all gonna make with Sora?

Personally, I think I'll just be making weird memes to send to my friends!

If we take HunyuanVideo, which is similar to Sora, as an example, they state that generating a 5-second video requires 5 minutes on 8xH100 GPUs. Therefore, if 10,000 users simultaneously want to generate a 5-second video within the same 5-minute window, you would need 80,000 H100 GPUs, which would cost around 2 billion USD in GPUs alone.
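The arithmetic in that estimate holds up; a quick back-of-envelope sketch (the ~$25,000 unit price per H100 is my assumption, not a figure from the comment):

```python
# Back-of-envelope check of the capacity estimate above.
# Assumptions: each generation occupies 8 H100s for the full 5-minute
# window, and a hypothetical ~$25,000 street price per H100.
GPUS_PER_JOB = 8
CONCURRENT_JOBS = 10_000
PRICE_PER_H100_USD = 25_000  # assumed unit price

total_gpus = GPUS_PER_JOB * CONCURRENT_JOBS
total_cost_usd = total_gpus * PRICE_PER_H100_USD

print(total_gpus)            # 80000
print(total_cost_usd / 1e9)  # 2.0 (billion USD)
```

Note that because each job takes the full 5-minute window, no time-multiplexing is possible, so the GPU count is simply jobs × GPUs-per-job.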
Can't even log in. I get "Unexpected token '<', "<!DOCTYPE "... is not valid JSON"
God, I hate this crap. Every damn thing on the internet now is some variety of AI slop. Every image is some generative garbage, half the text is the kind of stilted half-accurate wordy gpt garbage, and now I get to dodge a gazillion generated videos of things that would be interesting if they were real and followed the laws of physics and continuity, and I get to do all that while living in a post-truth society, because there’s just too much bullshit out there for the average person to bother to sort through. You’d think after 20 years of this crap nerds would have some notion that technology has consequences, but nope, there’s a shiny thing we could build, so we’ve gotta build it.
Welp I can't even login with my existing ChatGPT account because their servers are overloaded
Wish they’d followed their previous MO of releasing stuff with no warning or buildup.

Results won’t match the hype.

I feel like announcing a new product in the same vein as your main product as an established company is almost always a bad idea. If you're going to improve your product, don't announce the improvements 6-12 months ahead of time and grow the hype to unmanageable levels, just announce a great product and tell them it's available starting today.
They're already too slow. Hunyuan Video came out a few days ago and beats them on every metric.

Hunyuan is 100% open source and it's set to become the Stable Diffusion / Flux of AI video.

https://github.com/Tencent/HunyuanVideo/

If you're looking for video for casual personal projects or fill-ins for vlog posts, or something to make your PowerPoint look neat, this seems like a rad tool. It has a looong way to go before it's taking anyone's movie VFX job.
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
The mammoths are walking over some pre-existing footprints, but they don't leave any prints of their own. I guess I'm getting hung up on little things. For a prompt of a few words, it looks pretty nice!
Genuinely curious who is doing this for adult content?

Complaints about Sora's quality and prompt complexity are likely not as important to auteurs in that category, especially with the ability to load a custom character, etc.

Sora (along with DALL-E 2 well before it) specifically has safeguards against NSFW content.
Account creation not available. Login to see more videos.

Classic OpenAI. I don't care, there are so many better alternatives to everything they do. Funny how quickly they have become irrelevant and lost their moat.

Not downplaying the amazing progress, but even the video showcases have some weird uncanny valley effects. The winged horse one in particular - the wings and legs morph, the wing on the left disappears and reappears through the tail.

This stuff is a little ways off, but still some amazing effects here. I think it will be a little bit before it is sufficient for production use in any real commercial situation. There's something unsettling about all of the videos generated here.

This seems pretty broken at the moment, I haven't actually managed to create a video, every prompt results in "There was an unexpected error running this prompt".
At least you get to even see the page! I'm seeing "Sign ups are temporarily unavailable We’re currently experiencing heavy traffic and have temporarily disabled sign ups. We’re working to get them back up shortly so check back soon."
I can't even sign up. I assume it's a capacity issue.
I wonder what it is about EU and UK law, in particular, that restricts its availability there. Their FAQs don't mention this.

If it's about training models on potentially personal information, the GDPR (EU and UK variants) kicks in, but then that hasn't restricted OpenAI's ability to deploy (Chat)GPT there. The same applies to broader copyright regulations around platforms needing to proactively prevent copyright violation, something GPT could also theoretically accomplish. Any (planned) EU-specific regulations don't apply to the UK, so I doubt it's those either.

The only thing that leaves, perhaps, is laws around deepfake generation, which both the UK and EU regulate. But then why didn't that affect DALL-E? Anyone with a more detailed understanding of this space have any ideas?

A lot has changed since ChatGPT was released. https://en.wikipedia.org/wiki/Digital_Markets_Act wasn't in effect back then. Microsoft hadn't made their big investment yet either. OpenAI is a growing target, and the laws are becoming more strict, so they need to be more cautious from a legal perspective, and they need to consider that compliance with EU laws will slow down their product development.
Part of it might also be capacity problems.
It's a capacity related constraint, not a legal one.
Hmm, enough capacity for the rest of the world but not the EU:

https://help.openai.com/en/articles/10250692-sora-supported-...

Forget video. Imagine what this is going to do for video gaming.
I actually can't imagine what it will do for video gaming. Maybe enhancing cut-scenes, but then why can't they just do the performance and rendering using the gaming engine in realtime?
Perhaps AI will help with procedural generation of environmental details within a pre-built game world. This way the AI isn't burdened with generating the whole scene, but only the clutter of objects and textures - things that usually take a long time to build by hand.

For example in Train or Truck Simulators, I see examples where someone has put effort into making that farmhouse in the distance nicely detailed, but other times it's just a simple structure. If AI were tasked with "distant details", the whole game could look more polished.

How long until they fix the sign up issue? What an embarrassment. Why release something if you know it can't work properly? And why do we need to sign up when we already have an account with ChatGPT?

It was cool when they announced it but the novelty of generating a piece of AI video clipart is quickly fading, especially when it takes months or years to just get a demo in users' hands.

“The version of Sora we are deploying has many limitations. It often generates unrealistic physics and struggles with complex actions over long durations. Although Sora Turbo is much faster than the February preview, we’re still working to make the technology affordable for everyone.”

So they demo the full model and release the quantised and censored model.

Does anyone else find this kind of bait & switch distasteful?

You don't need to worry. Open source video is already pulling ahead of closed source.

Hunyuan [1] is better than Sora Turbo and is 100% open source. It's got fine tuning code, LoRA training code, multiple modalities, controlnets, ComfyUI compatibility, and is rapidly growing an ecosystem around it.

Hunyuan is going to be the Stable Diffusion / Flux for video, and that doesn't bode well for Sora. Nobody even uses Dall-E in conversation anymore, and I expect the same to hold true for closed source foundation video models.

And if one company developing foundation video models in the open isn't good enough, then Lightricks' LTX and Genmo's Mochi should provide additional reassurance that this is going to be commoditized and made readily available to everyone.

I've even heard from the Banodoco [2] grapevine that Meta is considering releasing their foundation video model as open source.

[1] https://github.com/Tencent/HunyuanVideo/

[2] Banodoco is one of the best communities for open source foundation AI video; https://banodoco.ai/

Maybe, but the alternative would be to not demo state-of-the-art results at all, which I wouldn't like either.
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
"Sora is not available in The United Kingdom yet". Available elsewhere, from Albania to Zimbabwe. Any particular reason why?
I'm surprised they put in 2 legged poodles
I have been subscribing to ChatGPT Plus for a long time. I just cancelled my subscription today because every time I try to login to sora.com, I get the too busy message. I have never been able to try it. Pissed me off.

Tencent dropped a comparable open-weight model in the last week that looks at least as good.

  • pama
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Patience. I understand the frustration. I hope this transient login problem gets fixed soon.
So when's the lawsuit from Google coming?
  • ivjw
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
I was searching recently for a photograph of Arthur Rimbaud in his later years, and noticed that already Google Images was flooded with AI-generated stuff from websites that do not require marking it as such. Authentic material will soon be much harder to find for people who don't know the sources.
That's not how the world is supposed to work. I wonder if there are going to be long-term psychological effects of being exposed to videos like these regularly. If our neurons are unable to receive the stable stream of reality they've had for millions of years, will our brains become dysfunctional over time?
Something about that image of the spinning coffee cup with sailing ships is giving me severe trypophobia:

https://en.m.wikipedia.org/wiki/Trypophobia

It’s like a spider’s eyes… and also not what I would expect a latte to look like.

Is it better, or just better at distracting from its flaws and turning them to advantage? I only see repulsive mouth movements that induce fear, face coverings to hide the uncanniness, and dreamy physics to distract. Not so out of place in present-day Hollywood, but never any coherence of feeling.
So in the end, was it all classic hype pumping and heavily edited marketing material? I can't create an account, but from what I see on the front page, every video looks kind of strange; in almost every one of them something is instantly off to my eye. Anyone got decent results yet?
As far as I can tell, this is not Sora, but a distilled model that runs in a reasonable amount of time, with a reasonable amount of compute. It's pretty likely that's resulted in degradation of quality. Further, the marketing/demo-ing for Sora for the past year has been heavily curated videos from OpenAI, and it's not clear what was generated using Sora and what was being generated using Sora "Turbo" (this distilled model). It wouldn't surprise me if some or much of it was from the original Sora model, leading to mismatched expectations and hype fatigue.

Mostly hunches from me. It could very well be that the original Sora is also plagued with outputs that aren't just subjectively "bad", but which aren't _useful_ (not adhering to the prompt, for instance).

There's some cool ideas here. The storyboard thing is nifty - kind of the refined synthetic captions that ChatGPT uses for DALLE3 on crack. Perhaps after people get over the prompting learning curve it will output better results. But it seems tougher to prompt than simple text-to-image, requiring generally longer prompts that aim to steer the model away from whatever strange thing it's doing that you don't need it to do. In my case, using the "image as the first frame" approach, the model generated cuts between newly imagined cameras consistently, when I simply wanted a single continuous shot from the POV of the camera of the photo.

We'll see, but I'm sort of over it. The UX is fancy for sure, and the scale they're pulling off with this is unprecedented even if there's already decent competitors.

Read a few other reviews as well; the general feeling seems to be more or less the same. People also complain that it often imagines faces in reference pictures and after some substantial delay denies generating, which is a big game-ender.
> after some substantial delay denies generating

Hopefully these types of issues blow over as they increase capacity or load decreases.

The lengthy generation times aren't fun to deal with though in any case. As good as the UX for the app itself is, there's little they can do about how long it takes for a video to generate compared to images. The near instant feedback is gone (just like old times)

Why keep building AI to do the things that people find fun to do rather than the mundane bullshit? All we’ll be left with is cleaning, folding laundry, and doing the dishes while AI does all the interesting things.
Because we don't have as much data about mundane bullshit.
How do we get it? Serious question. The take makes sense but how do we digitize doing the dishes?
Not dishes, but the other day I saw this recent paper on clothes:

>RoboHanger: Learning Generalizable Robotic Hanger Insertion for Diverse Garments

https://arxiv.org/abs/2412.01083

>To overcome the challenge of limited data, we build our own simulator and create 144 synthetic clothing assets to effectively collect high-quality training data.

the strategy is simulation

  • zlies
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Is there information when it will be available in other countries, like Germany for example?
  • LukaD
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
What do you mean? “Sora is here” is not enough?

Sorry for the sarcasm but I’m just tired of this fuck Germany attitude by certain companies.

Meh. It's a cool POC and immediately useful for abstract imagery, but not for anything realistic.

Looking forward to the onslaught of AI-generated slop filling every video feed on the Internet. Maybe it's finally what's going to kill things like TikTok, YT Shorts, Reels, etc. One can hope...anyway.

I may be the only one but this kinda breaks my brain in that I notice weird physics anomalies in these but then I start to look for those in non-AI produced video and start to question everything. Hopefully this a short term situation.
No integration with ChatGPT is a lost opportunity and illustrates a lack of joined-up thinking, in all senses of that phrase. Demonstrations, helping people with learning difficulties visualise things, education purposes, storytelling...
ChatGPT wasn't even aware of Sora when I asked it to generate a video. Why not combine the interfaces for prompting? Sounds like they're having issues scaling a common interface, so each team is making their own.
Serious question: is this better than current text-to-video models like Hailuo?
[flagged]
The page says "coming soon." I guess I'm wondering if there are any benchmarks or other way to compare this to current models.
Even if there were benchmarks or comparisons, you wouldn't know whether they were reliable until you could actually try it and see how it performs on your own use cases.
We'll probably know once they release it.
  • neom
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
In July I made this 3-minute little content-marketing video for Canada Day. Took me about 40 minutes using a combo of Midjourney + Pika, with Suno for the music. Honestly I had a lot of fun making it; I can see these tools being fun for creative teams to hammer out little things for social media and stuff: https://x.com/ascent_hi/status/1807871799302279372

I don't see Sora being THAT much better than Pika now that I'm trying both, except that it's included in my OpenAI subscription. But I do think people who do discrete parts of the "modal stack" are going to be able to compete on their merits (be it Pika for video or Suno for music, etc.)

  • m3kw9
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
A minimum-settings video (480p, 5 s, 1:1) took an hour. Servers getting cooked.
"Here" depends on where you live. Not available in the UK.
  • mkaic
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
A friendly reminder: if you have tech-illiterate people in your life (parents, grandparents, friends, etc), please reach out to them and inform them about advances in AI text, image, audio, and (as of very recently) video generation. Many folks are not aware of what modern algorithms are capable of, and this puts them at risk. GenAI makes it easier and cheaper than ever for bad actors to create targeted, believable scams. Let your loved ones know that it is possible to create believable images, audio, and videos which may depict anything from “Politician Says OUTRAGEOUS Thing!” to “a member of your own family is begging you for money.” The best defense you can give them is to make them aware of what they’re up against. These tools are currently the worst they will ever be, and their capabilities will only grow in the coming months and years. They are already widely used by scammers.
Anyone else feeling their servers melt a bit on sora.com?
  • ngd
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
What’s next, Tiagra?
I'm holding out for 105
can't wait to see what Dura Ace has to offer
Is there no way to try it even without subscribing? After you log in, it goes straight to a screen asking for subscription to create videos.
Great. In a world awash with disinformation, we're making it easier to create even more of it.

I don't see any good coming from tools like these.

The competition in text-to-video tools is heating up, but a key challenge remains: achieving desired results without exhausting resources. Runway, for instance, often consumes all your credits before producing something usable, even if you stick to their guidelines. Hailuo AI shows better consistency, while Sora Turbo sounds promising with potentially more mature generations. Progress is clear, but there’s still a way to go in perfecting these tools.
  • domid
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
I've been thinking about your point regarding credit consumption with these tools. I started exploring this at https://liveimage.ai (an AI video avatar generator) with a low-res preview system first. I'm curious if you think this kind of approach would be more useful - generating quick low-res previews that use minimal credits, then only processing high-res versions once you're happy with the result? Seems like it could help avoid wasting resources on full quality generations that don't match what you're looking for. I often wonder if jumping straight to high-res without previewing is partly why people burn through credits so quickly. Would love to hear if you've encountered other tools taking this kind of approach.
I wonder when AI images and videos will finally be remotely useful and easy to create. These are still weird and garbage quality.
Sora makes movies less interesting, regardless of how they're created.

The part made by Sora? About as interesting as the latest chess programs doing well at chess. woohoo/nice job.

The overall effect? Now we spend mental energy trying to figure out which parts are machine generated, and hence not worth anything. That mental energy is gone, sucked out of the cultural economy, and fed to the machinery of mediocrity.

If you have to spend mental energy trying to figure out which is which solely to blindly and automatically disregard something, maybe there’s useful stuff there to enjoy instead.

I certainly don’t dislike all the cool movies where special effects are CG just because the old time stop motion artists from 1950s Flash Gordon aren’t using sparklers. Similarly I’m not going to discount new creation that can be enjoyable no matter the provenance.

  • lmm
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Huh? AI chess is much more interesting to watch than human chess.
  • avree
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Star Wars? Not worth anything, huge reliance on "machine-generated" imagery. Lord of the Rings? Useless film, uses "machine-generated" imagery. Don't even get me started on anything by Pixar.

/s

You know what I mean, ChatGPTWhatever. Stay out of human business.
sorry for the tangent: can't remember a launch they've had where you could just use it. it's always "rollout", "later this quarter", "select users", what's the deal here?

it's given openai this tinge to me that i probably won't ever manage to forget.

Account creation currently unavailable
Was this the one trained on YouTube?
  • mhb
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
What is a "System Card"?
  • exe34
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
regarding all the comments about physics, I wonder if a hybrid approach would work better, with an llm generating 3d objects that interact in a physics simulation with guiding forces from the LLM and then another model generating photo realistic rendering.
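A tiny, purely illustrative sketch of such a pipeline; every function here is a hypothetical stand-in (for an LLM, a physics engine, and a neural renderer respectively), not a real API:

```python
import copy

def propose_scene(prompt):
    # Stand-in for an LLM emitting 3D objects with positions/velocities.
    return [{"name": "ball", "pos": [0.0, 10.0, 0.0], "vel": [0.0, 0.0, 0.0]}]

def physics_step(objects, dt=1 / 30, g=-9.81):
    # Minimal gravity integrator standing in for a real physics engine;
    # this is where the LLM's "guiding forces" would also be applied.
    for obj in objects:
        obj["vel"][1] += g * dt
        obj["pos"][1] = max(0.0, obj["pos"][1] + obj["vel"][1] * dt)
    return objects

def render(objects):
    # Stand-in for a learned photorealistic renderer; here we just
    # snapshot the simulated state that would be rendered.
    return {"frame": copy.deepcopy(objects)}

def generate_video(prompt, num_frames=60):
    objects = propose_scene(prompt)
    return [render(physics_step(objects)) for _ in range(num_frames)]

frames = generate_video("a ball dropped onto grass")
print(len(frames))  # 60
```

The appeal of this split is that physical plausibility comes from the simulator for free, so the generative model only has to solve appearance, not dynamics.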
The most impressive part is the temporal consistency in the demo videos.

The flower one is the best looking.

That cat skateboarding off the path cut out just when it was getting interesting.

Many of these likely fall apart just split seconds later.

I don’t doubt it, but even 60 seconds of temporal consistency is an improvement, even if it’s incremental.
It's worth remembering, in the end, humans still created it. Just remarkable.
I am not impressed by it at all ... Is it actually better than the competitors?
  • chrsw
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
At least we don't have to worry about AI taking over any time soon
  • jas39
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
So they released it in Afghanistan but not in Sweden? Is this an EU ban?
Sort of messed up that I’m paying for ChatGPT pro and still can’t access it.
Uhh... no. The $20 you pay to OpenAI every month does not require them to give you access to sora.
"...I felt a great disturbance in the algorithm... as if millions of influencers, OnlyFans stars, and video creators suddenly cried out in terror..."
  • vault
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
I wish that after Sora and Sora Turbo they would release Sora Lella
So we are now a few years into the AI video thing.

I'm curious to know - is it actually useful for real world tasks that people/companies need videos for?

  • cush
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
I like that the contributors are presented as movie credits
anyone done a comparison with the open-source hunyuanvideoai.com?
0 stars, and no comments the last time this was posted. Maybe too good to be true?
That's not the upstream source. You're looking for this:

https://github.com/Tencent/HunyuanVideo/

This isn't "too good to be true" - this is the holy grail. Hunyuan is set to become the Flux/Stable Diffusion of AI video.

I don't see how Hunyuan doesn't completely kill off Sora. It's 100% open source, is rapidly being developed for consumer PCs, can be fine tuned, works with ComfyUI/other tools, and it has control nets.

there it is, thanks
I've run HunyuanVideo from their GitHub and it seems to generate realistic videos. It is a bit slow (30-60 min per video clip) and requires ~50GB VRAM. I wonder how the quality compares, though?
Oh, and the generated output videos are 5-second clips at 554x960px. This is on a single A6000.
It looks good, I'm just wondering why it has no attention from the ML community
HunYuan is all everybody on Banodoco and the broader Comfy ecosystem is talking about. And that's with Lightricks' LTX model having just been announced, too.

HunYuan is seriously amazing and it looks like it'll be the Flux/Stable Diffusion of AI video.

Sora is cooked.

  • msp26
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
It depends on where you're looking haha.
  • han00
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
I think this would be a fitting use case for niche stock video.
God. These AI tools are really showing just how f’d up people are. AI is a scapegoat for the absurdity of society. We are a scourge! Well, aren’t we? Look at what we’re doing and the way we use this to instinctively conduct psychological warfare on each other! People are facetious. People will kill each other, and people want to lie, steal, and cheat. God almighty. How depressing…

This is a problem that highlights an apparent lack of consequences, or a lack of care for them, period. The current consequences in place to deter this abysmal and abhorrent behavior simply aren’t enough. I look around and the world is going nuts, and failing, it seems, to connect cause with effect. Shitty people doing shitty things. We don’t need new laws or regulations when people aren’t willing or able to abide by the ones already in place! What good would that do?

What in particular is better about this than

https://civitai.com/videos

"Enable JavaScript and cookies to continue"

mmmmh...

OK, so GPT Pro with some extra power, and Sora. This means that GPT-5, and generally speaking AGI, can wait.
Even in this mammoth demo, the dust clouds keep popping in behind them even after they have moved forward.
No API/per video generation? Huh.
It's probably because they're relying heavily on their new editing UI to make the model useful. You can cut out weird parts of the videos and append a newly generated portion with a new prompt.
OpenAI almost always waits a few months before adding new features or models to the API, the same happened to DALL-E 3, advanced voice mode, and lots of smaller model updates and releases.
forced to finally release after that new open source model came out that was equal or better?
Great. More tools to continue the enshitification of everything on the web.
Sora? More like r/ShittyHDR
Is it just me, or do all the clips on the official page feel creepily wrong? Everything seems off: movements, proportions, light, color... everything's almost correct, but barely and noticeably not so. To me it looks like some horror from beyond, something inter-dimensional beings would dream up to simulate the idea of something similar to reality. And it creeps me the fuck out.
People really worry about fake video and images and whatever but I have to say, the correct heuristics both already exist and have existed for a long time:

1. Anything on the internet can be fake

2. Trust is interpersonal, and trusting content should be predicated first and foremost on trusting its source to not deceive you

This is imperfect but also the best people ever really do in the general case, and just orders of magnitude better than most people are currently doing

The issue isn't models like this; it's that people are eating a ton of information but have been strongly encouraged to be credulous, and the lion's share of that training comes directly from the tech grift industrial complex.

I wouldn't even say this is the most compelling kind of tool for plausible-looking disinformation out there by a long shot for the record, but without actually examining why people are gullible there is no technology that's going to make people accepting fiction as fact substantially worse, or better, really. Scams target people on the order of their life savings every day and there are robust technologies and protocols for vetting communications, but people have to know to use them, care to use them, and be able to use them, for that to matter at all

co-develop := we are in f*** around and find out mode, please bear with us.
Seems as shitty as the others.
Ah, yes. We definitely needed another bad dreams generator.
Interesting creative people will produce interesting creative output.

People with no taste will produce tasteless content.

The mountain of slop will grow.

And some of us have no intention of publishing any output whatsoever but just find the existence of these tools fascinating and inspiring.

And some of us have no intention of publishing any output and find the existence of these tools extremely worrying and problematic.
While it's indeed fascinating, part of me finds the sheer energy expenditure to be problematic, not to mention the "Hollywood is dead" innuendos.
Interesting creative people are currently creating interesting output _without_ generative AI.

These tools are fascinating, though I can't help but feel that the main beneficiaries after all is said and done will be venture capitalists and tech/entertainment execs.

Win me an Academy Award
OpenAI is a masterclass in pissing off paying customers.

I'm just about ready to cancel my ChatGPT subscription and move fully over to Claude because OpenAI has spit in my face one too many times.

I'm tired of announcements of things being available only to find out "No, they aren't" or "It's rolling out slowly" where "slowly" can mean days, weeks, or months (no exaggeration).

I'm tired of shit like this:

    Sign ups are temporarily unavailable
    We’re currently experiencing heavy traffic and have temporarily disabled sign ups. We’re working to 
    get them back up shortly so check back soon.
Sign up? I'm already signed up, I've had a paid account for a year now or so.

> We’re releasing it today as a standalone product at Sora.com to ChatGPT Plus and Pro users.

No you aren't, you might be rolling it out (see above for what that means) but it's not released, I'm a ChatGPT Plus user and I can't use it.

I really don't think it's reasonable to expect them to onboard what is likely tens of thousands of sign ups in the first hour.
ChatGPT has far, far more concurrent users than tens of thousands. Sora is not a small hobby project by an amateur hacker that blew up.
I don't disagree, what I'm asking for is "truth in advertising". I'm not saying they need to give everyone access on day 1, I'm saying don't _say_ you've given everyone access if you haven't.
"a tool never made an artist"

so incredibly ugly.

Yawn, there are literally 10 different apps and wannabe startups that do video generation and AI videos have already flooded social media. This doesn't look any better than what is and has been already available to the masses. OpenAI announced this ages ago and never did give people access, now competitors have already captured the AI generated video for social media slop market.

We have yet to see any kind of AI created movie, like Toy Story was for computer 3D animation.

OpenAI isn't a player in the video AI game, but certainly has bagged most of the money for it already (somehow).

Don't just critique - link. What other video generation tools have you used and recommend?
The subreddit /r/aivideo has tons of videos all tagged with what model was used to generate them.
From the few videos that I've seen, I would agree that it doesn't seem to be better than any of the major competitors such as Kling, Hailuo, Runway, etc.
So you're saying there is literally nothing good about Sora?
Unless we’re reading completely different comments, that’s not at all what they said. They said OpenAI waited too long to release it and their competitors beat them to the punch with similar quality offerings and have already cornered the social media AI slop market.
Gentle reminder that it's important to boycott this kind of thing.
  • tgv
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
This, and similar tools, make the world a worse place, just so a handful can get the big bucks. This is not technological progress, it's greed. Ethics is a dirty word.
Why?
  • wslh
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
"We are currently experiencing heavy loads..."
  • zb3
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
no API = not good enough

no pay per use = overpriced

  • zb3
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
not available in the EU = might use everything you did there against you, sell that data to the highest bidder
I was under the impression that the EU had other regulations related to AI-generated content that affect OpenAI, and presumably Sora, beyond just privacy stuff.
from now on, no content is free from skepticism
I got lucky and got in moments after it launched, managed to get a video of "A pelican riding a bicycle along a coastal path overlooking a harbor" and then the queue times jumped up (my second video has been in the queue for 20+ minutes already) and the https://sora.com site now says "account creation currently unavailable"

Here's my pelican video: https://simonwillison.net/2024/Dec/9/sora/

For those who can't try Sora out, Tencent's very recent Hunyuan is 100% open source and outperforms Sora. It's compatible with fine-tuning and ComfyUI development, and is getting all manner of ControlNets and plugins.

I don't see how Sora can stay in this race. The open source commoditization is going to hit hard, and OpenAI probably doesn't have the product DNA or focus to bark up this tree too.

Tencent isn't the only company releasing open weights. Genmo, Black Forest Labs, and Lightricks are also developing completely open source video models.

Even if there weren't open source competitors, there are a dozen closed source foundation video companies: Runway, Pika, Kling, Hailuo, etc.

I don't think OpenAI can afford to divert attention and win in this space. It'll be another Dall-E vs. Midjourney, Flux, Stable Diffusion.

https://github.com/Tencent/HunyuanVideo

https://x.com/kennethlynne/status/1865528133807386666

https://fal.ai/models/fal-ai/hunyuan-video

> The Pelican inexplicably morphs to cycle in the opposite direction half way through

It's pretty cool though, the kind of thing that'd be hard if it was what you actually wanted!

"The Pelican inexplicably morphs to cycle in the opposite direction half way through"

Oof, if Sora can't even maintain internal consistency of the world for a 5-second short, I can't imagine how much worse it'll get at longer generation lengths.

That's an awful result. It turning around has absolutely nothing to do with what you asked for. It's similar in nature to what the chatbot in the recent and ongoing scandal said, saying to come home to her, when it should have known that the idea would be nonsensical or could be taken to mean something horrendous. https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-a...

So you were lucky indeed to be able to run your prompt and share it, because the result was quite illuminating, but not in a way that looks good for Sora and OpenAI as a whole.

Image details: 9/10
Animation: 3/10
Temporal consistency: 2/10

Verdict: 4/10

Did you notice the frame rate (so to speak) of what's happening down the lake is much lower than the pelican's bicycle animation?
I don't have a lot of mental model for how this works, but I was surprised to note that it seems to maintain continuity on the shapes of the bushes and brown spots on the grass that track out of frame on the left and then reappear as it pans back into frame.
That must be exactly it. The simulated scene extends beyond what the camera is currently capturing.
Thanks, would you mind elaborating on what you wrote below:

  Sora is built entirely around the idea of directly manipulating and editing and remixing the clips it generates, so the goal isn't to have it produce usable videos from a single prompt.
If you watch the OpenAI announcement they spend most of their time talking about the editing controls: https://www.youtube.com/watch?v=2jKVx2vyZOY
One of the highlights of any model release for me is checking your "pelican riding a bicycle" test.
One of the problems with a 10-month preannouncement is that the competition is ready to trash the actual announcement. Half an hour in, I already see half a dozen barely-concealed posts ranging from downplays to over-demands to non-user criticism.
The competition is ready to trash the announcement because the 10-month delay gave rise to several viable competitors, and that would still be the case if OpenAI never did the preannouncement. If OpenAI released Sora 10 months ago, there wouldn't be as much cynicism.
  • rtsil
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
I think that's just the typical HN cynicism.
I don't think you can reduce it to this. Even on the showcase videos on the home page I can see weird artifacts, like the red car in the video of the guy walking through a market. The car is driving on a pedestrian walkway (through pedestrians), and just suddenly disappears from one frame to another.
Also the tennis player walking through the net
  • mhh__
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Unless they drop something mega in the next few months, I can't help but think that OpenAI's moat is basically gone, for now at least.
Yep, haven't used chatgpt seriously in many months; I check it with our coding toolkit every so often, but it just performs so much worse than Claude on everything we do...
I'm sure if OpenAI just waited the extra day or two to make sure it's available in the EU, it wouldn't annoy everyone in the EU so much. Often with new releases everyone in the EU needs to wait a couple of days; the FOMO is not cool, bros
Does a VPN solve the problem? I'm living in an EU country and I don't like that the EU decides for me (and companies like OpenAI or Meta don't give out their models to me)! I'm an old enough adult to decide for myself what I want...
I used a Japanese Proton VPN server and got past the "Not in the EU" thing, but it said "no new signups are allowed atm".

Perhaps just best to wait

  • blfr
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
What did the EU do this time?
  • mnau
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
I believe Sora is also not available in the UK, so maybe it's rather a decision to handle load better?
Though I like the novelty of AI-generated content, it kind of sucks that dead internet theory is becoming more and more prevalent. YouTube (and all of the web) is already being spammed with AI-generated slop, and "better" video/text/audio models only make this worse. At some point, more "generated" than "real" content will be posted on the web, and there's no stopping that.
  • xena
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
My hope was that AI would make it easier for people to create new things that haven't been done before, but my fear was that it would just be an endless slop machine. We're living in the endless slop machine timeline and even genuine attempts to make something artistic end up just coming off as more slop.

I love this timeline.

It might be true for "Creators" etc. but there were things that I always wanted paintings of but I have no talent, time, tools or anything really.

When I first got access to DALL-E (in '22), the first thing I tried was to get an impressionist-style painting of the way I always imagined Bob Dylan's 'Mr. Tambourine Man'. I regenerated it multiple times and got something I was very happy with! I didn't put it on social media, didn't try to make money off it; it's for me.

If you enjoy "art" (nice pictures, paintings, videos now I guess), you can create it yourself! I think people are missing that aspect of it: use it to make yourself happy, make pictures you want to look at!

Even if it's made with AI, it is slop only if you don't add anything original in your prompt, and don't spend much time selecting.

The real competition of any new work is the backlog of decades of content that is instantly accessible. Of course it makes all content less valuable, you can always find something else. Hence the race for attention and the slop machine. It was actually invented by the ad driven revenue model.

We should not project on AI something invented elsewhere. Even if gen AI could make original interesting works, the social network feeds would prioritize slop back again. So the problem is the way we let them control our feeds.

> if you don't add anything original in your prompt

Define "original". You could generate a pregnant Spongebob Squarepants and that would be original, but it would still be noise that doesn't inherently expand the creative space.

> don't spend much time selecting

That's the unexpected issue with the proliferation of generative AI now being accessible to nontechnical people. Most are lazy and go with the first generation that matches the vibe, which is the main reason why we have slop.

Imagine a movie like Napoleon, but instead of needing 100 million and thousands of extras, you just need 5 actors and maybe a budget of 50k.

You could get something much more creative or historically accurate than whatever Hollywood deems marketable.

I think about AI like any other tool. For example I make music using various software.

Are drum machines cheating? Is electronic music computer slop compared to playing each instrument?

Is using a Mac and a $1k mic over a $30k studio cheating?

  • xena
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
The main comparator is Kasane Teto and Suno. Kasane Teto is functionally a piano that uses generative AI for vocal synthesis: https://youtu.be/s3VPKCC9LSs. This is an aid to the creative process. Suno lets you put in a description and completely bypass the creative process by instantly getting to the end: https://youtu.be/UpBVDSJorlU

Kokoro is art. Driveway is content. Art uses the medium and implementation to say something and convey messages. Content is what goes between the ads so the shareholders see a number increase.

I wish there were more things like Kokoro and less things like Driveway.

What if you're making a short movie and Driveway is playing in the background during a scene?

It's like everything else. It's just a tool.

You can create an entire movie using a high end phone with quality that would have cost millions 40 years ago. Do real movies need film?

My hope is that it will be the death of the aggregators and there will be more value in high-quality and authentic content. The past 10-15 years have rewarded people who appeal to the aggregation algorithms and get the most views. Hopefully going forward there's going to be more organic, word-of-mouth recommendation of high-quality content.
I felt this same way as image generation was rapidly improving, but I've been caught by surprise and impressed with how resilient we have been in the face of it.

Turns out it's surprisingly easy, at least for me, to tune out the slop. Some platforms will fall victim to it (Google image search, for one), but new platforms will spring up to take their place.

Put more weight on your subscriptions. I don’t have much AI content in my YouTube suggestions. (Good luck AI generating an interview with Chris Lattner or Stephen Kotkin for example. It won’t work.)
  • yaj54
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
It will work within thousands of days.
yeah i already have so many AI-generated videos in my feed on all social media, it's insane. i spot them from afar for now, but at some point i'll just be consuming content that took seconds to generate just so someone can make money
Pricing:

Plus Tier ($20/month)

- Up to 50 priority videos (1,000 credits)

- Up to 720p resolution and 5s duration

Pro Tier ($200/month)

- Up to 500 priority videos (10,000 credits)

- Unlimited relaxed videos

- Up to 1080p resolution, 20s duration and 5 concurrent generations

- Download without watermark

more info: https://help.openai.com/en/articles/10245774-sora-billing-cr...

Called it, they were sitting on Sora until the $200 tier launched. Between the watermarking and 50 video limit the $20 tier is functionally a trial.
Worth noting here that this is the existing ChatGPT subscription, you don’t need a separate one.
From the FAQ [1], too:

>> Can I purchase more credits?

> We currently don’t support the ability to purchase more credits on a one-time basis.

> If you are on a ChatGPT Plus and would like to access more credits to use with Sora, you can upgrade to the Pro plan.

Ouch. Looks like they're really pushing this ChatGPT pro subscription. Between the watermark and being unable to buy more credits, the plus plan is basically a small trial.

[1] https://help.openai.com/en/articles/10245774-sora-billing-cr...

Wow they're watermarking videos and limiting them to 720 at the 20 dollar price point? That's a bold move, considering their competition's pricing...

https://www.klingai.com/membership/membership-plan

Quality seems relatively similar based on the samples I've seen. With the same issues - object permanence, temporal stability, physics comprehension etc, being present in both. Kling has no qualms about copyright violation however.

At OpenAI's $20/mo price point, you can also only generate 16 720p 5s videos per month.

Kling doesn't seem to have more granular information publicly, but I suspect it allows for more than 16 videos per month.

You can do more than 16 videos for free on Kling per month. Let alone with their price plans. I'm sure it's not equivalent in capability, but all these models suffer from the same technical issues understanding prompts and maintaining physics / temporal coherence anyway.
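The credits math in this sub-thread can be sanity-checked with a quick sketch. Note the per-clip credit costs below are assumptions inferred from the figures quoted here (1,000 Plus credits, "up to 50 priority videos," and the 16-videos-at-720p claim), not official OpenAI numbers:

```python
# Rough videos-per-month arithmetic for the Sora tiers discussed above.
# Per-video credit costs are HYPOTHETICAL, back-solved from the thread's
# own figures, not taken from any official pricing table.

PLUS_CREDITS = 1_000    # Plus tier ($20/mo), per the pricing comment
PRO_CREDITS = 10_000    # Pro tier ($200/mo)

COST_720P_5S = 60       # assumed: 1000 // 60 = 16, matching the "16 videos" claim
COST_480P_5S = 20       # assumed: 1000 // 20 = 50, matching "up to 50 priority videos"

def videos_per_month(credits: int, cost_per_video: int) -> int:
    """Whole videos a monthly credit allowance covers."""
    return credits // cost_per_video

print(videos_per_month(PLUS_CREDITS, COST_720P_5S))  # 16
print(videos_per_month(PLUS_CREDITS, COST_480P_5S))  # 50
print(videos_per_month(PRO_CREDITS, COST_720P_5S))   # 166
```

Under those assumed costs, the "50 videos" headline and the "only 16 at 720p" complaint are both consistent: they just describe different resolutions drawing from the same credit pool.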
Not impressive compared to the open-source video models out there. I anticipated some physics/VR capabilities, but it's basically just a marketing promotion to "stay in the game"...
  • bbor
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
I... can you explain, or point to some competitors...? To me this looks leagues ahead of everything else. But maybe I'm behind the game?

AFAIK based on HuggingFace trending[1], the competitors are:

- bytedance/animatediff-lightning: https://arxiv.org/pdf/2403.12706 (2.7M downloads in the past 30d, released in March)

- genmo/mochi-1-preview: https://github-production-user-asset-6210df.s3.amazonaws.com... (21k downloads, released in October)

- thudm/cogvideox-5b: https://huggingface.co/THUDM/CogVideoX-5b (128k downloads, released in August)

Is there a better place to go? I'm very much not plugged into this part of LLMs, partially because it's just so damn spooky...

EDIT: I now see the reply above referencing Hunyuan, which I didn't even know was its own model. Fair enough! I guess, like always, we'll just need to wait for release so people can run their own human-preference tests to definitively say which is better. Hunyuan does indeed seem good

  • Geee
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
What's the best open source video model right now?
Hunyuan (https://replicate.com/tencent/hunyuan-video, $0.70/video) is the best but somewhat expensive. LTX (https://replicate.com/fofr/ltx-video, $0.10) is cheaper/faster but less capable.

Both are permissively licensed.

Hunyuan at other providers like fal.ai is cheaper than Sora for the same resolution (at 720p for 5 seconds, $20 gets you ~15 videos with Sora vs. almost 50 videos at fal). It is slower than Sora (~3 minutes for a 720p video) but faster than Replicate's Hunyuan (by 6-7x for the same settings).

https://fal.ai/models/fal-ai/hunyuan-video

Hunyuan is a recent one that has looked pretty good.
As with music generation models, the main thing that might make "open source" models better is most likely that they have no concern about excluding copyrighted material from the training data, so they actually get a good starting point instead of using a dataset consisting of YouTube videos and stock footage.
  • sjm
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
Anyone else find this stuff extremely distasteful? "Disrupting" creativity and art feels like it goes against our humanity.
The past few years' innovation in AI has roughly been split into two camps for me.

LLMs -- Awesome and useful. Disruptive, and somewhat dangerous, but probably more good than harm if we do it right.

'Generative art' (i.e. music generation, image generation, video generation) -- Why? Just why?

The 'art' is always good enough to trick most humans at a glance, but clearly fake, plastic, and soulless when you look a bit closer. It has instilled somewhat of a paranoia in me when browsing images and genuinely worsened my experience consuming art on the internet overall. I've just recently found out that a jazz mix I found on YouTube and thought was pretty neat is fully AI generated, and the same happens when I browse niche art styles on Instagram. Don't get me started on what this Sora release will do...

It changed my relationship consuming art online in general. When I see something that looks cool on the surface, my reaction is adversarial, one of suspicion. If it's recent, I default to assuming the piece is AI, and most of the time I don't have time or effort to sleuth the creator down and check. It's only been like a year, and it's already exhausting.

No one asked for AI art. I don't understand why corporations keep pushing it so much.

There's this FinTech ad on the NYC subway right now. I can't remember the company, but the entire ad is just a picture of a guitar and some text.

Anyway, the guitar is AI generated, and it's really bad. There are 5 strings, which morph into 6 at the headstock. There's a trem bar jammed under the pickguard, somehow. There's a randomly placed blob on the guitar that is supposed to be a knob/button, but clearly is not. The pickups are visually distorted.

It's repulsive. You're trying to sell me on something, why would you put so little effort into your advertising? Why would you not just...take a picture of a real guitar? I so badly want to cover it up.

> You're trying to sell me on something, why would you put so little effort into your advertising? Why would you not just...take a picture of a real guitar?

Is this not evident? Because using AI is much cheaper and faster. Instead of finding the right guitar, paying for a good photographer, location, decoration, and all the associated logistics, a graphics designer can write a prompt that gets you 90% of the vision, for orders of magnitude less cost and time. AI is even cheaper and faster than using stock images and talented graphic designers, which is what we've been doing for the past few decades.

All our media channels, in both physical and digital spaces, will be flooded with this low-effort AI garbage from here on out. This is only the beginning. We'll need to use aggressive filtering and curation in order to find quality media, whether that's done manually by humans or automatically by other AI. Welcome to the future.

I was able to find a similar public domain image in all of 5 seconds, so neither faster nor cheaper in this case.

In fact, it's not hard to imagine people using AI tools even if they're slower, more expensive, and yield worse quality results in the long run.

"When all you have is a hammer...".

Just need to add a hand with 6 fingers strumming it and it could be a meme.
Reminds me of the new Coca Cola Christmas ad which is equally off-putting.
I don't understand why you see a distinction between models that generate text, and those that generate images, video or audio. They're all digital formats, and the technology itself is fairly agnostic about what it's actually generating.

Can't text also be considered art? There's as much art in poetry, lyrics, novels, scripts, etc. as in other forms of media.

The thing is that the generative tech is out of the bag, and there's no going back. So we'll have to endure the negative effects along with the positive.

Simple: I am equally offput when LLMs are used for generating poetry, lyrics, novels, scripts, etc. I don't like it when low-effort generated slop is passed off as art.

I just think that LLMs have genuine use for non-artistic things, which is why I said it's dangerous but may be useful if we play our cards right.

I see. Well, I agree to an extent, but there's no clear agreement about what constitutes art with human-generated works either. There are examples of paintings where the human clearly "just" slapped some colors on a canvas, yet they're highly regarded in art circles. Just because something is low-effort doesn't mean it's not art, or worthy of merit.

So we could say the same thing about AI-generated art. Maybe most of it is low-effort, but why can't it be considered art? There is a separate topic about human emotion being a key component these generated works are missing, but art is in the eyes of the beholder, after all, so who are we to judge?

Mind you, I'm merely playing devil's advocate here. I think that all of this technology has deep implications we're only beginning to grapple with, and art is a small piece of the puzzle.

You make a good point. I'm just spitballing here, but I think what sets generative art apart for me is the element of deception.

I'd be perfectly fine with a hypothetical world in which all generated art is clearly denoted as such. Like you said, art is in the eyes of the beholder. I welcome a world in which AI art lives side-by-side with traditional art, but clearly demarcated.

Unfortunately, the reality is very different.

AI art inherently tries to pass itself off as made by a human. The result of the tools released in the past year is that my relationship with media online has become adversarial. I've been tricked in the past by AI music and images which were not labelled as such, which fosters a sort of paranoia that just isn't there with the examples you mentioned.

The offensive part is that it's creative theft: digesting other people's creative works, then reworking and regurgitating them. It's 'fine' when it's technical documentation and reference work, but that's not human expression.
So pre-LLM were you offended when someone posted their personal poetry or artwork on internet if it was clear they had put little effort into it? Somehow I doubt it.
Wish it was just generative AI for me.

You don't have the same paranoia with LLMs? So often I find myself getting a third of the way into reading an article or blog post and thinking: "wait a minute...".

LLM tone is so specific and unrealistic that it completely disengages me as a reader.

I have found a channel that curates and cleans some AI generated music. I really enjoy it, it's nothing I heard before, it's unique, distinct, and devoid of copyright.
I understand your take, but it's only going to get better, and incredibly fast.

I'm a huge film nerd and I can only dream of a future where I could use these type of tools (but more advanced) to create short films about ideas I've had.

It's very exciting to me

I somehow doubt it's (lack of) technology that's stopping you from creating your ideas.
Yeah it's a desire to do so in a really short amount of time because there's other things I prioritize.
I'm glad someone else said this. Hopefully we can get rid of that terrible disruptive camera too.
There's some of that but it produces some cool stuff too. I mean you have these new virtual worlds like this that didn't exist before https://youtu.be/y_4Kv_Xy7vs?t=13

The video there is kind of a combination of human design and AI which produces something beyond that which either would come up with on their own.

It is like an attempt to do psychic battle over the meaning of "disruption".
"And then everyone clapped ..."

There's nothing wrong with technology going forward and this doesn't go against "creativity and art", to the contrary, it will enhance it.

That's the optimistic version, and in theory I would agree: it will be a great enhancer of creativity for some people.

But mostly it will end up like smartphones: we carry more computing power in our pockets than was used to send man to the moon, and instead of taking advantage of it to do great things, we are glued to this small screen several hours a day scrolling social media nonsense. It's just human nature.

I am so intrigued with the new sora release. I hope it turns out well.
[flagged]
[flagged]
I hope we get unlimited access for the reasonable price of a car note
[flagged]
[flagged]
[flagged]
[flagged]
They have NSFW filters. They also seem to have "contains a human" filters.
[flagged]
[flagged]
> You all understand that Sam, the gaylord of OpenAI, is here to control you, correct?

I see this kind of comment from time to time. Do you have any evidence to support this claim, or just paranoia vibes?

[flagged]
I can't wait for the safety features because I know there are those in society that would do bad things. But not me, though. I'd like the unlocked version.
[flagged]
  • rvz
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
That's 20+ VC-backed AI video generation startups destroyed in a microsecond, now scrambling to compete against Sora in the race to zero.

Many of them will die, but may the AI slop continue anyway.

It's a race to zero margin. The people who win will have lots of existing distribution channels (customers), lots of money, or control over data. Those who innovate but don't have these things will be copied and will run out of money eventually, as sad as it is. The competition between those startups and the bigger players isn't fair.
Not really a microsecond. Sora was announced months ago.
Hollywood's days are numbered.

If you are a creative in this industry, start preparing to transition to another industry or adapt.

Your boss is highly likely to be toying around with this.

The first entirely AI generated film (with Sora or other AI video tools) to win an Oscar will be less than 5 years away.

Nothing I'm seeing here looks like it's going to destroy Hollywood.

I could see this tool maybe being used for generating establishing shots (generate a sweeping drone shot of a lighthouse looking out over a stormy sea), but then the actual talent work in a scene will be way more sensitive. The little details matter so much, and this feels so far from getting all of that right.

Sure, this is the worst it will ever be, things will improve, etc, but if we've learned anything with AI, it's that the last mile is often the hardest.

I'm not sure the little details are enough of a moat. Consider TikTok - people use cheap "special effects" to get the message across, e.g. if a man is playing a woman he might drape a towel over his head - it's silly and low quality but it gets the idea across to the viewer. Think too about programs like Archer or South Park that have (stylistically) low quality animation but still huge fan bases.

What I think this will unlock, maybe with a bit of improvement, is low quality video generation for a vast number of people. Do you have a short film idea? Know people with some? Likely millions of people will be able to use this to put together good enough short films - that yes, have terrible details, but are still good enough to watch. Some of those millions of newly enabled videos will have such strong ideas or writing behind them that it will make up for, or capitalize on, the weak video generation.

As the tools become easier, cheaper, faster, better etc more and more hobbyists will pick them up and try to use them. The user base will encourage the product to grow, and it will gradually consume film (assuming it can reach the point of being as or nearly as good as modern special effects).

I think of it like - when Steven Spielberg was young he used an 8mm camera, not as good as professional film equipment in the day, but good enough to create with. If I were a high school student interested in film I would absolutely be using stuff like this to create.

> What I think this will unlock, maybe with a bit of improvement, is low quality video generation for a vast number of people. Do you have a short film idea? Know people with some? Likely millions of people will be able to use this to put together good enough short films - that yes, have terrible details, but are still good enough to watch.

Sure, this is already happening on Reels, Tik Tok, etc. People are ok with low quality content on those platforms. Lazy AI will undoubtedly be more utilized here. But I don’t think it’s threatening Hollywood (well, aside from slowly destroying people’s attention spans for long form content, but that’s a different debate). People will still want high quality entertainment, even if they can also be satisfied with low fidelity stuff too.

I think this has always been true — think the difference between made for TV CGI and big-budget Hollywood movie CGI. Expectations are different in different mediums.

This current product is not good enough for Hollywood. As long as people have some desire for Hollywood level quality, this will not take those jobs.

The big caveat here is “yet” — when does this get good enough? And this is where my skepticism comes in, because the last mile is the hardest, and getting things mostly right isn’t really good enough for high quality content. (Remember how much the internet lost it over a Starbucks cup in Game of Thrones?)

The other caveat is that maybe our minds melt into stupidity to the point that we only watch things in low fidelity 10-second clips that AI can capably run amok with. In that case I don't really think AI takes over Hollywood so much as Hollywood (effectively, high fidelity long form content) just ceases to exist altogether. That is the sad timeline.

The day that 90 minutes of 3-second dolly shots wins an Oscar is the day cinema dies.
  • n144q
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
If you are ok with physics that is completely wrong, camera angles that just don't feel right, strange light effects, and all other kinds of distorted images/videos, maybe Hollywood is doomed. But I don't see that happening.

A reminder: as advanced as CGI is today, lots and lots of movies are still based on (very expensive) real-life scenery or miniature sets (just two of many examples), because those are far, far more realistic than what you get out of computers.

> entirely

What would you like to wager on this?

I'd take that bet at 10:1 odds.
I'd be careful.

OpenAI could be a big enough bubble in less than 5 years to buy the Oscar winner, even if the film is terrible.

Also, OP only said "an Oscar".

The Oscar committee could easily get themselves hyped enough on the AI bubble, to create an AI Oscar Film award.

No one said anything about making a "good" movie.

> OP only said "an Oscar"

...For soundtrack. (Sorry.)

But seriously: just as the democratization that made music production cheap brought some interesting or commercially successful endeavours, the surge of effort from people who previously could not realize their visions because of basic budget constraints will probably produce some very good results, even anthology-worthy ones - and lots of trash.

... Have you _seen_ the output from these things? I'm not sure actors need to panic just yet.
I mean, that's a bold claim. I'd first let ChatGPT win an Oscar for Best Screenplay, and only then would Sora come into the picture.
I hope somebody pays for 100,000 Pro subscriptions and uses them to have Sora generate videos 24/7. Maybe Elon?

Even if they use queues, I'm sure they are running at a loss and the GPU time is going to cost 100x more than what they charge.

Creating false demand like this could easily hurt their business, as they will believe people actually want to use that crap for that purpose.

Deliberately wasting electricity isn't exactly a moral win.
Generative AI is a waste of electricity by definition.
> by definition

"Definition" does not mean "...plus your own assumptions".

The results are there. Optimal, no; valuable in some way, yes.