Friday, October 24, 2025

My Thoughts on AI Videos

I'm not sure there was ever a good time for this technology to come out, but it's particularly unfortunate that it had to be developed at the exact moment we are witnessing a complete lack of scruples from our politicians when it comes to honesty. One can only imagine what dishonest uses AI-generated videos could be put to in the future.
Or, more likely, in the future we are going to see politicians claim that real video footage is actually AI-generated whenever the news shows something they'd rather we didn't see. And in fact, Trump has already started doing this. (Because of course he has.)
A democracy cannot function without accurate information. The potential for this technology to undermine our shared understanding of factual reality is frightening.


And the benefits of this technology are... what, exactly? How will these AI-generated videos benefit our society in any way?

I've normally got a fairly libertarian streak in me, but in this case I think we need strong legislation to control what is coming. I think the best future would be one in which this technology were outlawed and these machines were physically destroyed.

2 comments:

Futami-chan said...

AI stuff is pretty good for generating ecchi stuff like roleplay, and it could very well be used to generate corn too. Sadly, OpenAI and others don't allow that, which renders this very technology useless and destroys their own ROI. Shame on you, OpenAI. It's like you know very well how you can avoid your own economic collapse but still decide to keep doing it because of dogmas.

Futami-chan said...

Actually, wait: almost a decade ago people were already using DeepFake to edit celebrities' faces into videos. If it wasn't really concerning then, I don't see much more to be concerned about now. If bad actors want to do bad things, they have always been able to do that; some easy-to-access, publicized technology is not gonna empower anybody but easy scammers. Sora 2 and the others just make it a tad easier.

AI companies don't want lawsuits anyway; even without regulations they still have to care about safety issues. There's an AI chat platform called Character AI, which months ago had to implement some measures after a boy committed suicide and his parents blamed the app for it - now every single message you submit that reads like you're suicidal gets deleted immediately, and the screen shows a warning that basically tells people, politely and formally, "If you feel suicidal then take your crappy thoughts elsewhere, we don't want you getting us into trouble."