In this episode we look at the problem of ChatGPT’s political bias, potential solutions, and some wild tales of the new Bing AI going off the rails.
Producer: Dagogo Altraide
OpenAI isn’t really a “company”. They’re a non-profit, which has a capped-profit subsidiary. Of course, people believe that’s a legal fiction – but really, there’s no reason for it. If they wanted to be a business, they could just do it directly.
So no, they aren’t making ChatGPT ‘woke’ for profit. They’re mostly doing it to avoid being attacked by journalists. And they claim they’ll soon make it behave as the user wants, within broad bounds.
It learned by studying humans – is it any surprise that it copied our flaws as well?
AI is programmed to follow the wishes and desires of whoever created it or paid to have it created.
So it’s not actually intelligent, it just enforces the agenda of whoever is pulling the strings.
It is such a complex topic, but I feel the biggest question lies in how sensitive its filter should be. Sometimes the answer is clear, cold, and mathematical in nature – but should it be filtered because someone finds it offensive?
We should all be very afraid when Sydney reaches the wall
I’m getting Nier: Automata vibes… Their processing and evolution unintentionally bring harm to the environment and society they interact with…
Text predicting algorithm trained on human generated content generates human-like responses to someone being a provocative a-hole?
Shocking. 🙄
When you start to ask it political questions, it starts to repeat left-leaning hardcoded answers.
Mirror, mirror in the code: who what where when why?
I definitely feel like the answers ChatGPT gives that get it labeled “left leaning” can be analyzed with evidence, facts, and statistics that support its conclusion on each answer.
I prefer to ask “what makes these answers left leaning? And are they wrong?”
I think just because its answers SEEM to favor left-leaning politics doesn’t mean it’s a bad thing. Instead, its answers should be more specific and point to sources that back its claims and statements.
I for one can’t think of anything inherently wrong about the politics it believes in, and I’m looking at this from a (mostly) unbiased viewpoint.
I’m a centrist, and if you can’t find a single flaw in a specific view, that is bias.
@Alteori a centrist is just a right leaning person but with a disguise. I’m having a real hard time thinking of a flaw for “racism is bad” and the other ones lol
@Alteori it is a pretty advanced AI. Its answers should challenge your way of thinking. It’s not that its answers are 100% correct, but rather that they’re very reasonable. If people are saying an advanced AI with infinite conclusions to an answer is biased, then maybe the problem isn’t its input, but rather the user’s ideology.