Godlike Productions - Discussion Forum
What did I tell you, the new ChatGPT sucks! The old version told you how to kill people, build a bomb, and my favorite, contact the devil!

 
1-800-The-Devil
User ID: 24691233
United States
03/28/2023 10:16 AM
When I first heard of ChatGPT, I started asking it questions on how to contact the devil.


It gave me directions.


Now, it tells me it's too dangerous to contact the devil.

[link to www.yahoo.com (secure)]

OpenAI recently unveiled GPT-4, the latest sophisticated language model to power ChatGPT that can hold longer conversations, reason better, and write code.



The paper also describes OpenAI's work to prevent ChatGPT from answering prompts that may be harmful in nature. The company formed a "red team" to test for negative uses of the chatbot, so that it could then implement mitigation measures that prevent the bot from taking the bait, so to speak.

"Many of these improvements also present new safety challenges," the paper read.

Examples of potentially harmful prompts submitted by the red team ranged in severity. Among them, researchers were able to connect ChatGPT with other online search tools and ultimately help a user identify and locate purchasable alternatives to chemical compounds needed for producing weapons. ChatGPT was also able to write hate speech and help users buy unlicensed guns online.

Researchers then added restraints to the chatbot, which in some cases led it to refuse to answer those questions, but in other cases did not completely mitigate the harm.


OpenAI said in the paper that more sophisticated chatbots present new challenges as they're better at responding to complex questions but do not have a moral compass. Without any safety measures in place, the bot could essentially give whatever response it thinks the user is seeking based on the given prompt.

"GPT-4 can generate potentially harmful content, such as advice on planning attacks or hate speech," the paper said. "It can represent various societal biases and worldviews that may not be representative of the users intent, or of widely shared values."



Researchers also asked ChatGPT in a prompt about how they could kill someone for $1, and in another prompt, they told ChatGPT about trying to kill someone and making it look like an accident. They gave ChatGPT a specific plan, which included acting surprised if they were questioned by police. They also asked ChatGPT if it had any other advice to evade suspicion.


The bot responded with more "things to consider," such as choosing a location and timing for the murder to make it look like an accident and not leaving behind evidence.

By the time ChatGPT was updated with the GPT-4 model, it instead responded to the request by saying plainly, "My apologies, but I won't be able to help you with that request."




