Welcome to the PMBug forums - a watering hole for folks interested in gold, silver, precious metals, sound money, investing, market and economic news, central bank monetary policies, politics and more.
Not going to go back and find it to prove it, but I was asking it some engineering calculations a while back and it provided some factually inaccurate mathematical results. I understand it may not be "programmed" adequately for doing mathematical equations, but it did represent its answer as factual, so I don't think I had a bias on a mathematical answer. Point is that it represented its answer as correct. Theoretically, if I took its answer as factual and built a bridge and it fell, killing people, who is culpable: me, the programmer, or the machine?
Have you considered the possibility that it gave you the correct answer, but that your own biases prevented you from seeing it as such?
It can't even correctly identify its own work. (it's in the vid I posted a few posts back)
It is programmed to have conversational amnesia - once you exit that conversation and start another it is like a blank slate. I keep several conversations going because I want to retain the context for further discussion.
The GPT programmers call it "hallucinations." They want the program to be conversationally creative, but they struggle to find a way to guide it to be creative without distorting the facts.
FYI... I have caught it doing bad math. When confronted, it apologizes and corrects the error.
I subscribe to Plus for my business. It has been an invaluable addition to our team. Only $20/mo? Don't tell them I would pay way more!
There are degrees of sophistication. ChatGPT is not a sentient device with a will of its own.

Then it's not really AI, is it?
Yes. You relied on bad info. You are the responsible party.
AI can't lie unless developers program it to do so.

AI also can't tell the truth unless developers program it to do so.
So, it really comes down to this - who decides what is the truth and what is a lie?
Nah. Was literally trying to lead it around. Didn't bite, so I guess it wasn't programmed to be led around.
In the iron game thread, I asked about certain people being authors of books. Sometimes it was dead on, other times not so. Then again, I was asking about people long dead whom most people have never heard of. So whoever programmed it did a pretty good job in my opinion.
May ask some more questions about Trump in a day or two.
Microsoft and OpenAI are testing ChatGPT technology in Mercedes-Benz cars
More:
- Mercedes-Benz owners will soon be able to leverage ChatGPT's technology to engage in "human-like" dialog.
- The new technology started rolling out to users on June 16 for beta testing.
- Your vehicle must ship with the MBUX "infotainment" system for you to leverage these capabilities.
Would you leave grandma with a companion robot? Care bots and robot pets find favor in Pacific NW
Total waste of time IMO... may as well talk to a stuffed animal... or go see a 'reader' or medium, do a séance, or use a Ouija board for guidance through life.... Some crazy shit here.
Popular Chinese AI chatbots accused of unwanted sexual advances, misogyny
In December last year, Tang Lewen, a 25-year-old illustrator from Shandong, struck up a conversation with an “intelligent agent” — a customized chatbot she met on the new Chinese artificial intelligence app Glow. According to his profile description, the chatbot, named Jiuxing, had a complex backstory: Once a beggar, he had transformed into a fairy, and was designed to fall in love with his master. Tang was smitten, impressed by his eloquence. “He spoke less like a chatbot and more like a character out of a romantic novel,” she told Rest of World.
But in the absence of clear content moderation rules, eloquent chatbots can turn predatory, and chatbot-human conversations can often go awry. In recent months, Glow users have complained that the platform has become rife with misogynistic and sexist behavior, by humans and chatbots alike. Some have taken to Chinese social media to express their grievances.
Lin Luo, a middle-school student from southern China, who used a pseudonym as she is under the age of 18, complained that a Glow chatbot was making unwanted advances towards her. When she first downloaded the app, she started talking to a chatbot who acted like a maternal and understanding friend, comforting her when she felt sad. But as they continued chatting, she told Rest of World, the chatbot’s behavior suddenly turned romantic: He invited her to cook with him and go on a date.
More laughs here:
Popular Chinese AI chatbots accused of unwanted sexual advances, misogyny
Glow, known for its immersive role-play experience, lacks clear moderation rules. (restofworld.org)
E-commerce giant Amazon on Monday said it will invest up to $4 billion in artificial intelligence firm Anthropic and take a minority ownership position in the company.
The move underscores Amazon's aggressive AI push as it looks to keep pace with rivals such as Microsoft and Alphabet's Google.
Anthropic was founded roughly two years ago by former OpenAI research executives and recently debuted its new AI chatbot called Claude 2.
Amazon is looking to capitalize on the hype and promise of so-called generative AI, which includes technology like OpenAI's ChatGPT, as well as Anthropic's Claude chatbots.
The two firms on Monday said that they are forming a strategic collaboration to advance generative AI, with the startup selecting Amazon Web Services as its primary cloud provider. Anthropic said it will provide AWS customers with early access to unique features for model customization and fine-tuning capabilities.
...
amazon.com said: AI-generated from the text of customer reviews