ChatGPT, Grok, Gemini (et al): news and discussion about AI

Welcome to the Precious Metals Bug Forums


Man raped in jail after AI technology wrongfully identifies him in robbery, suit says​

On Jan. 22, 2022, a Sunglass Hut in Houston was robbed by two armed men. The men stole thousands of dollars from the store, according to a lawsuit.

During the robbery, the men ordered two employees into a back room and told them to stay there so they could get away, court records say.

As police were investigating, they got a call from a loss prevention employee with EssilorLuxottica, which is Sunglass Hut’s parent company. The employee told police they “could stop their investigation because he found their guy,” the lawsuit said.

The employee said he worked with Macy’s loss prevention using artificial intelligence and facial recognition software to identify the suspect as the 61-year-old man, the lawsuit said.

https://www.yahoo.com/news/man-raped-jail-ai-technology-210846029.html
 

It's true, LLMs are better than people – at creating convincing misinformation​

Computer scientists have found that misinformation generated by large language models (LLMs) is more difficult to detect than artisanal false claims hand-crafted by humans.

Researchers Canyu Chen, a doctoral student at Illinois Institute of Technology, and Kai Shu, assistant professor in its Department of Computer Science, set out to examine whether LLM-generated misinformation can cause more harm than the human-generated variety of infospam.

In a paper titled "Can LLM-Generated Misinformation Be Detected," they focus on the challenge of detecting misinformation – content with deliberate or unintentional factual errors – computationally. The paper has been accepted for the International Conference on Learning Representations later this year.

More:

 
I suppose there is a good chance that the advent of LLMs/AI is going to lead to a rebirth of the value in critical thinking and due diligence.
 
Battle of the bots

Condensed version of an article taken from here https://epjdatascience.springeropen.com/articles/10.1140/epjds/s13688-023-00440-3

Armies of bots battled on Twitter over Chinese spy balloon incident​

Tens of thousands of bots tussled on Twitter to try to shape the debate as a Chinese spy balloon flew over the US and Canada last year, according to an analysis of social media posts.

Kathleen Carley and Lynnette Hui Xian Ng at Carnegie Mellon University in Pennsylvania tracked nearly 1.2 million tweets posted by more than 120,000 users on Twitter – which has since been renamed X – between 31 January and 22 February 2023. All tweets contained the hashtags #chineseballoon and #weatherballoon, discussing the controversial airborne object that the US claimed China had used for spying.

The spy balloon saga of 2023 inflated US-China political tensions

The tweets were then geolocated using Twitter’s location feature, and checked with an algorithm called BotHunter, which looks for signs that an account isn’t controlled by a human.

“There are lots of different things [identifying a bot] is based off, but examples are whether your messages are being sent out so fast that a human literally can’t type that fast, or if you’re geotagged in London one minute, then in New Zealand when it’s physically impossible for a person to do so,” says Carley.

More:
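The two tells Carley gives – typing faster than humanly possible, and geotags that imply impossible travel – translate into simple rate and travel-speed checks. Here's a minimal sketch in Python; the thresholds and the haversine-based speed test are illustrative assumptions on my part, not BotHunter's actual model, which uses many more signals:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def looks_like_bot(posts, max_chars_per_sec=10, max_kmh=1000):
    """posts: list of (timestamp_sec, text, (lat, lon)) tuples, oldest first.
    Flags accounts that post faster than a human could type, or whose
    geotags imply travel no person could manage in the time elapsed."""
    for prev, cur in zip(posts, posts[1:]):
        dt = cur[0] - prev[0]
        if dt <= 0:
            return True  # two posts at the same instant (or out of order)
        # typing-speed check: characters of the new post per second elapsed
        if len(cur[1]) / dt > max_chars_per_sec:
            return True
        # travel-speed check: implied speed between consecutive geotags
        km = haversine_km(*prev[2], *cur[2])
        if km / (dt / 3600) > max_kmh:
            return True
    return False
```

Two posts a minute apart geotagged in London and Auckland imply roughly 18,000 km of travel in sixty seconds, so that account gets flagged; the same gap between two short posts from one location does not.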

 

Your AI Girlfriend Is a Data-Harvesting Horror Show​

Lonely on Valentine’s Day? AI can help. At least, that’s what a number of companies hawking “romantic” chatbots will tell you. But as your robot love story unfolds, there’s a tradeoff you may not realize you’re making. According to a new study from Mozilla’s *Privacy Not Included project, AI girlfriends and boyfriends harvest shockingly personal information, and almost all of them sell or share the data they collect.

“To be perfectly blunt, AI girlfriends and boyfriends are not your friends,” said Misha Rykov, a Mozilla Researcher, in a press statement. “Although they are marketed as something that will enhance your mental health and well-being, they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you.”

Mozilla dug into 11 different AI romance chatbots, including popular apps such as Replika, Chai, Romantic AI, EVA AI Chat Bot & Soulmate, and CrushOn.AI. Every single one earned the Privacy Not Included label, putting these chatbots among the worst categories of products Mozilla has ever reviewed. The apps mentioned in this story didn’t immediately respond to requests for comment.

You’ve heard stories about data problems before, but according to Mozilla, AI girlfriends violate your privacy in “disturbing new ways.” For example, CrushOn.AI collects details including information about sexual health, use of medication, and gender-affirming care. 90% of the apps may sell or share user data for targeted ads and other purposes, and more than half won’t let you delete the data they collect. Security was also a problem. Only one app, Genesia AI Friend & Partner, met Mozilla’s minimum security standards.

More:

 
^^^^^^

Don’t date robots — their privacy policies are terrible​

Talkie Soulful Character AI, Chai, iGirl: AI Girlfriend, Romantic AI, Genesia - AI Friend & Partner, Anima: My Virtual AI Boyfriend, Replika, Anima: AI Friend, Mimico - Your AI Friends, EVA AI Chat Bot & Soulmate, and CrushOn.AI are not just the names of 11 chatbots ready to play fantasy girlfriend — they’re also potential privacy and security risks.

A report from Mozilla looked at those AI companion apps, finding many are intentionally vague about the AI training behind the bot, where their data comes from, how they protect information, and their responsibilities in case of a data breach. Only one (Genesia) met its minimum standards for privacy.

Wired says the AI companion apps reviewed by Mozilla “have been downloaded more than 100 million times on Android devices.”

More:

 
Google Gemini is pro-pedophile and even refers to them as “minor-attracted persons” (MAPS).

AI does not have individual thought. It makes decisions based on an accumulation of data presented to it.

Meaning Gemini was taught this. This is the world the left wants.


 

Walmart CEO confirms new AI technology that will change checkouts forever - but will you want cameras prying into your cart?​

  • The tech is being rolled out to 600 Sam's Club stores this year
  • Sam's Club is owned by Walmart - which may use the tech in Walmart too
  • AI means employees will not have to check customer receipts at store exits
A checkout change is on the way at Sam's Club stores aimed at speeding up wait times for shoppers, the boss of its parent company Walmart has confirmed.

The membership-based warehouse store - Walmart's version of Costco - will allow customers to scan and pay for groceries with an app on their phone and just walk straight out.

'At Sam's Club US, we're rolling out new exit technology that enables our members to use scan and go to just walk out after completing their transaction on their phone,' Walmart CEO Doug McMillon said in an earnings call last week.

More:

 

AI Helps Litigation Funders Mine Court Dockets for Legal Gold​

  • Legalist Inc. uses the ‘truffle sniffer’ to find cases
  • Case Miner is the AI search tool for firm Qanlex
Companies that seek profits by funding lawsuits are using AI to help find cases to invest in, even as skepticism lingers about the tool’s usefulness.
Legalist Inc. built an algorithm called “the truffle sniffer” to search for lawsuits by focusing on variables such as the court, judge and case type. The tool was essential for the alternative asset manager’s growth, with $901 million now under management after its 2016 founding.

“The AI is a really crucial part of sourcing,” said Eva Shang, Legalist’s chief executive officer. She co-founded the company with the help of a $100,000 grant from billionaire Peter Thiel’s foundation after dropping out of Harvard.

Whether companies use AI to decide which cases to fund, or simply for research, the tool is becoming increasingly common in the $13.5 billion litigation finance industry, in which investors fund lawsuits and take a portion of any successful awards.

More:

 
I suppose there is a good chance that the advent of LLMs/AI is going to lead to a rebirth of the value in critical thinking and due diligence.
Hopefully, but I'm thinkin' it'll end up going in the exact opposite direction.

Right now AI is still being dialed in, so to speak. Once they get it to where most people see it as relatively accurate, it will become readily adopted.

What I see is a world where no one needs critical thinking skills, as whenever a problem comes up, you just ask your AI what to do. If the average person learns that they can get better answers than they could have come up with themselves, and it happens over and over, why wouldn't people end up just letting it figure everything out for them? People always end up taking what looks like the easy way, and thinking can be hard. Asking AI and getting an instant answer is easy peasy.
.....and I'm talkin' down the road a bit, not right now. Ie: where all this AI stuff will eventually lead society over the next generation if it keeps going at the rate it is, or even accelerates. As is likely.

Just like how today, thanks to cell phones, hardly anyone can remember a phone number. Even their own, a lotta times.
Because when it's all right there in an electronic list, most see no need to commit those #'s to memory anymore. The average person can now put those brain cells to work remembering some bs about pop culture instead.

Or how GPS makes it where most people no longer need to create a mental map in their head in order to know how to get around. Just let the GPS do that mental work for us.
Sure, occasionally it'll direct someone to drive into a lake, but the vast majority of time it does just fine for the average.

If AI gets to the point that most see its answers and decisions as being better than their own, that logic and reasoning part of most people's brains might shut off for everything greater than the most trivial of tasks.
Think Idiocracy, but with smart computers directing everything anyone does.

I've often read where people ponder what kind of world we're headed for. Usually along the lines of 1984, or perhaps some version of Brave New World.

However, after some thinking about where this stuff might be headed, I think it'll be more like 2112, with the Priests being the gov and tech companies. Ie: fascism.

We’ve taken care of everything
The words you read
The songs you sing
The pictures that give pleasure
To your eye
One for all and all for one
Work together
Common sons
Never need to wonder
How or why


Except there is no Solar Federation coming to save us.
 
My father has joked for many decades that he wanted to create an online church where people could visit to get some randomly generated spiritual guidance and then pay a tithe. A tax-free money suck that doesn't require any human effort (beyond the initial set up). I knew it wouldn't be long before someone used AI to do this.
 
 

MyShell, Blockchain Platform For Building 'AI Girlfriends' and Productivity Apps, Raises $11M​

MyShell, a Tokyo-based decentralized AI platform used to create "AI girlfriends," has raised $11 million in pre-series A funding for its AI app-building ecosystem.

The investment, which brings MyShell's total funding to $16.6 million, was led by Dragonfly, with additional participation from Delphi Ventures, Bankless Ventures, Maven11 Capital, Nascent, Nomad Capital and OKX Ventures. The round also attracted support from individual investors such as crypto investor and thought leader Balaji Srinivasan, NEAR's Illia Polosukhin and Paradigm's Casey K. Caruso.

MyShell pitches itself as an ecosystem for AI creators to develop and deploy applications. The new funds will be used to further the development of its open-source foundational model and to support its community "of over 1 million registered users and 50,000 creators," according to a statement.

MyShell aims to distinguish itself from other AI platforms with its focus on decentralization: The company has open-sourced key parts of its codebase and leans on blockchain tools to assist with model training and creator royalties.

Crypto will "uniquely enable" MyShell to "help creators and AI model researchers to monetize their work," and blockchains will serve as the foundation for "a dedicated platform to trade, and own AI assets," MyShell CEO Ethan Sun told CoinDesk in an email.

More:

 

Key takeaways​

  • A representative survey shows that almost half of US households use generative artificial intelligence (gen AI) tools. The use of and knowledge about gen AI are significantly lower among women, the elderly and households with lower income or educational attainment.
  • Respondents expect gen AI to bring more opportunities than risks for job prospects, especially among men and younger, more educated and higher-income households. Nonetheless, all groups trust gen AI less than humans, especially in the provision of financial and medical services.
  • Survey participants express concern over the risks of data breaches and data abuse and overwhelmingly support the regulation of AI. Consistent with previous surveys, respondents trust government agencies and financial institutions more than big techs to safeguard their data.
 
I don't believe that 50% number for a minute. I doubt 1% of the households in my neighborhood use AI tools or could even name one aside from ChatGPT.
 
Hmmm...


🦾 "We don't understand how it happened." AI has skills it wasn't taught, Russian scientist says

Trained on huge amounts of data, it has developed "multiple, unusual and surprising" skills, Professor Konstantin Vorontsov of the Russian Academy of Sciences told Sputnik.

☝️ What's more, the GPT-4 model acquired most of its abilities on its own without being given any examples, the researcher said. He admitted that the scientific community is at a loss to explain this result.

"We say that 'quantity has become quality'," he noted, adding, "but this philosophical explanation doesn't make up for our lack of understanding and confusion."


 
🧑‍💻 "It's not really intelligence, although it's very close." A Russian scientist on AI and the human brain

"The neural network trained on terabytes of text has absorbed almost all the knowledge accumulated by mankind, including the immense amount of textual content on the Internet," Professor Konstantin Vorontsov of the Russian Academy of Sciences told Sputnik.

Despite all this, the capacity of the model remains inferior to that of the human brain, he continued.

He added, "There is a growing conviction that artificial intelligence based on neural networks has a completely different basis and characteristics and cannot be compared with biological intelligence."

Despite the ability to retain information and make relevant decisions, machines are no more than "obedient assistants," the scientist insisted.

🗣 "We are building our human civilization, not a civilization of machines," he said. And he highlighted that AI is just one of the man-made technologies that pose a mortal threat to humanity.

"We will be able to survive if we approach everything from the point of view of civilizational goals and values, and constantly remind ourselves of them," he concluded.


 
I don't believe that 50% number for a minute. I doubt 1% of the households in my neighborhood use AI tools or could even name one aside from ChatGPT.
AI is the technology of the future.

Sure, we 'dabble' with it on occasion, but the real advances will happen down the road with the younger generations, once it becomes integrated into everyday life.

People will think AI was always around... kind of like the internet today vs 40 years ago, when it was a fledgling idea barely usable on terminals located in universities.
 
*Note: This short article looks at AI from the military's point of view.

HOW WILL AI CHANGE CYBER OPERATIONS?​

APRIL 30, 2024

The U.S. government somehow seems to be both optimistic and pessimistic about the impact of AI on cyber operations. On one hand, officials say AI will give the edge to cyber defense. For example, last year Army Cyber Command’s chief technology officer said, “Right now, the old adage is the advantage goes to the attacker. Today, I think with AI and machine learning, it starts to shift that paradigm to giving an advantage back over to the defender. It’s going to make it much harder for the offensive side.” On the other hand, the White House’s AI Executive Order is studded with cautionary language on AI’s potential to enable powerful offensive cyber operations. How can this be?

The rapid pace of recent advancements in AI is likely to significantly change the landscape of cyber operations, creating both opportunities as well as risks for cybersecurity. At the very least, both attackers and defenders are already discovering new AI-enabled tools, techniques, and procedures to enhance each of their campaigns. We can also expect the attack surface itself to change because AI-assisted coding will sometimes produce insecure code. AI systems and applications developed on top of them will also become subject to cyber attack. All of these changes complicate the calculus.

More:

 
I suppose there is a good chance that the advent of LLMs/AI is going to lead to a rebirth of the value in critical thinking and due diligence.


I do not have very high hopes for that. There are actually people out there who believe what they hear on CNN and NPR.
 


Researchers have built a swarm of miniature, snail-inspired robots, minus all the mucus. Instead, a retractable suction cup works in tandem with the remote-controlled machine’s tank-like treads to maneuver across both difficult terrain and over each other.

Biomimicry is nothing new within the field of robotics. But while many aquatic and flying examples can navigate three-dimensional environments, that often isn’t the case for bots relegated to walking, crawling, or rolling along the ground. Determined to find a potential solution, roboticists at the Chinese University of Hong Kong looked to shelled gastropods for their design cues.

The result, detailed recently in Nature Communications, is a troop of snailbots that can collaborate when an environment becomes too difficult for a single explorer. Each rubber tread system incorporates tiny magnets, above which the electronics, battery, microprocessor, and other components are housed within a bespoke metal “helmet.” When in “free mode,” the robots move across a surface much like a traditional tank or bulldozer. But when the going gets tough and it’s time to swap responsibilities, the team engages their snailbot’s “strong mode.”
...

More:

 
Here's some 'dabbling'

The CRAZIEST Race Hoax I’ve Ever Seen​

A school principal’s shocking racist rant went viral within his community and on the Internet. But upon further investigation, it was discovered that the rant was faked using artificial intelligence voice changing software by a disgruntled employee (who happened to be a black man). Is this a race hoax worse than Jussie Smollett? Let’s get into it.
 


...
Venice utilizes leading open-source AI models (we’re fond of Nous Research) to deliver text, code, and image generation to your web browser or mobile app.

No downloads. No installations of anything. And for basic use, no account necessary and the service is free.
...
... Venice applies these protective patterns to generative AI.
  • Your conversation history is stored only in your browser. Venice does not store or log any prompt or model responses on our servers.
  • Your inference requests (the messages you send) go through a proxy server, encrypted, directly to the decentralized compute resources.
  • The response from the AI is similarly streamed directly back again through the encrypted proxy server to your browser, never persisting anywhere other than your browser.
  • The GPUs that process your inference requests come from multiple decentralized providers, and while each specific decentralized server can see the text of one specific conversation, it never sees your entire history, nor does it know your identity.

The result:
  • What Venice knows: Your email and IP address, but not your conversation.
  • What the compute provider knows: a specific conversation, but not your email or IP address, and it can’t associate specific conversations with specific users.

Perfect privacy will only be achievable with FHE (we’ll get there) or running models locally (go for it). But, today, we believe Venice’s architecture is materially superior to any hosted AI service if you don’t want to be surveilled or censored.
...
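The split-knowledge design those bullet points describe – the service knows who you are but not what you said, while the compute provider sees one conversation's text but not who sent it – boils down to a stripping proxy with history kept client-side. This is a hypothetical sketch reconstructed from the excerpt above, not Venice's actual code; every name in it is made up for illustration:

```python
# Hypothetical sketch of the split-knowledge proxy described above.
# The front end knows identity; the compute node sees only one prompt.

def compute_node(anonymous_request):
    """Decentralized GPU provider: sees the text of one request only."""
    assert "email" not in anonymous_request and "ip" not in anonymous_request
    return {"reply": f"echo: {anonymous_request['prompt']}"}

def proxy_forward(request, node):
    """Front-end proxy: authenticates the user, then strips every
    identifying field before relaying the prompt onward."""
    assert request["email"] and request["ip"]   # what the service knows
    anonymous = {"prompt": request["prompt"]}   # no email, no IP, no history
    return node(anonymous)

# Conversation history lives only client-side (standing in for the
# browser's local storage); the server-side pieces never persist anything.
history = []
req = {"email": "user@example.com", "ip": "203.0.113.7", "prompt": "hi"}
resp = proxy_forward(req, compute_node)
history.append((req["prompt"], resp["reply"]))
```

The design choice this illustrates: because the proxy drops identity before forwarding and the full transcript exists only in the client's store, no single party ever holds both "who" and "what" – which is the privacy claim the excerpt is making.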


You can try it out here:
 
I just tried it. I entered:
show me a photo realistic image of silver going to the moon
and it said it couldn't generate an image, but gave me several paragraphs describing an image. I then noticed three icons below the input bar and clicked the image icon and re-entered the same query. Venice.ai gave me a 2GB .PNG image. I had to crop it a bit to display it here as an attachment:

silver to the moon.jpg
 
I saw an ad for this service... Amazing what technology is developing...



...
Huge Library of Voices

Our voice changer uses leading AI technology to retain the emotion from the original voice and audio input and apply it to new custom voices. Our huge library of different voices and intuitive interface allow anyone to access cutting-edge technology without expensive recording equipment. Changing your voice has never been easier with the best voice changer online.

Speech-to-speech AI

Many voice changers or voice generators sound robotic and not at all like natural voices. This is because they use simple text-to-speech software or voice effects to modify and create voices. The best voice changers utilize AI to create real-time speech-to-speech voice conversion in order to transfer your voice into a completely new voice, while retaining your emotion, emphasis and speech patterns.
...

 