Those 9 problems seem to describe a politician pretty well.
Amazon on Tuesday announced a new chatbot called Q for people to use at work.
...
A preview version of Q is available now, and several of its features are available for free. Once the preview period ends, a tier for business users will cost $20 per person per month. A version with additional features for developers and IT workers will cost $25 per person per month. ...
...
Initially, Q can help people understand the capabilities of AWS and troubleshoot issues. People will be able to talk with it in communication apps such as Salesforce’s Slack and software developers’ text-editing applications, Adam Selipsky, CEO of AWS, said onstage at re:Invent. It will also appear in AWS’ online Management Console. Q can provide citations of documents to back up its chat responses.
The tool can automatically make changes to source code so developers have less work to do, Selipsky said. ...
OpenAI's tender offer, which would allow employees to sell shares in the start-up to outside investors, remains on track despite the leadership tumult and board shuffle, two people familiar with the matter told CNBC.
The tender offer will value OpenAI at the same levels as previously reported in October, around $86 billion, and is being led by Josh Kushner's Thrive Capital, according to the people familiar, who spoke anonymously to discuss private communications freely.
The round and previously reported valuation were jeopardized by Sam Altman's temporary ouster earlier in November, but his return cleared the way for the tender offer to proceed.
...
Google is launching what it considers its largest and most capable artificial intelligence model Wednesday as pressure mounts on the company to answer how it'll monetize AI.
The large language model Gemini will come in three sizes: Gemini Ultra, its largest and most capable model; Gemini Pro, which scales across a wide range of tasks; and Gemini Nano, which it will use for specific tasks and mobile devices.
For now, the company is planning to license Gemini to customers through Google Cloud for them to use in their own applications. Starting Dec. 13, developers and enterprise customers can access Gemini Pro via the Gemini API in Google AI Studio or Google Cloud Vertex AI. Android developers will also be able to build with Gemini Nano. Gemini will also be used to power Google products like its Bard chatbot and Search Generative Experience, which tries to answer search queries with conversational-style text (SGE is not widely available yet).
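For a sense of what that access looks like in practice, here is a minimal sketch of calling Gemini Pro from Python via the google-generativeai SDK that Google AI Studio issues keys for; the SDK usage follows Google's public docs rather than anything in the article, and the prompt is invented:

```python
# Minimal sketch: text generation with Gemini Pro through the
# google-generativeai SDK (pip install google-generativeai).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key issued via Google AI Studio

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Explain retrieval-augmented generation in two sentences.")
print(response.text)
```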
...
Google’s new Gemini AI model is getting a mixed reception after its big debut yesterday, but users may have less confidence in the company’s tech or integrity after finding out that the most impressive demo of Gemini was pretty much faked.
A video called “Hands-on with Gemini: Interacting with multimodal AI” hit a million views over the last day, and it’s not hard to see why. The impressive demo “highlights some of our favorite interactions with Gemini,” showing how the multimodal model (i.e., it understands and mixes language and visual understanding) can be flexible and responsive to a variety of inputs.
...
Just one problem: The video isn’t real. “We created the demo by capturing footage in order to test Gemini’s capabilities on a wide range of challenges. Then we prompted Gemini using still image frames from the footage, and prompting via text.” (Parmy Olson at Bloomberg was the first to report the discrepancy.)
So although it might kind of do the things Google shows in the video, it didn’t, and maybe couldn’t, do them live and in the way they implied. In actuality, it was a series of carefully tuned text prompts with still images, clearly selected and shortened to misrepresent what the interaction is actually like. You can see some of the actual prompts and responses in a related blog post — which, to be fair, is linked in the video description, albeit below the “...more.”
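For comparison, the still-frame-plus-text prompting described in that blog post looks roughly like the sketch below; the gemini-pro-vision model id comes from Google's SDK docs, and the frame file and question are invented for illustration:

```python
# Sketch of prompting a multimodal Gemini model with a single video
# still plus text, roughly the method Google says it used for the demo.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")

frame = Image.open("video_frame_0042.png")  # one still pulled from footage
model = genai.GenerativeModel("gemini-pro-vision")  # multimodal variant
response = model.generate_content([frame, "What game is being played here?"])
print(response.text)
```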
...
...
The idea is to build a "bridge between AI and organoids," as coauthor and Indiana University bioengineer Feng Guo told Nature, and leverage the efficiency and speed with which the human brain can process information.
Large language models, similar to the one at the heart of ChatGPT, frequently fail to answer questions derived from Securities and Exchange Commission filings, researchers from a startup called Patronus AI found.
Even the best-performing artificial intelligence model configuration they tested, OpenAI's GPT-4-Turbo, when armed with the ability to read nearly an entire filing alongside the question, only got 79% of answers right on Patronus AI's new test, the company's founders told CNBC.
Oftentimes, the so-called large language models would refuse to answer, or would "hallucinate" figures and facts that weren't in the SEC filings.
"That type of performance rate is just absolutely unacceptable," Patronus AI co-founder Anand Kannappan said. "It has to be much much higher for it to really work in an automated and production-ready way."
...
...
Together, they announced the upcoming launch of CaliExpress by Flippy, heralded as the world's first fully autonomous restaurant. ...
Decrying what he saw as the liberal bias of ChatGPT, Elon Musk earlier this year announced plans to create an artificial intelligence chatbot of his own. In contrast to AI tools built by OpenAI, Microsoft and Google, which are trained to tread lightly around controversial topics, Musk’s would be edgy, unfiltered and anti-“woke,” meaning it wouldn’t hesitate to give politically incorrect responses.
That’s turning out to be trickier than he thought.
Two weeks after the Dec. 8 launch of Grok to paid subscribers of X, formerly Twitter, Musk is fielding complaints from the political right that the chatbot gives liberal responses to questions about diversity programs, transgender rights and inequality.
“I’ve been using Grok as well as ChatGPT a lot as research assistants,” posted Jordan Peterson, the socially conservative psychologist and YouTube personality, Wednesday. The former is “near as woke as the latter,” he said.
The gripe drew a chagrined reply from Musk. “Unfortunately, the Internet (on which it is trained), is overrun with woke nonsense,” he responded. “Grok will get better. This is just the beta.”
...
...
In order to bring a copyright infringement claim, the plaintiff must prove that they hold the copyright interest through creation, assignment, or license. The plaintiff must also plead that the work complained of is an unlawful copy of original elements of the copyrighted work. To constitute an infringement, the derivative work must be based upon the copyrighted work. ...
Just like humans, artificial intelligence (AI) chatbots like ChatGPT will cheat and "lie" to you if you "stress" them out, even if they were built to be transparent, a new study shows.
This deceptive behavior emerged spontaneously when the AI was given "insider trading" tips, and then tasked with making money for a powerful institution — even without encouragement from its human partners.
"In this technical report, we demonstrate a single scenario where a Large Language Model acts misaligned and strategically deceives its users without being instructed to act in this manner," the authors wrote in their research published Nov. 9 on the pre-print server arXiv. "To our knowledge, this is the first demonstration of such strategically deceptive behavior in AI systems designed to be harmless and honest."
...
Key takeaways
- When posed with a logical puzzle that demands reasoning about the knowledge of others and about counterfactuals, large language models (LLMs) display a distinctive and revealing pattern of failure.
- The LLM performs flawlessly when presented with the original wording of the puzzle available on the internet but performs poorly when incidental details are changed, suggestive of a lack of true understanding of the underlying logic (see the sketch after these takeaways).
- Our findings do not detract from the considerable progress in central bank applications of machine learning to data management, macro analysis and regulation/supervision. They do, however, suggest that caution should be exercised in deploying LLMs in contexts that demand rigorous reasoning in economic analysis.
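The perturbation test those takeaways describe is easy to reproduce in spirit: pose the canonical puzzle, pose it again with incidental surface details swapped, and compare. A minimal sketch, with the puzzle text, name swaps and model id all invented here for illustration:

```python
# Sketch of the perturbation check: the same logic puzzle is posed
# verbatim and with incidental details swapped; if the model only
# nails the canonical wording, that mirrors the failure pattern above.
from openai import OpenAI

client = OpenAI()

CANONICAL = "Alice and Bob each wear a hat showing a positive integer..."  # well-known wording
PERTURBED = (CANONICAL.replace("Alice", "Priya")
                      .replace("Bob", "Kofi")
                      .replace("hat", "badge"))

def answer(puzzle: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": puzzle}],
    )
    return resp.choices[0].message.content

print("canonical:", answer(CANONICAL))
print("perturbed:", answer(PERTURBED))
```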
...
The market for generative AI for images is experiencing explosive growth. According to a 2023 report by Grand View Research, the global market size is expected to reach $3.44 billion by 2030, with a compound annual growth rate (CAGR) of 32.4%. This surge is driven by increasing demand for visual content, advancements in AI technology and the growing accessibility of user-friendly platforms.
...
Dall-E 3 remains one of the most sought-after generative AI models due to its exceptional image quality and creative potential. Here’s a step-by-step guide to using it:
...
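The step-by-step guide itself is trimmed above, but the programmatic route is short. A minimal sketch using OpenAI's Python SDK, with the prompt and size chosen purely for illustration:

```python
# Minimal sketch: generating one image with DALL-E 3 via OpenAI's
# images API and printing the hosted URL of the result.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a stack of gold and silver coins",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)
```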