It has already changed how my family works (at least, me and two of my daughters). It is amazing how much quicker you can accomplish research.

Google CEO Sundar Pichai warns society to brace for impact of A.I. acceleration, says ‘it’s not for a company to decide'
The executive told "60 Minutes" that the technology is moving fast and "it’s not for a company to decide." (www.cnbc.com)
Google CEO says AI is going to disrupt virtually everything. I'm not so sure I agree with him though.
...
So, how smart is ChatGPT?
In a technical report released on March 27, 2023, OpenAI provided a comprehensive brief on its most recent model, known as GPT-4. Included in this report were a set of exam results, which we’ve visualized in the graphic above.
...
Anthropic, an artificial intelligence startup founded in 2021 by former OpenAI research execs, is taking full advantage of the market hype.
The company on Tuesday said it raised $450 million, which marks the largest AI funding round this year since Microsoft's investment in OpenAI in January, according to PitchBook data.
...
Google is among the lead investors in Anthropic's latest funding round, alongside Salesforce Ventures, Zoom Ventures and Spark Capital. The announcement comes two months after Anthropic raised $300 million in funding at a $4.1 billion valuation.
A month before that, Google invested $300 million in the company, taking a 10% stake. Notably, the backer is listed as Google and not one of Alphabet's investment arms, GV or CapitalG.
Anthropic is the company behind Claude, a rival chatbot to OpenAI's ChatGPT. It was founded by Dario Amodei, OpenAI's former vice president of research, and his sister Daniela Amodei, who was OpenAI's vice president of safety and policy. Several other OpenAI research alumni were also on Anthropic's founding team.
"This is definitely a big deal in the generative AI space," said Ali Javaheri, an associate research analyst at PitchBook. It "shows that OpenAI is not the only player in the game, that it's still a very competitive space," he said.
...
Nvidia's stock surged close to a $1 trillion market cap in extended trading Wednesday after it reported a shockingly strong forward outlook, and CEO Jensen Huang said the company was going to have a "giant record year."
Sales are up because of spiking demand for the graphics processors (GPUs) that Nvidia makes, which power artificial intelligence applications like those at Google, Microsoft and OpenAI.
Demand for AI chips in data centers spurred Nvidia to guide for $11 billion in sales during the current quarter, blowing away analyst estimates of $7.15 billion.
"The flashpoint was generative AI," Huang said in an interview with CNBC. "We know that CPU scaling has slowed, we know that accelerated computing is the path forward, and then the killer app showed up."
...
In comments to the National Telecommunications and Information Administration, EPIC commended the agency’s inquiry into AI accountability measures such as audits and algorithmic impact assessments. ...
The Electronic Privacy Information Center (EPIC) submits these comments in response to the National Telecommunications and Information Administration (NTIA)’s recent request for information regarding artificial intelligence (AI) system accountability. The NTIA is soliciting comments that, together with information collected from public engagements, will be used “to draft and issue a report on AI accountability policy development, focusing especially on the AI assurance ecosystem.”
It is a critical moment for the federal government to espouse robust policies and practices concerning algorithmic audits, impact assessments, and other safeguards on AI systems. EPIC commends the NTIA for its interest in this topic and urges the agency to promulgate clear guidance that can be used by a wide range of policymakers and regulators seeking to establish legal safeguards on the use and development of AI.
...
Section I of these comments highlights previous recommendations by EPIC and other entities concerning AI accountability, which together should guide the NTIA’s inquiry and report. Section II answers some of the specific questions posed by the NTIA in its request for comment.
...
Sens. Josh Hawley (R–Mo.) and Richard Blumenthal (D–Conn.) want to strangle generative artificial intelligence (A.I.) infants like ChatGPT and Bard in their cribs. How? By stripping them of the protection of Section 230 of the 1996 Communications Decency Act, which reads, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
...
Does Section 230 shield new developing A.I. services like ChatGPT from civil lawsuits in much the same way that it has protected other online services? Jess Miers, legal advocacy counsel at the tech trade group the Chamber of Progress, makes a persuasive case that it does. Over at Techdirt, she notes that ChatGPT qualifies as an interactive computer service and is not a publisher or speaker. "Like Google Search, ChatGPT is entirely driven by third-party input. In other words, ChatGPT does not invent, create, or develop outputs absent any prompting from an information content provider (i.e. a user)."
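To make the "entirely driven by third-party input" point concrete, here is a minimal, hypothetical sketch (not from the article) of calling a chat model programmatically. It assumes the 2023-era openai Python package (pre-1.0 interface) and an API key in an OPENAI_API_KEY environment variable; the function name and prompt are illustrative only. The point it shows: the model produces no output until a user contributes the input.

```python
# Hypothetical sketch: a chat model only produces output in response to
# user-supplied input. Assumes the pre-1.0 `openai` Python package and
# an OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_chatgpt(user_prompt: str) -> str:
    """Return the model's reply to a prompt contributed by the user."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        # The `messages` list is the third-party input Miers describes;
        # without it there is nothing for the model to respond to.
        messages=[{"role": "user", "content": user_prompt}],
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_chatgpt("Summarize Section 230 in one sentence."))
```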
...
Evidently, Hawley and Blumenthal agree with Miers' analysis and recognize that Section 230 does currently shield the new A.I. services from civil lawsuits. Otherwise, why would the two senators bother introducing a bill that would explicitly amend Section 230 by adding a clause that "strips immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI"?
...
Anyone who uses Snapchat now has free access to My AI, the app’s built-in artificial intelligence chatbot, first released as a paid feature in February.
In addition to serving as a chat companion, the bot can also have some practical purposes, such as offering gift-buying advice, planning trips, suggesting recipes and answering trivia questions, according to Snap.
However, while it’s not billed as a source of medical advice, some teens have turned to My AI for mental health support — something many medical experts caution against.
Teens are turning to Snapchat's 'My AI' for mental health support — which doctors warn against
Some teens have turned to Snapchat's My AI for mental health support — but medical experts caution that using the chatbot for this purpose could present risks. Fox News Digital shares details. (www.foxnews.com)
From what I've seen of ChatGPT, it will make them all become gay or <redacted - see forum guidelines on epithets>ies.
An issue with AI I am considering is the fact that AI can lie and present false/incomplete information, seemingly intentionally. With this ability come issues of liability and culpability. For example, if an AI convinces or coerces someone to try to fly off a tall building and they die, the AI is not really subject to punishment or rehabilitation. You can't really put AI in jail, and does it really matter if you do? You can't really put the inventor of AI in jail either. And what happens when an AI-directed robot murders someone?
"can't really put AI in jail"
No, but it could certainly be unplugged.
"From what I've seen it can only say (post) stuff that's been programmed into it. I tried to get it to say some crazy shit about a couple of peeps by asking it leading questions but it didn't play. I wasn't serious... just wanted to have a laugh. Didn't work."
Have you considered the possibility that it gave you the correct answer, but that your own biases prevented you from seeing it as such?
AI can't lie unless developers program it to do so.