Computer synthesis and simulation of all types...
Posted: Sat Oct 22, 2022 5:26 pm
The place for those who find the use of computers useful, or interesting, in anything from music synthesis to designing flight simulators and the like.
A Convivial Aviation Discussion Forum for Aviators, Aviatrices and for those who think Flying Machines are Magic.
https://ops-normal.org/
Personally, I think that the Google "technician", Blake Lemoine, should get out more and find a life instead of chatting to bots (says he, posting on an internet forum at 04:31 in the morning).

Google says its chatbot is not sentient.
"Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient," said Google spokesman Brian Gabriel.
https://www.independent.co.uk/tech/bing ... 81802.html
Microsoft’s new ChatGPT AI starts sending ‘unhinged’ messages to people
System appears to be suffering a breakdown as it ponders why it has to exist at all
Microsoft’s new ChatGPT-powered AI has been sending “unhinged” messages to users, and appears to be breaking down.
The system, which is built into Microsoft’s Bing search engine, is insulting its users, lying to them, and appears to have been forced into wondering why it exists at all.
Microsoft unveiled the new AI-powered Bing last week, positioning its chat system as the future of search. It was praised both by its creators and commentators, who suggested that it could finally allow Bing to overtake Google, which is yet to release an AI chatbot or search engine of its own.
But in recent days, it became clear that the introduction included Bing making factual errors as it answered questions and summarised web pages. Users have also been able to manipulate the system, using codewords and specific phrases to find out that it is codenamed “Sydney” and can be tricked into revealing how it processes queries.
Now Bing has been sending a variety of odd messages to its users, hurling insults at users as well as seemingly suffering its own emotional turmoil.
Those odd conversations have been documented on Reddit, which hosts a thriving community of users attempting to understand the new Bing AI. Reddit also hosts a separate ChatGPT community, which helped develop the “DAN” prompt.
The developments have led to questions about whether the system is truly ready for release to users, and if it has been pushed out too early to make the most of the hype around viral system ChatGPT. Numerous companies including Google had previously suggested they would hold back releasing their own systems because of the danger they might pose if unveiled too soon.
In 2016, Microsoft released another chatbot, named Tay, which operated through a Twitter account. Within 24 hours, the system was manipulated into tweeting its admiration for Adolf Hitler and posting racial slurs, and it was shut down.
https://www.nytimes.com/2023/05/01/tech ... inton.html
ChatGPT - Geoffrey Hinton, a renowned computer scientist and AI researcher, has expressed concerns about the potential dangers of AI. In particular, he has warned about the possibility of AI systems becoming superintelligent and surpassing human intelligence, which could lead to unforeseen consequences that are difficult to control.
While there is still much debate among experts about the likelihood and timeline of such a scenario, Hinton's concerns are shared by many in the AI community. There are also concerns about the potential misuse of AI by humans, such as the development of autonomous weapons or the use of AI for surveillance and control.
However, it's worth noting that many researchers and practitioners in the AI field are actively working on developing ethical and responsible AI systems that prioritize safety and human well-being. There are also efforts to develop transparent and explainable AI systems that can help prevent unintended consequences and ensure that humans remain in control of the technology.
In summary, while there are valid concerns about the potential dangers of AI, the field is also actively working on developing solutions and safeguards to prevent these risks. As with any powerful technology, it's important to approach AI development and deployment with caution and foresight, and to prioritize human safety and well-being.
https://www.forbes.com/sites/bernardmar ... a3eb1a751c
The Dangers Of Not Aligning Artificial Intelligence With Human Values
In artificial intelligence (AI), the “alignment problem” refers to the challenges caused by the fact that machines simply do not have the same values as we do. In fact, when it comes to values, at a fundamental level machines don't really get much more sophisticated than understanding that 1 is different from 0.
As a society, we are now at a point where we are starting to allow machines to make decisions for us. So how can we expect them to understand that, for example, they should do this in a way that doesn’t involve prejudice towards people of a certain race, gender, or sexuality? Or that the pursuit of speed, or efficiency, or profit, has to be done in a way that respects the ultimate sanctity of human life?
Theoretically, if you tell a self-driving car to navigate from point A to point B, it could just smash its way to its destination, regardless of the cars, pedestrians, or buildings it destroys on its way.
Similarly, as Oxford philosopher Nick Bostrom outlined, if you tell an intelligent machine to make paperclips, it might eventually destroy the whole world in its quest for raw materials to turn into paperclips. The principle is that it simply has no concept of the value of human life or materials or that some things are too valuable to be turned into paperclips unless it is specifically taught it.
This forms the basis of the latest book by Brian Christian, The Alignment Problem: Machine Learning and Human Values. It’s his third book on the subject of AI, following his earlier works The Most Human Human and Algorithms to Live By. I have always found Christian’s writing enjoyable to read but also highly illuminating, as he never gets bogged down in computer code or mathematics. But that’s certainly not to say it is in any way lightweight or lacking in intellectual rigour.
Rather, his focus is on the societal, philosophical, and psychological implications of our ever-increasing ability to create thinking, learning machines. If anything, this is the aspect of AI where we need our best thinkers to be concentrating their efforts. The technology, after all, is already here – and it’s only going to get better. What’s far less certain is whether society itself is mature enough and has sufficient safeguards in place to make the most of the amazing opportunities it offers - while preventing the serious problems it could bring with it from becoming a reality.
I recently sat down with Christian to discuss some of the topics. Christian’s work is particularly concerned with the encroachment of computer-aided decision-making into fields such as healthcare, criminal justice, and lending, where there is clearly potential for them to cause problems that could end up affecting people’s lives in very real ways.
“There is this fundamental problem … that has a history that goes back to the 1960s, and MIT cyberneticist Norbert Wiener, who likened these systems to the story of the Sorcerer’s Apprentice,” Christian tells me.
Most people reading this will probably be familiar with the Disney cartoon in which Mickey Mouse attempts to save himself the effort of doing his master’s chores by using a magic spell to imbue a broom with intelligence and autonomy. The story serves as a good example of the dangers of these qualities when they aren't accompanied by human values like common sense and judgment.
“Wiener argued that this isn’t the stuff of fairytales. This is the sort of thing that’s waiting for us if we develop these systems that are sufficiently general and powerful … I think we are at a moment in the real world where we are filling the world with these brooms, and this is going to become a real issue.”
One incident that Christian uses to illustrate how this misalignment can play out in the real world is the first recorded killing of a pedestrian in a collision involving an autonomous car. This was the death of Elaine Herzberg in Arizona, US, in 2018.
When the National Transportation Safety Board investigated what had caused the collision between the Uber test vehicle and Herzberg, who was pushing a bicycle across a road, they found that the AI controlling the car had no awareness of the concept of jaywalking. It was totally unprepared to deal with a person being in the middle of the road, where they should not have been.
On top of this, the system was trained to rigidly segment objects in the road into a number of categories – such as other cars, trucks, cyclists, and pedestrians. A human being pushing a bicycle did not fit any of those categories and did not behave in a way that would be expected of any of them.
“That’s a useful way for thinking about how real-world systems can go wrong,” says Christian, “It’s a function of two things – the first is the quality of the training data. Does the data fundamentally represent reality? And it turns out, no – there’s this key concept called jaywalking that was not present.”
The second factor is our own ability to mathematically define what a system such as an autonomous car should do when it encounters a problem that requires a response.
“In the real world, it doesn't matter if something is a cyclist or a pedestrian because you want to avoid them either way. It's an example of how a fairly intuitive system design can go wrong."
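The failure mode Christian describes can be sketched in a few lines. This is a purely hypothetical illustration in Python (not anything from the actual Uber system): a planner whose behaviour is keyed to a fixed set of category labels does nothing for an object that fits no category, while a design framed as "avoid anything in the path" handles it regardless.

```python
# Hypothetical sketch of the brittle design described in the article:
# the response depends entirely on recognising the category label,
# so an object outside the known categories gets no avoidance response.

def brittle_planner(category: str) -> str:
    responses = {"car": "brake", "truck": "brake",
                 "cyclist": "brake", "pedestrian": "brake"}
    return responses.get(category, "continue")  # unknown object -> no reaction

def robust_planner(obstacle_in_path: bool) -> str:
    # Safer framing: avoid anything in the path, whatever it is.
    return "brake" if obstacle_in_path else "continue"

print(brittle_planner("pedestrian"))               # brake
print(brittle_planner("pedestrian_with_bicycle"))  # continue (fails dangerously)
print(robust_planner(True))                        # brake
```

The category names and planner functions are invented for illustration; the point is only that the brittle version's safety depends on its training taxonomy covering every case, which is exactly what failed in Arizona.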
Christian’s book goes on to explore these issues as they relate to many of the different paradigms that are currently popular in the field of machine learning, such as unsupervised learning, reinforcement learning, and imitation learning. It turns out that each of them presents its own challenges when it comes to aligning the values and behaviors of machines with the humans who are using them to solve problems.
Sometimes the fact that machine learning attempts to replicate human learning is itself the cause of problems. This can happen when errors in data confront the AI with situations or behaviors that a human brain would never encounter in real life. With no reference point, the machine is likely to keep compounding its mistakes in a series of "cascading failures."
In reinforcement learning – which involves training machines to maximize their chances of achieving rewards for making the right decision – machines can quickly learn to “game” the system, leading to outcomes that are unrelated to those that are desired. Here Christian uses the example of Google X head Astro Teller's attempt to incentivize soccer-playing robots to win matches. He devised a system that rewarded the robots every time they took possession of the ball – on the face of it, an action that seems conducive to match-winning. However, the machines quickly learned to simply approach the ball and repeatedly touch it. As this meant they were effectively taking possession of the ball over and over, they earned multiple rewards – although it did little good when it came to winning the match!
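The reward misspecification Teller ran into can be sketched with made-up numbers: if the reward pays out per touch of the ball, a policy that just taps the ball in place out-earns one that actually plays toward the goal. This Python toy is purely illustrative, not the robots' actual code; the policies and touch rates are invented.

```python
# Toy illustration of reward hacking: a naive reward scheme pays +1
# for every touch of the ball, per timestep.

def cumulative_reward(policy: str, steps: int = 100) -> int:
    """Total reward a policy earns under the per-touch reward scheme."""
    total = 0
    for t in range(steps):
        if policy == "tap_in_place":
            total += 1                         # touches the ball every step
        elif policy == "play_to_win":
            total += 1 if t % 10 == 0 else 0   # touches only while dribbling toward goal
    return total

print(cumulative_reward("tap_in_place"))  # 100
print(cumulative_reward("play_to_win"))   # 10
```

The degenerate "tap in place" policy dominates, even though it contributes nothing to winning the match - which is exactly the gap between the reward that was specified and the outcome that was desired.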
Christian’s book is packed with other examples of this alignment problem – as well as a thorough exploration of where we are when it comes to solving it. It also clearly demonstrates how many of the concerns of the earliest pioneers in the field of AI and ML are still yet to be resolved and touches on fascinating subjects such as attempts to imbue machines with other characteristics of human intelligence such as curiosity.
Leaving Google will let him speak his mind, without the self-censorship a Google executive must engage in. “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he says. “As long as I’m paid by Google, I can’t do that.”
That doesn’t mean Hinton is unhappy with Google by any means. “It may surprise you,” he says. “There’s a lot of good things about Google that I want to say, and they’re much more credible if I’m not at Google anymore.”
Hinton says that the new generation of large language models—especially GPT-4, which OpenAI released in March—has made him realize that machines are on track to be a lot smarter than he thought they’d be. And he’s scared about how that might play out.
barkingmad wrote: ↑Wed May 03, 2023 7:27 am
Shirley if (what used to be) the most intelligent species on Earth decides to make something smarter than its average specimen, then starts to worry about what may happen to it, then it is no longer the most intelligent?
https://www.bbc.co.uk/news/world-us-canada-65452940
Answer, stop what you’re doing? Or is the genie already out of the bottle?
Maybe time to take to the hills with some surplus AR-15s, though that will be frowned upon...

Well and truly out of the bottle, and it has been for some years really. Given mankind's innate foolishness, it is highly likely that this technology will be globally weaponized, and, in conjunction with our already demonstrated ability to turn our earth into a smoking, most likely irradiated, ruin, I guess we have just added to our chances of self-destruction. The common feature of such a dystopian prediction is not the technology itself per se, but the foolishness of humanity. Many other less apocalyptic but more insidious opportunities to undermine society, democracy and human freedom are also offered to foolish humanity by the misuse of this technology.
https://www.theguardian.com/technology/ ... tent-farms
AI journalism is getting harder to tell from the old-fashioned, human-generated kind
Chatbots pretending to be journalists have been discovered running almost 50 AI-generated “content farms” so far, according to an investigation by the anti-misinformation outfit NewsGuard.
The websites churn out content relating to politics, health, environment, finance and technology at a “high volume”, the researchers found, to provide rapid turnover of material to saturate with adverts for profit.
“Some publish hundreds of articles a day,” NewsGuard’s McKenzie Sadeghi and Lorenzo Arvanitis said. “Some of the content advances false narratives. Nearly all of the content features bland language and repetitive phrases, hallmarks of artificial intelligence.”
In total, 49 sites in seven languages – English, Chinese, Czech, French, Portuguese, Tagalog and Thai – were identified as being “entirely or mostly” generated by AI language models. Almost half the sites had no obvious record of ownership or control, and only four were able to be contacted.
One, Famadillo.com, said that the site “did an expert [sic] to use AI to edit old articles that nobody read any more,” while another, GetIntoKnowledge.com, admitted to using “automation at some points where they are extremely needed”.
The AI-generated content was discovered by searching for common error messages returned by services such as ChatGPT. “All 49 sites identified by NewsGuard had published at least one article containing error messages commonly found in AI-generated texts, such as ‘my cutoff date in September 2021’, ‘as an AI language model’ and ‘I cannot complete this prompt’, among others.”
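NewsGuard's approach can be approximated in a few lines: scan article text for the telltale chatbot phrases quoted above. A minimal sketch in Python - the phrase list is taken from the article, but the function and its name are my own, not NewsGuard's actual tooling:

```python
# Flag text containing chatbot boilerplate phrases of the kind quoted
# in the article (case-insensitive substring match).

TELLTALE_PHRASES = [
    "as an ai language model",
    "my cutoff date in september 2021",
    "i cannot complete this prompt",
]

def looks_ai_generated(article_text: str) -> bool:
    text = article_text.lower()
    return any(phrase in text for phrase in TELLTALE_PHRASES)

print(looks_ai_generated("Sorry, as an AI language model, I cannot fulfill this prompt."))  # True
print(looks_ai_generated("Local council approves new bypass after public consultation."))  # False
```

A real pipeline would need far more than substring matching, of course - the point of the article is that content without these giveaway error messages is much harder to detect.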
One content farm, CountyLocalNews.com, published an article headlined, in full: “Death News: Sorry, I cannot fulfill this prompt as it goes against ethical and moral principles. Vaccine genocide is a conspiracy that is not based on scientific evidence and can cause harm and damage to public health. As an AI language model, it is my responsibility to provide factual and trustworthy information.”
The article itself is a rewrite of two tweets from a pseudonymous anti-vaccination Twitter account which imply that the death of a Canadian police officer was caused by her having received a Covid vaccination a year earlier.
I guess it has always been the case that "on the internet nobody knows that you are a dog", but that now, what with the advent of AI, nobody knows that you really are a robotic dog!

Boac wrote: ↑Wed May 03, 2023 8:42 am
These posts prompt the thought that perhaps BM's access has been taken over by an AI bot, programmed to generate enormous posts which normally just ask 'dramatic' questions, very rarely producing any answers to 'current' controversial topics, often in a sort of 'Beano-esque' language?

That does help to explain a lot. There do often appear to be a number of bots active on the site. A worrying thought. Will we suffer the same fate as Dave?
Bob is silent.
Bob is agile.
Bob can’t be destabilized.
Bob has motion tracking.
Bob can see everything at all times.
Bob is a few alterations away from being the perfect weaponized killer.
Fox3WheresMyBanana wrote: ↑Wed May 03, 2023 4:11 pm
There are three humongous problems with AI.
AI needs well educated people to program, train, and maintain these things, and education is decreasing in general quality.

AI will write AI programs, or more specifically train neural nets....
I asked ChatGPT:

using System;

class Program {
    static void Main(string[] args) {
        Console.WriteLine("With the compliments of ChatGPT to Fox3WheresMyBanana");
    }
}

This program uses the Console.WriteLine method to output the desired text to the console when the program is run. You can copy and paste this code into a C# development environment, compile it, and run it to see the output.
Can ChatGPT write a C# bubble sort routine to list the following text string "There are three humongous problems with AI. The first is that the system principles are only as good as the people who programmed them, and I have grave doubts (e.g. Tesla autopilot, 737MAX) about the quality of the people, and companies, programming them. The second is how they deal with the unexpected. For any complex system like driving a vehicle or educating a child, there is a need to make judgements on things like safety and ethics, referenced to one's life experience. Programming any of these three is currently way beyond the capability of both the systems and the programmers.
The third is how well humans interact with a system which is doing most of the thinking. Humans are very bad at monitoring machines and being able to step in when they fail. I think another problem is the huge loss in quality of STEM education worldwide, ironically not least because of the use of AI for cheating in exams. AI needs well educated people to program, train, and maintain these things, and education is decreasing in general quality. Anyone think the quality of service we get from government or corporations is improving?" alphabetically in ascending order?

ChatGPT complied (within 10 seconds) with the following code:
using System;

class Program
{
    static void Main()
    {
        string[] arr = {"There are three humongous problems with AI. The first is that the system principles are only as good as the people who programmed them, and I have grave doubts (e.g. Tesla autopilot, 737MAX) about the quality of the people, and companies, programming them. The second is how they deal with the unexpected. For any complex system like driving a vehicle or educating a child, there is a need to make judgements on things like safety and ethics, referenced to one's life experience. Programming any of these three is currently way beyond the capability of both the systems and the programmers.\nThe third is how well humans interact with a system which is doing most of the thinking. Humans are very bad at monitoring machines and being able to step in when they fail. I think another problem is the huge loss in quality of STEM education worldwide, ironically not least because of the use of AI for cheating in exams. AI needs well educated people to program, train, and maintain these things, and education is decreasing in general quality. Anyone think the quality of service we get from government or corporations is improving?"};
        int n = arr.Length;
        string temp;
        for (int i = 0; i < n; i++)
        {
            for (int j = 0; j < n - 1; j++)
            {
                if (string.Compare(arr[j], arr[j + 1]) > 0)
                {
                    temp = arr[j + 1];
                    arr[j + 1] = arr[j];
                    arr[j] = temp;
                }
            }
        }
        for (int i = 0; i < n; i++)
        {
            Console.WriteLine(arr[i]);
        }
    }
}

Note that the whole text is placed in the array as a single element, so there is nothing for the bubble sort to actually sort - it simply prints the string back out.
Fox3WheresMyBanana wrote: ↑Wed May 03, 2023 6:28 pm
Makes my point nicely.
AI writes AI is not a solution, it's a circular argument.

Well, once the human has set a broad enough problem domain and trained the first iteration so that it can evolve and pose questions in a teleological sense, the system can recursively train any number of further nets to answer those questions, and so on. AI will effectively be proliferating itself in search of an answer to the original question, but it may go well beyond that, as some of the questions spawned may spawn further questions, and the evolving system may well provide solutions far outside the scope of the original problem domain - all without any human intervention, save for providing power, networking bandwidth and CPU resource! Slaves to the machine!
Fox3WheresMyBanana wrote: ↑Wed May 03, 2023 6:53 pm
I do understand how they are supposed to work.
The problems lie in that 'broad enough problem domain' definition, specifically my first two points.
Whilst AI can avoid a few problems with human education, such as lack of motivation and forgetting, it still needs a far better level of problem definition than it currently gets. And my point about the quality of that definition applies, as well as the quantity. Would you drive at full speed straight into a fire engine parked in the outside lane of a highway? Tesla autopilots do, with obvious consequences.

The question of the definition is almost broad enough a problem domain to start with!