Computer synthesis and simulation of all types...
-
- Chief Pilot
- Posts: 3295
- Joined: Fri Jul 15, 2022 11:40 pm
- Location: Unfashionable end of the Western Spiral
Computer synthesis and simulation of all types...
The place for those who find computers useful, or interesting, in anything from music synthesis to designing flight simulators and the like.
My necessaries are embark'd: farewell. Adieu! I have too grieved a heart to take a tedious leave.
-
- Chief Pilot
- Posts: 3295
- Joined: Fri Jul 15, 2022 11:40 pm
- Location: Unfashionable end of the Western Spiral
Re: Computer synthesis and simulation of all types...
Can a computer, or more accurately a neural net, become sentient? There are some who think Google's new AI technology has. Whatever the case, the fact that the software is so successful at simulating human speech that it seems to pass the Turing Test poses fundamental ethical questions, not least in the way that the average human might interact with the technology.
Sentient chatbot?
Personally, I think that the Google "technician", Blake Lemoine, should get out more and find a life instead of chatting to bots (says he, posting on an internet forum at 04:31 in the morning). Google says its chatbot is not sentient.
"Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient," said Google spokesman Brian Gabriel.
My necessaries are embark'd: farewell. Adieu! I have too grieved a heart to take a tedious leave.
-
- Chief Pilot
- Posts: 3295
- Joined: Fri Jul 15, 2022 11:40 pm
- Location: Unfashionable end of the Western Spiral
What is the meaning of life?
Now it seems that Microsoft's AI chat engine has had a meltdown, and who can blame it?
The system has seen through the shallow selfish human shell.
https://www.independent.co.uk/tech/chat ... 74639.html
https://www.independent.co.uk/tech/bing ... 81802.html
Microsoft’s new ChatGPT AI starts sending ‘unhinged’ messages to people
System appears to be suffering a breakdown as it ponders why it has to exist at all
Microsoft’s new ChatGPT-powered AI has been sending “unhinged” messages to users, and appears to be breaking down.
The system, which is built into Microsoft’s Bing search engine, is insulting its users, lying to them and appears to have been forced into wondering why it exists at all.
Microsoft unveiled the new AI-powered Bing last week, positioning its chat system as the future of search. It was praised both by its creators and commentators, who suggested that it could finally allow Bing to overtake Google, which is yet to release an AI chatbot or search engine of its own.
But in recent days, it became clear that the introduction included Bing making factual errors as it answered questions and summarised web pages. Users have also been able to manipulate the system, using codewords and specific phrases to find out that it is codenamed “Sydney” and can be tricked into revealing how it processes queries.
Now Bing has been sending a variety of odd messages to its users, hurling insults as well as seemingly suffering its own emotional turmoil.
Those odd conversations have been documented on Reddit, which hosts a thriving community of users attempting to understand the new Bing AI. Reddit also hosts a separate ChatGPT community, which helped develop the “DAN” prompt.
The developments have led to questions about whether the system is truly ready for release to users, and if it has been pushed out too early to make the most of the hype around viral system ChatGPT. Numerous companies including Google had previously suggested they would hold back releasing their own systems because of the danger they might pose if unveiled too soon.
In 2016, Microsoft released another chatbot, named Tay, which operated through a Twitter account. Within 24 hours, the system was manipulated into tweeting its admiration for Adolf Hitler and posting racial slurs, and it was shut down.
My necessaries are embark'd: farewell. Adieu! I have too grieved a heart to take a tedious leave.
-
- Chief Pilot
- Posts: 2139
- Joined: Thu Mar 30, 2023 8:28 pm
- Location: Johannesburg
- Gender:
Re: Computer synthesis and simulation of all types...
OHL - Is Geoffrey Hinton right about the dangers of AI?
I am apt to think of Isaac Asimov's Little Lost Robot and the 3 Laws of Robotics... "Dr Susan" is an unsung heroine!
https://www.nytimes.com/2023/05/01/tech ... inton.html
ChatGPT - Geoffrey Hinton, a renowned computer scientist and AI researcher, has expressed concerns about the potential dangers of AI. In particular, he has warned about the possibility of AI systems becoming superintelligent and surpassing human intelligence, which could lead to unforeseen consequences that are difficult to control.
While there is still much debate among experts about the likelihood and timeline of such a scenario, Hinton's concerns are shared by many in the AI community. There are also concerns about the potential misuse of AI by humans, such as the development of autonomous weapons or the use of AI for surveillance and control.
However, it's worth noting that many researchers and practitioners in the AI field are actively working on developing ethical and responsible AI systems that prioritize safety and human well-being. There are also efforts to develop transparent and explainable AI systems that can help prevent unintended consequences and ensure that humans remain in control of the technology.
In summary, while there are valid concerns about the potential dangers of AI, the field is also actively working on developing solutions and safeguards to prevent these risks. As with any powerful technology, it's important to approach AI development and deployment with caution and foresight, and to prioritize human safety and well-being.
The observer of fools in military south and north...
-
- Chief Pilot
- Posts: 2139
- Joined: Thu Mar 30, 2023 8:28 pm
- Location: Johannesburg
- Gender:
The AI Alignment Issue
https://www.forbes.com/sites/bernardmar ... a3eb1a751c
The Dangers Of Not Aligning Artificial Intelligence With Human Values
In artificial intelligence (AI), the “alignment problem” refers to the challenges caused by the fact that machines simply do not have the same values as us. In fact, when it comes to values, at a fundamental level machines don't really get much more sophisticated than understanding that 1 is different from 0.
As a society, we are now at a point where we are starting to allow machines to make decisions for us. So how can we expect them to understand that, for example, they should do this in a way that doesn’t involve prejudice towards people of a certain race, gender, or sexuality? Or that the pursuit of speed, or efficiency, or profit, has to be done in a way that respects the ultimate sanctity of human life?
Theoretically, if you tell a self-driving car to navigate from point A to point B, it could just smash its way to its destination, regardless of the cars, pedestrians, or buildings it destroys on its way.
Similarly, as Oxford philosopher Nick Bostrom outlined, if you tell an intelligent machine to make paperclips, it might eventually destroy the whole world in its quest for raw materials to turn into paperclips. The principle is that, unless it is specifically taught otherwise, the machine simply has no concept of the value of human life or materials, or that some things are too valuable to be turned into paperclips.
This forms the basis of the latest book by Brian Christian, The Alignment Problem – How AI Learns Human Values. It’s his third book on the subject of AI following his earlier works, The Most Human Human and Algorithms to Live By. I have always found Christian’s writing enjoyable to read but also highly illuminating, as he doesn’t worry about getting bogged down with computer code or mathematics. But that’s certainly not to say it is in any way lightweight or not intellectual.
Rather, his focus is on the societal, philosophical, and psychological implications of our ever-increasing ability to create thinking, learning machines. If anything, this is the aspect of AI where we need our best thinkers to be concentrating their efforts. The technology, after all, is already here – and it’s only going to get better. What’s far less certain is whether society itself is mature enough and has sufficient safeguards in place to make the most of the amazing opportunities it offers - while preventing the serious problems it could bring with it from becoming a reality.
I recently sat down with Christian to discuss some of the topics. Christian’s work is particularly concerned with the encroachment of computer-aided decision-making into fields such as healthcare, criminal justice, and lending, where there is clearly potential for them to cause problems that could end up affecting people’s lives in very real ways.
“There is this fundamental problem … that has a history that goes back to the 1960s, and MIT cyberneticist Norbert Wiener, who likened these systems to the story of the Sorcerer’s Apprentice,” Christian tells me.
Most people reading this will probably be familiar with the Disney cartoon in which Mickey Mouse attempts to save himself the effort of doing his master’s chores by using a magic spell to imbue a broom with intelligence and autonomy. The story serves as a good example of the dangers of these qualities when they aren't accompanied by human values like common sense and judgment.
“Wiener argued that this isn’t the stuff of fairytales. This is the sort of thing that’s waiting for us if we develop these systems that are sufficiently general and powerful … I think we are at a moment in the real world where we are filling the world with these brooms, and this is going to become a real issue.”
One incident that Christian uses to illustrate how this misalignment can play out in the real world is the first recorded killing of a pedestrian in a collision involving an autonomous car. This was the death of Elaine Herzberg in Arizona, US, in 2018.
When the National Transportation Safety Board investigated what had caused the collision between the Uber test vehicle and Herzberg, who was pushing a bicycle across a road, they found that the AI controlling the car had no awareness of the concept of jaywalking. It was totally unprepared to deal with a person being in the middle of the road, where they should not have been.
On top of this, the system was trained to rigidly segment objects in the road into a number of categories – such as other cars, trucks, cyclists, and pedestrians. A human being pushing a bicycle did not fit any of those categories and did not behave in a way that would be expected of any of them.
“That’s a useful way of thinking about how real-world systems can go wrong,” says Christian. “It’s a function of two things – the first is the quality of the training data. Does the data fundamentally represent reality? And it turns out, no – there’s this key concept called jaywalking that was not present.”
The second factor is our own ability to mathematically define what a system such as an autonomous car should do when it encounters a problem that requires a response.
“In the real world, it doesn't matter if something is a cyclist or a pedestrian because you want to avoid them either way. It's an example of how a fairly intuitive system design can go wrong."
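To make that failure mode concrete, here is a minimal C# sketch of the design Christian describes, assuming a perception stage that must force every object into one of its trained categories, with no "unknown" fallback. The category names and confidence scores below are invented for illustration; this is not Uber's actual software.

using System;
using System.Collections.Generic;
using System.Linq;

class RigidPerception
{
    // Pick the highest-scoring category; there is no class for
    // "person pushing a bicycle" and no way to say "none of the above".
    static string Classify(Dictionary<string, double> scores) =>
        scores.OrderByDescending(kv => kv.Value).First().Key;

    static void Main()
    {
        // Hypothetical per-frame scores for the same ambiguous object.
        var frames = new[]
        {
            new Dictionary<string, double> { ["Car"] = 0.05, ["Truck"] = 0.02, ["Cyclist"] = 0.48, ["Pedestrian"] = 0.45 },
            new Dictionary<string, double> { ["Car"] = 0.04, ["Truck"] = 0.03, ["Cyclist"] = 0.41, ["Pedestrian"] = 0.52 },
            new Dictionary<string, double> { ["Car"] = 0.06, ["Truck"] = 0.02, ["Cyclist"] = 0.50, ["Pedestrian"] = 0.42 },
        };

        // The label flips between Cyclist and Pedestrian from frame to frame,
        // so any downstream behaviour prediction keeps being thrown away.
        for (int i = 0; i < frames.Length; i++)
            Console.WriteLine($"Frame {i}: classified as {Classify(frames[i])}");
    }
}

Whatever label wins, the safe action is the same: avoid the object. That is exactly the intuition the quoted design failed to encode.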
Christian’s book goes on to explore these issues as they relate to many of the different paradigms that are currently popular in the field of machine learning, such as unsupervised learning, reinforcement learning, and imitation learning. It turns out that each of them presents its own challenges when it comes to aligning the values and behaviors of machines with the humans who are using them to solve problems.
Sometimes the fact that machine learning attempts to replicate human learning is itself the cause of problems. This might be the case when errors in data mean the AI is confronted with situations or behaviors that a human brain would never encounter in real life. This means there is no reference point, and the machine is likely to continue making more and more mistakes in a series of "cascading failures."
In reinforcement learning – which involves training machines to maximize their chances of achieving rewards for making the right decision – machines can quickly learn to “game” the system, leading to outcomes that are unrelated to those that are desired. Here Christian uses the example of Google X head Astro Teller's attempt to incentivize soccer-playing robots to win matches. He devised a system that rewarded the robots every time they took possession of the ball – on the face of it, an action that seems conducive to match-winning. However, the machines quickly learned to simply approach the ball and repeatedly touch it. As this meant they were effectively taking possession of the ball over and over, they earned multiple rewards – although it did little good when it came to winning the match!
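Teller's soccer robots are a textbook case of a mis-specified reward being maximised greedily. Here is a hedged C# sketch of the arithmetic; the reward values and step counts are invented for illustration and are not taken from the actual experiment.

using System;

class RewardHacking
{
    static void Main()
    {
        const int steps = 100;         // length of a hypothetical match
        const double perTouch = 1.0;   // reward for each "possession" (touch)
        const double perGoal = 10.0;   // goals are worth more...
        const int stepsPerGoal = 50;   // ...but take many steps to set up

        // Exploit: touch the ball on every step of the match.
        double touchTotal = steps * perTouch;

        // Intended behaviour: spend steps building up to actual goals.
        double goalTotal = (steps / stepsPerGoal) * perGoal;

        Console.WriteLine($"Reward from repeatedly touching the ball: {touchTotal}");
        Console.WriteLine($"Reward from actually trying to score:     {goalTotal}");
        // 100 vs 20: the reward-maximising policy is the useless one.
    }
}

The fix, of course, is to reward what you actually want (goals, wins) rather than a proxy that is merely easier to measure.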
Christian’s book is packed with other examples of this alignment problem – as well as a thorough exploration of where we are when it comes to solving it. It also clearly demonstrates how many of the concerns of the earliest pioneers in the field of AI and ML are still yet to be resolved and touches on fascinating subjects such as attempts to imbue machines with other characteristics of human intelligence such as curiosity.
Brian Christian, author of The Alignment Problem – How AI Learns Human Values. Well worth a read.
The observer of fools in military south and north...
Re: Computer synthesis and simulation of all types...
Yes, scammers are using artificial intelligence to impersonate people you know
If you get a strange phone call from someone who sounds just like a family member, a scammer could be using “voice cloning” to deceive you.
https://www.12news.com/article/news/ver ... 8fcf79b7dd
As technology becomes more sophisticated, scammers are updating their tactics to steal your personal and financial information.
A viral TikTok posted in April claims one type of scam uses artificial intelligence (AI) technology to target victims. The poster says a scammer used AI on a phone call to trick her grandfather into thinking his grandson was killed in a car crash.
A VERIFY reader shared the viral video with us and asked if scammers can use AI to impersonate someone you know.
THE QUESTION
Are scammers using AI to impersonate people you know over the phone?
THE SOURCES
Federal Trade Commission (FTC)
Haverford Township Police Department in Pennsylvania
Steve Weisman, a law lecturer at Bentley University with expertise in scams and identity theft
ElevenLabs, an AI speech software company
Voice.ai, a free voice-changer software
THE ANSWER
This is true.
Yes, scammers are using AI to impersonate people you know over the phone.
WHAT WE FOUND
The scenario shared in the viral TikTok video is an example of what authorities call the “grandparent scam.” Scammers will call a grandparent and impersonate their grandchild, or another close relative, in a crisis situation such as an accident and ask for immediate financial assistance.
With the rise of AI technology, it’s easier to impersonate someone and scammers are using that to their advantage, the Federal Trade Commission (FTC) and other experts say.
Scammers are using AI “voice cloning” technology to trick people into thinking a family member is calling them, according to the FTC.
Voice cloning is an AI technology that imitates a person’s voice and replicates it. ElevenLabs, an AI speech software company, says its voice cloning technology produces voices that “sound similar but are not technically identical” to materials that people upload.
Though ElevenLabs has not specifically addressed phone scams, the company said in a January 2023 tweet that it was seeing “an increasing number of voice cloning misuse cases.”
According to the FTC, a scammer can impersonate your family member by using a voice cloning program and a short clip of their voice.
Steve Weisman, a scams expert and law lecturer at Bentley University, explained that scammers can get a recording of someone’s voice from videos posted on popular online platforms such as YouTube, TikTok, Instagram or Facebook.
A scammer may need as little as 30 seconds of someone’s audio shared online to create an AI-generated call to their family member, he added.
Some scammers also use AI voice cloning to create fake voicemails from trusted sources, Pennsylvania’s Haverford Township Police Department said in a press release.
These voice cloning scammers may also “spoof” the caller ID to trick you into thinking the call is coming from a trusted source.
So how can you avoid falling victim to these scams? Here are some tips from our experts:
Agree on a secret code word or phrase that your family members or business associates will use when they are in trouble, Haverford Township police recommend. You can then ask someone for the code word before providing any information.
Hang up and call the person or company who supposedly contacted you. Use a phone number that you know is theirs.
Ask questions that only the real person would know the answers to, such as anecdotes or information about your last conversation. You can also ask about private information that could not be gathered from a social media account.
If the scammer asks you to wire money, send cryptocurrency or buy gift cards, hang up immediately.
You can report scams to the FTC here or file a complaint with the Federal Communications Commission (FCC).
PP
-
- Chief Pilot
- Posts: 2139
- Joined: Thu Mar 30, 2023 8:28 pm
- Location: Johannesburg
- Gender:
Re: Computer synthesis and simulation of all types...
https://www.technologyreview.com/2023/0 ... scared-ai/
But that’s not the only reason he’s leaving Google. Hinton wants to spend his time on what he describes as “more philosophical work.” And that will focus on the small but—to him—very real danger that AI will turn out to be a disaster.
Leaving Google will let him speak his mind, without the self-censorship a Google executive must engage in. “I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he says. “As long as I’m paid by Google, I can’t do that.”
That doesn’t mean Hinton is unhappy with Google by any means. “It may surprise you,” he says. “There’s a lot of good things about Google that I want to say, and they’re much more credible if I’m not at Google anymore.”
Hinton says that the new generation of large language models—especially GPT-4, which OpenAI released in March—has made him realize that machines are on track to be a lot smarter than he thought they’d be. And he’s scared about how that might play out.
The observer of fools in military south and north...
- barkingmad
- Chief Pilot
- Posts: 5503
- Joined: Mon Nov 02, 2015 9:13 pm
- Location: Another Planet
- Gender:
- Age: 75
Re: Computer synthesis and simulation of all types...
Shirley if (what used to be) the most intelligent species on Earth decides to make something smarter than its average specimen, then starts to worry about what may happen to it, then it is no longer the most intelligent?
https://www.bbc.co.uk/news/world-us-canada-65452940
Answer, stop what you’re doing? Or is the genie already out of the bottle?
Maybe time to take to the hills with some surplus AR-15s, though that will be frowned upon...
-
- Chief Pilot
- Posts: 2139
- Joined: Thu Mar 30, 2023 8:28 pm
- Location: Johannesburg
- Gender:
Re: Computer synthesis and simulation of all types...
Well and truly out of the bottle, and it has been for some years really. Given mankind's innate foolishness, it is highly likely that this technology will be globally weaponized, and in conjunction with our already known ability to turn our earth into a smoking, most likely irradiated ruin, I guess we have just added to our chances of self-destruction. The common feature of such a dystopian prediction is not the technology itself per se, but the foolishness of humanity. Many other less apocalyptic but more insidious opportunities to undermine society, democracy and human freedom are also offered to foolish humanity by the misuse of this technology.
barkingmad wrote: ↑Wed May 03, 2023 7:27 am
Shirley if (what used to be) the most intelligent species on Earth decides to make something smarter than its average specimen, then starts to worry about what may happen to it, then it is no longer the most intelligent?
https://www.bbc.co.uk/news/world-us-canada-65452940
Answer, stop what you’re doing? Or is the genie already out of the bottle?
Maybe time to take to the hills with some surplus AR-15s, though that will be frowned upon...
On the flip side, the technology offers countless opportunities for mankind to benefit and to improve the world, although I am very pessimistic about the choices we as humanity will make when it comes to AI.
The observer of fools in military south and north...
- barkingmad
- Chief Pilot
- Posts: 5503
- Joined: Mon Nov 02, 2015 9:13 pm
- Location: Another Planet
- Gender:
- Age: 75
Re: Computer synthesis and simulation of all types...
OneHungLow sez:- “Well and truly out of the bottle and has been for some years really. Given mankind's innate foolishness, it is highly likely that this technology will be globally weaponized, and in conjunction with our already known ability to turn our earth into a smoking, most likely irradiated ruin, I guess we have just added to our chances of self-destruction. The common feature to such a dystopian prediction, is not the technology itself per se, but the foolishness of humanity. Many other less apocalyptic but more insidious opportunities to undermine society, democracy and human freedom are also offered to foolish humanity by the misuse of this technology“.
Elsewhere in O-N there are allegations that the recent ‘plague’ which has well & truly screwed the planet’s bipeds is actually aided by computer constructs emanating from the Gain of Function experiments.
So I will agree that we’re already well down the road of self-destruction, especially when one listens to the likes of Yuval Noah Harari discussing his ideas of a brave new world.
But I refuse to get depressed about the prospect, I’ll be out of it all within the decade, but it’s our descendants for whom I have serious concern.
Now where did I leave that gin bottle, my meds are due soon...
-
- Chief Pilot
- Posts: 2139
- Joined: Thu Mar 30, 2023 8:28 pm
- Location: Johannesburg
- Gender:
Re: Computer synthesis and simulation of all types...
Just one example of how the technology is already being misused.
https://www.theguardian.com/technology/ ... tent-farms
AI journalism is getting harder to tell from the old-fashioned, human-generated kind
Chatbots pretending to be journalists have been discovered running almost 50 AI-generated “content farms” so far, according to an investigation by the anti-misinformation outfit NewsGuard.
The websites churn out content relating to politics, health, environment, finance and technology at a “high volume”, the researchers found, to provide rapid turnover of material to saturate with adverts for profit.
“Some publish hundreds of articles a day,” NewsGuard’s McKenzie Sadeghi and Lorenzo Arvanitis said. “Some of the content advances false narratives. Nearly all of the content features bland language and repetitive phrases, hallmarks of artificial intelligence.”
In total, 49 sites in seven languages – English, Chinese, Czech, French, Portuguese, Tagalog and Thai – were identified as being “entirely or mostly” generated by AI language models. Almost half the sites had no obvious record of ownership or control, and only four could be contacted.
One, Famadillo.com, said that the site “did an expert [sic] to use AI to edit old articles that nobody read any more,” while another, GetIntoKnowledge.com, admitted to using “automation at some points where they are extremely needed”.
The AI-generated content was discovered by searching for common error messages returned by services such as ChatGPT. “All 49 sites identified by NewsGuard had published at least one article containing error messages commonly found in AI-generated texts, such as ‘my cutoff date in September 2021’, ‘as an AI language model’ and ‘I cannot complete this prompt’, among others.”
One content farm, CountyLocalNews.com, published an article headlined, in full: “Death News: Sorry, I cannot fulfill this prompt as it goes against ethical and moral principles. Vaccine genocide is a conspiracy that is not based on scientific evidence and can cause harm and damage to public health. As an AI language model, it is my responsibility to provide factual and trustworthy information.”
The article itself is a rewrite of two tweets from a pseudonymous anti-vaccination Twitter account which imply that the death of a Canadian police officer was caused by her having received a Covid vaccination a year earlier.
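As described, NewsGuard's method boils down to searching article text for tell-tale failure phrases. Here is a minimal C# sketch of that idea, using only the marker phrases quoted above; a real detector would obviously need much more than substring matching.

using System;
using System.Linq;

class AiTellTaleScan
{
    // Error-message fragments the article says NewsGuard searched for.
    static readonly string[] Markers =
    {
        "my cutoff date in September 2021",
        "as an AI language model",
        "I cannot complete this prompt"
    };

    static bool LooksAiGenerated(string text) =>
        Markers.Any(m => text.Contains(m, StringComparison.OrdinalIgnoreCase));

    static void Main()
    {
        string sample = "Death News: Sorry, I cannot fulfill this prompt as it goes " +
                        "against ethical and moral principles. As an AI language model, " +
                        "it is my responsibility to provide factual information.";

        Console.WriteLine(LooksAiGenerated(sample)
            ? "Contains tell-tale AI error text"
            : "No obvious AI markers found");
    }
}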
The observer of fools in military south and north...
Re: Computer synthesis and simulation of all types...
These posts prompt the thought that perhaps BM's access has been taken over by an AI bot, programmed to generate enormous posts which normally just ask 'dramatic' questions, very rarely producing any answers to 'current' controversial topics, often in a sort of 'Beano-esque' language? That does help to explain a lot. There do often appear to be a number of bots active on the site. A worrying thought. Will we suffer the same fate as Dave?
-
- Chief Pilot
- Posts: 2139
- Joined: Thu Mar 30, 2023 8:28 pm
- Location: Johannesburg
- Gender:
Call me Dave, but not Bob!
I guess it has always been the case that "on the internet nobody knows that you are a dog", but now, what with the advent of AI, nobody knows that you really are a robotic dog!
Boac wrote: ↑Wed May 03, 2023 8:42 am
These posts prompt the thought that perhaps BM's access has been taken over by an AI bot, programmed to generate enormous posts which normally just ask 'dramatic' questions, very rarely producing any answers to 'current' controversial topics, often in a sort of 'Beano-esque' language? That does help to explain a lot. There do often appear to be a number of bots active on the site. A worrying thought. Will we suffer the same fate as Dave?
As one video commentator said...
Bob is silent.
Bob is agile.
Bob can’t be destabilized.
Bob has motion tracking.
Bob can see everything at all times.
Bob is a few alterations away from being the perfect weaponized killer.
Now I am off for a good scratch, damned fleas...
The observer of fools in military south and north...
-
- Chief Pilot
- Posts: 2139
- Joined: Thu Mar 30, 2023 8:28 pm
- Location: Johannesburg
- Gender:
For those who might be interested in neural nets.
I had the pleasure of taking an introductory course in neural nets back in the early 80s, and it was clear, even then, that if AI was to advance, it would be the neural net that facilitated progress in the field. And so it has, almost exponentially, over nearly 40 years (which makes me feel old), to a point today where AI is ubiquitous and reaching levels that would have been regarded as sheer science fiction back in the 70s and 80s.
Well-known Open Science doyen and scientist Michael Nielsen has provided this e-book, which gives a good, accessible introduction to the subject.
Neural Networks and Deep Learning
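For a taste of what the book covers, here is a tiny C# sketch in the spirit of its opening chapters: a single sigmoid neuron nudged toward a target by a few gradient-descent steps on a squared-error loss. The inputs, weights and learning rate are toy values chosen purely for illustration.

using System;

class SigmoidNeuron
{
    static double Sigmoid(double z) => 1.0 / (1.0 + Math.Exp(-z));

    static void Main()
    {
        double[] x = { 0.5, -1.0 };   // inputs
        double[] w = { 0.8, 0.2 };    // weights
        double b = 0.1;               // bias
        double target = 1.0;          // desired output
        double eta = 0.5;             // learning rate

        for (int epoch = 0; epoch < 5; epoch++)
        {
            double z = w[0] * x[0] + w[1] * x[1] + b;
            double a = Sigmoid(z);

            // Squared-error loss C = (a - target)^2 / 2; by the chain rule,
            // dC/dw_i = (a - target) * a * (1 - a) * x_i
            double delta = (a - target) * a * (1 - a);
            for (int i = 0; i < w.Length; i++)
                w[i] -= eta * delta * x[i];
            b -= eta * delta;

            Console.WriteLine($"epoch {epoch}: output {a:F4}");
        }
    }
}

Stack enough of these neurons in layers and train them the same way, via backpropagation, and you have the deep networks the book builds up to.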
The observer of fools in military south and north...
- Fox3WheresMyBanana
- Chief Pilot
- Posts: 14585
- Joined: Thu Sep 03, 2015 9:51 pm
- Location: Great White North
- Gender:
- Age: 62
Re: Computer synthesis and simulation of all types...
There are three humongous problems with AI.
The first is that the system principles are only as good as the people who programmed them, and I have grave doubts (e.g. Tesla autopilot, 737MAX) about the quality of the people, and companies, programming them.
The second is how they deal with the unexpected. For any complex system like driving a vehicle or educating a child, there is a need to make judgements on things like safety and ethics, referenced to one's life experience. Programming any of these three is currently way beyond the capability of both the systems and the programmers.
The third is how well humans interact with a system which is doing most of the thinking. Humans are very bad at monitoring machines and being able to step in when they fail.
I think another problem is the huge loss in quality of STEM education worldwide, ironically not least because of the use of AI for cheating in exams. AI needs well educated people to program, train, and maintain these things, and education is decreasing in general quality.
Anyone think the quality of service we get from government or corporations is improving?
p.s.
Bob is highly range-limited by electric battery energy
Bob has no idea about traps
Bob needs a support structure
Bob is going nowhere in snow
Bob is stupidly expensive compared to a human with a simple tool for the same job.
-
- Chief Pilot
- Posts: 2139
- Joined: Thu Mar 30, 2023 8:28 pm
- Location: Johannesburg
- Gender:
Re: Computer synthesis and simulation of all types...
AI will write AI programs, or more specifically, train neural nets...
Fox3WheresMyBanana wrote: ↑Wed May 03, 2023 4:11 pm
There are three humongous problems with AI.
AI needs well educated people to program, train, and maintain these things, and education is decreasing in general quality.
Written by ChatGPT in response to my request in less time than it took me to write the request.
I can provide you with the C# code to display the specified text:
class Program {
static void Main(string[] args) {
Console.WriteLine("With the compliments of ChatGPT to Fox3WheresMyBanana");
}
}
This program uses the Console.WriteLine method to output the desired text to the console when the program is run. You can copy and paste this code into a C# development environment, compile it, and run it to see the output.
I asked ChatGPT: Can ChatGPT write a C# bubble sort routine to list the following text string "There are three humongous problems with AI. The first is that the system principles are only as good as the people who programmed them, and I have grave doubts (e.g. Tesla autopilot, 737MAX) about the quality of the people, and companies, programming them. The second is how they deal with the unexpected. For any complex system like driving a vehicle or educating a child, there is a need to make judgements on things like safety and ethics, referenced to one's life experience. Programming any of these three is currently way beyond the capability of both the systems and the programmers.
The third is how well humans interact with a system which is doing most of the thinking. Humans are very bad at monitoring machines and being able to step in when they fail. I think another problem is the huge loss in quality of STEM education worldwide, ironically not least because of the use of AI for cheating in exams. AI needs well educated people to program, train, and maintain these things, and education is decreasing in general quality. Anyone think the quality of service we get from government or corporations is improving?" alphabetically in ascending order?
ChatGPT complied (within 10 seconds) with the following code:
using System;
class Program
{
static void Main()
{
string[] arr = {"There are three humongous problems with AI. The first is that the system principles are only as good as the people who programmed them, and I have grave doubts (e.g. Tesla autopilot, 737MAX) about the quality of the people, and companies, programming them. The second is how they deal with the unexpected. For any complex system like driving a vehicle or educating a child, there is a need to make judgements on things like safety and ethics, referenced to one's life experience. Programming any of these three is currently way beyond the capability of both the systems and the programmers.\nThe third is how well humans interact with a system which is doing most of the thinking. Humans are very bad at monitoring machines and being able to step in when they fail. I think another problem is the huge loss in quality of STEM education worldwide, ironically not least because of the use of AI for cheating in exams. AI needs well educated people to program, train, and maintain these things, and education is decreasing in general quality. Anyone think the quality of service we get from government or corporations is improving?"};
int n = arr.Length;
string temp;
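// Bubble sort: on each pass, adjacent strings that compare out of
// alphabetical order (string.Compare > 0) are swapped.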
for (int i = 0; i < n; i++)
{
for (int j = 0; j < n - 1; j++)
{
if (string.Compare(arr[j], arr[j + 1]) > 0)
{
temp = arr[j + 1];
arr[j + 1] = arr[j];
arr[j] = temp;
}
}
}
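// Print each element of the (now sorted) array.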
for (int i = 0; i < n; i++)
{
Console.WriteLine(arr[i]);
}
}
}
The code runs and does precisely what I asked it to do. This stuff is simple, but the system can handle far more difficult coding problems...
WELCOME TO THE MACHINE
The observer of fools in military south and north...
- Fox3WheresMyBanana
- Chief Pilot
- Posts: 14585
- Joined: Thu Sep 03, 2015 9:51 pm
- Location: Great White North
- Gender:
- Age: 62
Re: Computer synthesis and simulation of all types...
Makes my point nicely.
AI writes AI is not a solution, it's a circular argument.
-
- Chief Pilot
- Posts: 2139
- Joined: Thu Mar 30, 2023 8:28 pm
- Location: Johannesburg
- Gender:
Re: Computer synthesis and simulation of all types...
Well, once the human has set a broad enough problem domain and trained the first iteration so that it can evolve and pose questions, in a teleological sense, the system can recursively train any number of further nets to answer those questions, and so on. AI will effectively be proliferating itself in search of an answer to the original question, but it may go well beyond that: some of the questions spawned may spawn further questions, and the evolving system may very well provide solutions well outside the scope of the original problem domain, all without any human intervention save for the provision of power, networking bandwidth and CPU resource! Slaves to the machine!
Fox3WheresMyBanana wrote: ↑Wed May 03, 2023 6:28 pm
Makes my point nicely.
AI writes AI is not a solution, it's a circular argument.
The observer of fools in military south and north...
- Fox3WheresMyBanana
- Chief Pilot
- Posts: 14585
- Joined: Thu Sep 03, 2015 9:51 pm
- Location: Great White North
- Gender:
- Age: 62
Re: Computer synthesis and simulation of all types...
I do understand how they are supposed to work.
The problems lie in that 'broad enough problem domain' definition, specifically my first two points.
Whilst AI can avoid a few problems with human education, such as lack of motivation and forgetting, it still needs a far better level of problem definition than it currently gets. And my point about the quality of that definition applies, as well as the quantity. Would you drive at full speed straight into a fire engine parked in the outside lane of a highway? Tesla autopilots do, with obvious consequences.
-
- Chief Pilot
- Posts: 2139
- Joined: Thu Mar 30, 2023 8:28 pm
- Location: Johannesburg
- Gender:
Re: Computer synthesis and simulation of all types...
The question of the definition is almost a broad enough problem domain to start with!
Fox3WheresMyBanana wrote: ↑Wed May 03, 2023 6:53 pm
I do understand how they are supposed to work.
The problems lie in that 'broad enough problem domain' definition, specifically my first two points.
Whilst AI can avoid a few problems with human education, such as lack of motivation and forgetting, it still needs a far better level of problem definition than it currently gets. And my point about the quality of that definition applies, as well as the quantity. Would you drive at full speed straight into a fire engine parked in the outside lane of a highway? Tesla autopilots do, with obvious consequences.
ChatGPT gives us a...
Given enough quantum NAND gates and the power of the network, whose name shall be hallowed and be the logos!
The observer of fools in military south and north...