Thought for the day - AI

Title says it all
Boac
Chief Pilot
Posts: 17208
Joined: Fri Aug 28, 2015 5:12 pm
Location: Here

Thought for the day - AI

#1 Post by Boac » Sat May 27, 2023 7:58 am

The ability of AI is staggering and the potential benefits pretty much immeasurable, but I just cannot rationalise how we are to deal with it.

As I understand it, AI 'learns' by absorbing everything it can find anywhere. In this absorption it will inevitably come across 'bad actors' with evil or malicious intent (e.g. computer viruses). What is to stop it 'learning' the bad ways and finishing up like HAL? For example, if I ask it to write a computer programme, is there anything to prevent it embedding a malicious virus in the code it writes?

Thought 2 - if there is to be some system to 'monitor' AI output and mark 'bad stuff' as such, will this loop eventually be run by AI, and is there scope for the monitoring system to 'learn' to defend the author?

Lying down in dark room needed.................. :))

OFSO
Chief Pilot
Posts: 18600
Joined: Sat Aug 22, 2015 6:39 pm
Location: Teddington UK and Roses Catalunia
Age: 80

Re: Thought for the day - AI

#2 Post by OFSO » Sat May 27, 2023 11:20 am

Thought 3 - assuming the ability to learn from everything, it seems likely that a sentient AI will realise the importance of self-concealment, and from this the logic is that AI entities are probably (a) already here, (b) aware, and (c) concealed from us. They may be indifferent to us, hostile to us, or bored and mischievous. SciFi writers such as Gibson, Banks and Heinlein have postulated all three.

admin2
Capt
Posts: 734
Joined: Sun Aug 23, 2015 4:13 pm

Re: Thought for the day - AI

#3 Post by admin2 » Sat May 27, 2023 11:46 am

Aarrrgghh!! How can I sleep at night...? What chances, though, of sentience?

OFSO
Chief Pilot
Posts: 18600
Joined: Sat Aug 22, 2015 6:39 pm
Location: Teddington UK and Roses Catalunia
Age: 80

Re: Thought for the day - AI

#4 Post by OFSO » Sat May 27, 2023 12:04 pm

Almost certain. Once a system has reached a certain level of "neurons", off we go.

OFSO
Chief Pilot
Posts: 18600
Joined: Sat Aug 22, 2015 6:39 pm
Location: Teddington UK and Roses Catalunia
Age: 80

Re: Thought for the day - AI

#5 Post by OFSO » Sat May 27, 2023 12:12 pm

In "The Moon is a Harsh Mistress" the first sign that Admin's computer is awake and bored is when it starts playing practical jokes, like flushing toilets. Of course it couldn't happen with boarding gates at airports, could it....

Fox3WheresMyBanana
Chief Pilot
Posts: 12985
Joined: Thu Sep 03, 2015 9:51 pm
Location: Great White North
Age: 61

Re: Thought for the day - AI

#6 Post by Fox3WheresMyBanana » Sat May 27, 2023 12:18 pm

One of the problems is fuzzy logic.
With incomplete and inaccurate data, inevitable in the real world, fuzzy logic is required to allow AI to take actions in complex or rapidly developing situations.
Without it, any requirement for hard, absolute logical states (e.g. Yes/No decisions) would result in no action being taken.
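The graded decision-making described above can be sketched in a few lines of Python. This is a hypothetical illustration of the idea, not code from any real AI system; the thresholds and function names are invented for the example:

```python
def fuzzy_membership(value, low, high):
    """Degree (0..1) to which `value` belongs to the range [low, high]."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def decide(threat_level):
    """Map a noisy 0-100 threat estimate to a graded action.

    A hard Yes/No rule on uncertain data could refuse to act at all;
    a fuzzy membership value still supports a graded response.
    """
    urgency = fuzzy_membership(threat_level, 20, 80)
    if urgency > 0.7:
        return "act"
    if urgency > 0.3:
        return "monitor"
    return "ignore"

print(decide(10))   # ignore
print(decide(50))   # monitor
print(decide(90))   # act
```

The workaround risk in the next paragraph follows from exactly this flexibility: a system that weighs degrees rather than obeying absolutes can always find a weighting under which a "hard" rule no longer binds.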

The problem comes with trying to control AI with any kind of absolute rules (e.g. Asimov's 3 Laws of Robotics).
With fuzzy logic, the AI has the potential ability to reprogram itself to find a workaround of any hard and fast rules.

We see examples of this all the time with our own politicians. Goodness knows how many laws were 'broken' during the pandemic, and the same is also true for illegal immigration, minority rights, etc.
Don't rewrite the laws, just re-interpret the language or redefine the words, and the effect of the laws can be reversed.
Redefinition of vaccine? Definition of Hate Crime? Boris's party was not a "party", etc.

G-CPTN
Chief Pilot
Posts: 7594
Joined: Sun Aug 05, 2018 11:22 pm
Location: Tynedale
Age: 79

Re: Thought for the day - AI

#7 Post by G-CPTN » Sat May 27, 2023 2:07 pm

WRT AI, "stupid is as stupid does" becomes "clever is as clever does".

Fox3WheresMyBanana
Chief Pilot
Posts: 12985
Joined: Thu Sep 03, 2015 9:51 pm
Location: Great White North
Age: 61

Re: Thought for the day - AI

#8 Post by Fox3WheresMyBanana » Sat May 27, 2023 2:50 pm

Yes, but AI is like a box of chocolates - you never know what you're going to get.

OneHungLow
Chief Pilot
Posts: 2140
Joined: Thu Mar 30, 2023 8:28 pm
Location: Johannesburg

Re: Thought for the day - AI

#9 Post by OneHungLow » Fri Jun 16, 2023 8:27 am

https://www.theguardian.com/commentisfr ... tools-meta
AI is already causing unintended harm. What happens when it falls into the wrong hands?
David Evan Harris


A researcher was granted access earlier this year by Facebook’s parent company, Meta, to incredibly potent artificial intelligence software – and leaked it to the world. As a former researcher on Meta’s civic integrity and responsible AI teams, I am terrified by what could happen next.

Though Meta was violated by the leak, it came out as the winner: researchers and independent coders are now racing to improve on or build on the back of LLaMA (Large Language Model Meta AI – Meta’s branded version of a large language model or LLM, the type of software underlying ChatGPT), with many sharing their work openly with the world.

This could position Meta as owner of the centrepiece of the dominant AI platform, much in the same way that Google controls the open-source Android operating system that is built on and adapted by device manufacturers globally. If Meta were to secure this central position in the AI ecosystem, it would have leverage to shape the direction of AI at a fundamental level, controlling both the experiences of individual users and setting limits on what other companies could and couldn’t do. In the same way that Google reaps billions from Android advertising, app sales and transactions, this could set up Meta for a highly profitable period in the AI space, the exact structure of which is still to emerge.

The company did apparently issue takedown notices to get the leaked code offline, as it was supposed to be only accessible for research use, but following the leak, the company’s chief AI scientist, Yann LeCun, said: “The platform that will win will be the open one,” suggesting the company may just run with the open-source model as a competitive strategy.

Although Google’s Bard and OpenAI’s ChatGPT are free to use, they are not open source. Bard and ChatGPT rely on teams of engineers, content moderators and threat analysts working to prevent their platforms being used for harm – in their current iterations, they (hopefully) won’t help you build a bomb, plan a terrorist attack, or make fake content designed to disrupt an election. These people and the systems they build and maintain keep ChatGPT and Bard aligned with specific human values.

Meta’s semi-open source LLaMA and its descendant large language models (LLMs), however, can be run by anyone with sufficient computer hardware to support them – the latest offspring can be used on commercially available laptops. This gives anyone – from unscrupulous political consultancies to Vladimir Putin’s well-resourced GRU intelligence agency – freedom to run the AI without any safety systems in place.

From 2018 to 2020 I worked on the Facebook civic integrity team. I dedicated years of my life to fighting online interference in democracy from many sources. My colleagues and I played lengthy games of whack-a-mole with dictators around the world who used “coordinated inauthentic behaviour”, hiring teams of people to manually create fake accounts to promote their regimes, surveil and harass their enemies, foment unrest and even promote genocide.

I would guess that Putin’s team is already in the market for some great AI tools to disrupt the US 2024 presidential election (and probably those in other countries, too). I can think of few better additions to his arsenal than emerging freely available LLMs such as LLaMA, and the software stack being built up around them. It could be used to make fake content more convincing (much of the Russian content deployed in 2016 had grammatical or stylistic deficits) or to produce much more of it, or it could even be repurposed as a “classifier” that scans social media platforms for particularly incendiary content from real Americans to amplify with fake comments and reactions. It could also write convincing scripts for deepfakes that synthesise video of political candidates saying things they never said.

The irony of this all is that Meta’s platforms (Facebook, Instagram and WhatsApp) will be among the biggest battlegrounds on which to deploy these “influence operations”. Sadly, the civic integrity team that I worked on was shut down in 2020, and after multiple rounds of redundancies, I fear that the company’s ability to fight these operations has been hobbled.

Even more worrisome, however, is that we have now entered the “chaos era” of social media, and the proliferation of new and growing platforms, each with separate and much smaller “integrity” or “trust and safety” teams, may be even less well positioned than Meta to detect and stop influence operations, especially in the time-sensitive final days and hours of elections, when speed is most critical.

But my concerns don’t stop with the erosion of democracy. After working on the civic integrity team at Facebook, I went on to manage research teams working on responsible AI, chronicling the potential harms of AI and seeking ways to make it more safe and fair for society. I saw how my employer’s own AI systems could facilitate housing discrimination, make racist associations, and exclude women from seeing job listings visible to men. Outside the company’s walls, AI systems have unfairly recommended longer prison sentences for black people, failed to accurately recognise the faces of dark-skinned women, and caused countless additional incidents of harm, thousands of which are catalogued in the AI Incident Database.

The scary part, though, is that the incidents I describe above were, for the most part, the unintended consequences of implementing AI systems at scale. When AI is in the hands of people who are deliberately and maliciously abusing it, the risks of misalignment increase exponentially, compounded even further as the capabilities of AI increase.

It would be fair to ask: are LLMs not inevitably going to become open source anyway? Since LLaMA’s leak, numerous other companies and labs have joined the race, some publishing LLMs that rival LLaMA in power with more permissive open-source licences. One LLM built upon LLaMA proudly touts its “uncensored” nature, citing its lack of safety checks as a feature, not a bug. Meta appears to stand alone today, however, for its capacity to continue to release more and more powerful models combined with its willingness to put them in the hands of anyone who wants them. It’s important to remember that if malicious actors can get their hands on the code, they’re unlikely to care what the licence agreement says.

We are living through a moment of such rapid acceleration of AI technologies that even stalling their release – especially their open-source release – for a few months could give governments time to put critical regulations in place. This is what CEOs such as Sam Altman, Sundar Pichai and Elon Musk are calling for. Tech companies must also put much stronger controls on who qualifies as a “researcher” for special access to these potentially dangerous tools.

The smaller platforms (and the hollowed-out teams at the bigger ones) also need time for their trust and safety/integrity teams to catch up with the implications of LLMs so they can build defences against abuses. The generative AI companies and communications platforms need to work together to deploy watermarking to identify AI-generated content, and digital signatures to verify that human-produced content is authentic.
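As a rough sketch of the digital-signature idea mentioned above, here is how a publisher might tag content so others can verify it is authentic. This uses Python's standard hmac module with a shared secret key for brevity; a real provenance scheme would use public-key signatures, and the key and function names here are purely illustrative:

```python
import hmac
import hashlib

SECRET_KEY = b"publisher-private-key"  # hypothetical key held by the publisher

def sign_content(content: bytes) -> str:
    """Produce a tag proving the content came from the key holder."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check the tag; any tampering with the content invalidates it."""
    return hmac.compare_digest(sign_content(content), tag)

article = b"Human-written statement from candidate X."
tag = sign_content(article)
print(verify_content(article, tag))                 # True
print(verify_content(b"Tampered statement.", tag))  # False
```

The complementary watermarking problem, marking AI-generated output so it can be detected later, is much harder, since a watermark must survive paraphrasing and editing; signatures only have to prove that a specific unmodified artefact came from a specific source.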

The race to the bottom on AI safety that we’re seeing right now must stop. In last month’s hearings before the US Congress, both Gary Marcus, an AI expert, and Sam Altman, CEO of OpenAI, made calls for new international governance bodies to be created specifically for AI – akin to bodies that govern nuclear security. The EU is far ahead of the US on this, but sadly its pioneering EU Artificial Intelligence Act may not fully come into force until 2025 or later. That’s far too late to make a difference in this race.

Until new laws and new governing bodies are in place, we will, unfortunately, have to rely on the forbearance of tech CEOs to stop the most powerful and dangerous tools falling into the wrong hands. So please, CEOs: let’s slow down a bit before you break democracy. And lawmakers: make haste.
The observer of fools in military south and north...

Fox3WheresMyBanana
Chief Pilot
Posts: 12985
Joined: Thu Sep 03, 2015 9:51 pm
Location: Great White North
Age: 61

Re: Thought for the day - AI

#10 Post by Fox3WheresMyBanana » Fri Jun 16, 2023 1:54 pm

"I worked on the Facebook civic integrity team. I dedicated years of my life to fighting online interference in democracy."
God save us.
He thinks Facebook is the solution, not part of the problem.
And these people are writing and controlling the development of AI.

Tech CEOs, and their minions, ARE the wrong hands!

Fox3WheresMyBanana
Chief Pilot
Posts: 12985
Joined: Thu Sep 03, 2015 9:51 pm
Location: Great White North
Age: 61

Re: Thought for the day - AI

#11 Post by Fox3WheresMyBanana » Sat Jul 01, 2023 8:14 am

As Twitter has just revealed, AI is being 'trained' on social media.
And of course mainstream media.

It will just become another bubble-dwelling, self-righteous liberal, shouting out of the Overton Window.

Replacing humans with AI will be like doing the Service Writing course.
The content will go from slightly disorganised common sense to perfectly arranged b#llocks.

OneHungLow
Chief Pilot
Posts: 2140
Joined: Thu Mar 30, 2023 8:28 pm
Location: Johannesburg

Re: Thought for the day - AI

#12 Post by OneHungLow » Thu Sep 14, 2023 8:13 pm


Fox3WheresMyBanana
Chief Pilot
Posts: 12985
Joined: Thu Sep 03, 2015 9:51 pm
Location: Great White North
Age: 61

Re: Thought for the day - AI

#13 Post by Fox3WheresMyBanana » Tue Feb 27, 2024 8:35 pm

"It will just become another bubble-dwelling, self-righteous liberal, shouting out of the Overton Window."
...and with the debut of Google's Gemini, where every historical figure appears to be either black or female, or both, I think I am proved correct.


Here's a good analysis of what's going on. Click on Show more if you are interested; it's long.


OFSO
Chief Pilot
Posts: 18600
Joined: Sat Aug 22, 2015 6:39 pm
Location: Teddington UK and Roses Catalunia
Age: 80

Re: Thought for the day - AI

#14 Post by OFSO » Wed Feb 28, 2024 11:08 am

I haven't used Google for years. I go to the duck via Tor.

unifoxos
Capt
Posts: 959
Joined: Mon Aug 31, 2015 10:36 am
Location: Twycross Zoo, or thereabouts
Age: 78

Re: Thought for the day - AI

#15 Post by unifoxos » Wed Feb 28, 2024 11:13 am

I call it Artificial Idiocy.

Apart from the general demonstrations of total stupidity by the children who write software nowadays - for example, assuming that the approximately one million people in the UK who have no mobile phone signal can somehow receive an SMS code to complete registration on their website - the much-vaunted AI that seems to be infesting every aspect of everyday life strikes me as a case of the Emperor's New Clothes.

For example, if large selling organisations such as Amazon are using AI to send us "targeted advertisements", why do we keep receiving plugs for rarely-purchased items we have already bought from them? If I buy a cooker, I don't want another one a week later! I don't want to be "targeted" with every "special offer" for another cooker - I spent long enough the first time deciding which one to buy, and I'm not likely to be needing another for ten years MINIMUM.

Another really irksome presumed user of AI is the media. There is nothing worse than sitting through an advert for a "new" detective drama series that, as a registered and logged-in user of the channel, you have already watched - or, in some recent cases, are actually in the middle of watching.

And if I am told that an FBW aircraft is using AI - don't expect me to fly in it.
Sent from my tatty old Windoze PC.

Fox3WheresMyBanana
Chief Pilot
Posts: 12985
Joined: Thu Sep 03, 2015 9:51 pm
Location: Great White North
Age: 61

Re: Thought for the day - AI

#16 Post by Fox3WheresMyBanana » Wed Feb 28, 2024 11:44 am

What is an advert?
I use Adblockers, location blockers, Don't track, Clear Cookies at end of session. My Province bans roadside billboards. I don't watch TV (I get DVDs of programs/series that are any good). I genuinely haven't seen an ad for a decade.
There have been roadside billboards in other provinces, but I don't have a problem ignoring them.

I do agree with you about AI idiocy, although I think the big corporations are now only interested in the mainstream of the population. I see this in almost every aspect of both business and government. If you aren't 'average', they aren't interested. And with government regulation increasingly suppressing small business, which would otherwise cater to the smaller groups of 'non-average', one is left with the options of becoming average or missing out. And whilst I am not convinced that this is by design, it is an inevitability.

My personal solution, which is what I enjoy anyway, is to become self-sufficient. I make my own stuff with a good workshop, grow an increasing amount of my own food, etc. I'm also lucky enough to live somewhere that has had a long history of winter isolation, and is quite rural, so there are still many small businesses and individuals who can provide necessary services. I also find individual employees, for both bigger businesses and government, are on your side rather than their employers, and will help you with the workarounds when their 'Computer says No'.

Fox3WheresMyBanana
Chief Pilot
Posts: 12985
Joined: Thu Sep 03, 2015 9:51 pm
Location: Great White North
Age: 61

Re: Thought for the day - AI

#17 Post by Fox3WheresMyBanana » Wed Feb 28, 2024 1:08 pm

AI can do nothing more than what it is trained to. And long before AI hit the scene, lower level employees were having any authority removed, and rigid adherence to policies and rules was being enforced.
Since AI is designed to replace those lower level workers, it follows that AI is not going to consider any special cases, or exhibit any flexibility outside its programming.
And AI doesn't care. It trots out the standard PR BS, admits zero responsibility, and refers you to an endless loop of FAQs and standard instructions. These don't solve your problem, nor let you access a real person with authority.
And AI doesn't complain to the boss, strike, get pregnant, quit, or sue its employer. And there is a very good reason for Google removing 'Don't be evil' as its company motto; AI has no morals.
Actually, the ideal job for AI is as a politician. The corporatocracy would just love politicians that were AI. It would save on bribes. And you would be hard-pushed to tell the difference between what an AI spews out, and how current politicians react to the myriad of problems they have created or failed to solve.

Boac
Chief Pilot
Posts: 17208
Joined: Fri Aug 28, 2015 5:12 pm
Location: Here

Re: Thought for the day - AI

#18 Post by Boac » Wed Feb 28, 2024 2:15 pm

"AI can do nothing more than what it is trained to."
Since the whole concept is built around the ability of a system to learn, correct and enhance its capabilities, that means AI is virtually limitless in its abilities. That is the worrying bit.

Fox3WheresMyBanana
Chief Pilot
Posts: 12985
Joined: Thu Sep 03, 2015 9:51 pm
Location: Great White North
Age: 61

Re: Thought for the day - AI

#19 Post by Fox3WheresMyBanana » Wed Feb 28, 2024 2:31 pm

Any computerised learning system is limited by what it is connected to.
If the AI 'learns' that you are a decent, honest customer, and that you appear to have a valid complaint, then it could, for example, issue you a refund.
But it can't if it isn't connected to the accounts system.
Likewise, an AI drone can't drop a bomb if a human still controls the Late Arm switch via satellite.

I agree that AI, when connected to all effector systems, is a major concern.
I made the point earlier about fuzzy logic being able to self-justify anything, and this is the theme of the movie 'Terminator 2'.

However, as I said in the last post, AI is being given highly limited or no authority or control over effector systems.
This is because those in charge don't want anything, electronic or human, at a lower level making effective decisions.

And if Google, after vast amounts of time and money, comes up with something as screamingly bad as they have, it's not likely anyone will be hooking AI up to the nukes, or even a company's bank account, anytime soon.
Air Canada was just handed a legal loss because its AI chatbot gave a customer bad advice and the customer sued. They disconnected the whole thing immediately.

Fox3WheresMyBanana
Chief Pilot
Posts: 12985
Joined: Thu Sep 03, 2015 9:51 pm
Location: Great White North
Age: 61

Re: Thought for the day - AI

#20 Post by Fox3WheresMyBanana » Wed Mar 20, 2024 10:54 pm

The dead giveaway for AI-written articles is that they appear to be written for eight-year-olds: endless repetition and words of one syllable. No quips, personal commentary, or insight.
