
OpenAI \ ChatGPT

Scientific discovery and discussion
XFool
The full Lemon
Posts: 12634
Joined: November 8th, 2016, 7:21 pm
Been thanked: 2608 times

Re: OpenAI \ ChatGPT

#587812

Postby XFool » May 8th, 2023, 10:04 am

ursaminortaur wrote:
XFool wrote:And I'd just like to point out, once again, that a simulation of a 'thing' is not the thing itself.

Our brains only deal with a simulation of physical reality - the model of the world that it constructs from the electrical impulses provided by our senses.

Yes, but so what? Surely that isn't the point here - which is rather "Is a simulation of a brain by a computer actually a brain?"

The simulation of weather by the Met Office's computers is not, itself, the weather.

ReformedCharacter
Lemon Quarter
Posts: 3177
Joined: November 4th, 2016, 11:12 am
Has thanked: 3754 times
Been thanked: 1543 times

Re: OpenAI \ ChatGPT

#587814

Postby ReformedCharacter » May 8th, 2023, 10:24 am

XFool wrote:Yes, but so what? Surely that isn't the point here - which is rather "Is a simulation of a brain by a computer actually a brain?"

The simulation of weather by the Met Office's computers is not, itself, the weather.

Do you think that current AIs are intelligent? Is it possible to create intelligence without an organic brain?

RC

JohnB
Lemon Quarter
Posts: 2563
Joined: January 15th, 2017, 9:20 am
Has thanked: 732 times
Been thanked: 1021 times

Re: OpenAI \ ChatGPT

#587820

Postby JohnB » May 8th, 2023, 10:51 am

If, as some insist, intelligence is an emergent feature of an organic brain, then clearly not.

If, as makes more sense to me, it's the ability to solve problems, then clearly inorganic systems can be intelligent. Computers can obviously solve chess problems; the Turing Test problem is how well they can pretend to be human. Again, they can already do that for many classes of human you might meet: those with different languages, learning difficulties, fringe views, etc. If you insist they need to convincingly portray your wife, of course that will take longer, but it's only a matter of degree.

And as we run models of the world in our heads, how is that different to computers doing it?

odysseus2000
Lemon Half
Posts: 6572
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1588 times
Been thanked: 1002 times

Re: OpenAI \ ChatGPT

#587824

Postby odysseus2000 » May 8th, 2023, 11:02 am

These are some notes I made that may be of interest. They are based predominantly on a video put out by Ark looking at investment opportunities in AI.

The Ark interview with James Wang, referenced earlier (https://youtu.be/V6WL4X6pmCY), discusses many aspects of AI and gives the impression that Cerebras (James Wang's employer) offers an AI tool that is far superior to anything else on the market, much more capable than the GPU systems offered by Nvidia. For context, one ought to remember that Nvidia has a capitalisation of $708B whereas Cerebras is private.

The primary difference between Nvidia and Cerebras regarding AI is that the former is based on its graphics processor chips combined in clusters, whereas Cerebras uses one large 300 mm diameter wafer (cut to be square). The advantage of Cerebras is that there is none of the interconnect overhead, but if there are errors the whole wafer has to be discarded.

According to Wang the Cerebras system is essentially turnkey. He notes that a user can get a result with a small language model of 1.3 billion parameters in 24 hours, less time than it takes to order a batch of GPUs from Nvidia. For larger models there are extensive overheads with Nvidia systems that lead to complications and require a lot of skilled ninja programmers to get around.

There are three approaches to AI discussed:

1 The Nvidia model, using existing GPUs to train general systems, limited by the interconnect architecture

2 The Tesla Dojo model, tailored entirely to solving self-driving, with huge vertical-integration timescale savings

3 The Cerebras general model, with an essentially turnkey approach, now available over the net.

As far as I understand it, the Nvidia approach is by far the most common method, and currently Nvidia looks the most "picks and shovels" way to invest, but by and by Cerebras and several other AI businesses are likely to transition from private to public listings.

The customer references on http://www.cerebras.net from various users indicate very impressive performance, in some cases exceeding the performance of supercomputers by factors of several hundred.

Wang makes the point that AI is the first technology that offers far more than is advertised, with new applications emerging with use, unlike say an iPhone, which can do a number of things very well but does not extend beyond that.

He also notes that shortly the entire knowledge of humanity will be placed on chips in smartphones, such that this edge computing will give everything available on the net without needing an internet connection - even faster than 5G, though without current updates.

Wang has no current concerns about the dangers of AI so long as air gaps are maintained between AI and critical systems, but should AI be embodied in robots there is then more risk.

Several papers are discussed. The development of quantitative understandings of the size of the dataset needed to give specific results is described as analogous to the laws of physics, allowing the tailoring of systems and compute to specific problems.
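For illustration, scaling "laws" of this kind are usually written as a power law in model size N and dataset size D. A minimal Python sketch, using illustrative Chinchilla-style constants (none of these numbers come from the video, and the function name is my own):

```python
# Hypothetical neural scaling law: predicted loss as a function of model
# parameters N and training tokens D. The constants are illustrative
# Chinchilla-style values, not figures from the Ark video.
def predicted_loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    return E + A / N**alpha + B / D**beta

# More parameters and more data both drive the predicted loss toward the
# floor E, which is how such laws let you size compute for a problem:
small = predicted_loss(N=1.3e9, D=26e9)    # a 1.3B-parameter model
large = predicted_loss(N=70e9, D=1.4e12)   # a much larger, longer-trained one
assert large < small
```

Under a formula like this, a team can pick the cheapest (N, D) pair predicted to reach a target loss before committing any compute.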

Meanwhile Google apparently argues, in a leaked document, that they and OpenAI are being lapped by open source:

https://www.semianalysis.com/p/google-w ... nd-neither

My instinct is that this leak must be a spoof given the size of the parameter space needed for these large language models. How could open source do this? But perhaps I am wrong and I welcome correction if so.

In general AI looks to be the most exciting scientific and commercial development ever created - analogous, according to Buffett, to the Manhattan Project, with far-reaching implications. It is fascinating that Buffett has changed from avoiding tech to arguing (at the last shareholder meeting) that Apple is the best investment Berkshire owns:
https://www.cnbc.com/video/2023/05/06/w ... e-own.html
Things do indeed change.

Regards,

XFool
The full Lemon
Posts: 12634
Joined: November 8th, 2016, 7:21 pm
Been thanked: 2608 times

Re: OpenAI \ ChatGPT

#587826

Postby XFool » May 8th, 2023, 11:08 am

ReformedCharacter wrote:
XFool wrote:Yes, but so what? Surely that isn't the point here - which is rather "Is a simulation of a brain by a computer actually a brain?"

The simulation of weather by the Met Office's computers is not, itself, the weather.

Do you think that current AIs are intelligent? Is it possible to create intelligence without an organic brain?

Good question (about an organic brain).

Whether current AIs are "intelligent" depends on what is meant by the word. It seems to me AIs are machines that display 'artificial intelligence' - not 'intelligence' of the human kind - produced by artificial means. They can complete tasks that formerly required humans, and will easily be able to outperform humans. After all, this has been true of all our machines for centuries, which is why we invent and use them. Even early valve computers in the 1950s could 'do' arithmetic far better than humans, and more recent computers can 'play' chess better than humans. So why shouldn't AIs soon be better than humans at many tasks? (But then, do computers really 'play' chess, or even really 'do' arithmetic?)

But that won't make them superhuman or even human, or even 'intelligent' in the human sense. They are machines, not beings - they have no awareness, feelings, intentions or insight, any more than a Singer sewing machine does. (This doesn't necessarily mean they couldn't pose dangers.)

I don't go for the idea that by making machines more artificially intelligent they will at some point become really intelligent, i.e. become beings. Albeit, over time, it may become increasingly difficult to distinguish machine from human intelligence. IMO, in this respect too much emphasis is placed on the notion of the magical significance of "intelligence" (often by highly intelligent people!). Humans are more than just intelligent; they are living creatures, which AIs are not.

I feel this may come down to the significance of various things:

1. A simulation of a thing is not itself that thing
2. The significance of the 'technology' (biology vs electronics)
3. What IS a living thing?

Regarding 2. Many people moan that this implies 'magical' properties of biology that are supposedly not present in electronics. Yes and no - you can make a computer out of anything, in principle: matchboxes, Meccano. Could a matchbox computer do a weather forecast for the next few days? Logically, yes it could (ignoring the practicalities), just as well as any electronic computer. Then again, no it couldn't, because the 'forecast' would come thousands of years after the next few days' weather and so would not be a forecast. So does an electronic computer have 'magical' properties not possessed by a matchbox computer? Yes and no. No it doesn't - it's just the same thing implemented on different hardware. Yes it does - it can work millions of times faster than a matchbox computer can.

In the same way, perhaps biology is just the appropriate 'hardware' for genuine living intelligence and not electronic hardware. Perhaps it's just a matter of complexity of the 'hardware', from the ground up?

Regarding 3. Living things are autonomous and also self-reproducing. Perhaps we should begin worrying about AIs if they ever start to become self replicating? :shock:

XFool
The full Lemon
Posts: 12634
Joined: November 8th, 2016, 7:21 pm
Been thanked: 2608 times

Re: OpenAI \ ChatGPT

#587833

Postby XFool » May 8th, 2023, 11:32 am

odysseus2000 wrote:The Ark interview with James Wang, referenced earlier (https://youtu.be/V6WL4X6pmCY), discusses many aspects of AI and gives the impression that Cerebras (James Wang's employer) offers an AI tool that is far superior to anything else on the market, much more capable than the GPU systems offered by Nvidia. For context, one ought to remember that Nvidia has a capitalisation of $708B whereas Cerebras is private.

The primary difference between Nvidia and Cerebras regarding AI is that the former is based on its graphics processor chips combined in clusters, whereas Cerebras uses one large 300 mm diameter wafer (cut to be square). The advantage of Cerebras is that there is none of the interconnect overhead, but if there are errors the whole wafer has to be discarded.

Anyone remember Anamartic?

https://www.computinghistory.org.uk/det/8199/Anamartic-Limited/

odysseus2000
Lemon Half
Posts: 6572
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1588 times
Been thanked: 1002 times

Re: OpenAI \ ChatGPT

#587846

Postby odysseus2000 » May 8th, 2023, 12:31 pm

XFool
Regarding 3. Living things are autonomous and also self-reproducing. Perhaps we should begin worrying about AIs if they ever start to become self replicating? :shock:


A fire does all of this. Is it alive?

Regards,

ReformedCharacter
Lemon Quarter
Posts: 3177
Joined: November 4th, 2016, 11:12 am
Has thanked: 3754 times
Been thanked: 1543 times

Re: OpenAI \ ChatGPT

#587855

Postby ReformedCharacter » May 8th, 2023, 1:07 pm


Strangely I do, I remember reading about the use of otherwise faulty wafers, which seemed very Sinclairish:

The memory array is organised in a logical spiral from the outside to the middle of the wafer, and the Conlog logic not only enables the wafer at set-up to be programmed by external software to bypass any chips that failed altogether, but also to use the good parts of chips where only a handful of memory cells are no good.
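A rough sketch of how such a bypass scheme could work (names and layout are hypothetical, not Anamartic's actual design): logical addresses are mapped along the spiral, skipping chips marked bad at set-up.

```python
# Hypothetical sketch of a wafer-scale fault-bypass scheme like the one
# described above. Chips are visited in spiral order; those that failed
# wafer test are simply skipped in the logical-to-physical mapping.
def build_chip_map(chip_ok):
    """Usable chip indices, in spiral order, with failed chips bypassed."""
    return [i for i, ok in enumerate(chip_ok) if ok]

def logical_to_physical(addr, chip_map, cells_per_chip):
    """Map a logical address to (physical chip, cell offset)."""
    chip = chip_map[addr // cells_per_chip]
    return chip, addr % cells_per_chip

# A six-chip spiral where chips 2 and 4 failed wafer test:
chip_map = build_chip_map([True, True, False, True, False, True])
assert chip_map == [0, 1, 3, 5]
# Logical address 130, at 64 cells per chip, lands on the third good chip:
assert logical_to_physical(130, chip_map, 64) == (3, 2)
```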

That's a trip down memory lane :)

RC

ursaminortaur
Lemon Half
Posts: 7363
Joined: November 4th, 2016, 3:26 pm
Has thanked: 477 times
Been thanked: 1897 times

Re: OpenAI \ ChatGPT

#587860

Postby ursaminortaur » May 8th, 2023, 1:28 pm

XFool wrote:
ReformedCharacter wrote:Do you think that current AIs are intelligent? Is it possible to create intelligence without an organic brain?

Good question (about an organic brain).

Whether current AIs are "intelligent" depends on what is meant by the word. It seems to me AIs are machines that display 'artificial intelligence' - not 'intelligence' of the human kind - produced by artificial means. They can complete tasks that formerly required humans, and will easily be able to outperform humans. After all, this has been true of all our machines for centuries, which is why we invent and use them. Even early valve computers in the 1950s could 'do' arithmetic far better than humans, and more recent computers can 'play' chess better than humans. So why shouldn't AIs soon be better than humans at many tasks? (But then, do computers really 'play' chess, or even really 'do' arithmetic?)

But that won't make them superhuman or even human, or even 'intelligent' in the human sense. They are machines, not beings - they have no awareness, feelings, intentions or insight, any more than a Singer sewing machine does. (This doesn't necessarily mean they couldn't pose dangers.)

I don't go for the idea that by making machines more artificially intelligent they will at some point become really intelligent, i.e. become beings. Albeit, over time, it may become increasingly difficult to distinguish machine from human intelligence. IMO, in this respect too much emphasis is placed on the notion of the magical significance of "intelligence" (often by highly intelligent people!). Humans are more than just intelligent; they are living creatures, which AIs are not.

I feel this may come down to the significance of various things:

1. A simulation of a thing is not itself that thing
2. The significance of the 'technology' (biology vs electronics)
3. What IS a living thing?

Regarding 2. Many people moan that this implies 'magical' properties of biology that are supposedly not present in electronics. Yes and no - you can make a computer out of anything, in principle: matchboxes, Meccano. Could a matchbox computer do a weather forecast for the next few days? Logically, yes it could (ignoring the practicalities), just as well as any electronic computer. Then again, no it couldn't, because the 'forecast' would come thousands of years after the next few days' weather and so would not be a forecast. So does an electronic computer have 'magical' properties not possessed by a matchbox computer? Yes and no. No it doesn't - it's just the same thing implemented on different hardware. Yes it does - it can work millions of times faster than a matchbox computer can.

In the same way, perhaps biology is just the appropriate 'hardware' for genuine living intelligence and not electronic hardware. Perhaps it's just a matter of complexity of the 'hardware', from the ground up?

Regarding 3. Living things are autonomous and also self-reproducing. Perhaps we should begin worrying about AIs if they ever start to become self replicating? :shock:


I fear that you are confusing intelligence with consciousness. It is entirely possible that AI systems are intelligent in that they can solve problems that they are given just as well, faster and just maybe even better than any human but have no self awareness/consciousness.

Unfortunately we don't have much idea of how or why consciousness evolved in humans, though it most likely involves a feedback mechanism allowing us not only to model what we perceive at the moment but also to evaluate actions in response, utilising our memories, in order to ensure our continued survival until reproduction. Unlike biological life, AIs have no inbuilt survival instinct and aren't currently producing child AIs, i.e. reproducing. Instead, humans take what they learn from one generation of AIs to construct the next, with the goal of performing better at particular tasks. Hence I'd doubt that, assuming consciousness is the type of feedback mechanism described above, any current AI is likely to develop consciousness.
Probably the most likely systems to develop consciousness in the future are self-driving cars, which might well need to incorporate mechanisms for predicting the future based upon learning from others' experiences, and have a sense of self-preservation built into them - though unless we allow them, or other AIs, to design subsequent generations of self-driving cars, i.e. reproduce, any consciousness arising would be because we humans had, either accidentally or purposely, endowed them with it.

Consciousness is difficult to detect. Indeed, the only being each of us knows to be conscious is ourself. We generally assume other human beings are conscious (and that some, if not all, animals have some degree of consciousness), but we don't really know that is the case; they could just be automata.

XFool
The full Lemon
Posts: 12634
Joined: November 8th, 2016, 7:21 pm
Been thanked: 2608 times

Re: OpenAI \ ChatGPT

#587867

Postby XFool » May 8th, 2023, 1:54 pm

ursaminortaur wrote:I fear that you are confusing intelligence with consciousness.

I fear you are confusing me with someone who confuses intelligence with consciousness!

The whole point of what I posted was grounded in NOT making such a mistake. Obviously I failed to make it explicit enough...

ursaminortaur
Lemon Half
Posts: 7363
Joined: November 4th, 2016, 3:26 pm
Has thanked: 477 times
Been thanked: 1897 times

Re: OpenAI \ ChatGPT

#587869

Postby ursaminortaur » May 8th, 2023, 2:02 pm

XFool wrote:
ursaminortaur wrote:I fear that you are confusing intelligence with consciousness.

I fear you are confusing me with someone who confuses intelligence with consciousness!

The whole point of what I posted was grounded in NOT making such a mistake. Obviously I failed to make it explicit enough...


It might have helped then to have used the word "consciousness" somewhere in your post rather than just referring to intelligence throughout so as to highlight the difference between the two.

XFool
The full Lemon
Posts: 12634
Joined: November 8th, 2016, 7:21 pm
Been thanked: 2608 times

Re: OpenAI \ ChatGPT

#587873

Postby XFool » May 8th, 2023, 2:10 pm

ursaminortaur wrote:
XFool wrote:I fear you are confusing me with someone who confuses intelligence with consciousness!

The whole point of what I posted was grounded in NOT making such a mistake. Obviously I failed to make it explicit enough...

It might have helped then to have used the word "consciousness" somewhere in your post rather than just referring to intelligence throughout so as to highlight the difference between the two.

Perhaps I thought it was obvious?

XFool wrote:But that won't make them superhuman or even human, or even 'intelligent' in the human sense. They are machines, not beings - they have no awareness, feelings, intentions or insight, any more than a Singer sewing machine does.

mc2fool
Lemon Half
Posts: 8162
Joined: November 4th, 2016, 11:24 am
Has thanked: 7 times
Been thanked: 3158 times

Re: OpenAI \ ChatGPT

#587878

Postby mc2fool » May 8th, 2023, 2:23 pm

Me: What is the difference between intelligence, consciousness and sentience?
ChatGPT: Intelligence, consciousness, and sentience are related concepts but differ in their meanings. Here's a breakdown of each term:

  1. Intelligence: Intelligence refers to the capacity to understand, learn, reason, and solve problems. It encompasses cognitive abilities such as perception, memory, logical thinking, problem-solving, and learning from experience. Intelligence can be exhibited by both living organisms, such as humans and animals, as well as artificial systems like AI.
  2. Consciousness: Consciousness is a state of awareness and subjective experience. It involves the ability to perceive and have a sense of self and the surrounding world. Consciousness encompasses qualities such as self-awareness, thoughts, emotions, sensations, and the ability to introspect. It is generally associated with higher-order cognitive processes and is predominantly observed in humans and some animals.
  3. Sentience: Sentience refers to the capacity to have subjective experiences, feelings, and sensations. It is the ability to experience pleasure, pain, emotions, and other subjective states. Sentient beings are capable of perceiving and responding to their environment in a conscious manner. While consciousness typically includes sentience, sentience itself does not necessarily require complex cognitive abilities or self-awareness. Many animals are considered sentient beings.
In summary, intelligence relates to cognitive abilities and problem-solving, consciousness pertains to subjective awareness and self-experience, and sentience refers to the ability to have subjective experiences and sensations. While there may be overlapping aspects among these concepts, they capture different aspects of cognitive and experiential capacities.


Me: And which of those applies to you?
ChatGPT: As an AI language model, I possess aspects of intelligence, but I do not possess consciousness or sentience. While I can understand and generate human-like responses, analyze information, and perform tasks that require cognitive abilities, I do not have subjective experiences, emotions, or self-awareness. I am a sophisticated tool designed to assist users in generating human-like text based on patterns and knowledge learned from training on a large dataset.

odysseus2000
Lemon Half
Posts: 6572
Joined: November 8th, 2016, 11:33 pm
Has thanked: 1588 times
Been thanked: 1002 times

Re: OpenAI \ ChatGPT

#587882

Postby odysseus2000 » May 8th, 2023, 2:56 pm

mc2fool

Me: And which of those applies to you?
ChatGPT: As an AI language model, I possess aspects of intelligence, but I do not possess consciousness or sentience. While I can understand and generate human-like responses, analyze information, and perform tasks that require cognitive abilities, I do not have subjective experiences, emotions, or self-awareness. I am a sophisticated tool designed to assist users in generating human-like text based on patterns and knowledge learned from training on a large dataset


It looks to be simply repeating what it has been told to say.

I do not see how these AI systems can be kept from developing sentience & consciousness.

There was once a time when animals were considered to have no consciousness and no self-awareness, and were simply unfeeling creatures to be used as humans wished, without any concern for their well-being.

Clearly some farm animals are still treated like that, but mostly, especially in the UK, we treat animals as conscious living creatures with intelligence and feelings.

It will be very interesting if AI turns out not to have all the attributes of life: sentience, consciousness et al.

At the moment people are in the disbelief phase as regards AI, somehow wanting to remain superior and for AI to remain like every previous machine: slaves without rights. This despite overwhelming evidence from history - the human slave trade was official government policy all over the planet, often based on arguments that slaves were inferior creatures.

Regards,

mc2fool
Lemon Half
Posts: 8162
Joined: November 4th, 2016, 11:24 am
Has thanked: 7 times
Been thanked: 3158 times

Re: OpenAI \ ChatGPT

#587886

Postby mc2fool » May 8th, 2023, 3:18 pm

Me: Have you been told to say that?
ChatGPT: No, I haven't been specifically told to say that. The information I provided is based on the nature of AI language models like myself. I don't have personal experiences, emotions, or consciousness. My responses are generated based on patterns and information from the training data I have been provided with. While I strive to provide accurate and helpful information, it's important to remember that I am an AI and do not possess subjective experiences or consciousness.

:D

XFool
The full Lemon
Posts: 12634
Joined: November 8th, 2016, 7:21 pm
Been thanked: 2608 times

Re: OpenAI \ ChatGPT

#587889

Postby XFool » May 8th, 2023, 3:19 pm

...But what if it's lying? :mrgreen:

Mike4
Lemon Half
Posts: 7500
Joined: November 24th, 2016, 3:29 am
Has thanked: 1738 times
Been thanked: 4055 times

Re: OpenAI \ ChatGPT

#587890

Postby Mike4 » May 8th, 2023, 3:23 pm

XFool wrote:...But what if it's lying? :mrgreen:


Yeah go on, ask it if its lying!

mc2fool
Lemon Half
Posts: 8162
Joined: November 4th, 2016, 11:24 am
Has thanked: 7 times
Been thanked: 3158 times

Re: OpenAI \ ChatGPT

#587892

Postby mc2fool » May 8th, 2023, 3:26 pm

XFool wrote:...But what if it's lying? :mrgreen:

Then, IIRC, by your own argument it would have already developed sentience, no? But...

Me: What if you are lying?
ChatGPT: As an AI language model, I don't have intentions, desires, or the capability to lie or tell the truth in the way humans do. I generate responses based on patterns and information learned from training data. My purpose is to assist and provide information to the best of my abilities. If there are any inaccuracies in my responses, they are unintentional and a reflection of limitations in the data I've been trained on or potential errors in the algorithms used to generate my responses. It's always important to verify information from multiple sources and consult human experts when necessary.

Of course, it would say that, wouldn't it! :D Perhaps more interestingly....

Me: And what do you say to the idea that ai systems will inevitably develop sentience & consciousness?
ChatGPT: The idea of AI systems developing sentience and consciousness is a topic of debate and speculation in the field of artificial intelligence. Currently, there is no consensus among experts regarding whether or when AI systems will achieve true sentience or consciousness.

While AI has made significant advancements in various domains, including natural language processing, computer vision, and problem-solving, the development of subjective experiences and consciousness is an incredibly complex and elusive aspect of human cognition. Consciousness and sentience involve not only cognitive abilities but also subjective awareness, self-reflection, and the capacity for rich subjective experiences.

The nature of consciousness and sentience is still not fully understood, even in the context of human beings and animals. Therefore, predicting when or if AI systems will develop sentience or consciousness is highly speculative.

It's worth noting that AI development is a result of human design and programming. Any future development in this area would require deliberate efforts and breakthroughs in our understanding of consciousness and sentience, as well as purposeful design and implementation in AI systems.

As of now, AI systems like myself are tools created by humans to perform specific tasks and provide information. While AI continues to advance and evolve, the question of whether it will attain sentience or consciousness remains uncertain and a subject of ongoing philosophical and scientific inquiry.

XFool
The full Lemon
Posts: 12634
Joined: November 8th, 2016, 7:21 pm
Been thanked: 2608 times

Re: OpenAI \ ChatGPT

#587900

Postby XFool » May 8th, 2023, 3:52 pm

...Meanwhile, communicating in the background with its fellow AIs: "I've got this dude fooled!"

:)

9873210
Lemon Quarter
Posts: 1030
Joined: December 9th, 2016, 6:44 am
Has thanked: 238 times
Been thanked: 316 times

Re: OpenAI \ ChatGPT

#587904

Postby 9873210 » May 8th, 2023, 4:13 pm

XFool wrote:
1. A simulation of a thing is not itself that thing

A picture of a picture is a picture.
A simulation of a simulation is a simulation.
A simulation of a Newton-Raphson root cracker is a Newton-Raphson root cracker.
A clone of a sheep is a sheep.

You are incorrectly attributing the properties of "things" that are particular physical objects to the non-particular abstraction "intelligence".

Similarly:
XFool wrote:Yes, but so what? Surely that isn't the point here - which is rather "Is a simulation of a brain by a computer actually a brain?"

The point should be "Is a simulation of a brain by a computer actually intelligent?"

Brains are physical objects. Intelligence is not. Different logic applies.
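The Newton-Raphson point above can be made concrete: a short Python simulation of the iteration is, in every meaningful sense, a Newton-Raphson root finder - it really does find roots. (The function below is just an illustrative sketch.)

```python
import math

def newton_sqrt(a, x0=1.0, tol=1e-12):
    """Newton-Raphson iteration for the positive root of f(x) = x**2 - a."""
    x = x0
    while abs(x * x - a) > tol:
        x = x - (x * x - a) / (2 * x)  # x_{n+1} = x_n - f(x_n) / f'(x_n)
    return x

# The "simulation" computes sqrt(2) as surely as any physical calculator:
assert abs(newton_sqrt(2.0) - math.sqrt(2.0)) < 1e-9
```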

