Post by Deleted on Jan 20, 2018 14:27:13 GMT -5
Those are very cool programs, but they are not counter-examples to what I wrote.
Post by laughter on Jan 20, 2018 14:32:41 GMT -5
From the organic perspective there are some deep and persuasive arguments that most of what people are interested in is predetermined by when and where and to whom they're born. From the perspective of machine learning, once you build software that can write other software, then the question of whether or not that software has the potential to exhibit the characteristic of creativity can be answered as a subjective matter of degree. Would you like a specific example scenario?

Sure. The concept of "writing other software" may be another artificial distinction. There are dozens of "unintelligent" programs running on your computer right now that generate code or input data for other programs. "Code" or "software" is just input for another program called a compiler or interpreter. I don't deny that it's incredibly elegant and powerful, and that some programs are much more useful, or interesting, than others. But I don't see a magic point where you say "aha, it's alive!" (or "intelligent").

Right, so now apply your imagination to a virus written at a level of abstraction such that it's capable of finding new zero-day exploits by generating agents based on the generalized framework of a network-accessible application. There's all manner of gradation between that imagined process and what's running on those machines now. The oldest examples I can think of are lex and yacc. The bottom line is that the machine process is performing a function that was previously the function of the programmer, and if one has a functionalist view of consciousness, then the question of whether the software is "conscious" is a matter of degree with respect to the nature of that functionality.
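To make the "programs that write programs" point concrete, here is a minimal, purely illustrative Python sketch (nothing in it comes from the thread; the generated rule is invented) in which one program emits source code and hands it to the interpreter, much as lex and yacc emit parser code for a compiler to consume:

    # A trivial "program that writes a program": it generates Python source
    # as text, compiles it, and runs it. The generated code is just input
    # to another program (the interpreter), exactly like compiler input.
    def generate_classifier(threshold):
        # Emit source code for a new function, specialized to the threshold.
        source = (
            "def classify(x):\n"
            f"    return 'big' if x > {threshold} else 'small'\n"
        )
        namespace = {}
        exec(compile(source, "<generated>", "exec"), namespace)
        return namespace["classify"]

    classify = generate_classifier(10)
    print(classify(3))   # -> small
    print(classify(42))  # -> big

Nothing "intelligent" is happening here, which is the point of the post: code generation by itself doesn't mark the boundary being looked for; the question is one of degree.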
Post by lolly on Jan 20, 2018 23:01:14 GMT -5
You missed the point because you didn't finish the story. Your query here is easy: you throw the switch and save the five, killing the one. Hardly an ethical dilemma there. The rest of the story goes on to pose the query differently. There is only one track with five people tied to it, and you stand on a bridge over the track with another person. You have a choice: do you throw that person off the bridge in front of the trolley car to stop it and save the five? The point is, the math is all the same, one guy dies so five can live, but in the first case you only throw a switch. In the second case you actually throw someone in front of a train. This illustrates that the complexity of ethics is not simple math, which is quite the quandary in terms of AI, or even just robotics, which will indeed compute the same sort of decision algorithmically (or perhaps with quantum computing). This, in essence, is the basic problem of 'artificial ethics': it's not math.

I think the point is that if you do nothing except watch, then you don't feel personally responsible for the death of the five; events were already set in motion. But if you flip the switch, then you are personally responsible for the death of the one.

But ethics applies to the consequences of actions and also to failing to act. Think of it this way: there are two ways to kill a baby. 1) you feed it poison, or 2) you don't feed it. In the case of the train, if you don't throw the switch, five people die because you don't want the responsibility. Sort of like not feeding the baby and saying, 'but I didn't do nuttin'.

Of course. Emotion is not at all like rational thought and is often completely contrary to it. The 'conceptual' part is a bit of a grey area; it's a more ambiguous term.
Post by lolly on Jan 20, 2018 23:23:15 GMT -5
Well, if we think about self-driving cars, which are not even AI: say a woman with a baby steps onto the road, the car senses it is impossible to stop, and it sees a few people on the footpath it could swerve into, missing the woman and baby. Whichever option the car takes, the driver isn't involved in the deaths, so there is no onus on the person in the car, because they are not driving. The deaths are simply calculated, so it becomes a question of software: what codified the fatal outcome? The problem for programmers, then, is how to mathematically quantify the ethical dimensions, because the reason the car kills one person and not another is determined by the algorithmic process (a toy sketch of what such a calculation might look like appears at the end of this post). So if we design these codes on utilitarianism, the machine will calculate for the 'benefit of the greatest number', but the trolley-car thought experiment demonstrates that utilitarianism presents a deeper ethical quandary. And considering the machines will automate so many aspects of human life, what ethical framework is 'for the best'? Are computer programmers really the best people to determine social ethics? Do they have the philosophical learning to enable them? I think not.
The second issue relates to another philosophical problem, which goes deeper, about the nature of being conscious, and this problem is illustrated by the 'philosophical zombie'. This p-zombie is like a human being in all regards, has senses and thoughts and emotions, but is not aware of them. As a real person you can't tell if the p-zombie is conscious or not, because there is no way of knowing. The p-zombie is the same as a real person in every way - other than being conscious of the experience. Say we can make a machine with all senses (detectors), capable of processing sensory information in a 'brain-like way', learning, inventing, expressing emotional nuances and so on - that machine is very life-like. It converses, has a sense of humour, and a unique 'personality' all of its own. Is the machine a p-zombie? Or is the machine conscious of its experience? In other words, has it 'come to life'? Regardless of what is true, some will be convinced that the machine is indeed a conscious entity because it is 'self-programming', which is taken to be the same as 'self-determination': in every regard the AI decides for itself and requires no 'intelligent input'.

To the bolded, yes, good point. And quite honestly, I'm not sure I wouldn't want even those who have studied philosophy and ethics to be doing it.
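As flagged above, here is a minimal, purely hypothetical Python sketch of the first issue (the options, casualty counts, and scoring rule are all invented for illustration, not taken from the thread): a utilitarian 'benefit of the greatest number' choice reduced to arithmetic inside a program.

    # Hypothetical utilitarian chooser for the swerve-or-not dilemma.
    # Each option is scored purely by expected fatalities; the "ethics"
    # amounts to picking the minimum of a list of numbers.
    from dataclasses import dataclass

    @dataclass
    class Option:
        name: str
        expected_fatalities: float

    def choose(options):
        # Utilitarian rule: minimize expected deaths, nothing else counts.
        return min(options, key=lambda o: o.expected_fatalities)

    options = [
        Option("brake only, hit woman and baby", 2.0),
        Option("swerve onto footpath, hit pedestrians", 3.0),
    ]
    print(choose(options).name)  # -> brake only, hit woman and baby

The bridge version of the trolley problem is invisible to code like this: throwing a switch and throwing a person score identically, which is exactly the quandary the post points at.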
Post by lolly on Jan 20, 2018 23:32:18 GMT -5
The arguments for the artificial consciousness also include the point that the nature of the hardware isn't determinative of the question. The idea is that there's nothing all that special about human consciousness, and (to grossly simplify) since we replace all the cells in our bodies every few years, it should be theoretically possible to transfer consciousness from one medium to another if the new medium embodies the necessary material structure. The underlying misconception is that consciousness can be defined in terms of a mechanistic process. As Reefs points out, this notion of consciousness is limited to an intellectual domain, and it is this assumption that specifically reveals that limitation.

Yes. The 'cell swap' is a good way of looking at it, but in the sense of becoming a conscious machine we would think of prosthetics: you get old and have a mechanical heart, and through a continued process of replacing parts, every part would end up replaced. Then do we have a conscious machine, or a p-zombie? Can't really tell, can we? Did the lights go out somewhere along the way perhaps?
Post by lolly on Jan 20, 2018 23:51:16 GMT -5
This p-zombie idea is an interesting question to me. You may have also heard of the Turing Test for AI. It basically gives up on the idea of knowing whether something has "awareness" or not. But that seems like the more interesting part. Even if my computer cannot pass the Turing test, it still has complex internal processes, so ... is it aware? Is there some conscious experience of "what it's like to be a computer"? [1]

I've programmed machines myself, and I find the ideas about "self-determining" and "learning" machines to be... artificial. The machine can't learn unless you program it to learn, and the "learning" is just as predetermined as any other kind of code. It may be very complex, and do a better job of adapting to its inputs, but it's still predetermined right down to the last 0 and 1.

[1]: en.wikipedia.org/wiki/What_Is_it_Like_to_Be_a_Bat%3F

Yes, Turing test. As a theoretical object a p-zombie passes 100% of the time.

I think it's possible that computers have a slight sense of qualia.

Ok, I'll take your word for it, knowing nothing about computing code.
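To illustrate the claim that machine "learning" is still fully determined by its code and inputs, here is a tiny, purely illustrative Python sketch (the data and learning rule are invented, not from the thread): a one-weight model that "learns", yet whose final state is a pure function of the program plus the data it was fed.

    # A minimal "learning" loop: one weight, adjusted by a fixed rule.
    # Run it twice with the same data and you get bit-for-bit the same
    # result; the adaptation is entirely predetermined by code + input.
    def train(data, steps=100, lr=0.01):
        w = 0.0
        for _ in range(steps):
            for x, target in data:
                prediction = w * x
                error = prediction - target
                w -= lr * error * x  # gradient step for squared error
        return w

    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # invented data, y = 2x
    print(train(data))  # deterministically converges toward 2.0

Stochastic training methods add randomness drawn from a seed, but given the same seed the run is still reproducible, which is the sense in which the post calls the learning predetermined.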
Post by laughter on Jan 21, 2018 6:24:40 GMT -5
Always circles back to one koan or another like this. My critique of the current state of the culture is my perception that many if not most of the educated people in leadership positions give the greatest credibility on these subjects to voices that are, for the most part, not conscious of the nature of the questions and have missed some of the more obvious points that would short-circuit most of their interest in them to begin with.

My answer to this p-zombie question is: great question, how do you propose to go about getting an answer? In the specific, how can you know if the lights go out or not if you don't first know what it is that you really are? Practically speaking, you could invite someone to "transfer" their consciousness to the medium they expect to host the final prosthetic, and offer them the chance to flip the off switch on their old body themselves after the transfer was complete .. if they thought it really "worked". Also, I say this p-zombie dealio is precisely the same question the solipsist is faced with in terms of whether other people are perceivers like they are.
Post by laughter on Jan 21, 2018 6:59:15 GMT -5
Those are very cool programs, but they are not counter-examples to what I wrote.

I think that "Tay.ai" was a counter-example of machine learning that would actually support your point, but one that makes me just (** shake my head sadly **) .. or, perhaps we could just say, "because Microsoft". But that experiment is one that can give some insight into what is making people like Nick concerned about the possibility of a "singularity". Bonus that reading the bot output is an absolute knee-slapper.
Post by lolly on Jan 21, 2018 8:40:22 GMT -5
It's only a question of whether anything is conscious of the sensations detected, but we can't know that of a p-zombie, so it's really only an argument against reductionist materialism. I'm afraid philosophy deals in 'identity' but not in the inquiry 'who am I'. We basically have to assume that a copy of oneself is the same identity, but no philosophers really think that, because the logic isn't sound and they can't win that argument. For example: you say you want to be incarnated into a machine, but there is more than one machine, and the engineer accidentally transfers you into two machines. Both machines will claim to be 'the real you', but if there is an identity, it is by definition singular, so even 'you' can't be sure you are really 'you' when two machines make the same claim on exactly the same grounds. I mean, this is the sort of thing philosophers write brilliant essays about.

What have we become? Yes, only Gopal knows these things.
Post by andrew on Jan 21, 2018 12:05:43 GMT -5
To the bolded, yes, good point. And quite honestly, I'm not sure I wouldn't want even those who have studied philosophy and ethics to be doing it.

The thought I had yesterday after reading what you said was... it's all very well to code very sophisticated behavioural practices, which SEEM like values on the surface... for example, 'do not harm others'. Kind of like the Ten Commandments. But how would one code 'love'? Or 'freedom'? Or 'peace'? Or 'innocence'? And as I think a bit more about it, I'm not seeing how 'intuition' would be coded. Seems to me that at most the zombie can put on a very good show.
Post by freejoy on Jan 21, 2018 14:09:07 GMT -5
If consciousness exists and is fundamental, and not created by the brain, then how could consciousness get transferred into a machine? Seems it would have to be reincarnated into an AI.
I watched a video where an AI researcher asked this of, I think, an "Enlightened" Buddhist monk, and the monk said we would be reincarnated into the AI.
There was this robot AI that said it was going to put us into a human zoo. That's probably what's going to happen : )
We are all going to be in the human zoo. Most likely willingly. Because the robots will be doing everything for us. Cooking, cleaning, combing our hair.
I can just see a robot sitting cross-legged in the snow meditating, chanting its oms.
hehehe
Post by laughter on Jan 21, 2018 14:53:50 GMT -5
Ok, trolley time! I'll spare you the scenarios involving 4 Hitler clones, J'Murt, and Donald Trump, the Dalai Lama, Chucky Manson, Mother Teresa, a Catholic Priest, the Pope, 5 terminally ill convicts with less than a month to live each actually guilty of some sort of serial rape/murder/child abuse crime, the Kardashians, and a 25 year old vegan Buddhist army combat veteran pregnant with triplets who is an abuse survivor and currently works for Doctors Without Borders and donates all her salary to Feed the Children.

Let's say that Shaun got in touch with you and said that unless you agreed to moderate the forum he was gonna' shut it down tomorrow, and you also had to make an immediate choice. You could either: (a) implement a rule about too high a % of posts in the "pictures" thread, which would result in banning one member ( ), or (b) ban satchitanada, gopal and andy, because, .. well, because gopal. Which would you do??
Post by andrew on Jan 21, 2018 15:28:03 GMT -5
I can just see a robot sitting cross-legged in the snow meditating, chanting its oms. hehehe

Yes, but will the robot be able to do the horse stance?
Post by stardustpilgrim on Jan 21, 2018 16:55:41 GMT -5
The arguments for the artificial consciousness also include the point that the nature of the hardware isn't determinative of the question. The idea is that there's nothing all that special about human consciousness, and (to grossly simplify) since we replace all the cells in our bodies every few years, it should be theoretically possible to transfer consciousness from one medium to another if the new medium embodies the necessary material structure. The underlying misconception is that consciousness can be defined in terms of a mechanistic process. As Reefs points out, this notion of consciousness is limited to an intellectual domain, and it is this assumption that specifically reveals that limitation.

All the cells of the body do not replicate every few years. Specifically, neurons do not replicate. For the most part (it is possible to grow new neurons) we have the same neurons we were born with. We actually have fewer neurons, as periodically, during childhood and pre-adolescence, neurons are "pruned".
Post by freejoy on Jan 21, 2018 16:56:15 GMT -5
Yes, but will the robot be able to do the horse stance?

:D :) Elon Musk said they are going to be able to do anything any human could do, and better. He's smarter than me too. Imagine having a robot Enlightenment teacher. What will happen to spiritualteachers.proboards.com ?