|
Post by lolly on Jan 20, 2018 6:24:54 GMT -5
You missed the point because you didn't finish the story. Your query here is easy: you throw the switch and save the 5, killing the one. Hardly an ethical dilemma there. The rest of the story goes on to pose the query differently. There is only one track with 5 people tied on it, and you stand on a bridge over the track with another person. You have a choice: do you throw the person off the bridge in front of the trolley car to stop it and save the five? The point is, the math is all the same, one guy dies so five can live, but in the first case you only throw a switch, while in the second case you actually throw someone in front of a train. This illustrates that the complexity of ethics is not simple math, which is quite the quandary in terms of AI, or even just robotics, which will indeed compute the same sort of decision algorithmically (or perhaps by quantum computing). This is the basic problem in essence: 'artificial ethics'.

That's caught my attention. Yes, interesting dilemma. The AI would have to learn conscience, guilt or shame. Equally, if it can self-hate, then it would have to be able to self-love. The word 'soul' isn't very popular these days, but I suspect that only if AI became so advanced that the robot was imbued with 'soul' could we then say it is now alive, conscious, sentient etc. I don't see 'soul' as something that could be programmed though. In a way, AI would have to come to a point where its own programmed evolution is such that it transcends itself, so that the 'artificial' and the 'programmed' is lost. I don't know if it could. Maybe.

Well, if we think about self-driving cars, which are not even AI: say we have a woman with a baby step onto the road, and the car senses it is impossible to stop. It sees there are a few people on the footpath, and it could swerve to hit them, missing the woman and baby. The car does either option and the driver isn't involved in the deaths, so there is no onus on the person in the car, because they are not driving. The deaths are simply calculated, so it becomes a question of software: what codified the fatal outcomes? The problem for programmers, then, is how to mathematically quantify the ethical dimensions, because the reason the car kills one person and not another is determined by the algorithmic process. So if we design these codes on utilitarianism, the machine will calculate for the 'benefit of the greatest number', but the trolley-car thought experiment demonstrates that utilitarianism presents a deeper ethical quandary. And considering the machines will automate so many aspects of human life, what ethical framework is 'for the best'? Are computer programmers really the best people to determine social ethics? Do they have the philosophical learning to enable them? I think not.

The second issue relates to another, deeper philosophical problem about the nature of being conscious, illustrated by the 'philosophical zombie'. This p-zombie is like a human being in all regards, has senses and thoughts and emotions, but is not aware of them. As a real person you can't tell if the p-zombie is conscious or not, because there is no way of knowing. The p-zombie is the same as a real person in every way - other than being conscious of the experience. Say we can make a machine with all senses (detectors), capable of processing sensory information in a 'brain-like' way, learning, inventing, expressing emotional nuances and so on and so on - that machine is very life-like. It converses, has a sense of humour, and a unique 'e-personality' all of its own. Is the machine a p-zombie? Or is the machine conscious of its experience? In other words, has it 'come to life'? Regardless of what is true, some will be convinced that the machine is indeed a conscious entity because it is 'self-programming', which is the same as 'self-determination': in every regard the AI decides for itself and requires no 'intelligent input'.
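To make the "what codified the fatal outcomes" point concrete, here is a toy sketch in Python, invented purely for illustration (the function name, the manoeuvre labels and the probability numbers are all made up; no real self-driving stack works like this). Once the ethics is codified, the whole decision reduces to whichever cost function the programmer happened to write.

```python
# Purely illustrative sketch of a crudely "utilitarian" collision chooser.
# Names and numbers are invented for the example.

def choose_manoeuvre(options):
    """Pick the manoeuvre with the lowest expected casualties.

    `options` maps a manoeuvre name to a list of
    (probability_of_fatality, number_of_people_at_risk) pairs.
    """
    def expected_casualties(outcomes):
        return sum(p * n for p, n in outcomes)

    # The entire "ethical decision" is this one line: minimise a number.
    return min(options, key=lambda m: expected_casualties(options[m]))


if __name__ == "__main__":
    scenario = {
        "brake_straight": [(0.9, 2)],   # woman and baby on the road
        "swerve_to_path": [(0.7, 3)],   # pedestrians on the footpath
    }
    print(choose_manoeuvre(scenario))   # -> "brake_straight" (1.8 < 2.1)
```

Swap the cost function (weight children more heavily, treat people on the footpath differently from people on the road) and the "right" answer flips, which is exactly the quandary about who gets to write it.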
|
|
|
Post by lolly on Jan 20, 2018 6:37:26 GMT -5
Bit of hilarity
|
|
|
Post by andrew on Jan 20, 2018 6:48:07 GMT -5
To the bolded question - whether computer programmers are really the best people to determine social ethics - yes, good point. And quite honestly, I'm not sure I'd want even those who have studied philosophy and ethics to be doing it. The thing with (apparent) choice is that it is so brilliantly and uniquely personal. I mean, there are some generalizations and common themes obviously, but still, there is an unpredictable quality that makes life so interesting. My favourite thing about people is the idiosyncrasies. How do you code that authentically?

The p-zombie is interesting, I hadn't heard of that before. I would say that an emotion, by definition, is felt... and the feeling of it comes with a level of being conscious of it... even if just in a very, very basic sense. Are you saying that the zombie can feel without being conscious of the feeling? Or are you saying that the zombie can EXPRESS feelings in such a way that someone can look at the zombie and think it is a human? So the zombie is coded to convince people, but there isn't actually a feeling being felt by the zombie. For me the robot/zombie isn't a conscious entity until it actually has a level of self-awareness, a basic sense of knowing itself... not just the ability to convince others that it has this. Funnily enough, the beauty of humans is their irrationality, and this comes from their ability to connect to something that is beyond our conditioning. So the robot/zombie would somehow have to acquire true irrationality (I say 'true' because I imagine that a level of irrationality can be programmed, but this would still be a 'false' irrationality).
|
|
|
Post by stardustpilgrim on Jan 20, 2018 10:00:44 GMT -5
There is an ethical/philosophical thought problem called the trolley question. You are standing at a track switch and see a trolley coming down the track. You see a problem: if the trolley stays on course, five people are going to die. You can switch the track the trolley is on, but if you do, it will kill one person on the new track. What do you do? Do nothing, and five people will die. Do something, and you will be responsible for killing the one person. Without emotions, the AI computer/robot will always make the decision to save the most people (a "cold", calculated decision). The question remains: does super intelligence, AI, ever become conscious?

It's not math. I think the point is that if you do nothing except watch, then you don't feel personally responsible for the death of the 5; events were already set in motion. But if you flip the switch, then you are personally responsible for the death of the one. The further point is that emotions are a different category (a different function) from conceptual thinking, a point I've made numerous times here on ST's, which almost nobody agrees with.
|
|
|
Post by laughter on Jan 20, 2018 10:34:40 GMT -5
In terms of the philosophy of ethics, the choice is only simple if you regard all human lives as equivalent. All sorts of scenarios could be contrived based on our sense of values, without changing the situation of the person with the switch, and these alternative scenarios make the choice more complicated. That's why the thought experiment is so well known: the potential debate is a long one, with many nontrivial perspectives and no single definitive answer to what the best action is that everyone will agree on in every case. Philosophy and ethics might be unavoidable in terms of the practical ordering of society by laws, but they are ultimately useless in terms of relating choice, and the outcome of choice, to the subject of genuine self-inquiry.
|
|
|
Post by laughter on Jan 20, 2018 11:05:25 GMT -5
The arguments for artificial consciousness also include the point that the nature of the hardware isn't determinative of the question. The idea is that there's nothing all that special about human consciousness, and (to grossly simplify) since we replace all the cells in our bodies every few years, it should be theoretically possible to transfer consciousness from one medium to another if the new medium embodies the necessary material structure.
The underlying misconception is that consciousness can be defined in terms of a mechanistic process. As Reefs points out, this notion of consciousness is limited to an intellectual domain, and it is this assumption that specifically reveals that limitation.
|
|
|
Post by andrew on Jan 20, 2018 11:30:48 GMT -5
They think a human consciousness can be transferred into a computer network? I find that a bizarre (and wrong) idea, and quite unpleasant. There was a movie about it a few years ago, I think, which I started to watch but turned off after 15 minutes.
I understand that they associate the possibility with the fact that body cells regenerate, but why would consciousness be transferable just on the basis that the new material structure seems like it would support it? What do they think consciousness is that would make it transferable in that way? To me, they don't seem to be just missing a key point about the nature of consciousness, but also about the nature of life itself. Almost as if they think that thought is the 'alive' bit, and everything else is just supporting structure to keep thought 'alive'. To me that seems quite back to front. Ah, they think that thought/mind is consciousness itself?
|
|
|
Post by stardustpilgrim on Jan 20, 2018 12:05:23 GMT -5
I don't think we are anywhere near even conceiving of being able to duplicate what goes on in a brain. In an average adult brain there are more connections between neurons than there are stars in our galaxy. To make a copy of a specific person, one would first have to duplicate the function of the neurons (the relatively easy part), plus all the connections between neurons. Theoretically possible given enough resources and time. However, by analogy, this might be like building an aircraft carrier with no "ocean" to put it in.
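For rough scale, here is a back-of-envelope comparison using commonly cited ballpark figures (the specific numbers are public order-of-magnitude estimates, not figures from the post):

```python
# Back-of-envelope scale comparison, using commonly cited ballpark figures.
neurons = 8.6e10                  # neurons in an adult human brain (approx.)
synapses_per_neuron = 7e3         # often quoted in the 1e3-1e4 range
synapses = neurons * synapses_per_neuron        # ~6e14 connections

stars_milky_way = 2.5e11          # order-of-magnitude estimate
stars_observable_universe = 1e23  # order-of-magnitude estimate

print(f"synapses                  ~ {synapses:.1e}")
print(f"stars in the galaxy       ~ {stars_milky_way:.1e}")
print(f"stars in the universe     ~ {stars_observable_universe:.1e}")
```

On those numbers the brain comfortably out-connects the stars of the galaxy, though not the observable universe.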
|
|
|
Post by Deleted on Jan 20, 2018 12:06:02 GMT -5
This p-zombie idea is an interesting question to me. You may have also heard of the Turing Test for AI. It basically gives up on the idea of knowing whether something has "awareness" or not. But that seems like the more interesting part. Even if my computer cannot pass the Turing test for AI, it still has complex internal processes, so... is it aware? Is there some conscious experience of "what it's like to be a computer"? [1]

I've programmed machines myself, and I find the ideas about "self-determining" and "learning" machines to be... artificial. The machine can't learn unless you program it to learn, and the "learning" is just as predetermined as any other kind of code. It may be very complex, and do a better job of adapting to its inputs, but it's still predetermined right down to the last 0 and 1.

[1]: en.wikipedia.org/wiki/What_Is_it_Like_to_Be_a_Bat%3F
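As a small illustration of that last point (a toy sketch invented for this write-up, not anything from the post): a "learning" program is still just code plus inputs, and given the same inputs it arrives at exactly the same learned state every time.

```python
# Toy perceptron "learning" the logical AND function.
# Everything here is deterministic: same code + same data -> same weights,
# right down to the last bit.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                       # fixed number of passes over the data
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        # The "learning" is just this arithmetic update rule.
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

print(w, b)      # identical on every run: the learning is predetermined
```

The machine ends up "knowing" AND, but the path to that knowledge was fixed the moment the code and the data were fixed.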
|
|
|
Post by stardustpilgrim on Jan 20, 2018 12:10:33 GMT -5
Actually, there are now computer-robots which are self-taught from the very beginning. They are built with senses, and gather information from their environment.

www.fanaticalfuturist.com/2017/01/norwegian-robot-learns-to-self-evolve-and-3d-print-itself-in-the-lab/

www.theverge.com/tldr/2017/7/10/15946542/deepmind-parkour-agent-reinforcement-learning
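For anyone curious what "gathering information from the environment" amounts to in the reinforcement-learning work linked above, here is a toy sketch (an invented example, nothing to do with those specific projects): an agent in a five-cell corridor that starts knowing nothing and learns, by trial and error, to walk toward a reward.

```python
import random

random.seed(0)

# Toy environment: a corridor of 5 cells; the reward sits in the last cell.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                      # step left, step right

# Q-table: the agent's learned estimate of how good each action is per state.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(200):                    # 200 episodes of trial and error
    s = 0
    while s != GOAL:
        # Mostly act on what has been learned, occasionally explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        # Standard Q-learning update: move the estimate toward the reward
        # plus the best the agent thinks it can do afterwards.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# Greedy action per state after training; typically [1, 1, 1, 1] (step right).
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)])
```

The "self-taught" part is real in the sense that nobody typed in the route to the reward, but the update rule that produces it is fixed code, which is the tension the two posts above are circling.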
|
|
|
Post by laughter on Jan 20, 2018 12:34:06 GMT -5
What would be the metaphysical difference, in the final analysis, between the general inception of a new A.I. that is, by their consensus, "conscious", and one that they would consider a copy of a particular person's consciousness? In either case there's some specific moment they could point to where the A.I. was switched on. In either case, "consciousness" has been "transferred" into an inorganic medium.

In answer to your other questions, here's a link that I foresee the possible need for repeating a few times over the next few pages. This view of "consciousness" is referred to in what I think of as "modern Philosophy" by the notion of "Functionalism". Perhaps a Philosophy scholar would offer a correction to that first term, "modern Philosophy". I'd argue that all of the proponents of the idea of conscious A.I., and even anyone who would allow for the possibility of it, have premised their view of the subject on "Functionalism", whether they're conscious of the underlying assumptions involved or not.
|
|
|
Post by laughter on Jan 20, 2018 12:38:08 GMT -5
From the organic perspective there are some deep and persuasive arguments that most of what people are interested in is predetermined by when and where and to whom they're born. From the perspective of machine learning, once you build software that can write other software, the question of whether or not that software has the potential to exhibit the characteristic of creativity can be answered as a subjective matter of degree. Would you like a specific example scenario?
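A concrete toy version of "software that writes other software", invented for illustration (it is not the scenario either poster has in mind): a small search loop that assembles random Python expressions, runs them, and keeps whichever scores best against a target behaviour.

```python
import random

random.seed(1)

# Target behaviour the generated code should reproduce: f(x) = x*x + 1.
examples = [(x, x * x + 1) for x in range(-5, 6)]

def random_expression(depth=0):
    """Assemble a random Python expression over x from a few primitives."""
    if depth > 2 or random.random() < 0.3:
        return random.choice(["x", "1", "2"])
    op = random.choice(["+", "-", "*"])
    return f"({random_expression(depth + 1)} {op} {random_expression(depth + 1)})"

def score(expr):
    """Total error of the generated expression over the examples (lower is better)."""
    try:
        fn = eval(f"lambda x: {expr}")          # the generated code, compiled
        return sum(abs(fn(x) - y) for x, y in examples)
    except Exception:
        return float("inf")

# Generate many candidate programs and keep the best-scoring one.
best = min((random_expression() for _ in range(20000)), key=score)
print(best, score(best))
```

Enlarge the primitive set, enrich the scoring, let the search feed on its own best finds, and the output starts to look more "inventive"; the underlying loop never changes in kind, only in degree.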
|
|
|
Post by andrew on Jan 20, 2018 12:38:09 GMT -5
Yes. It seems to me that even if they duplicate a brain, they are still missing a key point. The rest of the body isn't an irrelevant piece of material; each cell is alive and intelligent and, in its own way, is involved in the formation of our thoughts, and also in the capacity for a basic knowing that we exist. It's not that cells create aliveness, it's more that there is aliveness in the cells, and the brain has a relationship with the alive body. In a sense, the apparent personality self, the identity (and the mind in general), is a product of the body's aliveness. Without that body aliveness, I can't see how they could transfer mind into a computer (they seem to be equating mind and consciousness).
|
|
|
Post by Deleted on Jan 20, 2018 14:20:33 GMT -5
Sure. The concept of "writing other software" may be another artificial distinction. There are dozens of "unintelligent" programs running on your computer now that generate code or input data for other programs. "Code" or "software" is input for another program called a compiler or interpreter. I don't deny that it's incredibly elegant and powerful, and that some programs are much more useful, or interesting, than others. But I don't see a magic point where you say "aha, it's alive!" (or "intelligent").
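The "code is just input to another program" point can be shown in a few lines (a trivial sketch, nothing more): one Python program manufactures the text of another and hands it to the interpreter.

```python
# One program writes another program as plain text...
generated_source = "\n".join(
    f"def square_{n}():\n    return {n} * {n}" for n in range(3)
)

# ...and the interpreter treats that text as input, like any other data.
namespace = {}
exec(compile(generated_source, "<generated>", "exec"), namespace)

print(namespace["square_2"]())   # -> 4
```

Nothing in that pipeline marks a point where the generated code became "alive" or "intelligent"; it is data in, data out all the way down.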
|
|
|
Post by laughter on Jan 20, 2018 14:25:59 GMT -5
The one facet of their fascination that is relevant is relative to this. For one thing, some of those extensions have outsized boundaries. For example, the work of the guys at AT&T who designed the Unix system calls back in the late 60's, the people who designed TCP/IP at DARPA in the 70's, and the guy who crafted the first version of HTML at CERN around 1990 are all basic building blocks that everyone who uses a PC or a phone or a TV or the Internet relies on, constantly. For another thing, most new software these days is produced by teams and rests on top of some assembly of stacked-up foundations like these... so it's not that the A.I. is the extension of just one programmer; in a very real sense, the A.I. is the extension of a continuous culture of software development that will continue to span lifetimes out into the future. A third and final thing is that the systems these guys are worried about are the ones that have the capability to learn autonomously. For example, neural networks based on backpropagation set their internal parameters through a process of repeated trial and error applied to a training set. So the programmer in this instance might set the goal for the A.I., but it can't be said that he programs the machine in any top-down meaning of the word "program". The resulting networks are so complex that the current theories about why a given instance solves a given problem as well as it does can't be used to generate the same network from the outside in. So even at this point, the current state of A.I. is a level of complex functionality that no one person could ever re-create from the ground up on their own, and that even the best teams would have trouble explaining, even though they could engineer a functional system on top of all these building blocks with the known techniques. The vision for the singularity involves one more step from here: allowing the A.I. to determine its next interest and assemble a functional training set, without human intervention. Yes, it's always only ever a simulation of consciousness, but one that could be made to look superficially as if it wasn't a simulation, and one that has already surpassed humans' capability to compete with it, based on the rapidity of "thought" and the capacity for memory.

Yes, impressive results. But it never leaves the realm of the intellect. Humans don't have that built-in limitation, even though for most, on a day-to-day basis, this limitation is very real because they mostly live in their heads. But scientists have also been telling us that we only use a fraction of our brains and DNA.

Yes, perhaps we could say that their approaches are limited to the products of the intellect. What you write about the potential of humanity beyond intellect could be a fascinating topic. Isn't it related to your same interest here? But I'm not all that interested in that in this thread. My interest in this dialog is about where the culture is going with the notion of AI, and it seems to me that the functionalists/material realists are currently the most influential voices in that direction. The existential question is written all over the topic in big bright neon letters. It's quite confounding -- in a way that would make us out as Cassandras -- that they've just blown right by it. They have an answer to the question "what is consciousness?", and it's one that they've arrived at by first bounding and pre-casting the premise of the question.
Where they arrive is at the entrance to an endless, maze-like dead-end of intellectualization with ever greater layers of complexity and nuance. Perhaps the only way that a voice from outside the maze could penetrate would be if the depths of the implications of the existential question could be related to the topic. So I'm willing to admit that there might eventually be an AI that emerges from the culture of technology in a process that resembles, in some respects, the process of human evolution, and that is transcendent of a single human creator and transcendent of the team of people that created it, and that would demonstrate the qualities of creativity and empathy, and perhaps even simulate suffering. From that ground it might be possible to re-expand the horizons of the question of consciousness, in terms of whether or not the potentially conscious AI is really separate from the totality from which it emerged. In contrast, anyone who could be swayed by the simpler exploration of the nature of qualia wouldn't have to bother with the maze to begin with.
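For readers who want to see what the backpropagation description earlier in this post cashes out to, here is a minimal sketch (a standard textbook-style toy, not the systems being discussed): the programmer supplies the goal as a training set and an error to shrink; the weights that end up solving the problem are found by repeated adjustment, not written by anyone.

```python
import numpy as np

rng = np.random.default_rng(0)

# The programmer's entire contribution to "what the network knows":
# a training set (here, XOR) and the instruction to reduce the error on it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights start as noise; nobody hand-writes their final values.
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros((1, 1))

lr = 1.0
for _ in range(10000):
    # Forward pass: what the network currently predicts.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight in the direction that shrinks the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(pred.round(3).ravel())   # typically close to [0, 1, 1, 0]
```

The point above about nobody being able to explain the finished network from the outside in is easy to feel even at this scale: the learned W1 and W2 solve XOR, but they don't read like a design anyone wrote down.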
|
|