Post by laughter on Jan 28, 2018 22:35:21 GMT -5
Yeah, it's a bit of a research project and I'm just playing mad scientist's advocate. No need to get deep into it. It really is as simple as whether or not it draws breath. No need to be a rocket scientist to understand that. But bottom line is that there are likely gonna' be lots of peeps who buy the idea of a conscious machine, and the advocates will have a functional retort for the simple 50,000-foot view that the thing is inorganic. They'll want, and will try, to give "humanist" a new meaning, similar to "racist".

The Sophia vid from the other page led me to this surreal dealio. As far as technology that will (likely, and only partly) underlie the consciousness simulation, ever see one of these Mandelbrot Set zoom animations?

Man o man, I gotta keep up. Gotta lotta stuff on the plate these days, but will visit. Always dug, intuitively, the fractals and how they expressed the "same dog, same flea" idea that I once spouted in a convo. Just need a clearer view from above it.

Yeah, there's always the interesting projection by even the best makers of the hyper-human-like bot that "I want 'it' to be more human". <ahem> I do remember seeing that vid recently about the citizenship. We're always just getting started...

Yeah. Seems to me there's a very practical question to ask: "why build a machine that simulates the way a human being looks, sounds and acts?". Seems to me to be a question that would arise naturally to anyone with any interest in how peeps interact socially. Seems to me very similar to other questions of general social concern, like "are taxes too high?" or "is the military budget at the right level?", or "are we doing the right things about crime?", or, to make it easier, "should children be educated?". The question about why we build robots is distinguished from the others in that it can be directly related to the existential question.
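Since the Mandelbrot zooms came up, it's worth seeing just how little machinery is behind them: the whole set falls out of iterating z = z*z + c and checking whether the value escapes to infinity, and the "same dog, same flea" self-similarity shows up at every magnification. A rough Python sketch (grid size and iteration count are arbitrary choices here, just enough to get a recognizable ASCII picture):

```python
# Minimal escape-time rendering of the Mandelbrot set as ASCII art.
# For each point c on a small grid, iterate z -> z*z + c; points whose
# value never "escapes" (|z| stays <= 2) are treated as inside the set.

def escapes(c, max_iter=50):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return True
    return False

for row in range(24):
    y = 1.2 - row * 0.1          # imaginary axis, top to bottom
    line = ""
    for col in range(64):
        x = -2.0 + col * 0.05    # real axis, left to right
        line += " " if escapes(complex(x, y)) else "*"
    print(line)
```

Zooming is nothing more than shrinking the step sizes around a point on the boundary; the rule itself never changes.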
Post by Reefs on Jan 30, 2018 8:24:48 GMT -5
There is an ethical/philosophical thought problem called the trolley question. You are standing at a track switch and see a trolley coming down the track. You see a problem: if the trolley stays on course, five people are going to die. You can switch the track the trolley is on, but if you do, it will kill one person on the new track. What do you do? Do nothing, and five people will die. Do something, and you will be responsible for killing the one person. Without emotions, the AI computer/robot will always make the decision to save the most people (a "cold", calculated decision). The question remains: does super intelligence, AI, ever become conscious?

You missed the point because you didn't finish the story. Your query here is easy: you throw the switch and save the 5, killing the one. Hardly an ethical dilemma there. The rest of the story goes on to query it differently. There is only one track with 5 people tied on it, and you stand on the bridge over the track with another person. You have a choice. Do you throw the person off the bridge in front of the trolley car to stop it and save the five? The point is, the math is all the same, one guy dies so five can live, but in the case of just throwing a switch, you only throw a switch. In the second case you actually throw someone in front of a train. This illustrates that the complexity of ethics is not simple math, which is quite the quandary in terms of AI, or even just robotics, which will indeed compute (algorithmically, or perhaps via quantum computation) the same sort of decision. This is the basic problem in essence: 'artificial ethics'.

Right, A.I. is missing an entire dimension of beingness. It's not by accident that people who are out of touch with their true being, who are merely functioning on a superficial intellect level, are called robotic. Ethics goes a lot deeper than moral dilemmas that can be solved in a math-like fashion. But A.I. or robots can't go there. They don't have access to that dimension. It's an entirely different realm of beingness. And those moral dilemmas aren't dilemmas anyway. They only become dilemmas when the intellect gets involved. When the intellect doesn't get involved, there's no dilemma. Everyone knows exactly what to do in real time.
Post by Reefs on Jan 30, 2018 8:30:15 GMT -5
You missed the point because you didn't finish the story. Your query here is easy: you throw the switch and save the 5, killing the one. Hardly an ethical dilemma there. The rest of the story goes on to query it differently. There is only one track with 5 people tied on it, and you stand on the bridge over the track with another person. You have a choice. Do you throw the person off the bridge in front of the trolley car to stop it and save the five? The point is, the math is all the same, one guy dies so five can live, but in the case of just throwing a switch, you only throw a switch. In the second case you actually throw someone in front of a train. This illustrates that the complexity of ethics is not simple math, which is quite the quandary in terms of AI, or even just robotics, which will indeed compute (algorithmically, or perhaps via quantum computation) the same sort of decision. This is the basic problem in essence: 'artificial ethics'.

That's caught my attention. Yes, interesting dilemma. The AI would have to learn conscience, guilt or shame. Equally, if it can self-hate, then it would have to be able to self-love. The word 'soul' isn't very popular these days, but I suspect that only if AI became so advanced that the robot was imbued with 'soul' could we then say it is now alive, conscious, sentient etc. I don't see 'soul' as something that could be programmed, though. In a way, AI would have to come to a point where its own programmed evolution is such that it transcends itself, so that the 'artificial' and the 'programmed' are lost. I don't know if it could. Maybe. It's the same problem the seeker has. How can the seeker end the search? How does a figment of imagination transcend the realm of imagination?
Post by zendancer on Jan 30, 2018 9:22:49 GMT -5
You missed the point because you didn't finish the story. Your query here is easy: you throw the switch and save the 5, killing the one. Hardly an ethical dilemma there. The rest of the story goes on to query it differently. There is only one track with 5 people tied on it, and you stand on the bridge over the track with another person. You have a choice. Do you throw the person off the bridge in front of the trolley car to stop it and save the five? The point is, the math is all the same, one guy dies so five can live, but in the case of just throwing a switch, you only throw a switch. In the second case you actually throw someone in front of a train. This illustrates that the complexity of ethics is not simple math, which is quite the quandary in terms of AI, or even just robotics, which will indeed compute (algorithmically, or perhaps via quantum computation) the same sort of decision. This is the basic problem in essence: 'artificial ethics'.

Right, A.I. is missing an entire dimension of beingness. It's not by accident that people who are out of touch with their true being, who are merely functioning on a superficial intellect level, are called robotic. Ethics goes a lot deeper than moral dilemmas that can be solved in a math-like fashion. But A.I. or robots can't go there. They don't have access to that dimension. It's an entirely different realm of beingness. And those moral dilemmas aren't dilemmas anyway. They only become dilemmas when the intellect gets involved. When the intellect doesn't get involved, there's no dilemma. Everyone knows exactly what to do in real time.

Exactly.
Post by stardustpilgrim on Jan 30, 2018 14:03:13 GMT -5
You missed the point because you didn't finish the story. Your query here is easy: you throw the switch and save the 5, killing the one. Hardly an ethical dilemma there. The rest of the story goes on to query it differently. There is only one track with 5 people tied on it, and you stand on the bridge over the track with another person. You have a choice. Do you throw the person off the bridge in front of the trolley car to stop it and save the five? The point is, the math is all the same, one guy dies so five can live, but in the case of just throwing a switch, you only throw a switch. In the second case you actually throw someone in front of a train. This illustrates that the complexity of ethics is not simple math, which is quite the quandary in terms of AI, or even just robotics, which will indeed compute (algorithmically, or perhaps via quantum computation) the same sort of decision. This is the basic problem in essence: 'artificial ethics'.

Right, A.I. is missing an entire dimension of beingness. It's not by accident that people who are out of touch with their true being, who are merely functioning on a superficial intellect level, are called robotic. Ethics goes a lot deeper than moral dilemmas that can be solved in a math-like fashion. But A.I. or robots can't go there. They don't have access to that dimension. It's an entirely different realm of beingness. And those moral dilemmas aren't dilemmas anyway. They only become dilemmas when the intellect gets involved. When the intellect doesn't get involved, there's no dilemma. Everyone knows exactly what to do in real time.

This is an excellent point. I disagree that everyone knows exactly what to do in real time. Let's go to the 13 children who were held in chains and abused by their parents. I heard only one report that two of the children were escaping, but one decided not to go out the window and went back inside the house; he (or she, the report didn't say) obviously feared the consequences if the escape was not successful. So I would say the person who did not jump out of the window went back to operating on autopilot (robotic), but the girl who escaped did "know what to do", and had in fact been planning the escape for two years. This example presents two choices, a real dilemma. And these two choices show how most people operate each day. ZD goes on and on about how the person is imaginary, how it is the "cosmos" that is acting, always, and not the "separate person". But "the separate person" is, in a very real sense, not imaginary. The son or daughter who did not jump out the window was not acting from *the cosmos*, but from the conditioning and abuse inflicted by the parents. This abuse formed the ~autopilot~ which accepted the abuse. The autopilot/self/ego existed in the neural structure of the abused children, very real, not imaginary. And most of the people on the earth live in this manner, not free, but controlled by unconscious programming inflicted on the mind-body. It's the source of most of the nastiness on the earth (if I ever used the word evil, as recently reported, it was an unfortunate choice of words, but it is a descriptive word). Hurt people hurt people; this is virtually Psychology 101. Abuse 'informs' self/ego, forms neural patterns which in turn inflict the abuse on other people, and the cycle continues. (Alice Miller discusses this thoroughly in several books.)

So it's not the cosmos acting always and everywhere; a single parent or caregiver can form a warped persona, inflicted upon the child, which in turn 'acts out' the pain that has been inflicted upon it, all of this done unconsciously, robotically. These people are bullies. And in extreme cases, it's most unfortunate when these people rise to power to control nations: Stalin, Hitler, Pol Pot, Saddam Hussein, Kim Jong-un. "Get real" people. But I agree with the point that abusive programming cuts one off and isolates them from what's real and living.
Post by lolly on Jan 31, 2018 0:35:15 GMT -5
You missed the point because you didn't finish the story. Your query here is easy: you throw the switch and save the 5, killing the one. Hardly an ethical dilemma there. The rest of the story goes on to query it differently. There is only one track with 5 people tied on it, and you stand on the bridge over the track with another person. You have a choice. Do you throw the person off the bridge in front of the trolley car to stop it and save the five? The point is, the math is all the same, one guy dies so five can live, but in the case of just throwing a switch, you only throw a switch. In the second case you actually throw someone in front of a train. This illustrates that the complexity of ethics is not simple math, which is quite the quandary in terms of AI, or even just robotics, which will indeed compute (algorithmically, or perhaps via quantum computation) the same sort of decision. This is the basic problem in essence: 'artificial ethics'.

Right, A.I. is missing an entire dimension of beingness. It's not by accident that people who are out of touch with their true being, who are merely functioning on a superficial intellect level, are called robotic. Ethics goes a lot deeper than moral dilemmas that can be solved in a math-like fashion. But A.I. or robots can't go there. They don't have access to that dimension. It's an entirely different realm of beingness. And those moral dilemmas aren't dilemmas anyway. They only become dilemmas when the intellect gets involved. When the intellect doesn't get involved, there's no dilemma. Everyone knows exactly what to do in real time.

It's more to do with self-awareness creating morality, which relates to virtue, which comes through in examples such as: you don't throw a person in front of a train, but you do throw the switch. Of course thought is necessary, or you wouldn't know (remember) where you live, for example. Then, because humans form societies, a standard of norms has to be tacitly established to create any sort of social order. The relationships that make up society are determined by the personal boundaries that define one's various relationships, and these are deeply ethical, mainly in qualities of consent.
Post by andrew on Jan 31, 2018 18:43:19 GMT -5
That's caught my attention. Yes, interesting dilemma. The AI would have to learn conscience, guilt or shame. Equally, if it can self-hate, then it would have to be able to self-love. The word 'soul' isn't very popular these days, but I suspect that only if AI became so advanced that the robot was imbued with 'soul' could we then say it is now alive, conscious, sentient etc. I don't see 'soul' as something that could be programmed, though. In a way, AI would have to come to a point where its own programmed evolution is such that it transcends itself, so that the 'artificial' and the 'programmed' are lost. I don't know if it could. Maybe. It's the same problem the seeker has. How can the seeker end the search? How does a figment of imagination transcend the realm of imagination?

Right. It's a closed system. But if we assume that it is possible to somehow break the closed system (perhaps by transplanting living body parts), it raises the question... why would the creators of these robots want their AI humans to become actual humans? I mean, if they become actual humans, they aren't just going to be blessed with true intelligence and inspiration, they will also become flawed and spontaneous and irrational and messy. Like the other 8 billion! It makes sense to me that they would want to create super-smart bots that would serve a purpose they believe is useful (surveillance machines, for example), but why create one that goes from being super smart to being actually intelligent? I don't know, I'm just wondering out loud.

Edit: Ah, maybe they believe that if the robot became an actual human it would still retain its programming, thus making it like a 'super human'. If so, I think they are wrong.
Post by Reefs on Feb 1, 2018 23:29:23 GMT -5
Are computer programmers really the best people to determine social ethics? Do they have the philosophical learning to enable them? I think not.

I agree. Just ask yourself why A.I.s tend to think humans are the problem. What's the typical mindset of those programmers?

The second issue relates to another philosophical problem, which is deeper, about the nature of being conscious, and this problem is illustrated by the 'philosophical zombie'. This p-zombie is like a human being in all regards, has senses and thoughts and emotions, but is not aware of them. As a real person you can't tell if the p-zombie is conscious or not, because there is no way of knowing. The p-zombie is the same as a real person in every way - other than being conscious of the experience. Say we can make a machine with all senses (detectors), capable of processing sensory information in a 'brain-like' way, learning, inventing, expressing emotional nuances and so on and so on - that machine is very life-like. It converses, has a sense of humour, and a unique 'e-personality' all of its own. Is the machine a p-zombie? Or is the machine conscious of its experience? In other words, has it 'come to life'? Regardless of what is true, some will be convinced that the machine is indeed a conscious entity because it is 'self-programming', which is the same as 'self-determination', as in every regard of AI, it decides for itself and requires no 'intelligent input'.

You will only take the 'p-zombie' (what a word!) for real if you rely on your outer senses and rational thinking only, because that's the perspective of separation. And that's the flawed premise those A.I. movies like Her or Ex Machina are operating on. Seen from a perspective of oneness, i.e. with the inner senses, it's perfectly clear what's going on with the p-zombie. I guess it's this solipsism topic again, isn't it?
Post by Reefs on Feb 1, 2018 23:44:58 GMT -5
Well, if we think about self-driving cars, which are not even AI, and we have a woman with a baby step onto the road, and the car senses it's impossible to stop. It sees there are a few people on the footpath, and it could swerve to hit them, missing the woman and baby. The car does either option and the driver isn't involved in the deaths, so there is no onus on the person in the car, because they are not driving. The deaths are simply calculated, so it becomes a question of software: what codifies the fatal outcomes? The problem for programmers, then, is how to mathematically quantify the ethical dimensions, because the reason the car kills one person and not another is determined by the algorithmic process. So if we design these codes on utilitarianism, the machine will calculate for the 'benefit of the greatest number', but the trolley-car thought experiment demonstrates that utilitarianism presents a deeper ethical quandary. And considering the machines will automate so many aspects of human life, what ethical framework is 'for the best'? Are computer programmers really the best people to determine social ethics? Do they have the philosophical learning to enable them? I think not.

The second issue relates to another philosophical problem, which is deeper, about the nature of being conscious, and this problem is illustrated by the 'philosophical zombie'. This p-zombie is like a human being in all regards, has senses and thoughts and emotions, but is not aware of them. As a real person you can't tell if the p-zombie is conscious or not, because there is no way of knowing. The p-zombie is the same as a real person in every way - other than being conscious of the experience. Say we can make a machine with all senses (detectors), capable of processing sensory information in a 'brain-like' way, learning, inventing, expressing emotional nuances and so on and so on - that machine is very life-like. It converses, has a sense of humour, and a unique 'e-personality' all of its own. Is the machine a p-zombie? Or is the machine conscious of its experience? In other words, has it 'come to life'? Regardless of what is true, some will be convinced that the machine is indeed a conscious entity because it is 'self-programming', which is the same as 'self-determination', as in every regard of AI, it decides for itself and requires no 'intelligent input'.

To the bolded, yes, good point. And quite honestly, I'm not sure I'd want even those who have studied philosophy and ethics to be doing it. The thing with (apparent) choice is that it is so brilliantly and uniquely personal. I mean, there are some generalizations and common themes obviously, but still, there is an unpredictable quality that makes life so interesting. My favourite thing about people is the idiosyncrasies. How do you code that authentically? The p-zombie is interesting, I hadn't heard that before. I would say that an emotion, by definition, is felt... and the feeling of it comes with a level of being conscious of it... even if just a very, very basic sense. Are you saying that the zombie can feel without being conscious of the feeling? Or are you saying that the zombie can EXPRESS feelings in such a way that someone can look at the zombie and think it is a human? So the zombie is coded to convince people, but there isn't actually a feeling being felt by the zombie. For me, the robot/zombie isn't a conscious entity until it actually has a level of self-awareness, a basic sense of knowing itself... not just the ability to convince others that it has this. Funnily enough, the beauty of humans is their irrationality, and this comes from their ability to connect to something that is beyond their conditioning. So the robot/zombie would somehow have to acquire true irrationality (I say 'true' because I imagine that a level of irrationality can be programmed, but this would still be a 'false' irrationality).

The way I look at emotions is as guidance, as A-H teach it. In A-H speak, emotions tell you how you are doing in relation to how your Inner Being is doing, i.e. how far your perspective is blending with the perspective of your Inner Being. In plain English, emotions tell you if your perspective is in alignment with the broader perspective of your true Self (or Source). Now, the problem with the robot/zombie is that it doesn't have an Inner Being. At best we could say that the surrogate Inner Being of the robot/zombie is the programmer. And so this is flawed right from the start, again.
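To make the utilitarian-coding point from the quoted post concrete: a cost function that only counts lives literally cannot see any difference between the two versions of the trolley story, which is exactly why the footbridge variant feels like a different problem to a person and not to the machine. A minimal Python sketch (the scenario encodings below are invented purely for illustration):

```python
# A deliberately naive "utilitarian" decision rule: pick whichever action
# kills the fewest people. The scenario encodings are made up here to show
# why such a rule can't distinguish flipping a switch from pushing a person
# off a bridge -- the body count is identical in both cases.

def choose(actions):
    # actions: {action_name: number_of_deaths}
    return min(actions, key=actions.get)

switch_case = {"do nothing": 5, "flip the switch": 1}
footbridge_case = {"do nothing": 5, "push the man off the bridge": 1}

print(choose(switch_case))      # -> "flip the switch"
print(choose(footbridge_case))  # -> "push the man off the bridge"
```

Both dictionaries contain the same numbers, so whatever makes the second case feel different never makes it into the program at all.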
Post by Reefs on Feb 1, 2018 23:52:22 GMT -5
You missed the point because you didn't finish the story. Your query here is easy: you throw the switch and save the 5, killing the one. Hardly an ethical dilemma there. The rest of the story goes on to query it differently. There is only one track with 5 people tied on it, and you stand on the bridge over the track with another person. You have a choice. Do you throw the person off the bridge in front of the trolley car to stop it and save the five? The point is, the math is all the same, one guy dies so five can live, but in the case of just throwing a switch, you only throw a switch. In the second case you actually throw someone in front of a train. This illustrates that the complexity of ethics is not simple math, which is quite the quandary in terms of AI, or even just robotics, which will indeed compute (algorithmically, or perhaps via quantum computation) the same sort of decision. This is the basic problem in essence: 'artificial ethics'.

It's not math. I think the point is that if you do nothing except watch, then you don't feel personally responsible for the death of the five; events were already set in motion. But if you flip the switch, then you are personally responsible for the death of the one. The further point is that emotions are a different category (a different function) from conceptual thinking, a point I've made numerous times here on ST's, which almost nobody agrees with.

Co-creation is an interesting topic. If you believe that others can mess up your reality (which seems to be at the basis of all ethics and moral standards), then it gets really complicated. If you believe, however, that you create your own reality (as do all others) and are only responsible to yourself (your self/Self), then it's all very simple. The question then is not 'Should I (or the A.I.) have done this or that instead?' The question is rather 'Why do/did they all rendezvous at this point in time?', which takes into account things already in motion (as you say). That's the only way to see the perfection in everything that is unfolding.
Post by Reefs on Feb 2, 2018 0:19:18 GMT -5
They think a human consciousness can be transferred into a computer network? I find that a bizarre (and wrong) idea, and quite unpleasant. There was a movie about it a few years ago, I think, which I started to watch but turned off after 15 minutes. I understand that they associate the possibility with the fact that body cells regenerate, but why would consciousness be transferable just on the basis that the new material structure seems like it would support it? What do they think consciousness is that would make it transferable in that way? To me, they don't seem to be just missing a key point about the nature of consciousness, but also about the nature of life itself. Almost as if they think that thought is the 'alive' bit, and everything else is just supporting structure to keep thought 'alive'. To me that seems quite back to front.

Ah, they think that thought/mind is consciousness itself? I guess it all depends on how one answers this question: does consciousness appear in the brain, or does the brain appear in consciousness?
Post by Reefs on Feb 2, 2018 0:22:49 GMT -5
The second issue relates to another philosophical problem, which is deeper, about the nature of being conscious, and this problem is illustrated by the 'philosophical zombie'. This p-zombie is like a human being in all regards, has senses and thoughts and emotions, but is not aware of them. As a real person you can't tell if the p-zombie is conscious or not, because there is no way of knowing. The p-zombie is the same as a real person in every way - other than being conscious of the experience. Say we can make a machine with all senses (detectors), capable of processing sensory information in a 'brain-like' way, learning, inventing, expressing emotional nuances and so on and so on - that machine is very life-like. It converses, has a sense of humour, and a unique 'e-personality' all of its own. Is the machine a p-zombie? Or is the machine conscious of its experience? In other words, has it 'come to life'? Regardless of what is true, some will be convinced that the machine is indeed a conscious entity because it is 'self-programming', which is the same as 'self-determination', as in every regard of AI, it decides for itself and requires no 'intelligent input'.

This p-zombie idea is an interesting question to me. You may have also heard of the Turing Test for AI. It basically gives up on the idea of knowing whether something has "awareness" or not. But that seems like the more interesting part. Even if my computer cannot pass the Turing test for AI, it still has complex internal processes, so... is it aware? Is there some conscious experience of "what it's like to be a computer"? [1] I've programmed machines myself, and I find the ideas about "self-determining" and "learning" machines to be... artificial. The machine can't learn unless you program it to learn, and the "learning" is just as predetermined as any other kind of code. It may be very complex, and do a better job of adapting to its inputs, but it's still predetermined right down to the last 0 and 1.

[1]: en.wikipedia.org/wiki/What_Is_it_Like_to_Be_a_Bat%3F

Yes, that's what Mckibben meant when he said that the challenges we face (or may face) with A.I. are all mere engineering issues.
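A tiny illustration of the "predetermined right down to the last 0 and 1" point from the quoted post: a program that "learns" is still just a fixed update rule applied to its inputs, and the same data leaves it in exactly the same state every time. A rough Python sketch of a perceptron-style rule (the toy data and learning rate are made up for the example):

```python
# Toy illustration that machine "learning" is a fixed, deterministic
# procedure: the same rule applied to the same data always produces the
# same weights. The dataset and learning rate here are invented.

def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:            # fixed update rule, nothing more
            prediction = 1 if w * x + b > 0 else 0
            error = target - prediction
            w += lr * error * x
            b += lr * error
    return w, b

data = [(1.0, 1), (2.0, 1), (-1.0, 0), (-2.0, 0)]
print(train_perceptron(data))   # identical output on every run
```

Run it a thousand times and the printed weights never change; the "learning" was settled the moment the data and the rule were fixed.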
Post by Reefs on Feb 2, 2018 0:29:53 GMT -5
From the organic perspective there are some deep and persuasive arguments that most of what people are interested in is predetermined by when and where and to whom they're born. From the perspective of machine learning, once you build software that can write other software, then the question of whether or not that software has the potential to exhibit the characteristic of creativity can be answered as a subjective matter of degree. Would you like a specific example scenario?

Sure. The concept of "writing other software" may be another artificial distinction. There are dozens of "unintelligent" programs running on your computer now that generate code or input data for other programs. "Code" or "software" is input for another program called a compiler or interpreter. I don't deny that it's incredibly elegant and powerful, and that some programs are much more useful, or interesting, than others. But I don't see a magic point where you say "aha, it's alive!" (or "intelligent").

And it's only 'intelligent' and 'alive' as long as you keep it plugged in.
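The "code is just input to another program" point from the quoted post is easy to show in miniature: a few lines of Python can generate the source text of another function and hand it straight to the interpreter. A small sketch (the generated function is an arbitrary example):

```python
# "Software that writes software" in miniature: one program builds the
# source text of another function and passes it to the interpreter.
# The generated function is an arbitrary example chosen for illustration.

def make_adder_source(n):
    return f"def add_{n}(x):\n    return x + {n}\n"

namespace = {}
exec(make_adder_source(5), namespace)   # the generated text becomes runnable code
print(namespace["add_5"](10))           # -> 15
```

Nothing about the generated function is any less mechanical than the program that wrote it, which is exactly the quoted poster's point: no magic threshold gets crossed just because software produced software.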
Post by Reefs on Feb 2, 2018 0:48:38 GMT -5
Yes, perhaps we could say that their approaches are limited to the products of the intellect. What you write about the potential of humanity beyond intellect could be a fascinating topic. Isn't it related to your same interest here?

No. My point in the other thread was about LOA, or 'creating from the inside out' as Maltz (and also A-H) call it, which gives you enormous leverage, to the degree that physical action as a means of accomplishing something becomes inconsequential. But I'm not all that interested in that in this thread. My interest in this dialog is about where the culture is going with the notion of AI, and it seems to me that the functionalists/material realists are currently the most influential voices in that direction. The existential question is written all over the topic in big bright neon letters. It's quite confounding -- in a way that would make us out as Cassandras -- that they've just blown right by it. They have an answer to the question "what is consciousness?", and it's one that they've arrived at by first bounding and pre-casting the premise of the question. Where they arrive is at the entrance to an endless, maze-like dead end of intellectualization, with ever greater layers of complexity and nuance. Perhaps the only way that a voice from outside the maze could penetrate would be if the depths of the implications of the existential question could be related to the topic. So I'm willing to admit that there might eventually be an AI that emerges from the culture of technology in a process that resembles, in some respects, the process of human evolution, and that is transcendent of a single human creator and transcendent of the team of people that created it, and that would demonstrate the qualities of creativity and empathy, and perhaps even simulate suffering. From that ground it might be possible to re-expand the horizons of the question of consciousness, in terms of whether or not the potentially conscious AI is really separate from the totality from which it emerged. In contrast, anyone who could be swayed by the simpler exploration of the nature of qualia wouldn't have to bother with the maze to begin with.

That's my interest here as well. And what I've noticed is that there's a certain combination of arrogance and ignorance (maybe even stupidity) at play here when it comes to A.I. and the future of humanity as envisioned by screenwriters and A.I. enthusiasts. Lots of flawed premises to unpack. But you've summed it up quite well.
Post by laughter on Feb 2, 2018 1:04:30 GMT -5
Sure. The concept of "writing other software" may be another artificial distinction. There are dozens of "unintelligent" programs running on your computer now that generate code or input data for other programs. "Code" or "software" is input for another program called a compiler or interpreter. I don't deny that it's incredibly elegant and powerful, and that some programs are much more useful, or interesting, than others. But I don't see a magic point where you say "aha, it's alive!" (or "intelligent").

And it's only 'intelligent' and 'alive' as long as you keep it plugged in.

From my functionalist straw-man perspective, the same is true of a person. I mean, like, food much? And they'd argue the question about source from one of three positions. The first two would be based on "consciousness appears in the brain". If they're honest, they'll state an objective, material-realist assumption and refuse to let go of a definition of consciousness as an emergent phenomenon arising from neural processes in conjunction with sensory stimuli and memory. If you confront them hard enough with the facts of QM, and they maintain integrity and stay honest, then they'll likely retreat to a purer functional position and say that the question is irrelevant to how they're defining consciousness. As long as they can produce something lifelike enough to convince a human being that it's one of us, then they've met their goal. In any event, you'd have to put up with the eventual dooofus guy quips about how brainless consciousness is a really funny idea, so you'd have to patiently concede that a functioning human brain is a necessary prerequisite to a living, functional (conscious) experience of being a human being. You could go on from there to explain how that is a contextually limiting notion of the idea of consciousness, but they either wouldn't understand, or, if they did understand, they likely wouldn't really care given what's at stake for them. The third possibility is that they take the position that the brain appears in consciousness, in which case they'd throw it back to you with the question "so then, what's the difference between a self-aware human appearing in consciousness and a self-aware robot appearing in consciousness?" ... and there you are, once again, spiraling right back down into the solipsism debate. If the dialog never transcends philosophy, they'll eventually offer a bullet-proof theory on how A.I. is the next logical step in human evolution.