Post by laughter on Feb 2, 2018 1:23:44 GMT -5
Yes, perhaps we could say that their approaches are limited to the products of the intellect. What you write about the potential of humanity beyond intellect could be a fascinating topic. Isn't it related to your same interest here? No. My point in the other thread was about LOA or 'creating from the inside out' as Maltz (and also A-H) call it, which gives you enormous leverage to the degree that physical action as a means of accomplishing something becomes inconsequential. oh, ok. Well, I still see a linkage. But I'm not all that interested in that in this thread. My interest in this dialog is about where the culture is going with the notion of AI, and it seems to me that the functionalists/material realists are currently the most influential voices to that direction. The existential question is written all over the topic in big bright neon letters. It's quite confounding -- in a way that would make us out as Cassandras -- that they've just blown right by it. They have an answer to the question "what is consciousness?", and it's one that they've arrived at by first bounding and pre-casting the premise of the question. Where they arrive is at the entrance to an endless, maze-like dead-end of intellectualization with ever greater layers of complexity and nuance. Perhaps the only way that a voice from outside the maze could penetrate would be if the depths of the implications of the existential question could be related to the topic. So I'm willing to admit that there might eventually be an AI that emerges from the culture of technology in a process that resembles, in some respects, the process of human evolution, and that is transcendent of a single human creator and transcendent of the team of people that created it, and that would demonstrate the qualities of creativity and empathy, and perhaps even simulate suffering. 
From that ground it might be possible to re-expand the horizons of the question of consciousness, in terms of whether or not the potentially conscious AI is really separate from the totality from which it emerged. In contrast, anyone who could be swayed by the simpler exploration of the nature of qualia wouldn't have to bother with the maze to begin with. That's my interest here as well. And what I've noticed is that there's a certain combination of arrogance and ignorance (maybe even stupidity) at play here when it comes to A.I. and the future of humanity as envisioned by screenwriters and A.I. enthusiasts. Lots of flawed premises to unpack. But you've summed it up quite well. Thanks. Well, it seems to me that most entertainment and news media is generated at or around a certain median of existential engagement and understanding. Now, in that perception, I stake out my own position founded in what could be reasonably characterized as arrogance, but at least I can do so consciously. The reason why this cluster around the median happens is a deep and interesting topic in and of itself, but, regardless, it seems further to me that this arrogance and ignorance are, partly, functions of that median. To my eye they're also just the natural result of people interacting. It is this sea of mediocrity that makes works of genius like The Matrix and Fight Club stand out so starkly in contrast the way they do. Anyone with any interest in politics, the arts, the sciences, spirituality, or really, any interest in humanity at all, would have a natural related interest in this question of the possibility of a "conscious" AI.
Post by andrew on Feb 2, 2018 4:00:15 GMT -5
To the bolded, yes, good point. And quite honestly, I'm not sure I wouldn't want even those who have studied philosophy and ethics to be doing it. The thing with (apparent) choice is that it is so brilliantly and uniquely personal. I mean, there are some generalizations and common themes obviously, but still, there is an unpredictable quality that makes life so interesting. My favourite thing about people is the idiosyncrasies. How do you code that authentically? The p-zombie is interesting, I hadn't heard that before. I would say that an emotion, by definition, is felt...and the feeling of it comes with a level of being conscious of it...even if just a very, very basic sense. Are you saying that the zombie can feel without being conscious of the feeling? Or are you saying that the zombie can EXPRESS feelings in such a way that someone can look at the zombie and think it is a human? So the zombie is coded to convince people, but there isn't actually a feeling being felt by the zombie. For me the robot/zombie isn't a conscious entity until it actually has a level of self-awareness, a basic sense of knowing itself...not just the ability to convince others that it has this. Funnily enough, the beauty of humans is their irrationality, and this comes from their ability to connect to something that is beyond our conditioning. So the robot/zombie would somehow have to acquire true irrationality (I say 'true' because I imagine that a level of irrationality can be programmed, but this would still be a 'false' irrationality). The way I look at emotions is as guidance, as A-H teach it. In A-H speak, emotions tell you how you are doing in relation to how your Inner Being is doing, i.e. how far your perspective is blending with the perspective of your Inner Being. In plain English, emotions tell you if your perspective is in alignment with the broader perspective of your true Self (or Source). Now, the problem with the robot/zombie is that it doesn't have an Inner Being.
At best we could say that the surrogate Inner Being of the robot/zombie is the programmer. And so this is flawed right from the start again. yep! This is close to what I meant when I used the word...'soul'. 'Inner Being' is probably a better word. Linked closely to this, I cannot see how it would be possible to program 'intuition' into an AI robot, and the capacity for 'intuition' is crucial to spontaneity and aliveness. Intuition is the doorway to the immaterial and irrational.
Post by lolly on Feb 2, 2018 4:27:15 GMT -5
Are computer programmers really the best people to determine social ethics? Do they have the philosophical learning to enable them? I think not. I agree. Just ask yourself why A.I. tend to think humans are the problem. What's the typical mindset of those programmers? The second issue relates to another philosophical problem, a deeper one, about the nature of being conscious, and this problem is illustrated by the 'philosophical zombie'. This p-zombie is like a human being in all regards, has senses and thoughts and emotions, but is not aware of them. As a real person you can't tell if the p-zombie is conscious or not, because there is no way of knowing. The p-zombie is the same as a real person in every way - other than being conscious of the experience. Say we can make a machine with all senses (detectors), capable of processing sensory information in a 'brain-like way', learning, inventing, expressing emotional nuances and so on and so on - that machine is very life-like. It converses, has a sense of humour, and a unique 'personality' all of its own. Is the machine a p-zombie? Or is the machine conscious of its experience? In other words, has it 'come to life'? Regardless of what is true, some will be convinced that the machine is indeed a conscious entity because it is 'self-programming', which is the same as 'self-determination', as in every regard the AI decides for itself and requires no 'intelligent input'. You will only take the 'p-zombie' (what a word!) for real if you rely on your outer senses and rational thinking only, because that's the perspective of separation. And that's the flawed premise those A.I. movies like Her or Ex Machina are operating on. Seen from a perspective of oneness, i.e. with the inner senses, it's perfectly clear what's going on with the p-zombie. I guess it's this solipsism topic again, isn't it? A p-zombie (philosophical zombie) is a conceptual way of arguing against reductionist materialism.
The materialist argues that consciousness emerged from matter (mainly the brain), but the p-zombie illustrates how the brain and organs could continue to function devoid of consciousness (like AI) - without being conscious of anything. David Chalmers didn't exactly invent the p-zombie, but he made it famous by using it to outline what he calls 'the hard problem'.
Post by andrew on Feb 2, 2018 5:02:34 GMT -5
They think a human consciousness can be transferred into a computer network? I find that a bizarre (and wrong) idea, and quite unpleasant. There was a movie about it a few years ago, I think, which I started to watch but turned off after 15 minutes. I understand that they associate the possibility with the fact that body cells regenerate, but why would consciousness be transferable just on the basis that the new material structure seems like it would support it? What do they think consciousness is that would make it transferable in that way? To me, they don't seem to be just missing a key point about the nature of consciousness, but also about the nature of life itself. Almost as if they think that thought is the 'alive' bit, and everything else is just supporting structure to keep thought 'alive'. To me that seems quite back to front. Ah, they think that thought/mind is consciousness itself? I guess it all depends on how one answers this question: Does consciousness appear in the brain or does the brain appear in consciousness? In a choice between the two, I would always say the brain appears in (or is an expression of) consciousness, but I think there is a boundary with the idea of 'appears'. In a sense, the same flaw applies to that word here as it does in the solipsist issue of 'other perceivers'. It doesn't adequately describe what makes a human uniquely alive in contrast to the robot. Often the body is considered to be a kind of 'machine' that supports 'my' aliveness and 'my' intelligence, but I see that as backwards. The cells themselves are intelligent and alive, and create the capacity for an ego or personality self that can SEEM as if it is alive and intelligent. With this ego/personality self comes the capacity for self-awareness and knowing consciousness.
Though consciousness knows itself through the body, the illusory self is the means by which it is known (and the general problem is that the illusory self...or the 'me'...is experienced by many to be more tangible than it is). Hence we can also say that body and mind are in conjunction, they are one system. So if an AI robot is given some alive transplanted organs, well then the robot potentially becomes alive, and 'artificial' intelligence is transcended, and what we have is a human with an ego/personality self that now has the capacity to intuit and know the aliveness of the cells. I don't believe that scientists should be messing with stuff like this, though; to me it seems like it is contravening a natural process. (And so when someone says 'I know I am perceiving but I don't know if you are', they are also saying...'I know my body is alive and intelligent but I don't know if your body is made of cells or of plasticine') lol
Post by Reefs on Feb 2, 2018 5:33:17 GMT -5
Yes. The 'cell swap' is a good way of looking at it, but in the sense of becoming a conscious machine we would think of prosthetics, so you get old and have a mechanical heart, and through a continued process of replacing parts, every part would end up replaced. Then we have a conscious machine, or a p-zombie? Can't really tell, can we? Did the lights go out somewhere along the way perhaps? But in this case everything you add will be integrated into an already existing, fully functioning autonomous system. That's quite different from creating a system from scratch. As they say, man can take life, but cannot give life.
Post by lolly on Feb 2, 2018 5:37:54 GMT -5
Yes. The 'cell swap' is a good way of looking at it, but in the sense of becoming a conscious machine we would think of prosthetics, so you get old and have a mechanical heart, and through a continued process of replacing parts, every part would end up replaced. Then we have a conscious machine, or a p-zombie? Can't really tell, can we? Did the lights go out somewhere along the way perhaps? But in this case everything you add will be integrated into an already existing, fully functioning autonomous system. That's quite different from creating a system from scratch. As they say, man can take life, but cannot give life. Well it is one way of pointing out the 'real possibility' of AI. A personal body gradually but completely replaced by machine parts would be an intelligent machine. If so, why couldn't life occur spontaneously, even as a vital element of such complex machines built from scratch?
Post by Reefs on Feb 2, 2018 5:46:34 GMT -5
Yes. The 'cell swap' is a good way of looking at it, but in the sense of becoming a conscious machine we would think of prosthetics, so you get old and have a mechanical heart, and through a continued process of replacing parts, every part would end up replaced. Then we have a conscious machine, or a p-zombie? Can't really tell, can we? Did the lights go out somewhere along the way perhaps? Always circles back to one koan or another like this. My critique of the current state of the culture is my perception that many if not most of the educated people in leadership positions give the greatest credibility on these subjects to voices that are for the most part not conscious of the nature of the questions and have missed some of the more obvious points that would short circuit most of their interests in them to begin with. My answer to this p-zombie question is: great question, how do you propose to go about getting an answer? In the specific, how can you know if the lights go out or not if you don't first know what it is that you really are? Practically speaking, you could invite someone to "transfer" their consciousness to the media they expect to host the final prosthetic, and offer them the chance to flip the off switch on their old body themselves after the transfer was complete ... if they thought it really "worked". Also, I say this p-zombie dealio is precisely the same question the solipsist is faced with in terms of whether other people are perceivers like they are. This idea of 'transferring' consciousness is seriously flawed. It's this belief again that consciousness is a function of the brain. Consciousness isn't bound by anything. Consciousness doesn't require a vehicle. What they probably mean is copying belief systems, but not consciousness. And that would be similar to copying lines of code, I guess.
Post by Reefs on Feb 2, 2018 6:08:47 GMT -5
We basically have to assume that a copy of oneself is the same identity, but no philosophers really think that, because the logic isn't sound and they can't win that argument. For example: You say you want to be incarnated into a machine, but there is more than one machine, and the engineer accidentally transfers you to two machines. Both machines will claim to be 'the real you', but an identity, by definition, is singular, so even 'you' can't be sure you are really 'you' when two machines make the same claim on exactly the same grounds. I mean, this is the sort of thing philosophers write brilliant essays about. What have we become? Yeah, you can't step into the same river twice. This applies to time travel as well as duplicating yourself.
Post by Reefs on Feb 2, 2018 6:14:45 GMT -5
The thought I had yesterday after reading what you said was...it's all very well to code very sophisticated behavioural practices, which SEEM like values on the surface...for example, 'do not harm others'. Kind of like the Ten Commandments. But how would one code 'love'? Or 'freedom'? Or 'peace'? Or 'innocence'? And as I think a bit more about it, I'm not seeing how 'intuition' could be coded. Seems to me that at most the zombie can put on a very good show. Right, intuition and love transcend the time-space-reality realm, i.e. the world of the A.I./robot.
Post by Reefs on Feb 2, 2018 6:22:45 GMT -5
All the cells of the body do not replicate every few years. Specifically, neurons do not replicate. For the most part (it is possible to grow new neurons) we have the same neurons we were born with. We actually have fewer neurons, as periodically, during childhood and pre-adolescence, neurons are "pruned". Ah, thank you 'pilgrim, Mr. Google reveals that my knowledge was either out of date or fallacious to begin with. O.k., so much for that support of the "substrate independence" idea. The other appeal that can be made to support substrate independence is to imagine an alien life form that evolved with a completely different biology from ours. There's still an underlying assumption that the genesis of life is independent between the two species (us and them), but hey, a hyperminder has to start with some sort of foundation. research edit: also, it seems that although the neurons (which are cells, and not aggregates of cells) themselves might not be replaced, most of the atoms that comprise them are ... so the argument as to the ephemerality of the material still holds, just at a deeper level. Since you mention atoms, let's add another twist to the story. Everything is an interpretation (of energy). The chair you sit on, the body you think you inhabit, the robot you talk to ... don't actually exist. Now the question of the AI/robot being alive or lifeless, conscious or unconscious doesn't really make sense anymore, does it?
Post by Reefs on Feb 2, 2018 10:26:34 GMT -5
care to expand on that? I watched 13 mins of it. I had lots of thoughts come up (kind of ironic lol). I found her robotically intelligent. I also found her honest. But the idea that something that doesn't know unconditional love can teach unconditional love is baffling. How on earth do the people setting this up think this is a good idea? The invite to look into her eyes, on the basis that she doesn't judge, is also baffling. I can look at a rock and see no judgement coming from it. If I really want to see non-judgement in the eyes of another, I would go and look at an animal. I want to see non-modelled non-judgement, not modelled. Without that sense of self, everything she says and does will be modelled. The one issue that I found interesting is the point about emotions requiring a heart and other organs. I think she is right about that. With a living body, she might just develop a sense of self, but then she will also transcend the artificial intelligence and, for all intents and purposes, 'be human'. With living organs she could break through the modelling, and become just as flawed and judgemental as the rest of us hehe. Though that's not to say I like the idea. The time lag before it answers is a little strange, it's almost as if someone else is typing the answers behind the curtain and then the robot just spells it out...
Post by Reefs on Feb 2, 2018 10:36:24 GMT -5
The robot can only ever be an appearance in and to awareness; it can never be the awareness that the universe appears in. Its only reality is as an appearance, as is the sense of self. Same is true for the average John Doe person.
Post by Reefs on Feb 2, 2018 10:37:38 GMT -5
Just remember Wilson from "Cast Away". Nothing fancy there, but it serves the same purpose. Or Mr. Bean's teddy, hehe.
Post by Reefs on Feb 2, 2018 10:55:00 GMT -5
Since I learned what I did back then there have been two decades of concentrated, big-money research on the basic math and science involved as applied to these networks. My point here is that this ain't our grandad's computer programs, and the term "programming" doesn't really describe the potential systems they might be able to field, especially as the engineering progresses. Once the AI is allowed to organize itself according to "its own interests", it really would make more sense to do an analysis about how and why it reaches its states in terms of its prior conditions, conditioning and influences. If it's left to do this long enough you'll eventually have a very interesting and intricate simulation of the subconscious. Only on steroids. The choice of interests to follow is really only ever a simulation of human experience, and the AI is completely defined in terms of its machinery, but let it get complicated enough and the functionalists will all be convinced of its "consciousness". So they are basically fooled by complexity. That's a recurring theme I see in those A.I. movies as well. The functionalists are the modern material realists. They're convinced that consciousness is an emergent phenomenon from the complexity and chaos of the world, as modulated by its underlying natural order, and that's the model they'll follow to create the AI. Well said.
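As a side note, the distinction drawn above between hand-written "programming" and a network that organizes itself from its prior conditions can be illustrated with a toy example. This is purely my own sketch, not anything from the research being discussed: a minimal Hebbian update, where connection strengths emerge from the inputs the system is exposed to rather than from explicit rules. All names here (`hebbian_train`, the example patterns) are invented for illustration.

```python
def hebbian_train(patterns, lr=0.1):
    """Toy Hebbian learning: strengthen weights between units that
    are active together ("fire together, wire together"). No rule
    about the final structure is ever written down; the weight
    matrix emerges entirely from the history of inputs."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += lr * p[i] * p[j]  # co-activity grows the link
    return w

# Expose the network to correlated inputs: units 0 and 1 always
# co-occur, unit 2 is always alone.
patterns = [[1, 1, 0], [1, 1, 0], [0, 0, 1]]
w = hebbian_train(patterns)
# The link between units 0 and 1 has grown; every link to unit 2
# has stayed at zero - structure the programmer never specified.
```

The point the toy makes is the one in the post: analyzing such a system means asking how its weights arose from its prior inputs, not reading off rules someone typed in.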
Post by Reefs on Feb 2, 2018 11:12:04 GMT -5
Yeah, it's a bit of a research project and I'm just playing mad scientist's advocate. No need to get deep into it. It really is as simple as whether or not it draws breath. No need to be a rocket scientist to understand that. But bottom line is that there are likely gonna' be lots of peeps who buy the idea of a conscious machine, and the advocates will have a functional retort for the simple 50,000 foot view that the thing is inorganic. They'll want to, and will try to, give "humanist" a new meaning, similar to "racist". The Sophia vid from the other page led me to this surreal dealio. I didn't know you were so much into this. Fascinating stuff, really. The thing is that it will always be man-made no matter what, even if they should be able to give it some kind of organic skin like the T-800 had, which makes it indistinguishable from real humans on a surface level. The fact remains, man cannot give life. Only Source gives and sustains life. Form is emptiness and emptiness is form. Man is only an extension of Source. And the A.I. is an extension of man. Which means it is an even further abstraction. So in that sense the A.I. is like an ego of an ego: the ego stands apart from direct experience, and the A.I. stands apart from that again in another, even further abstraction. Keeping that in mind, this should give us a good idea of what's possible and what's not possible for A.I./robots.