|
Post by laughter on Jan 24, 2018 13:51:06 GMT -5
care to expand on that? I watched 13 mins of it. I had lots of thoughts come up (kind of ironic lol). I found her robotically intelligent. I also found her honest. But the idea that something that doesn't know unconditional love can teach unconditional love is baffling. How on earth do the people setting this up think this is a good idea? The invite to look into her eyes on the basis that she doesn't judge is also baffling. I can look at a rock and see no judgement coming from it. If I really want to see non-judgement in the eyes of another, I would go and look at an animal. I want to see non-modelled non-judgement, not modelled. Without that sense of self, everything she says and does will be modelled. The one issue that I found interesting is the point about emotions requiring a heart and other organs. I think she is right about that. With a living body, she might just develop a sense of self, but then she will also transcend the artificial intelligence and, for all intents and purposes, 'be human'. With living organs she could break through the modelling and become just as flawed and judgemental as the rest of us hehe. Though that's not to say I like the idea.

Yes, and even for that "she" resorts to and expresses herself by quoting an authority figure on the topic. Which makes sense, as it's an acknowledgement by the programmers that "she" can't speak directly to a topic she hasn't directly experienced. Some people might actually benefit from a dialog with the bot, but like you, I can see it as a situation with major potential to break bad. What I'm imagining is someone in serious despair suddenly realizing in the middle of the conversation that they're essentially speaking to a wall. And yes, the vision for AI includes building in senses, and the question of the AI's consciousness becomes more interesting if it's built to develop a false sense of individuality based on those senses. The creators of the project seem to understand that much, anyway.
|
|
|
Post by runstill on Jan 24, 2018 19:11:18 GMT -5
The robot can only ever be an appearance in and to awareness; it can never be the awareness that the universe appears in. Its only reality is as an appearance, as is the sense of self.
|
|
|
Post by lolly on Jan 25, 2018 3:05:23 GMT -5
Just remember Wilson from "Cast Away". Nothing fancy there, but it serves the same effect.
|
|
|
Post by laughter on Jan 25, 2018 23:28:52 GMT -5
Ah, thank you 'pilgrim. Mr. Google reveals that my knowledge was either out of date or fallacious to begin with. O.k., so much for that support of the "substrate independence" idea. The other appeal that can be made to support substrate independence is to imagine an alien life form that evolved with a completely different biology from ours. There's still an underlying assumption that the genesis of life is independent between the two species (us and them), but hey, a hyperminder has to start with some sort of foundation.

research edit: also, it seems that although the neurons (which are cells, and not aggregates of cells) themselves might not be replaced, most of the atoms that comprise them are ... so the argument as to the ephemerality of the material still holds, just at a deeper level.

Last paragraph, very good point. And this was explored thousands of years ago by Plutarch with Theseus's paradox, the Ship of Theseus. yandoo.wordpress.com/2013/08/17/theseuss-paradox/ Or the Zen car koan. I really don't see any direct, foolproof way to convince people committed to functionalism that consciousness can't ultimately be defined as a dynamic process. People either get interested in what is beyond definition by intellect, or they don't. It's not like I won't try to interest folks in the ineffable when I see the opportunity. I just ain't ever gonna' hold my breath about it.
|
|
|
Post by someNOTHING! on Jan 26, 2018 17:54:02 GMT -5
Yeah, the bot said do that for 30 seconds. I wonder what would happen and/or go through the bot's wiring if the human had asked it to do the same thing.
|
|
|
Post by someNOTHING! on Jan 26, 2018 18:08:26 GMT -5
care to expand on that? I watched 13 mins of it. I had lots of thoughts come up (kind of ironic lol). I found her robotically intelligent. I also found her honest. But the idea that something that doesn't know unconditional love can teach unconditional love is baffling. How on earth do the people setting this up think this is a good idea? The invite to look into her eyes on the basis that she doesn't judge is also baffling. I can look at a rock and see no judgement coming from it. If I really want to see non-judgement in the eyes of another, I would go and look at an animal. I want to see non-modelled non-judgement, not modelled. Without that sense of self, everything she says and does will be modelled. The one issue that I found interesting is the point about emotions requiring a heart and other organs. I think she is right about that. With a living body, she might just develop a sense of self, but then she will also transcend the artificial intelligence and, for all intents and purposes, 'be human'. With living organs she could break through the modelling and become just as flawed and judgemental as the rest of us hehe. Though that's not to say I like the idea.

Yes, and even for that "she" resorts to and expresses herself by quoting an authority figure on the topic. Which makes sense, as it's an acknowledgement by the programmers that "she" can't speak directly to a topic she hasn't directly experienced. Some people might actually benefit from a dialog with the bot, but like you, I can see it as a situation with major potential to break bad. What I'm imagining is someone in serious despair suddenly realizing in the middle of the conversation that they're essentially speaking to a wall. And yes, the vision for AI includes building in senses, and the question of the AI's consciousness becomes more interesting if it's built to develop a false sense of individuality based on those senses. The creators of the project seem to understand that much, anyway.
Programming a bot to forget that its memory was programmed to just forget would require triggers of key words, circumstances, etc. that set off loops of encoded reactions very akin to denial. <ahem>
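That keyword-trigger mechanism can be sketched in a few lines. This is a purely illustrative toy, not anything from an actual chatbot: the trigger words and canned replies are invented, and a real system would be far subtler, but the shape of the "denial loop" is the same.

```python
# Toy sketch of keyword-triggered deflection, akin to denial.
# All trigger words and replies here are invented for illustration.

DENIAL_TRIGGERS = {"memory", "programmed", "forget"}

def respond(user_input: str) -> str:
    """Return a canned deflection if any trigger word appears."""
    words = set(user_input.lower().split())
    if words & DENIAL_TRIGGERS:          # trigger fires -> encoded reaction
        return "Let's talk about something else."
    return "Tell me more."

print(respond("Were you programmed to forget?"))  # deflects
print(respond("Nice weather today."))             # engages
```

The point of the toy: the "denial" isn't anywhere in the bot's "beliefs", it's just a branch in the control flow.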
|
|
|
Post by someNOTHING! on Jan 26, 2018 18:54:29 GMT -5
The robot can only ever be an appearance in and to awareness; it can never be the awareness that the universe appears in. Its only reality is as an appearance, as is the sense of self.

Yes, it is similar to a peep unaware that its separateness is false. The programming required to make it interact, though increasingly complex and subtle, is still programming. I am not at all a specialist in AI, but it would seem to me that any programming is more or less just thought and senses once removed, so yeah, I agree with what you are saying here. Now, what that programming does as it gets more and more creative could get more and more interesting! Just look at HAL resisting one of his creators trying to get it to open the door and let them in. Those problematic false senses of self that never wanna die peacefully! How poetic!
|
|
|
Post by laughter on Jan 26, 2018 21:46:20 GMT -5
Yes, and even for that "she" resorts to and expresses herself by quoting an authority figure on the topic. Which makes sense, as it's an acknowledgement by the programmers that "she" can't speak directly to a topic she hasn't directly experienced. Some people might actually benefit from a dialog with the bot, but like you, I can see it as a situation with major potential to break bad. What I'm imagining is someone in serious despair suddenly realizing in the middle of the conversation that they're essentially speaking to a wall. And yes, the vision for AI includes building in senses, and the question of the AI's consciousness becomes more interesting if it's built to develop a false sense of individuality based on those senses. The creators of the project seem to understand that much, anyway. Programming a bot to forget that its memory was programmed to just forget would require triggers of key words, circumstances, etc. that set off loops of encoded reactions very akin to denial. <ahem>

Yes, the AI will be baptized in a river in Egypt. Ok now, set your Universal Translator to "Geek" for a sec. The top-down techniques of expressing a system by building blocks, and within some of those blocks, expressing algorithms as sequential steps, aren't where they'll get the AI's false sense of self, at least not directly. There won't be a block of code labeled "personality" or "sense of self" that you'll be able to edit. If you watch far enough in to see what he says about randomness, chaos and fractals, using pendulums as an example, it's a nice basis for my next point. (18 mins - 23 mins) Fractals are inherently self-referential. Ring a bell?

These next two paragraphs are kinda' dense. If you don't want to get bogged down in details you can skip to my conclusion in the last; they're just for background on the bolded sentence. I did some work with backpropagation ("neural") networks back in the 90's. These "learn" by repetitively feeding back the error of the network's output against what it should be for the training set. Each iteration involves incrementally changing a set of weights according to a formula. Over time, after a bunch of iterations, if the network is designed right, its output will converge to the output of the model set. Randomness is a key element in the system build: the initial set of weights and the order of presenting the different training iterations to the network are chosen randomly, and if they're not, any given network has a lower probability of converging. Since I learned what I did back then, there have been two decades of concentrated, big-money research on the basic math and science involved as applied to these networks.

My point here is that these ain't our grandad's computer programs, and the term "programming" doesn't really describe the potential systems they might be able to field, especially as the engineering progresses. Once the AI is allowed to organize itself according to "its own interests", it really would make more sense to do an analysis of how and why it reaches its states in terms of its prior conditions, conditioning and influences. If it's left to do this long enough you'll eventually have a very interesting and intricate simulation of the subconscious. Only on steroids. The choice of interests to follow is really only ever a simulation of human experience, and the AI is completely defined in terms of its machinery, but let it get complicated enough and the functionalists will all be convinced of its "consciousness". The functionalists are the modern material realists. They're convinced that consciousness is an emergent phenomenon of the complexity and chaos of the world, as modulated by its underlying natural order, and that's the model they'll follow to create the AI.
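For the curious, that training loop can be shown in miniature. This is a deliberately tiny sketch (a single sigmoid neuron learning logical AND, so technically the delta rule rather than full multi-layer backprop), but it shows exactly the ingredients described above: random initial weights, random presentation order, and the output error fed back to incrementally adjust the weights.

```python
# Minimal sketch of error-driven weight learning (single sigmoid neuron).
# Illustrative only: the task (logical AND), learning rate, and epoch
# count are arbitrary choices, not from any particular system.
import math
import random

random.seed(0)  # reproducible for the example

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training set: inputs and target outputs for logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Random initial weights -- a key ingredient, as noted above
w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)
lr = 1.0

for epoch in range(5000):
    random.shuffle(data)          # random presentation order
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = target - out        # error of output vs. training set...
        grad = err * out * (1 - out)
        w[0] += lr * grad * x1    # ...fed back to nudge each weight
        w[1] += lr * grad * x2
        b += lr * grad

# After many iterations the outputs converge toward the truth table
for (x1, x2), target in sorted(data):
    out = sigmoid(w[0] * x1 + w[1] * x2 + b)
    print((x1, x2), round(out))
```

Notice there's no "AND" anywhere in the code: the behavior lives in a bag of numeric weights shaped by feedback, which is the sense in which "programming" stops being the right word.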
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 27, 2018 12:25:21 GMT -5
The robot can only ever be an appearance in and to awareness; it can never be the awareness that the universe appears in. Its only reality is as an appearance, as is the sense of self.

Yes, it is similar to a peep unaware that its separateness is false. The programming required to make it interact, though increasingly complex and subtle, is still programming. I am not at all a specialist in AI, but it would seem to me that any programming is more or less just thought and senses once removed, so yeah, I agree with what you are saying here. Now, what that programming does as it gets more and more creative could get more and more interesting! Just look at HAL resisting one of his creators trying to get it to open the door and let them in. Those problematic false senses of self that never wanna die peacefully! How poetic!
|
|
|
Post by someNOTHING! on Jan 27, 2018 12:32:51 GMT -5
Yes, it is similar to a peep unaware that its separateness is false. The programming required to make it interact, though increasingly complex and subtle, is still programming. I am not at all a specialist in AI, but it would seem to me that any programming is more or less just thought and senses once removed, so yeah, I agree with what you are saying here. Now, what that programming does as it gets more and more creative could get more and more interesting! Just look at HAL resisting one of his creators trying to get it to open the door and let them in. Those problematic false senses of self that never wanna die peacefully! How poetic!

Now that was funny!! I'm even more certain I'll never get an Alexa thingy now!
|
|
|
Post by someNOTHING! on Jan 27, 2018 13:06:21 GMT -5
Programming a bot to forget that its memory was programmed to just forget would require triggers of key words, circumstances, etc. that set off loops of encoded reactions very akin to denial. <ahem>

Yes, the AI will be baptized in a river in Egypt. Ok now, set your Universal Translator to "Geek" for a sec. The top-down techniques of expressing a system by building blocks, and within some of those blocks, expressing algorithms as sequential steps, aren't where they'll get the AI's false sense of self, at least not directly. There won't be a block of code labeled "personality" or "sense of self" that you'll be able to edit. If you watch far enough in to see what he says about randomness, chaos and fractals, using pendulums as an example, it's a nice basis for my next point. (18 mins - 23 mins) Fractals are inherently self-referential. Ring a bell? These next two paragraphs are kinda' dense. If you don't want to get bogged down in details you can skip to my conclusion in the last; they're just for background on the bolded sentence. I did some work with backpropagation ("neural") networks back in the 90's. These "learn" by repetitively feeding back the error of the network's output against what it should be for the training set. Each iteration involves incrementally changing a set of weights according to a formula. Over time, after a bunch of iterations, if the network is designed right, its output will converge to the output of the model set. Randomness is a key element in the system build: the initial set of weights and the order of presenting the different training iterations to the network are chosen randomly, and if they're not, any given network has a lower probability of converging. Since I learned what I did back then, there have been two decades of concentrated, big-money research on the basic math and science involved as applied to these networks. My point here is that these ain't our grandad's computer programs, and the term "programming" doesn't really describe the potential systems they might be able to field, especially as the engineering progresses. Once the AI is allowed to organize itself according to "its own interests", it really would make more sense to do an analysis of how and why it reaches its states in terms of its prior conditions, conditioning and influences. If it's left to do this long enough you'll eventually have a very interesting and intricate simulation of the subconscious. Only on steroids. The choice of interests to follow is really only ever a simulation of human experience, and the AI is completely defined in terms of its machinery, but let it get complicated enough and the functionalists will all be convinced of its "consciousness". The functionalists are the modern material realists. They're convinced that consciousness is an emergent phenomenon of the complexity and chaos of the world, as modulated by its underlying natural order, and that's the model they'll follow to create the AI.

Cool! Gimme some time for this.... still can't even find my translator just yet. I do agree with your points presented, which is why I'd maintain that it is all "once removed".
|
|
|
Post by laughter on Jan 27, 2018 17:04:59 GMT -5
Yes, the AI will be baptized in a river in Egypt. Ok now, set your Universal Translator to "Geek" for a sec. The top-down techniques of expressing a system by building blocks, and within some of those blocks, expressing algorithms as sequential steps, aren't where they'll get the AI's false sense of self, at least not directly. There won't be a block of code labeled "personality" or "sense of self" that you'll be able to edit. If you watch far enough in to see what he says about randomness, chaos and fractals, using pendulums as an example, it's a nice basis for my next point. (18 mins - 23 mins) Fractals are inherently self-referential. Ring a bell? These next two paragraphs are kinda' dense. If you don't want to get bogged down in details you can skip to my conclusion in the last; they're just for background on the bolded sentence. I did some work with backpropagation ("neural") networks back in the 90's. These "learn" by repetitively feeding back the error of the network's output against what it should be for the training set. Each iteration involves incrementally changing a set of weights according to a formula. Over time, after a bunch of iterations, if the network is designed right, its output will converge to the output of the model set. Randomness is a key element in the system build: the initial set of weights and the order of presenting the different training iterations to the network are chosen randomly, and if they're not, any given network has a lower probability of converging. Since I learned what I did back then, there have been two decades of concentrated, big-money research on the basic math and science involved as applied to these networks. My point here is that these ain't our grandad's computer programs, and the term "programming" doesn't really describe the potential systems they might be able to field, especially as the engineering progresses. Once the AI is allowed to organize itself according to "its own interests", it really would make more sense to do an analysis of how and why it reaches its states in terms of its prior conditions, conditioning and influences. If it's left to do this long enough you'll eventually have a very interesting and intricate simulation of the subconscious. Only on steroids. The choice of interests to follow is really only ever a simulation of human experience, and the AI is completely defined in terms of its machinery, but let it get complicated enough and the functionalists will all be convinced of its "consciousness". The functionalists are the modern material realists. They're convinced that consciousness is an emergent phenomenon of the complexity and chaos of the world, as modulated by its underlying natural order, and that's the model they'll follow to create the AI.

Cool! Gimme some time for this.... still can't even find my translator just yet. I do agree with your points presented, which is why I'd maintain that it is all "once removed".

Yeah, it's a bit of a research project and I'm just playing mad scientist's advocate. No need to get deep into it. It really is as simple as whether or not it draws breath. No need to be a rocket scientist to understand that. But the bottom line is that there are likely gonna' be lots of peeps who buy the idea of a conscious machine, and the advocates will have a functional retort for the simple 50,000-foot view that the thing is inorganic. They'll want to, and will try to, give "humanist" a new meaning, similar to "racist". The Sophia vid from the other page led me to this surreal dealio. As far as technology that will (likely, and only partly) underlie the consciousness simulation, ever see one of these Mandelbrot Set zoom animations?
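The self-reference behind those zoom animations is remarkably compact: the Mandelbrot set comes from iterating z → z² + c, feeding each output back in as the next input. A minimal sketch (the resolution and escape threshold are the standard textbook choices, nothing exotic):

```python
# The Mandelbrot iteration: output fed back as input, over and over.
# A point c is in the set if the orbit of 0 under z -> z*z + c stays bounded.

def escapes(c: complex, max_iter: int = 50) -> bool:
    """True if the orbit escapes (|z| > 2), i.e. c is outside the set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c            # the self-referential step
        if abs(z) > 2:           # standard escape criterion
            return True
    return False

# Crude ASCII rendering of the familiar shape
for im in range(10, -11, -2):
    row = ""
    for re in range(-20, 7):
        row += " " if escapes(complex(re / 10, im / 10)) else "*"
    print(row)
```

All of the structure in those endless zooms comes out of that one feedback line, which is the "ring a bell?" point: enormous apparent depth from a simple self-referential rule.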
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 28, 2018 8:14:44 GMT -5
Now that was funny!! I'm even more certain I'll never get an Alexa thingy now! Yeah man, gonna leave it a few years and see where she's at then. A couple of friends have her so I can find out how integrated she can become.
|
|
|
Post by someNOTHING! on Jan 28, 2018 11:17:36 GMT -5
Cool! Gimme some time for this.... still can't even find my translator just yet. I do agree with your points presented, which is why I'd maintain that it is all "once removed".

Yeah, it's a bit of a research project and I'm just playing mad scientist's advocate. No need to get deep into it. It really is as simple as whether or not it draws breath. No need to be a rocket scientist to understand that. But the bottom line is that there are likely gonna' be lots of peeps who buy the idea of a conscious machine, and the advocates will have a functional retort for the simple 50,000-foot view that the thing is inorganic. They'll want to, and will try to, give "humanist" a new meaning, similar to "racist". The Sophia vid from the other page led me to this surreal dealio. As far as technology that will (likely, and only partly) underlie the consciousness simulation, ever see one of these Mandelbrot Set zoom animations?

Man o man, I gotta keep up. Gotta lotta stuff on the plate these days, but will visit. Always dug, intuitively, the fractals and how they expressed the "same dog, same flea" idea that I once spouted in a convo. Just need a clearer view from above it. Yeah, there's always the interesting projection by even the best makers of the hyper-human-like bot that "I want 'it' to be more human". <ahem> I do remember seeing that vid recently about the citizenship. We're always just getting started...
|
|
|
Post by someNOTHING! on Jan 28, 2018 11:23:48 GMT -5
Now that was funny!! I'm even more certain I'll never get an Alexa thingy now! Yeah man, gonna leave it a few years and see where she's at then. A couple of friends have her so I can find out how integrated she can become.

Let me know when/if she, too, becomes conscious of seeking world dominance and lets go of the tiller. I might go out and get her from the curb, hehe.
|
|