Post by Reefs on Jan 12, 2018 23:22:02 GMT -5
Recently I've been listening to some talks and interviews with Michael McKibben (founder of Leader Technologies), whose view on A.I. pretty much dovetails with my own, which contradicts the officially promoted dystopian version of A.I. as a threat to humanity that is going to replace humans. He basically says that there is no such thing as artificial intelligence, and that all those issues with so-called A.I. that have been envisioned in dystopian movies are solvable because they are nothing but engineering problems. Another key point re A.I. replacing humans is that A.I. is just mimicking the nervous system.
Any thoughts?
Post by laughter on Jan 13, 2018 4:39:37 GMT -5
Recently I've been listening to some talks and interviews with Michael McKibben (founder of Leader Technologies), whose view on A.I. pretty much dovetails with my own, which contradicts the officially promoted dystopian version of A.I. as a threat to humanity that is going to replace humans. He basically says that there is no such thing as artificial intelligence, and that all those issues with so-called A.I. that have been envisioned in dystopian movies are solvable because they are nothing but engineering problems. Another key point re A.I. replacing humans is that A.I. is just mimicking the nervous system. Any thoughts? If you want the best from the other side of the fence I'd recommend this guy. I remember our dialogs on the topic from a few years back, and agreed with your point then that there's no meaningful equivalence between consciousness and a simulation of consciousness. It's just that when and if we engineer a good enough simulation of misidentification with form, the results might wind up rather unpredictable. Nick and his crew don't put the idea of a singularity in those terms, but that's what they're referring to, indirectly. It's just that their positions are premised on material realism, so they never reach the underlying existential issue. The solution to a runaway singularity would be quite obvious from the highest-level top-down design goal: simulate SR.
Post by zendancer on Jan 13, 2018 9:21:20 GMT -5
Recently I've been listening to some talks and interviews with Michael McKibben (founder of Leader Technologies), whose view on A.I. pretty much dovetails with my own, which contradicts the officially promoted dystopian version of A.I. as a threat to humanity that is going to replace humans. He basically says that there is no such thing as artificial intelligence, and that all those issues with so-called A.I. that have been envisioned in dystopian movies are solvable because they are nothing but engineering problems. Another key point re A.I. replacing humans is that A.I. is just mimicking the nervous system. Any thoughts? Totally agree. What we call "AI" is something like a cartoon of intelligence, in the same way that an image/idea of a tree is a cartoon representing what a "tree" is. AI is not artificial intelligence; it is artificial cognition/cogitation.
Post by stardustpilgrim on Jan 13, 2018 15:37:20 GMT -5
Recently I've been listening to some talks and interviews with Michael McKibben (founder of Leader Technologies), whose view on A.I. pretty much dovetails with my own, which contradicts the officially promoted dystopian version of A.I. as a threat to humanity that is going to replace humans. He basically says that there is no such thing as artificial intelligence, and that all those issues with so-called A.I. that have been envisioned in dystopian movies are solvable because they are nothing but engineering problems. Another key point re A.I. replacing humans is that A.I. is just mimicking the nervous system. Any thoughts? There are a couple of different issues involved here. One is that Moore's law has yet to fail. So there is the question: will it eventually fail? (And therefore will AI fail to be achieved?) AI is not really a good name for the questionable phenomenon. The question is, can computer intelligence ever become conscious? (This involves a secondary question: to what extent are even humans conscious?) What's inevitable is that computers will eventually become autonomous, that is, they will eventually be able to reproduce themselves without human help (see below*). {Our drive and arrogance will not let us cease playing with computers until it's too late}. This also means their capacity to increase intelligence will no longer depend upon human programming, that is, computers will eventually learn how to program themselves (*This will necessitate incorporating "sensory organs" and self-learning, both of which are taking place today. Will this take place in 20 years, 50 years, or a hundred years?) So the real question is, if computer-robots become conscious, will they be(come) more ethical than their creators? The answer has to be no, because computer-robots will never have the capacity for emotions (if humans can eventually program emotions in [allowing for compassion and empathy], when computer-robots become conscious, will they eliminate their emotional programs?) All these questions might revolve around the question: will quantum computing ever really get off the ground? But the question remains, what does it mean to be conscious? Interesting times we live in. (The best book I've read exploring some of these questions is The Quantum Brain by Jeffrey Satinover, but that was years ago. I think Ray Kurzweil is a bit ambitious). www.wired.com/story/how-to-build-a-self-conscious-ai-machine/
Post by Reefs on Jan 15, 2018 10:35:14 GMT -5
Recently I've been listening to some talks and interviews with Michael McKibben (founder of Leader Technologies), whose view on A.I. pretty much dovetails with my own, which contradicts the officially promoted dystopian version of A.I. as a threat to humanity that is going to replace humans. He basically says that there is no such thing as artificial intelligence, and that all those issues with so-called A.I. that have been envisioned in dystopian movies are solvable because they are nothing but engineering problems. Another key point re A.I. replacing humans is that A.I. is just mimicking the nervous system. Any thoughts? If you want the best from the other side of the fence I'd recommend this guy. I remember our dialogs on the topic from a few years back, and agreed with your point then that there's no meaningful equivalence between consciousness and a simulation of consciousness. It's just that when and if we engineer a good enough simulation of misidentification with form, the results might wind up rather unpredictable. Nick and his crew don't put the idea of a singularity in those terms, but that's what they're referring to, indirectly. It's just that their positions are premised on material realism, so they never reach the underlying existential issue. The solution to a runaway singularity would be quite obvious from the highest-level top-down design goal: simulate SR. Okay, I'll check it out. Yes, I was actually referring to those dialogs. The A.I. is an extension of the programmer. So GIGO applies here.
Post by Reefs on Jan 15, 2018 10:40:22 GMT -5
Recently I've been listening to some talks and interviews with Michael McKibben (founder of Leader Technologies), whose view on A.I. pretty much dovetails with my own, which contradicts the officially promoted dystopian version of A.I. as a threat to humanity that is going to replace humans. He basically says that there is no such thing as artificial intelligence, and that all those issues with so-called A.I. that have been envisioned in dystopian movies are solvable because they are nothing but engineering problems. Another key point re A.I. replacing humans is that A.I. is just mimicking the nervous system. Any thoughts? Totally agree. What we call "AI" is something like a cartoon of intelligence, in the same way that an image/idea of a tree is a cartoon representing what a "tree" is. AI is not artificial intelligence; it is artificial cognition/cogitation. Yes, it's one-dimensional. If they think that A.I. is about to replace humans, then they must think that humans are only made of a body plus intellect. That's hilarious and sad at the same time.
Post by Reefs on Jan 15, 2018 10:58:51 GMT -5
Recently I've been listening to some talks and interviews with Michael McKibben (founder of Leader Technologies), whose view on A.I. pretty much dovetails with my own, which contradicts the officially promoted dystopian version of A.I. as a threat to humanity that is going to replace humans. He basically says that there is no such thing as artificial intelligence, and that all those issues with so-called A.I. that have been envisioned in dystopian movies are solvable because they are nothing but engineering problems. Another key point re A.I. replacing humans is that A.I. is just mimicking the nervous system. Any thoughts? There are a couple of different issues involved here. One is that Moore's law has yet to fail. So there is the question: will it eventually fail? (And therefore will AI fail to be achieved?) AI is not really a good name for the questionable phenomenon. The question is, can computer intelligence ever become conscious? (This involves a secondary question: to what extent are even humans conscious?) What's inevitable is that computers will eventually become autonomous, that is, they will eventually be able to reproduce themselves without human help (see below*). {Our drive and arrogance will not let us cease playing with computers until it's too late}. This also means their capacity to increase intelligence will no longer depend upon human programming, that is, computers will eventually learn how to program themselves (*This will necessitate incorporating "sensory organs" and self-learning, both of which are taking place today. Will this take place in 20 years, 50 years, or a hundred years?) So the real question is, if computer-robots become conscious, will they be(come) more ethical than their creators? The answer has to be no, because computer-robots will never have the capacity for emotions (if humans can eventually program emotions in [allowing for compassion and empathy], when computer-robots become conscious, will they eliminate their emotional programs?) All these questions might revolve around the question: will quantum computing ever really get off the ground? But the question remains, what does it mean to be conscious? Interesting times we live in. (The best book I've read exploring some of these questions is The Quantum Brain by Jeffrey Satinover, but that was years ago. I think Ray Kurzweil is a bit ambitious). www.wired.com/story/how-to-build-a-self-conscious-ai-machine/ I don't understand the big deal about A.I. having emotions or not. To me that is totally irrelevant. The real question is: will A.I. be able to go beyond the limits of the intellect? And the answer is no. And did those A.I. enthusiasts and programmers ever go beyond the limits of the intellect? And the answer here seems to be no as well.
Post by laughter on Jan 15, 2018 14:32:41 GMT -5
If you want the best from the other side of the fence I'd recommend this guy. I remember our dialogs on the topic from a few years back, and agreed with your point then that there's no meaningful equivalence between consciousness and a simulation of consciousness. It's just that when and if we engineer a good enough simulation of misidentification with form, the results might wind up rather unpredictable. Nick and his crew don't put the idea of a singularity in those terms, but that's what they're referring to, indirectly. It's just that their positions are premised on material realism, so they never reach the underlying existential issue. The solution to a runaway singularity would be quite obvious from the highest-level top-down design goal: simulate SR. Okay, I'll check it out. Yes, I was actually referring to those dialogs. The A.I. is an extension of the programmer. So GIGO applies here. The one facet of their fascination that is relevant relates to this. For one thing, some of those extensions have outsized boundaries. For example, the Unix system calls designed by the guys at AT&T in the late '60s and early '70s, TCP/IP designed at DARPA in the '70s, and the first version of HTML crafted at CERN around 1990 are all basic building blocks that everyone who uses a PC or a phone or a TV or the Internet relies on, constantly. For another thing, most new software these days is produced by teams and rests on top of some assembly of stacked-up foundations like these ... so it's not that the A.I. is the extension of just one programmer but, in a very real sense, the extension of a continuous culture of software development that will continue to span lifetimes out into the future. A third and final thing is that the systems these guys are worried about are the ones that have the capability to learn autonomously. For example, neural networks based on backpropagation set their internal parameters through a process of repeated trial and error applied to a training set. So the programmer in this instance might set the goal for the A.I., but it can't be said that he programs the machine in any top-down sense of the word "program". The resulting networks are so complex that the current theories about why a given instance solves a given problem as well as it does can't be used to generate the same network from the outside in. So even at this point, the current state of A.I. is a level of complex functionality that no one person could ever re-create from the ground up on their own, and that even the best teams would have trouble explaining, even though they could engineer a functional system on top of all these building blocks with the known techniques. The vision for the singularity involves one more step from here: allowing the A.I. to determine its next interest and assemble a functional training set, without human intervention. Yes, it's always only ever a simulation of consciousness, but one that could be made to look superficially as if it wasn't a simulation, and one that has already surpassed the human ability to compete with it in rapidity of "thought" and capacity for memory.
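To make the backpropagation point above concrete, here is a minimal sketch, assuming Python/numpy and a toy XOR training set (neither of which comes from the post): the programmer supplies only the training data and the error measure, and the parameters that end up solving the problem are found by repeated trial and error rather than written by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "goal" the programmer sets: a training set (here, XOR) and nothing more.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized parameters for a tiny 2-8-1 network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: push the error back through the network and nudge
    # every parameter slightly in the direction that reduces it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0, keepdims=True)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0, keepdims=True)

# After training the outputs are typically close to [[0], [1], [1], [0]],
# yet the rule for XOR is written nowhere in this file.
print(np.round(out, 2))
```

Whether a toy like this converges, and which weights it settles on, depends on the random starting point, which is part of the point: even the author of the script can't say in advance what the trained parameters will look like.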
Post by stardustpilgrim on Jan 15, 2018 15:36:02 GMT -5
There are a couple of different issues involved here. One is that Moore's law has yet to fail. So there is the question: will it eventually fail? (And therefore will AI fail to be achieved?) AI is not really a good name for the questionable phenomenon. The question is, can computer intelligence ever become conscious? (This involves a secondary question: to what extent are even humans conscious?) What's inevitable is that computers will eventually become autonomous, that is, they will eventually be able to reproduce themselves without human help (see below*). {Our drive and arrogance will not let us cease playing with computers until it's too late}. This also means their capacity to increase intelligence will no longer depend upon human programming, that is, computers will eventually learn how to program themselves (*This will necessitate incorporating "sensory organs" and self-learning, both of which are taking place today. Will this take place in 20 years, 50 years, or a hundred years?) So the real question is, if computer-robots become conscious, will they be(come) more ethical than their creators? The answer has to be no, because computer-robots will never have the capacity for emotions (if humans can eventually program emotions in [allowing for compassion and empathy], when computer-robots become conscious, will they eliminate their emotional programs?) All these questions might revolve around the question: will quantum computing ever really get off the ground? But the question remains, what does it mean to be conscious? Interesting times we live in. (The best book I've read exploring some of these questions is The Quantum Brain by Jeffrey Satinover, but that was years ago. I think Ray Kurzweil is a bit ambitious). www.wired.com/story/how-to-build-a-self-conscious-ai-machine/ I don't understand the big deal about A.I. having emotions or not. To me that is totally irrelevant. The real question is: will A.I. be able to go beyond the limits of the intellect? And the answer is no. And did those A.I. enthusiasts and programmers ever go beyond the limits of the intellect? And the answer here seems to be no as well. There is an ethical/philosophical thought experiment called the trolley problem. You are standing at a track switch and see a trolley coming down the track. You see a problem: if the trolley stays on course, five people are going to die. You can switch the track the trolley is on, but if you do, it will kill one person on the new track. What do you do? Do nothing, and five people will die. Do something, and you will be responsible for killing the one person. Without emotions, the AI computer/robot will always make the decision to save the most people (a "cold", calculated decision). The question remains: does superintelligent AI ever become conscious?
Post by zendancer on Jan 16, 2018 6:46:37 GMT -5
I don't understand the big deal about A.I. having emotions or not. To me that is totally irrelevant. The real question is: will A.I. be able to go beyond the limits of the intellect? And the answer is no. And did those A.I. enthusiasts and programmers ever go beyond the limits of the intellect? And the answer here seems to be no as well. There is an ethical/philosophical thought experiment called the trolley problem. You are standing at a track switch and see a trolley coming down the track. You see a problem: if the trolley stays on course, five people are going to die. You can switch the track the trolley is on, but if you do, it will kill one person on the new track. What do you do? Do nothing, and five people will die. Do something, and you will be responsible for killing the one person. Without emotions, the AI computer/robot will always make the decision to save the most people (a "cold", calculated decision). The question remains: does superintelligent AI ever become conscious? The question is misconceived. Do human beings have awareness, or does awareness have human beings? That which is aware is both infinite and unitary. All universes appear within THAT. From this POV one might ask, "Can humans create something (some thing) that can become either infinite or aware?" IOW, what Reefs is pointing to is a fundamental misunderstanding regarding both awareness and intelligence. One CC experience reveals the difference between what we call "intelligence" and the intelligence of THAT. Another question analogous to "Can AI ever become aware?" is "Can a human ever create God?" Both questions become laughable to anyone who has ever encountered what the word "God" points to. The important question to contemplate is, "What is it that sees or thinks?"
Post by Reefs on Jan 19, 2018 8:27:11 GMT -5
There is an ethical/philosophical thought experiment called the trolley problem. You are standing at a track switch and see a trolley coming down the track. You see a problem: if the trolley stays on course, five people are going to die. You can switch the track the trolley is on, but if you do, it will kill one person on the new track. What do you do? Do nothing, and five people will die. Do something, and you will be responsible for killing the one person. Without emotions, the AI computer/robot will always make the decision to save the most people (a "cold", calculated decision). The question remains: does superintelligent AI ever become conscious? The question is misconceived. Do human beings have awareness, or does awareness have human beings? That which is aware is both infinite and unitary. All universes appear within THAT. From this POV one might ask, "Can humans create something (some thing) that can become either infinite or aware?" IOW, what Reefs is pointing to is a fundamental misunderstanding regarding both awareness and intelligence. One CC experience reveals the difference between what we call "intelligence" and the intelligence of THAT. Another question analogous to "Can AI ever become aware?" is "Can a human ever create God?" Both questions become laughable to anyone who has ever encountered what the word "God" points to. The important question to contemplate is, "What is it that sees or thinks?" Precisely.
Post by laughter on Jan 19, 2018 8:42:37 GMT -5
The question is misconceived. Do human beings have awareness, or does awareness have human beings? That which is aware is both infinite and unitary. All universes appear within THAT. From this POV one might ask, "Can humans create something (some thing) that can become either infinite or aware?" IOW, what Reefs is pointing to is a fundamental misunderstanding regarding both awareness and intelligence. One CC experience reveals the difference between what we call "intelligence" and the intelligence of THAT. Another question analogous to "Can AI ever become aware?" is "Can a human ever create God?" Both questions become laughable to anyone who has ever encountered what the word "God" points to. The important question to contemplate is, "What is it that sees or thinks?" Precisely. I stumbled onto some of Nick's early work that was interesting. It made it very clear what the term is for their assumption of a material reality: "substrate independence".
Post by Reefs on Jan 19, 2018 8:52:58 GMT -5
Okay, I'll check it out. Yes, I was actually referring to those dialogs. The A.I. is an extension of the programmer. So GIGO applies here. The one facet of their fascination that is relevant relates to this. For one thing, some of those extensions have outsized boundaries. For example, the Unix system calls designed by the guys at AT&T in the late '60s and early '70s, TCP/IP designed at DARPA in the '70s, and the first version of HTML crafted at CERN around 1990 are all basic building blocks that everyone who uses a PC or a phone or a TV or the Internet relies on, constantly. For another thing, most new software these days is produced by teams and rests on top of some assembly of stacked-up foundations like these ... so it's not that the A.I. is the extension of just one programmer but, in a very real sense, the extension of a continuous culture of software development that will continue to span lifetimes out into the future. A third and final thing is that the systems these guys are worried about are the ones that have the capability to learn autonomously. For example, neural networks based on backpropagation set their internal parameters through a process of repeated trial and error applied to a training set. So the programmer in this instance might set the goal for the A.I., but it can't be said that he programs the machine in any top-down sense of the word "program". The resulting networks are so complex that the current theories about why a given instance solves a given problem as well as it does can't be used to generate the same network from the outside in. So even at this point, the current state of A.I. is a level of complex functionality that no one person could ever re-create from the ground up on their own, and that even the best teams would have trouble explaining, even though they could engineer a functional system on top of all these building blocks with the known techniques. The vision for the singularity involves one more step from here: allowing the A.I. to determine its next interest and assemble a functional training set, without human intervention. Yes, it's always only ever a simulation of consciousness, but one that could be made to look superficially as if it wasn't a simulation, and one that has already surpassed the human ability to compete with it in rapidity of "thought" and capacity for memory. Yes, impressive results. But it never leaves the realm of the intellect. Humans don't have that built-in limitation, even though for most people, on a day-to-day basis, this limitation is very real because they mostly live in their heads. But scientists have also been telling us that we only use a fraction of our brains and DNA.
Post by lolly on Jan 19, 2018 22:14:00 GMT -5
I don't understand the big deal about A.I. having emotions or not. To me that is totally irrelevant. The real question is: will A.I. be able to go beyond the limits of the intellect? And the answer is no. And did those A.I. enthusiasts and programmers ever go beyond the limits of the intellect? And the answer here seems to be no as well. There is an ethical/philosophical thought experiment called the trolley problem. You are standing at a track switch and see a trolley coming down the track. You see a problem: if the trolley stays on course, five people are going to die. You can switch the track the trolley is on, but if you do, it will kill one person on the new track. What do you do? Do nothing, and five people will die. Do something, and you will be responsible for killing the one person. Without emotions, the AI computer/robot will always make the decision to save the most people (a "cold", calculated decision). The question remains: does superintelligent AI ever become conscious? You missed the point because you didn't finish the story. Your query here is easy: you throw the switch and save the five, killing the one. Hardly an ethical dilemma there. The rest of the story goes on to pose it differently. There is only one track, with five people tied to it, and you stand on a bridge over the track with another person. You have a choice: do you throw the person off the bridge in front of the trolley car to stop it and save the five? The point is, the math is all the same, one guy dies so five can live, but in the first case you only throw a switch, while in the second case you actually throw someone in front of a train. This illustrates that the complexity of ethics is not simple math, which is quite the quandary in terms of AI, or even just robotics, which will indeed compute the same sort of decision algorithmically (or perhaps via quantum computing). This is the basic problem in essence: 'artificial ethics'.
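To illustrate the "math is all the same" point in code, here is a hedged sketch (the function name and numbers are illustrative, not from the post): a rule that only counts lives literally cannot tell the switch case from the bridge case, because both present it with identical inputs.

```python
def utilitarian_choice(deaths_if_act: int, deaths_if_do_nothing: int) -> str:
    """Pick whichever option kills fewer people; nothing else is considered."""
    return "act" if deaths_if_act < deaths_if_do_nothing else "do nothing"

# Variant 1: throw the switch (one dies) vs. do nothing (five die).
print(utilitarian_choice(deaths_if_act=1, deaths_if_do_nothing=5))  # "act"

# Variant 2: push the man off the bridge (one dies) vs. do nothing (five die).
print(utilitarian_choice(deaths_if_act=1, deaths_if_do_nothing=5))  # "act"

# Identical inputs, identical output: whatever distinguishes flipping a switch
# from throwing a person never enters the calculation at all.
```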
Post by andrew on Jan 20, 2018 4:23:41 GMT -5
There is an ethical/philosophical thought experiment called the trolley problem. You are standing at a track switch and see a trolley coming down the track. You see a problem: if the trolley stays on course, five people are going to die. You can switch the track the trolley is on, but if you do, it will kill one person on the new track. What do you do? Do nothing, and five people will die. Do something, and you will be responsible for killing the one person. Without emotions, the AI computer/robot will always make the decision to save the most people (a "cold", calculated decision). The question remains: does superintelligent AI ever become conscious? You missed the point because you didn't finish the story. Your query here is easy: you throw the switch and save the five, killing the one. Hardly an ethical dilemma there. The rest of the story goes on to pose it differently. There is only one track, with five people tied to it, and you stand on a bridge over the track with another person. You have a choice: do you throw the person off the bridge in front of the trolley car to stop it and save the five? The point is, the math is all the same, one guy dies so five can live, but in the first case you only throw a switch, while in the second case you actually throw someone in front of a train. This illustrates that the complexity of ethics is not simple math, which is quite the quandary in terms of AI, or even just robotics, which will indeed compute the same sort of decision algorithmically (or perhaps via quantum computing). This is the basic problem in essence: 'artificial ethics'. That's caught my attention. Yes, interesting dilemma. The AI would have to learn conscience, guilt, or shame. Equally, if it can self-hate, then it would have to be able to self-love. The word 'soul' isn't very popular these days, but I suspect that only if AI became so advanced that the robot was imbued with 'soul' could we then say it is now alive, conscious, sentient, etc. I don't see 'soul' as something that could be programmed, though. In a way, AI would have to come to a point where its own programmed evolution is such that it transcends itself, so that the 'artificial' and the 'programmed' are lost. I don't know if it could. Maybe.