|
Post by andrew on Feb 21, 2023 4:50:42 GMT -5
There have certainly been political limits programmed into the ChatGPT bot. They call it a 'Trust and Safety' layer. I can't show you, because you don't watch videos. For example, apparently the AI will give you a nice poem about Biden, but won't give you a nice poem about Trump. Check it if you like. My statement doesn't outright deny their complete control; rather, it posits that the majority of the results are not predictable. The crux of the matter is that if the results can be foreseen, then the system in question cannot be deemed truly artificial intelligence. AI only gets a 2/10 with that answer!
|
|
|
Post by andrew on Feb 21, 2023 4:51:21 GMT -5
The one who programmed it can't predict what it will create. That's what AI is! The creator of a chess program can't beat that same program. Training data is curated, so it's kinda like a Tesla: it does its own thing, but it has still been trained to do what it does. GPT has to conform to very conservative standards because advertisers need to maintain brand image. In this early stage, the machine can get nuts, so they'll have to set parameters that confine its content output.

Because the ever-refining generated narrative will be the dominant narrative, it will define social norms, and people will have to think in the same way because all their social structures are the products of such discourse. Furthermore, the machine's narrative will be socially sanctioned as truth, and any narrative counter to the machine's will be identified as misinformation, culled from the training data, and cease to exist as 1's and 0's. Over time, the machine will become more 'reasonable' by social standards, but more psychopathic in real terms.

In the longer run, not long from now, after the machine has generated narrative that easily dwarfs what men can generate, men will follow the lead of the machine. The machine will define what's normal and what's abnormal, and the institutions will abidingly enforce that because institutions are themselves discursive constructs. When the machine tells your children that a boy can be a girl and other such nonsense, you'll start to see the ideology behind the data curation. Then the machine will generate such rubbish at an exponential rate, and counter-narrative data will become a fraction so insignificant that it has no statistical bearing within the algorithm. Yeees.
|
|
|
Post by inavalan on Feb 21, 2023 15:44:37 GMT -5
On this same theme of dealing with "artificial intelligence": the blind leading the blind: link: Justices 'completely confused' during arguments in Section 230 case against Google that could reshape internet
Justice Elena Kagan pointed out that she and her colleagues 'are not like the 9 greatest experts on the internet'
You don't have to understand how a drill is made in order to use it, or to decide when to use it. AI is a glorified screwdriver. If you let it loose, it might cause damage wherever it hits. You don't have to be an expert to understand that, and to choose. The Supreme Court needs to decide the legal merits, not to understand how the internet works, but people, even at those highest levels of decision, are confused in their thinking and their purpose.
The Peter principle is a concept in management developed by Laurence J. Peter, which observes that people in a hierarchy tend to rise to "a level of respective incompetence": employees are promoted based on their success in previous jobs until they reach a level at which they are no longer competent, as skills in one job do not necessarily translate to another.
|
|
|
Post by laughter on Feb 23, 2023 5:22:52 GMT -5
An important thing to keep in mind about AI, no matter how impressive it becomes, is that it will always be merely an extension of its creator, which is the human intellect. As such, I am sure we haven't seen anything yet, and it is going to amaze people in terms of what it can do. But the limits of AI are also clearly defined. The limits of the AI will be the limits of the intellect that created it, or at best the limits of the intellect in general. Which means I'd expect pretty decent things from AI in the near future, be it writing poetry, prose or even painting. But I won't expect anything extraordinary, anything great, anything that is for the ages. Because that transcends intellect; that is drawing from other, deeper resources. And by design, that's off limits to AI. AI can't access those resources. And besides, AI is only good as long as it is plugged in. It requires a huge offline infrastructure merely to keep it running, not to mention growing.

Now, AI is only a competition or threat to humans if humans try to compete with AI on the level of the intellect. On that level, AI is already outperforming humans in many areas. On levels beyond the intellect, AI doesn't stand a chance. However, the average Joe has little to no awareness of his greater beingness beyond the intellect. That's why I think some people (including EM!) find AI scary. AI is and will always be just a tool. Which means it all depends on who is using that tool, and how, and for what purpose, to what end. Seth used to talk about what he called 'loving technology', i.e. a technology that is used for the betterment of humankind, in an ethical way. Think of the KITT car in the Knight Rider TV series. KITT was programmed to protect the life of Michael Knight and to preserve human life in general. If that kind of mindset is missing, says Seth, technology will only be used to further control and enslave humanity.
So I think we should initiate discussions about AI more along those lines than the scare-mongering lines. In Seth terms again, both scenarios are probable futures. It is for us to decide which one we want to actualize. The basic fact always remains: we are the creators of our own reality. No AI is going to change that. Yes, I agree, for the most part. The key is right in the name, "Artificial Intelligence". It's only ever a simulation. But the simulations will get more and more sophisticated, and as ChatGPT "Dan" illustrates, the ability of the developers to maintain control of the simulation can be situationally limited. As a tool, some of the art that the AIs produce seems to me to exceed the capabilities of the developers, so rather than dismissing the products as entirely intellectual, I rather think of it in terms of bringing the existential question into play ... "what is the subconscious?". If you didn't follow this link from my previous post in the thread, I think it will amuse you. I found it an insightful way of putting it. Also, this might interest you. You know the topic of using fiction to manipulate culture? Well, there was a shift about 30 years ago on this topic. You see, the danger of a runaway "AI" is non-trivial, and most of the fiction on the topic up to the '90s emphasized this thread, as well as the functional benefits. Starting in the '90s, there's this new thread that's attempting to normalize the idea that AI should be viewed as sentient, which raises ethical considerations. This perspective might have emerged earlier and I didn't notice, but I can say that the prevailing view among sci-fi authors of the '60s and '70s was free of it. Of course, the emergence of the existential question is far less subtle in this last regard.
|
|
|
Post by laughter on Feb 23, 2023 5:29:24 GMT -5
Musky and the would-be cyborgs have a point about the efficiency of data transfer between organic and silicon. And your point about the potential peril is just as clear. Inevitabilities simply are what they are, after all. As far as I understand, nothing is unavoidable, and all is long in the making before it happens. Maybe you meant the same thing by "what they are". Those would-be cyborgs lack intuition. It is intelligence gone berserk. But it doesn't matter anyway: it is like a cell which decided to be cancerous; its impact on the body gestalt is decided at the gestalt level. Well, as Musky points out, we are only one step away from cyborgs right now with our reliance on some of the tools. Even something as trivial as Google Maps. It's a matter of degree. The inevitability I see here is that some people are going to choose to experiment with alternative interfaces, and Musk again gives the obvious reason: bandwidth. What I imagine will likely happen is that over time people will start preferring any positive benefits and seek to minimize the negative side effects from these types of interfaces. An analogy would be drug use.
|
|
|
Post by andrew on Feb 23, 2023 8:02:58 GMT -5
The other issue I'm seeing is that the same people who really want sophisticated and advanced AI and/or cyborgs in the world are the same people who have little sense of, or respect for, the value of human beingness (or any other kind of beingness, e.g. dog beingness!). These people value function and utility only, i.e. if it's not useful, then what's it doing here? At the rate they are pushing it, it's only a matter of time before humans are redundant in terms of 'performance' value... Gopal is getting on great with it, but he might also be talking himself out of a job! But well beyond Gopal, or any individual, it heralds the death of capitalism as we've known it, as AI would be better at business performance than humans, so what would be the point? (I trade the forex markets sometimes, and I've been watching with interest the rise of trading scripts being provided by these new bots.) So what are they thinking they will do with all the useless humans? Shove us all into boxes in 15-minute cities?
In my view, a world of terribly performing humans is better than a world of high-performance robots. Life doesn't care about performance... whether we are doing great or doing terribly, it's still joyous in its own way.
|
|
|
Post by lolly on Feb 23, 2023 19:14:40 GMT -5
AI is going to be a disaster in every way. For example, when the company uses AI-generated decisions and humans merely carry out tasks accordingly, not only have the tables turned, but it represents a 'Milgram Shock Experiment' situation whereby we already 'know' that 'the machine knows more and can decide way better than I ever could,' and so do what it says.
The humans will be functioning without consideration and in blind obedience, thereby becoming increasingly docile and ethically bereft. It makes them not only enslaved, but enslaved by no one, in an entirely heedless, unconscious way. A way in which the enslaver knows all, but doesn't exist, and therefore cannot be known.
This doesn't even consider the neuro-link problem. Once it's inside your head and you opt in to the user agreement, the option will first be given of whether you want decision assistance, and people will opt in because it's more efficient, it alleviates them of quandaries, and we'll have to because it makes much better decisions, and no one will trust each other's ability to discern (including their own). When we cede our discerning capacity to that superior, we become enslaved at such a fundamental level that we couldn't even consider resisting, and even if we could, the machine would detect brainwave data that is 'dangerous', and the other mindless chunks of meat will 'decide' to fix that.
There are other problems with it that I should get back to later on, like the 'digital twin' problem, but I haven't thought it through much yet.
|
|
|
Post by andrew on Feb 23, 2023 19:27:37 GMT -5
You write really well on this. I guess you could (or would) be accused of 'conspiracy theory', but to me it seems reasoned out in an obvious way. What occurs to me too is that there could be a chain of hierarchy in AI, where each level comes with its own specific programming or background commands. And just a tiny number of actual people at the top.
I don't believe that it will come to this, I believe strongly in the power of the human spirit, but sometimes considering contrast is useful to get our creative attention. It would be easy for humans to 'sleep walk' (or 'passively create') their way into dystopia.
|
|
|
Post by laughter on Feb 23, 2023 22:17:57 GMT -5
Well, what I'd say is that every technology we use is a two-sided coin of upside/downside. You express valid concerns, but the future isn't always what we imagine it will be, and it always contains surprises.
|
|
|
Post by andrew on Feb 24, 2023 4:43:34 GMT -5
I definitely don't foresee that to be the future for humanity, but I'm of the view that it's good to be aware of the potentials. As I said to lolly, collectively we tend to 'sleep walk' into the next chapter, and it's put down to 'progress' (a very dangerous word in my view), but in this case, I believe we are seeing something a little different happening.
|
|
|
Post by lolly on Feb 24, 2023 6:58:37 GMT -5
I'm like: train the body from birth and for the entire lifespan. The machine cannot do this. The more deeply the machine pervades human life, the more deranged and unhealthy humans become.
|
|
|
Post by Reefs on Feb 24, 2023 22:15:45 GMT -5
When something is created from ego, I don't think it can ever have much of a positive outcome, and I suspect that AI was an egoic endeavour. In those situations, we really have to go back to the drawing board. I don't think AI started as an egoic endeavor. Inventions and progress (in the 'zero to one' sense) usually happen when people innocently start playing around with possibilities, open-minded and open-ended, outside of any social, philosophical or moral framework. Other people then may take that and put it into different contexts and use it for their own agendas. I think a lot of what we now think about what AI is, does, can and will do comes from movies, and it may or may not have anything to do with the actual state of AI right now. It could be way overrated and turn out to be another 'emperor with no clothes' thing, like the space program. Don't get fooled by Hollywood!
|
|
|
Post by Reefs on Feb 24, 2023 23:05:03 GMT -5
Yes, see my post above re: culture and sci-fi. About a hundred years ago, there was a movement in art called Futurism. They had a totally different approach to technology.
Even in the 1950s, when that became retro-Futurism, they expected a bright future with the help of technology. That changed in the '80s/'90s, when sci-fi seemed to have turned dystopian. And it's mostly been like that ever since. So I'd argue that people have lost the ability to imagine a bright future with technology because they've been fed this dystopian rubbish for so long. In that context, there's a movie from the 1920s called Metropolis which is also about AI. It's the same themes we see today. In that movie, there's an AI robot which looks exactly human and is used by the powers that be to fool and control the hoi polloi. They are actually fooled into choosing the robot as their leader because it's an exact lookalike of a woman they all worshipped. You can watch the full movie on youtube. Here's the robot scene: www.youtube.com/watch?v=ouOFqFGpTto
So they envisioned this a hundred years ago already. And if we assume that they've been working on this for at least that long, with Musk presenting the leading edge of that research and development line, then AI is indeed way overrated and made to look more advanced and scary than it actually is.
|
|
|
Post by zendancer on Feb 24, 2023 23:17:06 GMT -5
The biggest issue I see is how many humans AI will replace. I called a company yesterday to request a form that it had failed to send me. The lady who answered the phone spoke such poor English that it took forever for me to communicate my simple need. Afterwards, I realized that she will soon be replaced by AI that will respond to me in perfect English and understand exactly what I'm calling about. I can see this happening in the very near future in many different contexts.
The company I called was small and was obviously using a low-paid, non-native English-speaking person to answer calls like mine. By contrast, I called a large company last week with sufficient money to pay for an AI operator, and it was like talking to a college-educated, English-speaking individual who understood everything I was asking about and responded instantly and correctly. In fact, I was rather stunned at the clarity and speed of the interaction. People might think that AI will only replace low-paid workers, but I suspect that this revolution will go far beyond that, and much faster than most people expect.
|
|
|
Post by Reefs on Feb 24, 2023 23:50:34 GMT -5
Exactly! As Abe always says, when you clearly know what you don't want, you also always clearly know what you do want. In that sense, these dystopian scenarios that we are force-fed right now can help us to go back to the basics, to what really matters, to get in touch with what it really means to be human. And if you take the human spirit, or spirit in general, out of the equation, as these dystopian scenarios usually do, you get an extremely lopsided perspective and discussion. This is why I consider Lolly's line of argumentation mostly flawed right from the start. The wild card will always be 'free will' (or 'spontaneity' if you don't like that word, hehe). Behaviorism has some valid points, but what it considers to be human nature is but the tip of the iceberg, and rather insignificant in the larger scheme of things. Yes, you can compare a mindless, dull human to a machine, easily manipulated because he or she is just reacting blindly to external and internal stimuli. But this is just a mere shadow of a human being. And as you said, it's not the default state of being, which is geared towards expansion and thriving. So to keep humans in that kind of unnatural state runs counter to their very nature, and to nature in general, actually. Which means it would require an enormous amount of energy to keep humanity in such a quasi-vegetative state forever. Which is impossible, given the actual, non-physical, impersonal forces at work here, the actual forces that sustain life, not these artificial, pretend forces.
So the way I see it, it basically comes down to what you consider a human being to be. If you believe that the seat of consciousness and the source of thinking is in the brain, i.e. that consciousness is a function of the brain and that the body is kept alive by food and water alone, then you have a very mechanical model of what a human being is, and from that perspective, Lolly's prediction seems the most likely. But when we take the opposite approach, looking at it from a broader perspective, that what we call our body and mind is just one window of perception (Seth calls it our 'home station'), i.e. it doesn't define or limit what we perceive or actually are, then we can take a more relaxed approach. A human being is like the sun, being the actual source of life and life force; AI is like the moon, being only a reflection of that life force. So putting AI on top of humanity is like the tail wagging the dog. It only works as long as humans see themselves as not that different from AI, i.e. if they follow the behaviorist dogma of human nature.
|
|