Post by Reefs on Sept 28, 2024 23:51:31 GMT -5
The philosophy behind AI/OI is transhumanism (see The Matrix 1-3) and posthumanism (see The Matrix 4). In a sense, wearing glasses, a hearing aid or having dental implants or a hip replacement is already some form of primitive transhumanism. So transhumanism has been a reality for a very long time already. But in the near future it is going to get a lot more subtle, it will move to the micro and cellular level. The next step though - and in their minds the final step - is posthumanism, (see the Transcendence movie). The problem with these philosophies though is that they are based on several flawed premises that go back to a misunderstanding of what consciousness is. And that is, interestingly, very similar to what we encounter here on the forum all the time, i.e. people confusing consciousness with mind. And in this case also confusing mind with the intellect. When you do that, you will eventually arrive at solipsism and conclude that you are living in a simulation and that your identity and beingness could be reduced to a mere set of data. It's the inevitable conclusion if you follow the rational approach.
In terms of history, these models of reality are also very shortsighted. They seem to see the technological age as the final stage in human development. In terms of yuga theory though, the technological age is but a short transitory stage in the whole cycle, because the new abilities that technology seems to equip us with are actually our own natural abilities. So the technological age is a sort of bridge between not knowing and not using our own innate abilities to fully knowing and using our own innate abilities. Therefore, technology, as usual, is just mimicking nature again.
Now, as with all these developments, what we are dealing with is not some linear but exponential scenario, i.e. for a very long time there's not much happening and it sort of flies under the radar (see the past 50 years). But then there's a point when it suddenly starts to explode and we are right at this point right now, where progress with these technologies is exploding. Which means the world as we know it will cease to exist very soon.
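The linear-vs-exponential point can be made concrete with a toy comparison (illustrative numbers only, not a model of any particular technology):

```python
# Linear vs. exponential growth over the same number of steps,
# to illustrate why exponential processes seem to "fly under the
# radar" for a long time and then suddenly explode.
linear = [10 * n for n in range(11)]       # grows by +10 each step
exponential = [2 ** n for n in range(11)]  # doubles each step

for step in (2, 5, 10):
    print(step, linear[step], exponential[step])
```

For the first half of the run the linear series is comfortably ahead (50 vs. 32 at step 5); by step 10 the doubling series has blown past it (1024 vs. 100), and every further step widens the gap by more than the entire linear total.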
And while the results are pretty impressive, it has to be pointed out, especially in public discourse, that the underlying model of reality is deeply flawed and also a bit naive. Nevertheless, it will transform our lives radically within the next 5 years. It will throw a lot of people and entire societies into a deep identity crisis, including our models of governance and economy. The mere fact that AI will very soon replace any and all work/jobs is going to be a tough one for societies whose members identify with the work/jobs they do.
Interesting times ahead!
Post by inavalan on Sept 30, 2024 0:28:37 GMT -5
Post by laughter on Oct 1, 2024 15:14:23 GMT -5
Funny that there are unexplained skeletal remains of giants. As well as dozens of instances of massive artificial stone formations that defy conventional explanation. Hey. I finally get the grasshopper side eye thingy.
Post by laughter on Oct 1, 2024 15:19:57 GMT -5
Ok, first two sidetracks before I get to the substance. The first is comedic. I was searching for this vid on a different platform and found a really funny horror short. (not for the over-sensitive)

The second sidetrack has to do with some thoughts about the tech. The vid concentrates entirely on the "how" of this. The only guy who gets into the "what", as in, "what can this stuff do?", is the guy who trained the flight simulator. It piques my interest. ChatGPT is quite transparent as to the big picture of its design, and at its core is tech that was invented in the 1960's. The idea is inspired by human neurons, but the building block, the artificial "neuron", is simplified relative to the organic version to the extent that calling it childlike would be a compliment. Point being, the artificial neural networks that implement the large language models are something completely different from a human brain. They've developed along a results-driven arc, without regard to the original design impetus to replicate the organic "prototype". The reason I bring this up is because I'm curious as to what the benefit of an O.I. approach would be. There were a few hints in this vid, but it left me with many more questions. Interesting stuff, thanks. I could get into quite a bit more detail about ANN design, but didn't want to get lost in the weeds unless someone expresses interest.

One huge benefit of O.I. is the extremely low energy consumption compared to current A.I., as mentioned in the vid. The high energy consumption of A.I. data centers is apparently already a problem. That got me curiouser. A casual google indicates that's a training problem, so a one-time cost, but I don't know if I buy that, as usage has probably grown at a clip over the past year. That's another reason to satisfy the curiosity about the details of O.I.: how does it scale? The predictions of energy savings might be a thing, but the devil is always in those details.
For instance, you can ask this question: what scale O.I. network would be required to field what, in silicon A.I. terms, is called a "Large Language Model"? Would this be able to run in parallel, or would you need as many implementations as users?
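For anyone curious about the "childlike" building block mentioned above, the classic 1960's-era artificial neuron can be sketched in a few lines of Python (a generic perceptron-style illustration, not the internals of any particular model):

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of the inputs plus a
    bias, squashed through a sigmoid activation. This is the classic
    perceptron-style unit -- vastly simpler than a biological neuron,
    which has dendritic trees, spike timing, neurotransmitter
    dynamics, and so on."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid: output in (0, 1)

# A "network" is just many layers of these units wired together, with
# the weights adjusted during training. Here the weighted sum cancels
# to zero, so the sigmoid returns its midpoint:
print(artificial_neuron([0.5, -1.0], [2.0, 1.0], 0.0))  # 0.5
```

The entire state of such a unit is a handful of floating-point numbers, which is exactly why the comparison to an organic neuron is so strained.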
Post by laughter on Oct 1, 2024 15:33:45 GMT -5
Yes, I agree about the misconceptions and the confusion. And it doesn't help that the Transcendence view (a restatement of the punchline from A.I.) fits a particular pointing .. the one that the solipsists went all D.G. on, and that ZD likes to joke about with the "Igor .. !" joke. The chip-neurons are no more conscious than a rock. Or a people-peep. But, of course, a people-peep, is not, a rock. The Large Language Models are definitely already impacting, but, in another sense, transhumanism is just an extension of one of the primary differentiators of human beings from other animals: tool usage. Now, we've come to understand and accept that many other animals also use tools, but, of course, it's all and always a matter of degree. Automation has been with us and transforming the work place since long before either of us were born, and the funny thing is, people seem to have this knack for making themselves useful, despite the inexorability of it all. Hey, you know, I was completely ignorant about "yugas" and got curious a few months back. Funny that there are unexplained skeletal remains of giants. As well as dozens of instances of massive artificial stone formations that defy conventional explanation. Another major flaw in their reasoning is that some of their goals go against the laws of creation, like eradicating all illnesses and overcoming physical death.
The way I see it, transhumanism is a product of scientism. And AI is a product of transhumanism. This goes back more than a hundred years, before the Huxley brothers even.
To get an idea what this is all about in their own words, here's the website of the World Transhumanist Association:
In the mission statement there's this interesting statement:
Zen = mental fitness
I'd bet your last dollar that "Finding a balance between opportunity and risk" is a post-2020 catch-phrase. The front page is quite anodyne. There's really no suggestion there that they want to forcibly assimilate everyone. I get your humor about Zen, but it's always that duck/bunny thingy though, ya' know? It's funny, but you'd think the advocates of scientism would have got the memo by now from the 1920's (Bohr and Einstein) as to the flawed premise, or from the 1940's, as to the practical end result. The competing murderous ideologies that drove that conflict were rooted quite firmly in rationality and in rejecting anything other than the five senses. The "projects" page is just as anodyne. All puppies, rainbows, and lollipops. Sadly, these days, when people with lots of money say something, they almost always mean the opposite.
Post by justlikeyou on Oct 2, 2024 5:58:09 GMT -5
Hey. I finally get the grasshopper side eye thingy. Hmm. Do you mean the eye of Horus thingy?
Post by laughter on Oct 2, 2024 11:56:59 GMT -5
Hey. I finally get the grasshopper side eye thingy. Hmm. Do you mean the eye of Horus thingy? No, I meant that the pic is such a shocker that it led to a reaction similar to seeing a really funny meme or hearing a really good joke: involuntarily laughing out loud, for real (not because the idea of giants is implausible, no, just the opposite). I've seen your profile pic hundreds of times, but in that moment, this guy was in on the joke:
Post by justlikeyou on Oct 2, 2024 19:22:27 GMT -5
Hmm. Do you mean the eye of Horus thingy? No, I meant that the pic is such a shocker that it led to a reaction similar to seeing a really funny meme or hearing a really good joke: involuntarily laughing out loud, for real (not because the idea of giants is implausible, no, just the opposite). I've seen your profile pic hundreds of times, but in that moment, this guy was in on the joke: Ha! I feel confident that some of the coming revelations will show that human history is far different than what we've been told/sold. I thought maybe you had found your way to the Book of Enoch. lol I put that profile pic up a while ago during a discussion of Awareness. I've long marveled at the Praying Mantis's ability to turn its head, look you in the eye and sometimes put up its dukes when you get too close. www.shutterstock.com/video/clip-22965988-close-up-praying-mantis-turning-head-towards-camera
Post by inavalan on Oct 2, 2024 21:05:56 GMT -5
Post by justlikeyou on Oct 3, 2024 7:44:17 GMT -5
Cool.
Post by Reefs on Oct 5, 2024 21:41:53 GMT -5
One huge benefit of O.I. is the extremely low energy consumption compared to current A.I. as mentioned in the vid. The high energy consumption of A.I. data centers is apparently already a problem. That got me curiouser. A casual google indicates that's a training problem, so a one-time cost, but I don't know if I buy that, as usage has probably grown at a clip over the past year. That's another reason to satisfy the curiosity about the details of O.I.: how does it scale? The predictions of energy savings might be a thing, but the devil is always in those details. For instance, you can ask this question: what scale O.I. network would be required to field what, in silicon A.I. terms, is called a "Large Language Model"? Would this be able to run in parallel, or would you need as many implementations as users? Here's the link to the article in the video: www.tomshardware.com/pc-components/cpus/worlds-first-bioprocessor-uses-16-human-brain-organoids-for-a-million-times-less-power-consumption-than-a-digital-chip

From the last paragraph:
Post by Reefs on Oct 5, 2024 22:02:27 GMT -5
Another major flaw in their reasoning is that some of their goals go against the laws of creation, like eradicating all illnesses and overcoming physical death.
The way I see it, transhumanism is a product of scientism. And AI is a product of transhumanism. This goes back more than a hundred years, before the Huxley brothers even.
To get an idea what this is all about in their own words, here's the website of the World Transhumanist Association:
In the mission statement there's this interesting statement: Zen = mental fitness
I'd bet your last dollar that "Finding a balance between opportunity and risk" is a post-2020 catch-phrase. The front page is quite anodyne. There's really no suggestion there that they want to forcibly assimilate everyone. I get your humor about Zen, but it's always that duck/bunny thingy though, ya' know? It's funny, but you'd think the advocates of scientism would have got the memo by now from the 1920's (Bohr and Einstein) as to the flawed premise, or from the 1940's, as to the practical end result. The competing murderous ideologies that drove that conflict were rooted quite firmly in rationality and in rejecting anything other than the five senses. The "projects" page is just as anodyne. All puppies, rainbows, and lollipops. Sadly, these days, when people with lots of money say something, they almost always mean the opposite.

What I've noticed recently is that most people, including macroeconomic forecasters, have no clue what AI is and how it is going to reshape our entire way of life, globally. Most actually seem to confuse AI with robotics. That's a big mistake. It leads to wrong conclusions and wrong projections. An interesting buzzword in that context is "economic singularity".
The reason why I mentioned trans- and post-humanism is because it's the belief system that's behind the projects that Musk, Thiel, Zuck and others are running and where billions of dollars (or trillions?) are invested. And in order to understand where this is heading, we have to understand the basic beliefs and assumptions of it. Now, post-humanism (mind upload etc.) is nonsense, as already explained. But trans-humanism is already a reality. My take on the current state of AI and near future developments is that this could go both ways, either a "new golden age" or a "brave new 1984". But that's up to the mass consciousness, i.e. each individual, collectively. So it's important to keep in mind that there's always a flip side to all of this, be it the most positive or the most negative scenario.
Here's an interesting quote from the 1960's... On that note, Gates recently mentioned that AI is the first technology that has no limit. www.youtube.com/watch?v=DD4F5it7a5M (It's an interesting interview, notice how Gates, a transhumanist, doesn't seem to get the wire frame monkey analogy/concern)
Post by laughter on Oct 6, 2024 6:56:59 GMT -5
That got me curiouser. A casual google indicates that's a training problem, so a one-time cost, but I don't know if I buy that, as usage has probably grown at a clip over the past year. That's another reason to satisfy the curiosity about the details of O.I.: how does it scale? The predictions of energy savings might be a thing, but the devil is always in those details. For instance, you can ask this question: what scale O.I. network would be required to field what, in silicon A.I. terms, is called a "Large Language Model"? Would this be able to run in parallel, or would you need as many implementations as users? Here's the link to the article in the video: www.tomshardware.com/pc-components/cpus/worlds-first-bioprocessor-uses-16-human-brain-organoids-for-a-million-times-less-power-consumption-than-a-digital-chip

From the last paragraph: So if they trained an LLM, I guess they'd have to perfect some sort of cloning process to maintain the continuity. Scale seems to be an issue for either the electronic or the biological approach. Just manifests in different ways.
Post by laughter on Oct 6, 2024 7:20:53 GMT -5
I'd bet your last dollar that "Finding a balance between opportunity and risk" is a post-2020 catch-phrase. The front page is quite anodyne. There's really no suggestion there that they want to forcibly assimilate everyone. I get your humor about Zen, but it's always that duck/bunny thingy though, ya' know? It's funny, but you'd think the advocates of scientism would have got the memo by now from the 1920's (Bohr and Einstein) as to the flawed premise, or from the 1940's, as to the practical end result. The competing murderous ideologies that drove that conflict were rooted quite firmly in rationality and in rejecting anything other than the five senses. The "projects" page is just as anodyne. All puppies, rainbows, and lollipops. Sadly, these days, when people with lots of money say something, they almost always mean the opposite. What I've noticed recently is that most people, including macroeconomic forecasters, have no clue what AI is and how it is going to reshape our entire way of life, globally. Most actually seem to confuse AI with robotics. That's a big mistake. It leads to wrong conclusions and wrong projections. An interesting buzzword in that context is "economic singularity".
The reason why I mentioned trans- and post-humanism is because it's the belief system that's behind the projects that Musk, Thiel, Zuck and others are running and where billions of dollars (or trillions?) are invested. And in order to understand where this is heading, we have to understand the basic beliefs and assumptions of it. Now, post-humanism (mind upload etc.) is nonsense, as already explained. But trans-humanism is already a reality. My take on the current state of AI and near future developments is that this could go both ways, either a "new golden age" or a "brave new 1984". But that's up to the mass consciousness, i.e. each individual, collectively. So it's important to keep in mind that there's always a flip side to all of this, be it the most positive or the most negative scenario.
Here's an interesting quote from the 1960's... On that note, Gates recently mentioned that AI is the first technology that has no limit. www.youtube.com/watch?v=DD4F5it7a5M (It's an interesting interview, notice how Gates, a transhumanist, doesn't seem to get the wire frame monkey analogy/concern)

Like Corbett said, we're not wired to think in terms of exponential growth. Self-writing software has been a dream for some time now. Eventually this vision will be relevant, no doubt, but you have to differentiate between the substance and the hype. The chat bots capture our imagination because they reduce the power of a tech to an understandable and digestible result. They demonstrate to us the power of aggregating orders of magnitude more data than any one person or even a team of people could ever imagine, and the application of a process that is similar to human insight to that mass of data. More obscure are some of the engineering breakthroughs I remember reading about that were done using AI. And as I've related, the chat bots are invaluable as a new dimension on top of search that I've been using professionally now for months. Indispensable at this point. But still, so flawed as to be easily seen as quite flawed. I could give you practical examples if you'd like. And do you remember the early trend of using AI to generate art? My guess is that the result is only ever to amplify and explore greater horizons of a given creative spark. Like, for example, maybe an AI could eventually generate a multiple of the works of Shakespeare some day. Even when the day comes that AI can redesign its next iteration, the discoveries and new creations will be defined relative to humanity in some way. The movie Her explored this limit, with Joaquin Phoenix falling in love with his AI (voice of Scarlett Johansson), and then (whited-out spoiler follows, highlight it to read it) that entity saying a tearful goodbye to him because she couldn't really explain "where she was going".
There was a good point there, existential misconceptions aside: eventually, the only thing future humans will ever remember about a past or present AI is what they can understand from it. With, of course, the exception of the Skynet scenario or even some of the more gruesome scenarios that were envisioned by Frank Herbert. That's really the line: giving up control to the system. In a sense, that's not a new dilemma.
Post by laughter on Oct 6, 2024 7:39:51 GMT -5
(It's an interesting interview, notice how Gates, a transhumanist, doesn't seem to get the wire frame monkey analogy/concern) I remember watching a doc that included those monkeys, back in the 70's. I was very young. It was literally sickening. Felt for those poor bastards as only a child can.