|
Post by melvin on Sept 13, 2024 16:53:14 GMT -5
I thought at the time andrew's views were AI. That's why I posted that. I don't remember that. Are you sure? Maybe I just have a memory blank. That was only a joke and Reefs thought I was trolling.
|
|
|
Post by andrew on Sept 13, 2024 16:58:03 GMT -5
I don't remember that. Are you sure? Maybe I just have a memory blank. That was only a joke and Reefs thought I was trolling. No probs, it's cool. I find memory an interesting thing: my long-term memory is generally very good, and my short-term memory can be quite poor. It's not a problem for me, but yeah... I find memory an interesting thing.
|
|
|
Post by melvin on Sept 13, 2024 17:15:43 GMT -5
That was only a joke and Reefs thought I was trolling. no probs, it's cool. I find memory an interesting thing, my long term memory is generally very good, and my short term memory can be quite poor. It's not a problem for me, but yeah...I find memory an interesting thing. Thanks for understanding. I retrieve my memories from favorite songs I saved on Spotify.
|
|
|
Post by Gopal on Sept 13, 2024 22:12:33 GMT -5
That MIGHT be an ego view of the matter (and my reply here might also be an ego perspective on the matter!) My experience is also that spiritual growth is ultimately a solitary thing. Any form of 'death' has to be solitary really. I don't think so. One may believe that, but as far as I know, everything is connected. There are free-will and individual-choice, but they don't amount to solitude. If you (impersonal use here) believe that you are alone (on your spiritual quest, in your death), then you'll experience that, but that's an illusion, and it is detrimental: it prevents you from consciously getting the guidance and knowledge that is provided to you. You'll still get them subconsciously, but you're slowed down. No free will: choosing is happening, but there is no chooser; controlling is happening, but there is no controller.
|
|
|
Post by Gopal on Sept 13, 2024 22:13:25 GMT -5
I have enjoyed joining this group briefly, but it has mainly focused my understanding that genuine spiritual endeavour is a solitary occupation. If the ego has dissolved, there is no need to discuss it with anybody else. I believe that I am here in the physical reality, in this place, moment, probability, for a reason. I may have participated in choosing this. So, I have to the best of my abilities to find out what that reason is, and to the best of my abilities work on it. "Discussing" is a way of interacting. I need to interact with the reality, otherwise I wouldn't be here. If one can choose to incarnate here, then why are so many poor people suffering like hell?
|
|
|
Post by inavalan on Sept 13, 2024 22:29:45 GMT -5
I believe that I am here in the physical reality, in this place, moment, probability, for a reason. I may have participated in choosing this. So, I have to the best of my abilities to find out what that reason is, and to the best of my abilities work on it. "Discussing" is a way of interacting. I need to interact with the reality, otherwise I wouldn't be here. If one can choose to incarnate here, then why so much poor people suffering like hell? I don't know why every one of them chose those roles. Most of them, I guess, believed they'd be able to handle it successfully. Maybe they lacked humility, and they'll learn something. I think that the better question for you, for me, and for others who don't suffer that much themselves, but who observe others' suffering, is why we chose to join this kind of reality, with so much apparent suffering. I guess: to observe, interpret, learn, grow. Why would you ever read a tale or watch a movie with a sad story? Let's say most of Shakespeare, Andersen, Brothers Grimm, ... Most people have no idea.
|
|
|
Post by Gopal on Sept 13, 2024 22:32:38 GMT -5
If one can choose to incarnate here, then why so much poor people suffering like hell? I don't know why every one of chose those roles. Most of them, I guess, believed they'll be able to handle it successfully. Maybe they lacked humility, and they'll learn something. I think that the better question for you, for me, and for others who don't suffer that much themselves, but who observe others' suffering, is why did we choose to join this kind of reality, with so much apparent suffering? I guess: to observe, interpret, learn, grow. Why would you ever read a tale or watch a movie with a sad story? Let's say most of Shakespeare, Andersen, Brothers Grimm, ... Most people have no idea. It's not the question of why. The reality of people must tell you that no one chooses to incarnate into such a hellish life. Everyone wants to live peacefully, but then why are children born into such a hellish environment and why do they suffer their whole lives? Because it's not their choice.
|
|
|
Post by inavalan on Sept 13, 2024 23:19:13 GMT -5
I don't know why every one of chose those roles. Most of them, I guess, believed they'll be able to handle it successfully. Maybe they lacked humility, and they'll learn something. I think that the better question for you, for me, and for others who don't suffer that much themselves, but who observe others' suffering, is why did we choose to join this kind of reality, with so much apparent suffering? I guess: to observe, interpret, learn, grow. Why would you ever read a tale or watch a movie with a sad story? Let's say most of Shakespeare, Andersen, Brothers Grimm, ... Most people have no idea. It's not the question of why. The reality of people must tell you that no one choose to incarnate such a hellish life. Everyone wants to live peacefully but then why children born in such a hellish environment and suffer their whole life? Because it's not their choice. We have different hypotheses about what physical-reality is, what the awake-you is, what else there is in the non-physical, ... Those hypotheses determine our answers to those questions, and obstruct the understanding of others' answers and opinions. You wrote "Because it's not their choice." This seems to me like saying about you, as a student taking a test and failing it, that failing it isn't your choice. Surely it is, because you are the person who chose not to be prepared for the test, although you knew you'd take it, you knew what material you had to cram, you knew your current level of abilities, but you estimated that you'd be able to pass the test. You are the person who, when in school, is a student. You can't say that the student didn't choose to fail.
|
|
|
Post by Reefs on Sept 23, 2024 21:53:59 GMT -5
|
|
|
Post by laughter on Sept 26, 2024 10:36:54 GMT -5
We've had dialogs before where I've related various topics to the scifi I read when I was a kid. At some point the notion that technology would transform the human race became a central theme. This was back in the '70s. The next decade saw "cyberpunk" emerge. Somewhere, I don't recall exactly, some author or futurist/critic envisioned a struggle between three groups: (1) people that would modify themselves (like the borg, or the six million dollar man), (2) transfer of consciousness to an electronic medium, or (3) genetic modification to keep up with the machines rather than merge with them in any way. Lost the thread at about that time, as my reading interests changed.
|
|
|
Post by Reefs on Sept 27, 2024 22:12:30 GMT -5
We've had dialogs before where I've related various topics to the scifi I read when I was a kid. At some point the notion that technology would transform the human race became a central theme. This was back in the '70s. The next decade saw "cyberpunk" emerge. Somewhere, I don't recall exactly, some author or futurist/critic envisioned a struggle between three groups : (1) people that would modify themsleves (like the borg, or the six million dollar man), (2) transfer of consciousness to an electronic media or (3) genetic modification to keep up with the machines rather than merge with them in any way. Lost the thread at about that time, as my reading interests changed. The philosophy behind AI/OI is transhumanism (see The Matrix 1-3) and posthumanism (see The Matrix 4). In a sense, wearing glasses, a hearing aid or having dental implants or a hip replacement is already some form of primitive transhumanism. So transhumanism has been a reality for a very long time already. But in the near future it is going to get a lot more subtle, it will move to the micro and cellular level. The next step though - and in their minds the final step - is posthumanism (see the Transcendence movie). The problem with these philosophies though is that they are based on several flawed premises that go back to a misunderstanding of what consciousness is. And that is, interestingly, very similar to what we encounter here on the forum all the time, i.e. people confusing consciousness with mind. And in this case also confusing mind with the intellect. When you do that, you will eventually arrive at solipsism and conclude that you are living in a simulation and that your identity and beingness could be reduced to a mere set of data. It's the inevitable conclusion if you follow the rational approach.
In terms of history, these models of reality are also very shortsighted. They seem to see the technological age as the final stage in human development. In terms of yuga theory though, the technological age is but a short transitory stage in the whole cycle, because the new abilities that technology seems to equip us with are actually our own natural abilities. So the technological age is a sort of bridge between not knowing and not using our own innate abilities to fully knowing and using our own innate abilities. Therefore, technology, as usual, is just mimicking nature again.
Now, as with all these developments, what we are dealing with is not a linear but an exponential scenario, i.e. for a very long time there's not much happening and it sort of flies under the radar (see the past 50 years). But then there's a point when it suddenly starts to explode, and we are right at that point now, where progress with these technologies is exploding (see the sketch at the end of this post), which means the world as we know it will cease to exist very soon.
And while the results are pretty impressive, it has to be pointed out, especially in public discourse, that the underlying model of reality is deeply flawed and also a bit naive.
Nevertheless, it will radically transform our lives within the next 5 years. It will throw a lot of people and entire societies into a deep identity crisis, including our models of governance and economy. The fact alone that AI will very soon replace any and all work/jobs is going to be a tough one for societies that identify with the work/jobs they do.
Interesting times ahead!
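A side note on the linear-vs-exponential point above: the sketch below (illustrative numbers only, chosen to show the shape of the curves, not a forecast of any real technology trend) shows why compounding growth can look negligible for a long stretch and then appear to explode.

```python
# Illustrative only: why exponential change "flies under the radar" and then explodes.
# The slope and growth rate below are arbitrary assumptions, not measurements.

def linear(step, slope=2.0):
    return slope * step            # steady, predictable growth

def exponential(step, rate=0.15):
    return (1 + rate) ** step      # ~15% compounding per step (assumed rate)

for step in (1, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: linear={linear(step):6.1f}  exponential={exponential(step):8.1f}")

# Early on the exponential curve trails the linear one; by step 50 it is
# roughly ten times larger and pulling away fast.
```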
|
|
|
Post by laughter on Sept 28, 2024 17:08:37 GMT -5
Ok, first two sidetracks before I get to the substance. The first is comedic. I was searching for this vid on a different platform and found a really funny horror short. (not for the over-sensitive)
The second side-track has to do with some thoughts about the tech. The vid concentrates entirely on the "how" of this. The only guy who gets into the "what", as in, "what can this stuff do?", is the guy who trained the flight simulator. It piques my interest. ChatGPT is quite transparent as to the big picture of its design, and at its core is tech that was invented in the 1960s. The idea is inspired by human neurons, but the building block, the artificial "neuron", is simplified relative to the organic version to the extent that calling it childlike would be a compliment. Point being, the artificial neural networks that implement the large language models are something completely different from a human brain. They've developed along a results-driven arc, without regard to the original design impetus to replicate the organic "prototype".
The reason I bring this up is because I'm curious as to what the benefit of an O.I. approach would be. There were a few hints in this vid, but it left me with many more questions. Interesting stuff, thanks. I could get into quite a bit more detail about ANN design, but didn't want to get lost in the weeds unless someone expresses interest.
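To make the "simplified building block" point concrete, here is a minimal sketch of a single artificial neuron in the spirit of the 1960s perceptron work: a weighted sum of inputs pushed through a squashing function. This is an illustrative sketch only, not the internals of ChatGPT or any particular model, and the numbers are made up for the example.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One 'neuron': a weighted sum of inputs passed through a squashing function.

    This is the whole building block -- no dendrites, spikes, neurotransmitters,
    or timing dynamics, which is why comparing it to an organic neuron is generous.
    """
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))   # sigmoid activation

# Toy usage with made-up numbers:
print(artificial_neuron(inputs=[0.5, -1.2, 3.0],
                        weights=[0.8, 0.1, -0.4],
                        bias=0.05))
```

Stacking very large numbers of units like this, plus a training rule that adjusts the weights, is essentially what the networks behind large language models are built from, which is the sense in which they differ completely from a brain.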
|
|
|
Post by laughter on Sept 28, 2024 17:29:20 GMT -5
Yes, I agree about the misconceptions and the confusion. And it doesn't help that the Transcendence view (a restatement of the punchline from A.I.) fits a particular pointing .. the one that the solipsists went all D.G. on, and that ZD likes to joke about with the "Igor .. !" joke. The chip-neurons are no more conscious than a rock. Or a people-peep. But, of course, a people-peep, is not, a rock.
The Large Language Models are definitely already making an impact, but, in another sense, transhumanism is just an extension of one of the primary differentiators of human beings from other animals: tool usage. Now, we've come to understand and accept that many other animals also use tools, but, of course, it's all and always a matter of degree. Automation has been with us and transforming the workplace since long before either of us was born, and the funny thing is, people seem to have this knack for making themselves useful, despite the inexorability of it all.
Hey, you know, I was completely ignorant about "yugas" and got curious a few months back. Funny that there are unexplained skeletal remains of giants. As well as dozens of instances of massive artificial stone formations that defy conventional explanation.
|
|
|
Post by justlikeyou on Sept 28, 2024 18:56:45 GMT -5
Funny that there are unexplained skeletal remains of giants. As well as dozens of instances of massive artificial stone formations that defy conventional explanation.
|
|
|
Post by Reefs on Sept 28, 2024 23:35:22 GMT -5
One huge benefit of O.I. is the extremely low energy consumption compared to current A.I. as mentioned in the vid. The high energy consumption of A.I. data centers is apparently already a problem.
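For a rough sense of scale on that point (these are back-of-envelope figures assumed for illustration, not numbers from the vid): a human brain is commonly estimated to run on about 20 watts, while a single high-end AI accelerator can draw several hundred watts, and large training clusters use thousands of them.

```python
# Back-of-envelope comparison. All figures are rough assumptions for illustration,
# not measurements from the video or any specific vendor.

BRAIN_WATTS = 20          # commonly cited estimate for a human brain
GPU_WATTS = 700           # assumed draw of one high-end AI accelerator
GPUS_IN_CLUSTER = 10_000  # assumed size of a large training cluster

cluster_watts = GPU_WATTS * GPUS_IN_CLUSTER   # 7,000,000 W = 7 MW
print(f"Cluster draw: ~{cluster_watts / 1e6:.1f} MW")
print(f"Brain-equivalents at 20 W each: ~{cluster_watts / BRAIN_WATTS:,.0f}")
```

Even if these assumed numbers are off by a factor of a few, the gap is wide enough to explain the interest in biological substrates.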
|
|