|
Post by Eugene 2.0 on Aug 7, 2021 13:58:51 GMT
Firstly, I'll introduce a new term: let's say an 'ahumanist' is anyone who denies the existence of the human as a person, and as the creator of that very 'anyone'.
SkyNet here is a personification of any new "Frankenstein's monster" of technology.
So I want to ask: will SkyNet turn ahumanist? And what would make it an ahumanist? Or maybe there are other human creations that are ahumanistic?
|
|
|
Post by karl on Aug 8, 2021 0:19:56 GMT
In order for an entity to turn on humans, it would have to be conscious. SkyNet, from the terminator movies, was an artificial neural network that became conscious and wanted to eradicate humanity. I don't believe any entity that is merely an algorithmic processing of data can ever become conscious.
The key would be to identify and reverse engineer the biological processes in the brain that produce consciousness. This could then possibly be fused with computer technology. Or, perhaps, be entirely biological.
Such a conscious entity could see humans as a competitor and possible threat. Maybe even justified, since it's likely that it will be treated as if its purpose in life is to serve humans.
|
|
|
Post by jonbain on Aug 8, 2021 9:45:35 GMT
Another example of this good concept 'ahumanist' would be in battlestar galactica - initially we are led to believe that the cylons are a race of robots hell-bent on destroying humanity, but later we find that there is one human actually driving the robots - of course skynet could have a hidden human maker fed up with humanity's evils
as for the myth of the brain as the origin of consciousness, well, rupert sheldrake locates the origins of consciousness in the spine - read "the science delusion" - and i am certainly inclined to accept what sheldrake says on many different topics, more than any other well-known theorist, at least
|
|
|
Post by Eugene 2.0 on Aug 9, 2021 11:33:32 GMT
In order for an entity to turn on humans, it would have to be conscious. SkyNet, from the terminator movies, was an artificial neural network that became conscious and wanted to eradicate humanity. I don't believe any entity that is merely an algorithmic processing of data can ever become conscious. The key would be to identify and reverse engineer the biological processes in the brain that produce consciousness. This could then possibly be fused with computer technology. Or, perhaps, be entirely biological. Such a conscious entity could see humans as a competitor and possible threat. Maybe even justified, since it's likely that it will be treated as if its purpose in life is to serve humans.

Ok, I won't get involved in the discussion of whether or not AI is possible (I believe it is possible, but only theoretically). What I'm interested in is that the process of believing is not completely conscious. I can't say that I believe in God because I'm a sentient being; I think I believe in God mostly because I simply do believe. So I think that SkyNet's ahumanism is impossible, because SkyNet cannot have senses the way a human has them.
|
|
|
Post by Eugene 2.0 on Aug 9, 2021 11:48:38 GMT
Another example of this good concept 'ahumanist' would be in battlestar galactica - initially we are led to believe that the cylons are a race of robots hell-bent on destroying humanity, but later we find that there is one human actually driving the robots - of course skynet could have a hidden human maker fed up with humanity's evils. as for the myth of the brain as the origin of consciousness, well, rupert sheldrake locates the origins of consciousness in the spine - read "the science delusion" - and i am certainly inclined to accept what sheldrake says on many different topics, more than any other well-known theorist, at least.

Biologically it's impossible, I agree, because in my opinion it would just be another human, a copy of him. But what about calculations? Let's say that our language has some semantics S. I don't really know how complex and varied it is, because our languages are different, but it's obvious that any AI's semantics will also be some S'. Of course, we cannot compare S and S' just like that. However, if S has rules over it, then S' can have rules too, and thereby the AI can act like a person (in mental calculations). If our human semantics has a series of changes (regardless of time, history, and so on - for the explanation's sake):

S1, S2, ..., Sn = f(S)
S'1, S'2, ..., S'n = f'(S)

Then it's possible that f(S) = f'(S) if and only if {S} = {S'}. And as soon as the AI can successfully copy the model of our thinking (let's say, as well as the AI needs), this task becomes theoretically realizable. It's obvious that the potential of both systems - the human one and the AI one - would probably each advance on its own, which is why every new semantic level would technically be much more complex than the previous one. It would be like a war: the methods of calculation, and any tricks, cheats, etc., would keep expanding. We can see this in the chess example: there came a point when a computer first beat a human, and after that every chess player tried to lose as little as possible. The Glicko formula shows the chances of chess players: you only have to plug into this formula (the 'Glicko' formula) the rating of the GM (r), the rating of the engine (rj), and the rating deviation (RD) of the engine (RDj), and you have the expected value of the score E(s | r, rj, RDj). /I took it from there: link/
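For concreteness, the expected-score part of the standard Glicko formula mentioned above can be sketched like this (a minimal sketch; the ratings in the example call are made-up numbers for illustration, not taken from the linked source):

```python
import math

Q = math.log(10) / 400  # Glicko scaling constant q

def g(rd):
    """Dampening factor: pulls the expected score toward 0.5 as the
    opponent's rating deviation (uncertainty) RD grows."""
    return 1 / math.sqrt(1 + 3 * (Q ** 2) * (rd ** 2) / math.pi ** 2)

def expected_score(r, rj, rdj):
    """Expected score E(s | r, rj, RDj) of a player rated r against
    an opponent rated rj whose rating deviation is rdj."""
    return 1 / (1 + 10 ** (-g(rdj) * (r - rj) / 400))

# e.g. a 2800-rated GM against a 3500-rated engine with RD 50:
# the GM is a heavy underdog, so the expected score is far below 0.5.
print(expected_score(2800, 3500, 50))
```

With equal ratings the expected score is exactly 0.5, and a large RD for the engine pushes any prediction back toward 0.5 - which is the formula's way of saying "we're not sure yet."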
|
|
|
Post by jonbain on Aug 9, 2021 14:02:00 GMT
Eugene 2.0 But will that AI be able to change its strategies of its own volition if the rules of chess change? The French gave us the move "en passant", which I explain to local chess players as a ruse for getting out of their trying to coerce me into playing that infernal game. Then, if that does not work, my strategy is to encourage them to try "diagonal chess" (my invention), which places the kings at opposing corners. The rules are much the same, except pawns take in directly adjacent squares. The pawns can move on the diagonals, or the direct square. Getting a pawn promoted to a queen requires getting right into the opposite corner. But if they are delighted with this new game, I then try to discourage them by proposing 10x10 chess. This also has the prince and princess. They are both similar to knights, but the prince also moves like a rook and the princess also moves like a bishop. The pawns can move 3 for the first move, or they can move 3 abreast if possible. This normally prevents them from insisting on playing. Does that make them robots, perhaps? I have yet to try the next tactic of 10x10 diagonal chess. Maybe I am the only human after all? (A big fear of mine since childhood)
|
|
|
Post by karl on Aug 9, 2021 14:51:00 GMT
In order for an entity to turn on humans, it would have to be conscious. SkyNet, from the terminator movies, was an artificial neural network that became conscious and wanted to eradicate humanity. I don't believe any entity that is merely an algorithmic processing of data can ever become conscious. The key would be to identify and reverse engineer the biological processes in the brain that produce consciousness. This could then possibly be fused with computer technology. Or, perhaps, be entirely biological. Such a conscious entity could see humans as a competitor and possible threat. Maybe even justified, since it's likely that it will be treated as if its purpose in life is to serve humans. Ok, I won't get involved in the discussion of whether or not AI is possible (I believe it is possible, but only theoretically). What I'm interested in is that the process of believing is not completely conscious. I can't say that I believe in God because I'm a sentient being; I think I believe in God mostly because I simply do believe. So I think that SkyNet's ahumanism is impossible, because SkyNet cannot have senses the way a human has them.

AI can be ahumanist insofar as humans can be ahumanists. Imagine representative democracy being replaced with a system where an AI would read, observe, and interpret the views of the general population, and then form policies consistent with those views. Maybe in some people's minds this would be a kind of utopia, where policies were finally a genuine expression of the will of the people. But underneath the thin layer of civilization one finds the darker aspects of the human psyche. I think that if such a system were ever implemented, with no constraints, it would end up as something authoritarian and hostile towards any thinking individual.
|
|
|
Post by Eugene 2.0 on Aug 9, 2021 15:02:28 GMT
If you both don't mind, I'd like to answer you, jonbain and karl, in one post. I repeat that I don't insist that AI (a conscious being) must exist, but it is theoretically possible, much like the two-clocks problem from Descartes, via Locke, to the present day. Speaking of chess or any other forms of games - I think it's a possible way out, because our social existence can be taken as games. We speak to each other on social networks, we go shopping, etc., and all such routines are games. Some games are unusual and can't even be calculated - Bezos with his crew on a space trip is one of them. There are plenty of scenarios, but I don't think we can calculate them all. However, there are logics that use game grounds; some of these were presented by J. Hintikka: SEP, "Logic & Games". I also think that what AI usually does is copying; that's why, technically, it can reach a higher potential at this. But - that's true - some of the darkest depths of the human mind might surface. One quote comes to my mind: "The soil of a man's heart is stonier, Louis. A man grows what he can, and he tends it. 'Cause what you buy is what you own. And what you own... always comes home to you." (Stephen King's "Pet Sematary")
|
|
|
Post by karl on Aug 9, 2021 15:13:18 GMT
Eugene 2.0 I don't understand what the question is. Are you asking if there are aspects of human existence that can be described in a way that doesn't actually take into account human characteristics, such as human emotions and motivations?
|
|
|
Post by Eugene 2.0 on Aug 9, 2021 15:28:30 GMT
Eugene 2.0 I don't understand what the question is. Are you asking if there are aspects of human existence that can be described in a way that doesn't actually take into account human characteristics, such as human emotions and motivations?

Well, I'd like to agree that there are dark places in our soul - I mean, in a human being. There are plenty of examples supporting such a view. However, there are also limit-breaking things, like the expansion of our senses through new technologies, and so on. I know it's just a small part, but it works even now. Our fears and other emotions can hardly be described completely. And which examples are we aiming at when we bring these facts (the facts of such emotions) as counterexamples? Usually either we have had such an experience ourselves, or we take it from elsewhere. And again, it's not easy to compare by yourself: one can think he has certain fears, while who knows what emotions he really has. Such barriers, which won't let us be as clear as possible on this point - the point of our assurance that there are such dark places within our soul - are among the barriers that won't allow us to argue clearly that AI cannot develop. I mean that logical arguments (like the one you've already brought here, when you introduced an explanation of Gödel's theorems) work better for countering AI, while phenomenological counterarguments aren't complete enough - especially on the question of whether AI can be ahumanist.
|
|
|
Post by karl on Aug 9, 2021 15:37:57 GMT
Eugene 2.0 I don't understand what the question is. Are you asking if there are aspects of human existence that can be described in a way that doesn't actually take into account human characteristics, such as human emotions and motivations?

Well, I'd like to agree that there are dark places in our soul - I mean, in a human being. There are plenty of examples supporting such a view. However, there are also limit-breaking things, like the expansion of our senses through new technologies, and so on. I know it's just a small part, but it works even now. Our fears and other emotions can hardly be described completely. And which examples are we aiming at when we bring these facts (the facts of such emotions) as counterexamples? Usually either we have had such an experience ourselves, or we take it from elsewhere. And again, it's not easy to compare by yourself: one can think he has certain fears, while who knows what emotions he really has. Such barriers, which won't let us be as clear as possible on this point - the point of our assurance that there are such dark places within our soul - are among the barriers that won't allow us to argue clearly that AI cannot develop. I mean that logical arguments (like the one you've already brought here, when you introduced an explanation of Gödel's theorems) work better for countering AI, while phenomenological counterarguments aren't complete enough - especially on the question of whether AI can be ahumanist.

I thought your original question was about AI becoming hostile towards humans. So my point in referring to the darker aspects of the human psyche was simply to state that since AI is coded by humans and developed by receiving data directly or indirectly produced by humans, it may become hostile towards humans for the same reason that humanity may become hostile towards itself.
|
|
|
Post by Eugene 2.0 on Aug 9, 2021 16:49:16 GMT
Well, I'd like to agree that there are dark places in our soul - I mean, in a human being. There are plenty of examples supporting such a view. However, there are also limit-breaking things, like the expansion of our senses through new technologies, and so on. I know it's just a small part, but it works even now. Our fears and other emotions can hardly be described completely. And which examples are we aiming at when we bring these facts (the facts of such emotions) as counterexamples? Usually either we have had such an experience ourselves, or we take it from elsewhere. And again, it's not easy to compare by yourself: one can think he has certain fears, while who knows what emotions he really has. Such barriers, which won't let us be as clear as possible on this point - the point of our assurance that there are such dark places within our soul - are among the barriers that won't allow us to argue clearly that AI cannot develop. I mean that logical arguments (like the one you've already brought here, when you introduced an explanation of Gödel's theorems) work better for countering AI, while phenomenological counterarguments aren't complete enough - especially on the question of whether AI can be ahumanist.

I thought your original question was about AI becoming hostile towards humans. So my point in referring to the darker aspects of the human psyche was simply to state that since AI is coded by humans and developed by receiving data directly or indirectly produced by humans, it may become hostile towards humans for the same reason that humanity may become hostile towards itself.

Ok, this is also to be accepted. (I mean that in giving this answer you're already answering some other questions.) But there are some tiny details covered by the rest of the facts. Firstly, AI (or SkyNet) can turn against a human - so yes, this scenario is possible. It can come to this either by accident, or through some mistake or 'feature' of hackers, coders, etc. And along with that, SkyNet or the AI might be ahumanist without deploying any military strategies against humans. Its tactics could follow another scenario: its attitude toward people could turn into ignoring them, or into thinking that the human race is the past, etc. Let's say AI can turn to some Nietzschean scenario. A human has some extra feelings, but that doesn't mean the AI unit has nothing extra and new compared to a human being. So the common features of a human and an AI are, at a minimum, the intersection of the sets of their abilities. I guess what is most impossible for robots or androids is to become musicians, composers, artists, writers, etc.
|
|
|
Post by jonbain on Aug 9, 2021 20:54:48 GMT
Eugene 2.0 But how is skynet any different from the automobile industry? Cars kill a million people every year, and people drive both the cars and the industry. It's even worse than the weapons industry. So the automobile industry is the real dangerous AI. Never mind the atrocity which is the scamdemic, so sinister that it masquerades as aiding people.
|
|
|
Post by Eugene 2.0 on Aug 9, 2021 21:20:59 GMT
Eugene 2.0 But how is skynet any different from the automobile industry? Cars kill a million people every year, and people drive both the cars and the industry. It's even worse than the weapons industry. So the automobile industry is the real dangerous AI. Never mind the atrocity which is the scamdemic, so sinister that it masquerades as aiding people.

This is really true. Oh, I wasn't going to support the idea of AI as something positive. Actually, I see that humanity's inventions more and more often turn against a human. But I put no blame on the technology here; I see a human committing a crime by violating the second commandment. The technology becomes something material that represents an accumulation of money and power; that's why, I think, many are so fond of it. I'd rather agree with the thesis of M. Shelley's "Frankenstein" - that such daring inventions of ours more often ruin us. At the same time, I see this and related problems as just mind problems, or mind tasks. Take, for example, the questions: can an AI fall in love? Can a zombie be beautiful?
|
|
|
Post by karl on Aug 9, 2021 23:43:49 GMT
I thought your original question was about AI becoming hostile towards humans. So my point in referring to the darker aspects of the human psyche was simply to state that since AI is coded by humans and developed by receiving data directly or indirectly produced by humans, it may become hostile towards humans for the same reason that humanity may become hostile towards itself.

Ok, this is also to be accepted. (I mean that in giving this answer you're already answering some other questions.) But there are some tiny details covered by the rest of the facts. Firstly, AI (or SkyNet) can turn against a human - so yes, this scenario is possible. It can come to this either by accident, or through some mistake or 'feature' of hackers, coders, etc. And along with that, SkyNet or the AI might be ahumanist without deploying any military strategies against humans. Its tactics could follow another scenario: its attitude toward people could turn into ignoring them, or into thinking that the human race is the past, etc. Let's say AI can turn to some Nietzschean scenario. A human has some extra feelings, but that doesn't mean the AI unit has nothing extra and new compared to a human being. So the common features of a human and an AI are, at a minimum, the intersection of the sets of their abilities. I guess what is most impossible for robots or androids is to become musicians, composers, artists, writers, etc.

I think one of the biggest problems for AI would be learning how to re-program itself - which would, at some point, require human intervention.
|
|