Post by Eugene 2.0 on Aug 7, 2021 13:58:51 GMT
First, let me introduce a new term. Let's say an 'ahumanist' is any agent that denies the existence of a human being, both as a person and as that agent's creator.
SkyNet here stands for any new technological "Frankenstein's monster".
So I want to ask: would SkyNet become an ahumanist? And what would make it one? Or are there perhaps other human creations that are already ahumanistic?
Post by Eugene 2.0 on Aug 10, 2021 3:51:21 GMT
OK, this is also to be accepted. (I mean that by giving this answer you're already answering some of the others.) But there are some small details hidden among the rest of the facts. First, AI (or SkyNet) can turn against a human, so yes, this scenario is possible. It can come to that by accident, or through some mistake or "feature" introduced by hackers, or by the coders, etc. And along with that, SkyNet or the AI might be ahumanist without waging any military campaign against humans. Its tactics could shift to the following scenario: its attitude toward people could turn into ignoring them, or into regarding the human race as the past, etc. Let's say the AI could turn to a kind of Nietzschean scenario. A human has certain extra feelings, but that doesn't mean an AI unit couldn't have extra new ones compared to a human being. So the common features of a human and an AI are, at minimum, the intersection of the sets of their abilities. I guess the most impossible thing for robots or androids would be to become musicians, composers, artists, writers, etc.
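The "common features as an intersection of ability sets" idea above is just set intersection. A minimal sketch, with entirely hypothetical ability names chosen for illustration only:

```python
# Hypothetical ability sets; the entries are illustrative, not claims about real AI.
human_abilities = {"reasoning", "planning", "composing music", "feeling pain"}
ai_abilities = {"reasoning", "planning", "fast arithmetic", "perfect recall"}

# The features common to both are the intersection of the two sets.
common = human_abilities & ai_abilities
print(sorted(common))  # → ['planning', 'reasoning']
```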
I think one of the biggest problems for AI would be learning how to reprogram itself, which would, at some point, require human intervention.
Well, at first sight I agree with you; my heart tells me this. On the other hand, the development of the newest systems makes me less categorical here. First, we have recursive systems of which we can at least partially say that they are capable of some re-programming. Second, there are lambda-function principles, which come close to the idea of recursive self-programming. Why am I not so sure about what I said above? Because I don't know how to realize it in real life. Say I can build a robot that is able to monitor its own parameters, and instead of the given values it will calculate the best parameters (rather than just putting preset values into the variable slots). At the same time, its functions are sums of simpler functions, and it would also be able to test some functions to make them better than the others. But what counts as the "best" or "not the best" scenario? I guess this can be realized by letting the system learn how people do it. And such systems exist: the well-known neural networks. As I said, I don't know how their basic functions work, but I suppose there are recursive functions and many unusual loops in there. www.zdnet.com/article/chinas-ai-scientists-teach-a-neural-net-to-train-itself/
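The idea of a system that monitors its own parameters and replaces them with better ones can be sketched as a simple hill-climbing loop. This is only a toy illustration under a made-up scoring function, not how real self-training networks work:

```python
import random

def score(params):
    # Hypothetical quality measure: higher is better.
    # By construction, the best parameters are (3.0, -1.0).
    x, y = params
    return -((x - 3.0) ** 2 + (y + 1.0) ** 2)

def self_tune(params, steps=2000):
    """Repeatedly propose a small random change and keep it only if it scores better."""
    best = params
    for _ in range(steps):
        candidate = tuple(p + random.uniform(-0.1, 0.1) for p in best)
        if score(candidate) > score(best):
            best = candidate  # the system overwrites its own parameters
    return best

tuned = self_tune((0.0, 0.0))
print(tuned)  # should land near (3.0, -1.0)
```

The catch, which the thread comes back to below, is that the scoring function itself is still supplied by the programmer; the system tunes its parameters but never chooses what "better" means.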
Post by jonbain on Aug 10, 2021 14:27:05 GMT
Eugene 2.0, but how is SkyNet any different from the automobile industry? Cars kill a million people every year, and people drive both the cars and the industry. It's even worse than the weapons industry. So the automobile industry is the real dangerous AI. Never mind the atrocity that is the scamdemic, so sinister that it masquerades as aiding people.

This is really true. Oh, I wasn't going to support the idea of AI as something positive. Actually, I see that humanity's inventions more often turn against a human. But I don't blame the technology here; I see that a human commits a crime by violating the second commandment. The technology becomes something material that represents an accumulation of money and power. That's why, I think, many are so fond of it. I'd rather agree with M. Shelley's "Frankenstein" thesis: that such daring inventions of ours more often end up ruining us. At the same time, I see this and related problems as purely problems of mind, or tasks for the mind. Take, for example, the questions: can an AI fall in love? Or can a zombie be beautiful?

But how do we know anyone else is even conscious? How much of 'falling in love' is just a robotic sex instinct? Baby monkeys will instinctively cuddle furry lifeless statues. Cats will 'knead' at a jersey that reminds them of their mothers. People admiring a painting are effectively falling into the same illusion, even though they may know it's not real. Is all art then fake reality?
Earlier I mentioned my childhood fear that everybody else is just a robot. Can you prove that they are not?
Post by Eugene 2.0 on Aug 10, 2021 14:51:34 GMT
This is really true. Oh, I wasn't going to support the idea of AI as something positive. Actually, I see that humanity's inventions more often turn against a human. But I don't blame the technology here; I see that a human commits a crime by violating the second commandment. The technology becomes something material that represents an accumulation of money and power. That's why, I think, many are so fond of it. I'd rather agree with M. Shelley's "Frankenstein" thesis: that such daring inventions of ours more often end up ruining us. At the same time, I see this and related problems as purely problems of mind, or tasks for the mind. Take, for example, the questions: can an AI fall in love? Or can a zombie be beautiful?

But how do we know anyone else is even conscious? How much of 'falling in love' is just a robotic sex instinct? Baby monkeys will instinctively cuddle furry lifeless statues. Cats will 'knead' at a jersey that reminds them of their mothers. People admiring a painting are effectively falling into the same illusion, even though they may know it's not real. Is all art then fake reality?
Earlier I mentioned my childhood fear that everybody else is just a robot. Can you prove that they are not?
That is a real counterargument. Yes, once the question is turned that way, it becomes much harder to answer. I remember John Austin's article "Other Minds", where he raised this question. Phenomenologically, it's actually impossible to prove: you may well think you are surrounded by a chapiteau circus where everyone is a doll. However, we can reduce the difficulty of the problem by pairing our target with our current goals, saying: if y has a mind, and another person x has one too, then for every F, if F(x) is a feature of minds, then F(y) is also a feature of minds. In short, we can compare any features that represent minds by pairing them. I know this holds only theoretically, but knowledge itself is a huge problem, which is why we are also imprisoned in the epistemological cage.
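The pairing move above, for every feature F with F(x) true of a known mind x, check whether F(y) holds of y too, can be sketched as a finite check. The feature names and truth values here are entirely hypothetical, chosen only to illustrate the comparison:

```python
# Hypothetical mind-features; the values are illustrative, not empirical claims.
features_of_x = {"uses language": True, "reports pain": True, "plans ahead": True}
features_of_y = {"uses language": True, "reports pain": True, "plans ahead": True}

def passes_pairing_test(x_feats, y_feats):
    """For every feature F with F(x) true, require F(y) to be true as well."""
    return all(y_feats.get(f, False) for f, holds in x_feats.items() if holds)

print(passes_pairing_test(features_of_x, features_of_y))  # → True
```

Of course, this only restates the epistemological cage: the check is as good as the finite feature list we wrote down, which is exactly the part we cannot verify phenomenologically.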
Post by thesageofmainstreet on Aug 10, 2021 17:10:42 GMT
OK, I won't get involved in the discussion of whether or not AI is possible (I believe it's possible, but only theoretically). What I'm interested in is that the process of believing is not completely conscious. I can't say that I believe in God because I'm a sentient being; I think I believe in God mostly because I simply do believe. So I think SkyNet's ahumanism is impossible, due to SkyNet's inability to have senses as a human has.
AI can be ahumanist insofar as humans can be ahumanists. Imagine representative democracy being replaced with a system where an AI would read, observe, and interpret the views of the general population, and then form policies consistent with those views. Maybe in some people's minds this would be a kind of utopia, where policies were finally a genuine expression of the will of the people. But underneath the thin layer of civilization one finds the darker aspects of the human psyche. I think that if such a system were ever implemented, with no constraints, it would end up as something authoritarian and hostile toward any thinking individual.
Representation Is a Re-Presentation of Medieval Tyranny
The darker aspects are far more likely to dominate in the elitist clique that a republic empowers. The Snob Mob itself is behind the slander against those it has, for all practical purposes, disenfranchised, because electing is not voting; it is a forced choice of which pre-owned politician will do all the voting on laws in your place.
Post by karl on Aug 11, 2021 0:51:31 GMT
I think one of the biggest problems for AI would be learning how to reprogram itself, which would, at some point, require human intervention.
Well, at first sight I agree with you; my heart tells me this. On the other hand, the development of the newest systems makes me less categorical here. First, we have recursive systems of which we can at least partially say that they are capable of some re-programming. Second, there are lambda-function principles, which come close to the idea of recursive self-programming. Why am I not so sure about what I said above? Because I don't know how to realize it in real life. Say I can build a robot that is able to monitor its own parameters, and instead of the given values it will calculate the best parameters (rather than just putting preset values into the variable slots). At the same time, its functions are sums of simpler functions, and it would also be able to test some functions to make them better than the others. But what counts as the "best" or "not the best" scenario? I guess this can be realized by letting the system learn how people do it. And such systems exist: the well-known neural networks. As I said, I don't know how their basic functions work, but I suppose there are recursive functions and many unusual loops in there. www.zdnet.com/article/chinas-ai-scientists-teach-a-neural-net-to-train-itself/
Yes, a computer can be taught to train itself, but only for a certain task and in a certain way. How do you teach a computer to decide for itself what to train itself in, and in what way? A computer that's taught to train itself to recognize animal photos can't just decide to train itself in the game of chess instead, unless it was programmed to be able to make that choice.
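karl's point can be illustrated with a toy sketch: even a "task-choosing" program only ever chooses from a menu its programmer wrote down. The task names and the selection rule here are hypothetical:

```python
# The available tasks are fixed at programming time; the system
# cannot invent an option that isn't in this dictionary.
TASKS = {
    "classify_animals": lambda: "training on animal photos...",
    "play_chess": lambda: "training on chess games...",
}

def choose_task(preference):
    # The "choice" is only ever among the hard-coded options above.
    if preference not in TASKS:
        raise ValueError(f"no such task: {preference}")
    return TASKS[preference]()

print(choose_task("play_chess"))  # → training on chess games...
```

Asking for any task outside the menu simply fails, which is the sense in which the machine's autonomy is bounded by what it was programmed to be able to choose.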