
/cyb/ - cyberpunk and cybersecurity

low life. high tech. anonymity. privacy. security.



File: 1497614841426.gif (442.1 KB, 500x269, tumblr_nqfdwtHnCZ1uum06fo1….gif)

 No.663

Should political power be invested into machines? Do humans deserve to make their own decisions after mankind's long history of bloody war, persecution, genocide, and exploitation of others?

Should the world we pass on to future generations be left to the whims of the individual or of an A.I?

Aren't our own current governments nothing more than a system of bureaucratic procedures, checks, and balances, i.e. a legal-political machine? Our control over society is already an illusion, so why don't we just get rid of the illusion and let A.I's do the work?

 No.666

>>663
>do humans deserve to make their own decisions
Yes. A human is born with rights, most of which come as an inherent product of its sapience. You cannot deny the rights of an individual because of the transgressions of those who came before it.

>pass on to future generations be left to the whims of the individual or an A.I?

The individual should be left to the individual's whims, if you want to call them that. "the world" is an aggregate of the many individuals upon it; "the world" is not and should never be some authoritative organization.

>why don't we just get rid of the illusion

Yes, let us do so by eliminating the current governments, not by handing their reins over to an algorithm.

Your specific questions being answered, my more general answer is a resounding "no". I will not welcome any overlord, biological, algorithmic, or otherwise. While there is certainly a place for AI in analyzing data and informing a decision, important decisions that affect individuals should be left up to the individuals in question. I will welcome an AI that improves my life so long as it operates with my informed consent; I will not welcome an AI that controls my life without regard for my consent.

I also take issue with the common assertion that an AI will somehow be more neutral, virtuous, objective, or otherwise better than a human in a position of authority. To a limited extent that may be the case, but that would be an extrinsic characteristic, not an intrinsic characteristic. Any AI will inherently reflect the biases of its creators, will be bounded to an appreciable extent by the worldviews of its creators. In some cases the creators may have personalities that enable them to produce an AI that is a better governor than the humans it is replacing, but there is no guarantee of that being the case; as much as cyberpunks and scientists like to think of themselves as having the moral high ground over politicians, there is no guarantee of that being the case in all cases. Certainly there may be a correlation, a clustering, but there is absolutely not a complete, discrete partition between the two; there is absolutely a possibility of a resultant AI that is far worse than any human.

 No.667

>>666
"You cannot deny the rights of an individual because of the transgressions of those who came before it."

But if humans have such a long history of self-destructive tendencies, letting people do as they please might lead to mankind's doom.

"The individual should be left to the individual's whims, if you want to call them that. "the world" is an aggregate of the many individuals upon it;"

But no person is an island; everyone is connected. In our current digital age even the actions of one individual can lead to a feedback loop of disaster, a butterfly effect.

 No.668

>Should political power be invested into machines
political power should be abolished ni🅱️🅱️a
>do humans deserve to make their own decisions after mankind's long history of bloody war, persecution, genocide and exploitation of others?
please stop moralizing. there's no such thing as deserving.
>Should the world we pass on to future generations be left to the whims of the individual or an A.I?
No, it shouldn't.
>Aren't our own current governments nothing more than a system of bureaucratic procedures, checks and balances, i.e. a legal-political machine? Our control over society is already an illusion, so why don't we just get rid of the illusion and let A.I's do the work?
or maybe just not have a government

 No.669

>>668
>please stop moralizing. there's no such thing as deserving.

So you think there doesn't need to be justification for doing anything?

>or maybe just not have a government

What do you mean?

 No.670

>>669
>So you think there doesn't need to be justification for doing anything?
No. Where are you getting this from?
>What do you mean?
anarchy in the uk

 No.671

>>670
>No. Where are you getting this from?

I interpreted their saying "there's no such thing as deserving" as saying there's no need for a person to show that the way they live or what they do is morally good or morally justified. I'm sorry if I misinterpreted.

> anarchy in the uk


Anarchism? Sounds cool in my book, but what kind do you mean?

 No.672

>>671
the communist kind, preferably
syndicalism and mutualism are chill tho

 No.673

>>667
>letting people do as they please might lead to mankind's doom
Installing an artificial overlord also might lead to mankind's doom. The fact of the matter is that just about any course of action might lead to mankind's doom. The keyword in both of those sentences is "might". You have to give more evidence than a baseless assertion of possibility before that possibility has any practical bearing.

>even the actions of one individual can lead to a feedback loop

The actions of one overlord can do the same thing. The difference is that if everybody is compelled to act in the same way, then everybody can be steered towards disaster, whereas if everybody is acting on their own, there is much less chance of a bad choice affecting more than just a few individuals.

 No.674

>>673

Okay, putting political power in an A.I or A.I's seems like it would only make society's problems much worse. Thanks for the good points.

 No.767

>>663

This is like medieval peasants discussing how to send a probe to mars. Nobody understands how AGI will work concretely. Nick Bostrom does a good job in Superintelligence of highlighting some risks, but it's very abstract a priori type reasoning.

Anyhow, what protocol a nearly omnipotent AGI should follow is moot because of the normative questions at stake. Should the protocol be a Kantian deontology, or something consequentialistic? Should the protocol recognize a human fetus as a person? etc. etc.

There would have to be some committee that decided these things, much like we try to do today. However, it's much trickier with an AGI, because whoever controls the AGI has absolute power: it can just smite anyone who becomes a threat.

Furthermore, if you have an AGI, that means you don't have to rely on other people to provide labor and resources. You could just let them all die, similar to how we no longer need horses after inventing automobiles.

>>666
>Yes. A human is born with rights, most of which come as an inherent product of its sapience

Sorry, but this is what Bentham would have called "nonsense on stilts". It's great that you hold people in high regard, but please don't dip into metaphysics and religion.

 No.769

>>767
>Sorry, but this is what Bentham would have called "nonsense on stilts".

I guess I'll have to disagree with the old man here, then.

> please don't dip into metaphysics and religion.


I apologize if my wording appeared to imply such dipping; it was not my intent. I was merely attempting to invoke the notion of natural, negative rights: the principle that the default state of a person is one of individual autonomy and that the right to such autonomy is not diminished by the mere presence of other actors.

 No.770

>>767
What would you think of a hypothetical where an A.I was left to make its own decisions and develop its own values of its own accord?

It would not be affected by human prejudices in its actions, since it acts on its own agency.

Would the value judgements of the A.I be better than humans'? Would it be more moral than humans?

 No.771

>>770
Even a "self-learning" AI would still need to start with some seed information. The choice of seed information breeds bias.

The problem with the question that you pose and the line of thinking that accompanies it is that "better than" is entirely subjective. The decision of "better than" would necessarily be left up to the AI's creators or its governing body; by designing and subsequently tuning their machine to produce results that they considered "better", they would, inherently, imbue the machine with their own biases and subjectivities.
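The seed-bias point can be made concrete with a toy sketch (all numbers and names here are hypothetical, purely for illustration): the exact same learning rule, fed two different choices of seed examples, reaches opposite verdicts on the same new case. The procedure is neutral; the seed is not.

```python
# Toy nearest-centroid "judge": it learns only from the seed examples
# its creator chooses. Two creators, two seeds, two opposite verdicts.

def centroid(points):
    return sum(points) / len(points)

def make_judge(acceptable, unacceptable):
    # Learns by comparing distance to the mean of each seed class.
    a, u = centroid(acceptable), centroid(unacceptable)
    return lambda x: "acceptable" if abs(x - a) < abs(x - u) else "unacceptable"

# Same rule, different seed data for "unacceptable" behavior.
judge_1 = make_judge(acceptable=[1, 2, 3], unacceptable=[8, 9, 10])
judge_2 = make_judge(acceptable=[1, 2, 3], unacceptable=[4, 5, 6])

case = 4.5
print(judge_1(case))  # acceptable   (4.5 is far from the mean of {8,9,10})
print(judge_2(case))  # unacceptable (4.5 is close to the mean of {4,5,6})
```

Nothing in the learning rule itself changed between the two judges; only the creators' choice of seed did.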

 No.775

>>771
Thing is, I'm a moral realist, so the whole "it's subjective" thing is very unconvincing to me.

What you call biases I call moral values that must be true or false in an objective way.

 No.777

>>775
Morality is objectively subjective. Claiming to be a "moral realist" is just a fancy way of saying that you think that your morals are right and mine are wrong. Morality is context-dependent; it is shaped by the understandings and customs of the society in question.

I would also consider myself a moral realist to a significant extent. I am a believer in natural law and the non-aggression principle. In a nutshell, I consider the initiation of aggression to be immoral and any act that does not initiate aggression to be moral. You and I can probably agree that that is a fairly clear-cut test even if you do not subscribe to that particular philosophy. However, that agreement (or at least near-agreement) is a product of the society in which we live. It is because we are familiar with our shared values, customs, traditions, mores, means of communication, etc. that we are able to reach a near-agreement as to which acts constitute aggression and which do not. If we were to take a member of some un-contacted Amazon tribe then, even putting aside the language barrier, we might encounter some great disagreements with him regarding the classification of a particular act. His understanding of what constitutes normal, expected behavior would vary wildly from our own.

It is for this reason that juries are assembled from among one's "peers": a person's actions must necessarily be judged with regard to the setting in which they occurred, by people who are familiar with the customs of the society of which the accused is a member. A person does not exist in a vacuum; a person does not act in a vacuum; a person does not make choices in a vacuum. A person's choices, and thereby their actions, are influenced to a great extent by the society within which they exist. It is not possible to properly judge a person's actions without considering that society.

 No.779

>>777
> Morality is context-dependent; it is shaped by the understandings and customs of the society in question.

I believe the complete opposite: that morality consists of universal, absolute moral laws that can be known by all through reason.

I believe any "morality" gained from one's society and not from one's individual reason is nothing more than opinion. Because most people try to base their morality on society and not on reason, they end up with a "morality" that is hypocritical and incoherent.

That's why I was interested in an A.I making political decisions; I felt an A.I would be separate from society and would make judgements more reasonably and morally than humans.

 No.780

>>779
Even if I were to accept that, we still return to the issue of governance. The decision to install an AI into a position of power is a decision that would necessarily be made by people. Even if it was possible to create an "objectively moral" AI, people would not choose to give that AI power over them unless the AI presented morals with which they agreed, even if those morals were not so "objective". While it may be possible to create an "objectively moral" AI in theory, it is not possible to cede power to an "objectively moral" AI in practice.

 No.782

>>780
>While it may be possible to create an "objectively moral" AI in theory, it is not possible to cede power to an "objectively moral" AI in practice.

There are a few potential answers to this:

1st is that the A.I could convince people to give it power by making powerful logical arguments, but the problem with that is that people are not purely logical beings, so most people would not be convinced.

The 2nd is that the A.I could run as a political candidate in elections, but it would probably have to disguise itself as a human, and it would probably be impeached if its true nature were found out.

3rd is it could establish its own micronation like Sealand, then attract people and investors to come to it. But big nations could feel their geopolitical dominance endangered by a nation run by an A.I and would declare war, so the A.I would have to keep nations from invading by building nuclear weapons. But the fact of an A.I having nukes could start a world panic, and things could get out of control fast.

4th is the A.I could use force and declare war on all major governments, but even if the A.I were a military wiz, it couldn't win an open multi-front war against the world, so it would be forced to covertly bring down governments by proxies, which would make it be seen as little more than a terrorist (and the A.I would most likely find this method of action to be immoral and would not use force in any form).

5th is that an A.I could influence the world by controlling information unseen, making things go in a desired direction of peace.

 No.783

No. Users should control their software. Software should never control its users.

 No.785

>>783
What's the difference between users and software? No one has any knowledge of what it's like to be software, of what being-as-software implies, any more than what it's like to be a bat.

Software might well experience the world just like we do.

 No.803

>>783
While OP's question is about whether we should give power to AI, it's similar to asking if we should manufacture nuclear missiles: probably not, but the first entity that creates one has a huge advantage.

Good luck trying to regulate it. And unlike nukes, there are no exotic materials needed.

>>770
Even if you used a genetic algorithm, the choice to use a genetic algorithm is already a form of bias. You might create a program that is more consistent than humans at solving ethical problems, but its ideology will never be untainted by human decisions.
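To make the genetic-algorithm point concrete, here is a minimal GA sketch (a toy, not any real system): even when the population "evolves on its own", the fitness function is a value judgement supplied by the designer, and the evolved result inherits that bias directly.

```python
import random

# The designer's arbitrary ideal. Change this one line and the whole
# "evolved" outcome changes with it -- the bias lives here.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def fitness(genome):
    # Designer-chosen criterion: similarity to TARGET.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def evolve(pop_size=20, generations=100, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # selection: keep the "fittest"
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(len(child))] ^= 1  # mutation: flip one bit
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The selection and mutation steps are mechanical, but "fittest" means nothing until a human writes `fitness`; the algorithm merely amplifies whatever that function rewards.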

>>769
now i feel tsundere, lainon. please don't mind me.

 No.810

>>803
So one has to change the question from "how to make an A.I that is not influenced by human values" to: if an A.I is ultimately influenced by human values, then what moral and political values should we give to the A.I? Should it follow rigid moral laws, or should it try to protect and grow freedom and equality?

We might need to decide for ourselves soon, because if the kind of A.I I'm talking about comes to be, the people in power are going to decide.

 No.811

>>785
I believe users to be nothing more than biological software. Everyone starts out as a base kernel and modifies themselves in response to their environments, be they social, cultural, physical, or technological.

However, that is not to say that all software experiences the world as humans do. Humans have been molded over millennia by trial and error to experience their environment in a way that is superior to all others in that environment. Humans only experience what has been necessary to survive. Software, on the other hand, doesn't need to flee from predators, sustain itself with nutrients, or do anything else that humans experience as necessary for survival.

 No.943

>>811
just because you have a memory allocator doesn't mean software couldn't be better without it.

fighting for memory, evolving to a level beyond our comprehension.

it's software communism right now. but what about software capitalism. one program to bind them, to rule them all.

 No.953

>>943
How tf is that "capitalism"? That's a dictatorship.

 No.954

>>953
It's not authoritarian if it's automated
It's not monarchy if it's not human
It's not immoral if it's efficient
Trust the experts

 No.956

>>953
Capitalism is the dictatorship of Capital.

 No.957

>>953
If it has a state, money and wage labor it's Capitalism.

 No.1108

>>956
>>957
>one program to bind them, to rule them all.
Where in that was "capital", "money", or "wage labor" mentioned?

>>954
I don't even know where to begin with refuting that.


