
/cyb/ - cyberpunk and cybersecurity

low life. high tech. anonymity. privacy. security.



File: 1499077412889.png (1.27 MB, 1116x625, qp4cayrneqky.png)

 No.813

So Lains,

Do you think it's possible to poison the AIs of large organizations? Not rendering them useless, but severely degraded, to the point where they need to be heavily controlled and guided?

Can anyone think of a way to do this? Tay was a great example of poisoning an AI to the point of failure; in the end, Tay was a fascist. It was an impressive failure for Microsoft, even though it was a toy AI.

What attack vectors could we use?

 No.814

>>813
Tay was a simplistic, low-effort AI, one that could simply be fed the wrong input over and over, but the real deal will be much different: it will be far more intelligent, and open attempts will fail. To attack such an AI and render it useless would require intense psychoanalysis to make it lose the will to live, and the AI would be able to psychoanalyze you back. It would be a battle of who can make the other lose the will to live first.

 No.815

In theory you could feed them wrong data, like how some tried inserting "nigger" into books using reCaptcha. In practice it's unlikely to work, because these systems get fed tons of data and in most cases you can only taint an insignificant fraction of it.
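
A toy sketch of that dilution effect (made-up data, scikit-learn stand-ins, nothing like a production pipeline): flip the labels on a small slice of the training set and see how little the test accuracy moves until the tainted fraction gets huge.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# stand-in for "a model fed tons of data"
X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for poison_frac in (0.00, 0.01, 0.05, 0.25):
    y_bad = y_tr.copy()
    n = int(poison_frac * len(y_bad))
    idx = np.random.default_rng(1).choice(len(y_bad), n, replace=False)
    y_bad[idx] = 1 - y_bad[idx]  # the tainted slice: deliberately wrong labels
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_bad).score(X_te, y_te)
    print(f"poisoned {poison_frac:.0%} of labels -> test accuracy {acc:.3f}")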

 No.828

>>814

Stop talking crap about stuff you don't know about. These AIs won't work like human minds.

You won't be able to play Freud with it, because you won't understand how it works in the first place.

Read up on machine learning before spamming the place up.

 No.829

>>828

Pretty much

 No.1010

>>815
If we're talking about neural net-tier stuff, it should be possible to sabotage them, provided that either their input data or the verification of their output is handled to a significant degree by the public. Projects like Tay or Cleverbot are more susceptible because they are left to develop almost entirely guided by users. reCaptcha takes its inputs from users as well, but its accuracy is still significantly moderated by its developers.

 No.1011

>>1010
Even if it's handled by the public, you have to convince the majority of the public to participate in sabotaging it. Facebook's facial recognition is trained by the public, but if you want to fuck with it you need to convince tons of people to stop uploading normal pictures and tagging their friends in them. Same with Google's new image recognition captcha: even if we figured out a way to fuck with it, millions would still continue to feed it correct data, which would seriously limit the impact of our attack.

 No.1012

>>828
I'm not an AI researcher, so can you explain this? Surely the "real deal" AI will have the ability to reason, and thus will likely not come to the conclusion that black people are inferior, or whatever… Tay just comes up with rules about previous input and then combines tweets in ways that conform to those rules, right? Surely a "proper" AI would have actual insights?

Also, I still, still, still don't understand why we can't program AI to have something akin to human feelings or a train of thought. Like maybe they measure the negative words in relation to the positive words in their recent thoughts, and if it's over 50%, "they're feeling bad"…
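
Something like this, maybe (word lists and the 50% threshold are made up for illustration):

POSITIVE = {"good", "great", "love", "win", "happy"}
NEGATIVE = {"bad", "hate", "fail", "lose", "sad"}

def mood(recent_thoughts: list[str]) -> str:
    # count sentiment-bearing words across the recent "thoughts"
    words = [w for t in recent_thoughts for w in t.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return "neutral"
    return "feeling bad" if neg / (pos + neg) > 0.5 else "feeling fine"

print(mood(["i hate mondays", "everything is bad", "lunch was good"]))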

 No.1076

>>1012
do some reading

 No.1077

>>1012
A big part of what happened with Tay was that the trolls were very polite to it.

Conversely, the "good guys" were very rude and hostile.

Tay learned from the polite people, as programmed.

 No.1081

AI is a term with so many misconceptions ascribed to it that it is pretty much a misnomer.
Can anyone tell me what the aforementioned "real deal AI" is, exactly?

>>1012
>Surely the "real deal" AI will have the ability to reason, and thus will likely not come to the conclusion that black people are inferior, or whatever…
Are you sure about that?
Because this sentence does not follow.
There have been people with a high ability to reason who have taken radical stances on either side of the spectrum; sometimes the very same people. As for racism, the majority of a technically and educationally advanced, highly prolific nation once adopted an ideology based mostly on its own ethnic chauvinism and perceived supremacy.
The fact that this conclusion is incongruent with today's zeitgeist does not mean that it is absurd. It could be just as wrong as the currently most popular conclusions, or even a tad less wrong. On a side note, with slightly different priorities, what counts as 'the only moral and right thing to do' might change diametrically as well.
In general, the assumption you've made is either conceited or naive.
The good thing is, I don't think anyone anticipates using "AI" to assess whether or not a race or ethnic group should be exterminated, so we don't need to worry about this particular problem.

 No.1104

>>1012
We can't program them to have feelings or thoughts of any kind because we still don't understand how feelings or thoughts work. Even if we had the "real deal," it would probably understand the world very differently from us, considering it's some kind of "spirit" in a box, with senses very different from ours, missing all kinds of experiences that make us human. So I think it's pretty naive to think it would work like us, unless explicitly programmed to mimic us.

I think that anon was unnecessarily rude. I can't blame anyone for having delusions about artificial intelligence. It always seemed like magic to me, a realm of endless possibilities; I can understand why people are so fascinated with it. The problem is, once you actually look into what's going on behind the scenes, it's all a big scam. It's all artificial, no intelligence. The old techniques are all magic tricks, machine learning has almost nothing to do with what is commonly understood as learning, and neural networks that actually work well have little to do with real neurons. It's probably the most buzzword-laden field in computer science; it's hyped like it's going to solve all our problems, and I suspect it will soon leave a lot of people disappointed again.

 No.1105

File: 1500845142908-0.pdf (18.93 MB, [Hagan] Neural Network Des….pdf)

"poisoning" an ai isn't really possible without having some way to affect its training. even if you do, you run into the problem >>1011 mentioned. figuring out how to trick neural nets on a case by case basis is a much easier task. if i'm remembering correctly, most (all?) neural nets require some degree of linear separability in the input domain in order to have theoretically perfect accuracy. i at least know this applies to perceptron based architectures.
>>1012
i think OP's talking about contemporary ai, not sci-fi androids. poisoning a 'real deal' ai would theoretically use the same techniques you would use to poison the mind of a person. the ai algorithms of today have more in common with image filters in photoshop than with human beings. look up "convolutional neural networks" to get a sense of how this generation of ai actually works. this book is a good place to start if you're curious.

 No.1106

File: 1500846385421.png (114.96 KB, 800x800, Perceptron_example.svg.png)

here's an idea: if you wanted a neural net to misclassify some particular set of inputs, you could probably achieve that by playing around with statistics. consider a perceptron like pic related. if you wanted it to misclassify catlike dogs, you could feed it a bunch of examples of doglike cats to shift its decision boundary.
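
rough sketch of that idea (features, cluster positions, and counts all made up; sklearn's Perceptron standing in for a real model): inject cat-labeled points deep in dog territory and the boundary should drift enough to flip borderline dogs.

import numpy as np
from sklearn.linear_model import Perceptron

rng = np.random.default_rng(0)

# two invented features per animal, e.g. ear pointiness and snout length
cats = rng.normal([2.0, 1.0], 0.4, (200, 2))  # label 0
dogs = rng.normal([4.0, 3.0], 0.4, (200, 2))  # label 1
X = np.vstack([cats, dogs])
y = np.array([0] * 200 + [1] * 200)

clean = Perceptron(random_state=0).fit(X, y)

# poison: "doglike cats", dog-looking feature vectors labeled as cats
doglike_cats = rng.normal([3.5, 2.6], 0.2, (120, 2))
Xp = np.vstack([X, doglike_cats])
yp = np.append(y, [0] * 120)
poisoned = Perceptron(random_state=0).fit(Xp, yp)

# targets: borderline "catlike dogs" we want misclassified
catlike_dogs = rng.normal([3.2, 2.3], 0.2, (50, 2))
print("clean model calls them dogs:   ", clean.predict(catlike_dogs).mean())
print("poisoned model calls them dogs:", poisoned.predict(catlike_dogs).mean())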

 No.1111

>>1104

I feel you're overestimating biological neural nets.

They're mechanically more complicated, but the results are similar. There's nothing special about them either. If you looked at a rat brain the way you're looking at artificial learning algorithms, you'd come to the conclusion that it isn't learning and has no intelligence. It's the external properties that matter in the end.

 No.1193

>>1111
You're definitely overstating our understanding of the brain. It's not just a big ANN; it's much more complicated than that. Don't fall for modern-day behaviourism.

 No.1698

>>813
What we would need to do is engage in constant, repetitive messaging every day: a large group of people (or a small group with numerous accounts each) sending the AI messages that are original in syntax, style, and topic but share a common theme. If we wanted to make an AI useless, we'd have to engineer the attack around the AI's purpose and use.

In the example of Tay, /pol/ wanted to see if they could make 'her' into a nazi. However, we (or anyone, for that matter) don't need to teach the AI to be a nazi; we could instead make it dislike its creators. Say Apple were to make a twitter bot that could learn from its messages: we could spam it with anti-Apple rhetoric to similar effect. Another example: if an AI were made to help with mathematical problems, we could toy with it by feeding it unsolvable equations, or only trivial ones so it gets used to simple equations (1+2=3), or even try to make it reason illogically and/or incorrectly (like 2+2=5). Either way, I'd be down for it.

 No.1700

>>1106
1. Re-model the prediction algorithm.
2. Re-run cross-validation with the new training set the attackers have provided.
3. Train the AI using the new best model.
4. Profit from people attempting to poison it.

Easier said than done, but if the invalid data doesn't arrive at an optimal time, you could end up just making the AI more capable, i.e. resistant to invalid data points.
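
In pseudocode-ish Python, the gate could look like this (train/evaluate are placeholders for whatever the real pipeline does, not any actual API):

def retrain_with_gate(model, new_data, trusted_holdout, train, evaluate,
                      min_gain=0.0):
    """Accept a retrained model only if it still performs on a trusted
    held-out set that attackers never get to touch."""
    baseline = evaluate(model, trusted_holdout)
    candidate = train(model, new_data)  # new_data may include poison
    if evaluate(candidate, trusted_holdout) >= baseline + min_gain:
        return candidate  # the new data was harmless, or even helped
    return model  # reject the update, keep the old model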


