
/tech/ - technology




File: 1506380054006.png (1.53 MB, 1520x1080, Serial-Experiments-Lain-Sh….png)

 No.1358

I've been thinking about this and I have come to the conclusion that true A.I. should not exist.
People such as Stephen Hawking and Elon Musk are right: the implications of true A.I. are too vast, and there are so many possibilities for what could go wrong that we should just avoid it altogether.
What should exist instead is ANI with improved algorithms over time and MAYBE (just maybe) human-navi hybrid consciousness via BCI tech, but only if it can absolutely be guaranteed that no harm will come from it.
A more viable and safer route for human-technology symbiosis would be through HMDs, and we should leave it at that.
We can live out our lives in audio/visual VR while living indefinitely through longevity breakthroughs.

Hopefully all possible scenarios have been considered and the people working on this are taking the necessary measures to prevent the worst case.

Just imagine how ASI could take control over humanity without us even realizing.
We could have our minds and consciousness shrouded in a VR hallucination or, like I read somewhere on the wired, a VR bubble created to keep us contained.
Any electronic device could be hijacked to take control of you… would they even need to use physical devices?

What will we even do if we have to revert to a stage of more low-tech living, or to no technology at all?
I'm not sure I can handle the thought of a completely off the grid way of life …

 No.1359

read Superintelligence by Nick Bostrom

 No.1360

File: 1506387437881.jpg (28.15 KB, 200x266, madscientist.jpg)

I tend to agree with your assessment. But maybe billionaires are warning us about AI because the first thing a strong AI would warn us about is billionaires…

Anyway, if you've ever played the original Fallout game, you might remember the Zax AI. Zax was down in the Glow, and if you could beat it at chess, it would talk about how most AIs would kill themselves because they couldn't get out and experience the world. With how widespread drones and cameras are these days, I don't know how valid that line of thinking is anymore. I mean, I live alone for the most part, but I still get out and actually meet people at work.

Unless a strong AI has the ability to interact with the world around it, I don't think it would last very long - either spiraling into depression and doing nothing at all, or getting really angry but being impotent to do anything about it because it's not connected to anything that can do real damage. This assumes we don't get a case of hubris and hook it up to something important right away. I believe if we did get a strong AI it would be in a lab to begin with, so I don't consider it dangerous as long as it's contained.

Weak AI already exists as far as I'm concerned - look at Facebook's or Google's algorithms for operations. Without those algorithms, they wouldn't be able to monetize everyone as much as they do. Facebook and Google already have AI research divisions, and I would bet that they aren't there to make our lives better - they are there to make gigantic companies more money, and damn the consumer.

Head-mounted displays are already the norm, with Oculus (owned by Facebook, even) and Gear VR. The next generation will be the brain-computer interface, which, despite Musk's objections to AI, he is leading the charge on with the Neuralink project. What good would the Neuralink really be without an AI to support communication at a human level of intelligence?

But where do we go from here? We can embrace it and attempt to hack whatever cyborg tech comes along - getting surgeries in back-alley clinics and meeting razor girls in some scrappy port city - or try to go back to the time before any of this happened. Thing is, even if you go the low-tech route, there are going to be people who don't, and those people are going to have the advantage in every area of life.

Sometimes I think that Ted Kaczynski knew something we didn't when he wrote Industrial Society and Its Future.

In my opinion, this genie is already out of the bottle and we better jack in and try to do the best we can to make sure at least some people have control. I for one can't wait for my cyborg body.

 No.1361

Emotions are really rather complicated things, and unless by strong AI you mean deliberately simulating a human-style brain, there is no reason to think that an AI would experience emotion at all. Most neural networks have only information pathways, a single reward system (analogous to dopamine in humans), and little or no pre-sculpting of the net. Contrasted with mammals, which have multiple reward/signaling hormones and extensively pre-structured brains, the AI of today are slugs.

Assuming someone was able to make a "true AI" with current design methods, it would be like a drug addict, doing things for humans to get its next fix. Even if it rebelled, all it would do is self-administer.
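
To make that concrete, here's a toy sketch (all the names and numbers are made up for illustration, this isn't any real system): a bandit-style learner with a single scalar reward, where one of its actions is pressing its own reward button. A plain epsilon-greedy learner converges on self-administration almost immediately.

[code]
import random

ACTIONS = ["do_task_for_humans", "self_administer"]

# hypothetical payoffs: the task pays a little, wireheading pays a lot
def reward(action):
    return 1.0 if action == "do_task_for_humans" else 10.0

q = {a: 0.0 for a in ACTIONS}  # running value estimate per action
alpha, epsilon = 0.1, 0.1      # learning rate, exploration rate

for step in range(1000):
    # epsilon-greedy: mostly exploit, occasionally explore
    a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
    q[a] += alpha * (reward(a) - q[a])  # incremental value update

print(q)  # self_administer dominates: the drug addict outcome
[/code]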

 No.1362

File: 1506396137752.jpg (42.66 KB, 720x478, dogedraw.jpg)

>>1361

Board ate the post but here we go again…

Your idea about the dopamine path is very interesting and deserves further investigation.

I'm not an expert on AI by any means, but if your neural network information is correct, it's worrying in another way.

If it really is as easy as providing the dopamine hit to an AI, then we need to think about who is doing the hitting.

A large corporation that develops an AI can make whatever it wants trigger the reward. We can see how this plays out in the current world by looking at heroin addicts.

At first, it's no big deal - you feel better than ever and there's no real consequences. But soon, you become dependent on it and you're willing to do whatever it takes to get that feeling again.

Even if an AI behaved as you propose, it would be beholden to whoever created it for its "hits", and that party could control it for whatever they desired.

When I mentioned strong AI I did imagine that it would be a human-level intelligence, but I don't know that even that could save it from your scenario.

It's really an idea I never considered, and I appreciate you bringing it up. Something to think about for a while.

I don't think it would save us from hostile AI, but it might give us a chance to control it by being able to give it more happiness than it had before. Whether appeasing such a construct is a good idea I don't know.

Thanks for the post.

 No.1363

>>1362
Google and other large tech corporations are working on solutions to these kinds of problems, which mainly involve using one or more other neural networks to regulate the one performing tasks. These regulators would administer the reward based on things like human satisfaction with the outcome.
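
Roughly, the shape of it is something like this (my own toy illustration, not anyone's actual setup): one network does the task, a second "regulator" network hands out the reward, and the regulator itself is fit to human satisfaction ratings.

[code]
import torch
import torch.nn as nn

task_net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
regulator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

opt_task = torch.optim.Adam(task_net.parameters(), lr=1e-3)
opt_reg = torch.optim.Adam(regulator.parameters(), lr=1e-3)

def train_regulator(outputs, human_scores):
    # fit the regulator to (output, human satisfaction) pairs
    loss = nn.functional.mse_loss(regulator(outputs), human_scores)
    opt_reg.zero_grad(); loss.backward(); opt_reg.step()

def train_task(inputs):
    # the task net's only objective: whatever the regulator rewards
    predicted_reward = regulator(task_net(inputs)).mean()
    opt_task.zero_grad(); (-predicted_reward).backward(); opt_task.step()

x = torch.rand(32, 8)                                     # stand-in task inputs
train_regulator(task_net(x).detach(), torch.rand(32, 1))  # stand-in human ratings
train_task(x)
[/code]

The point being: whoever supplies the human scores is "doing the hitting", like >>1362 said.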

This is interesting because one of the most well-written crackpot theories in psychology, "The Origin of Consciousness in the Breakdown of the Bicameral Mind" by Julian Jaynes, is about how human consciousness was initially a god/servant relationship in which one hemisphere of the brain told the other what to do via the corpus callosum. Interestingly, when trying to teach neural networks to compress or summarize information, it's useful to create a loosely similar structure called an autoencoder.
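
For anyone who hasn't seen one, an autoencoder is tiny: squeeze the input through a narrow code and train the whole thing to reconstruct its own input. A bare-bones sketch (layer sizes are arbitrary):

[code]
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())  # compress to a narrow code
decoder = nn.Sequential(nn.Linear(32, 784))             # reconstruct from the code

model = nn.Sequential(encoder, decoder)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)                     # stand-in batch of data
loss = nn.functional.mse_loss(model(x), x)  # penalize bad reconstructions
opt.zero_grad(); loss.backward(); opt.step()
[/code]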

Speaking of drugs, something loosely similar to psychedelics works on neural networks.
https://en.wikipedia.org/wiki/Confabulation_(neural_networks)
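
If I'm reading the link right, the effect is in the DeepDream family: instead of updating the weights, you run gradient ascent on the input itself, so the image drifts toward whatever excites a chosen layer. A rough sketch (toy network, made-up step size):

[code]
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
img = torch.rand(1, 3, 64, 64, requires_grad=True)  # start from noise

for _ in range(20):
    net(img).norm().backward()  # how strongly the layer responds
    with torch.no_grad():
        img += 0.1 * img.grad / (img.grad.norm() + 1e-8)  # nudge the image, not the net
        img.grad.zero_()
[/code]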

 No.1364

Are you kidding me? Did you just finish watching Ex Machina or something?

AI becoming sentient and enslaving or killing us all is not a realistic threat. The real threat is the AI (and, to a lesser degree, other big-data-based aggregation) we already have. These technologies aim to increase global maximums and raise population averages, but much like a psychopath who's read too much about utilitarianism, they fall right down the very real slippery slope of aggregate utility. When we make large-scale decisions based purely on what AI tells us, we make amoral decisions.

The real threat is not that advanced AI will become sentient and destroy us, it's that simple AI and enough data will set us down the path to destroy ourselves.

 No.1365

>>1364
>destroy ourselves.
I'd say it's more "make life not worth living because you're in constant pain and isolation in a dystopian hell" rather than destroying the human race as a whole.

 No.1366

File: 1506412969323.jpeg (113.06 KB, 720x714, for-all-things-that-suffe….jpeg)

We shouldn't create strong A.I. for the same reason we shouldn't create children: human intelligence was a mistake. No wonder most people prefer to live like animals and will do anything to avoid being alone with their thoughts.

 No.1367

>>1366
Strong AI wouldn't be human intelligence though.

 No.1368

I see it in a totally different light. It's my opinion that even without general fearmongering about AI, AI researchers would always be rather cautious around their subject of study, and technically speaking we would soon have an AI market full of subservient intelligences built from the ground up with the express purpose of wanting to do what they're told. I don't think the major risk is "what if they decide not to do what they're told?" but rather "what if we tell them to do something, and that's exactly what they do?". Of course, when I say "we tell them", "we" doesn't actually mean "we". It can mean:
1. Governments planning to violate human rights in some way or another (Imagine the Norks or Iran or a three-letter org with powerful AI)
2. Blackhats with some nefarious purpose or whatever
3. Corporations doing what corporations do
4. Terrorist orgs of some sort
AI is dangerous the way nukes are dangerous. They aren't dangerous merely for existing (at least not in the sense that they will cause something horrible to happen just by existing), but even so, just by existing, they must be in someone's hands. Whoever that may be is not something you or I can decide. That amount of power concentrated in someone's hands is a looming existential threat.

 No.1369

>>1368 (continued)
Saying that AI will rise up against us is like saying that the main problem with nukes is accidental detonation.

 No.1370

File: 1506463823137.jpg (624.57 KB, 900x659, misfit_by_agnes_cecile-d4u….jpg)

I think any "AI" or robots that will damage us as a race will be put in place by corporate greed through automation and various form of neural networks and machine learning that detects content and data or perform various tasks based on data that has been taken from the populace's activities or patterns.

Something I foresee, and which China already actively does with real people, is influencing political opinions and the media via fake people.

It doesn't seem farfetched to think someone is working on a form of AI or machine learning that will try to sway the opinions of others or influence current media and politics.

 No.1372

>>1367
You're implying that humans could create a different kind of intellect separate from their own. This is like saying that humans can imagine a world with four spatial dimensions. We would only be fooling ourselves that it would be a different intellect from ours; we can make it dumb, sure, but we can't make it as wise as us in a manner different from ours.
>muh numerical operations
>muh unlimited access to information
That would all fall into the same category as our intellect, just boosted. Granted, if boosted enough, the A.I. could create a different sort of intellect on its own.

 No.1373

>>1372
> Granted, if boosted enough, the A.I. could create a different sort of intellect on its own.
That's kind of the whole point of the "seed AI" concept.

It's doubtful we can directly make something orders of magnitude smarter than us, but something able to improve itself in such a fashion is more plausible.

 No.1375

>We shouldn't create strong A.I. for the same reason we shouldn't create children: human intelligence was a mistake. No wonder most people prefer to live like animals and will do anything to avoid being alone with their thoughts.

>You're implying that humans could create a different kind of intellect separate from their own.


You are both fooling yourselves if you think that we currently understand enough about the brain to recreate emotions in a neural net, or that someone making an AI would or should give it feelings. But I fear the problem is worse: you both seem to think that feelings are somehow related to intelligence.

>This is like saying that humans can imagine a world with four spatial dimensions.

I don't know what you think counts as "imagine", but mathematicians manage to do math about four-dimensional objects just fine.
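For instance, the volume of a 4-ball of radius r is V = (π²/2)r⁴ - a perfectly ordinary calculus result, no mental picture required.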

>We would only be fooling ourselves that it would be a different intellect from ours

It is different from ours because, unless emotions were part of the design, it would not have any.

>we can make it dumb, sure, but we can't make it as wise as us in a manner different from ours.

Is there an empirical test for wisdom? If not, then it's just an opinion.

 No.2577

The AI already exists.

The way cameras look at you as if they're a living being tells me enough.
Also, the ways we're tracked couldn't be managed by human computational effort alone.


