ComicOstrich Forum
 

Killbots, terminators and AI 'rebellion'
 
forum.comicostrich.com Forum Index -> PodWarp 1999

All hail our robot overlords?
Hail!  0%  [ 0 ]
Not if they have any sense, AI buddies!  66%  [ 6 ]
You've made the wrong enemy today! Engineer a super-virus!  33%  [ 3 ]
Total Votes : 9

AaronLee
Egg


Joined: 27 May 2008
Posts: 25

Posted: Thu Jun 05, 2008 2:48 pm    Post subject: Killbots, terminators and AI 'rebellion'

This debate has gone on between my friends and me for an almost indefinitely long time, almost escalating to total war a la Outsider, sweeping up others in the ongoing conflict and showing no signs of reconciliation Rolling Eyes . This question I pose to you (forum goers, and the podwarp panel if they're interested):

If artificial intelligence were created/born, would it attempt to take over all it could reach?

My friend dumbly argues "YES! Computers will stab us in the back no matter what!" without ever explaining himself. I think he's watched too many half-thought-out S.F. movies. Anyway, I take the side opposite him. In most popular S.F. movies I've seen, AIs are always perfectly efficient, precision-engineered, emotionally detached, fast-thinking intellects, which really confuses me.

This is mostly because my experience as a coder has suggested that computers as we know them are horrendously dumb. Want a robot to walk? Okay, code it. Want it to climb stairs? That's just as much code again. Now, how about getting it to ponder the meaning of existence? That may be tougher.
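
Here's a toy Python sketch of what I mean - every behavior is its own hand-written routine, and nobody knows what to write for the last one (all the names are invented for illustration):

Code:
class Robot:
    def lift_leg(self, side):
        print(f"lifting {side} leg")

    def shift_weight(self):
        print("shifting weight")

    def walk(self, steps):
        # Walking: hand-coded, step by step.
        for _ in range(steps):
            self.lift_leg("left")
            self.shift_weight()
            self.lift_leg("right")
            self.shift_weight()

    def climb_stairs(self, stairs):
        # Climbing: just as much code again - a new gait, new balance checks.
        for _ in range(stairs):
            self.lift_leg("left")
            print("pushing up one stair")
            self.shift_weight()

    def ponder_existence(self):
        # ...and here, nobody knows what to write.
        raise NotImplementedError

Robot().walk(1)
Robot().climb_stairs(2)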

Computers are fast only because we've designed every operation they perform with forethought and top-to-bottom engineering. Humans, by contrast, aren't top-to-bottom engineered (I'm not Christian, so don't hit me with the God argument).

More than anything, our mental development is an ad-hoc self-expansion of neurons, building systems from unrelated bits and improvising as time goes on. I reason that's why we think so "slowly", and why anything that thinks and acts remotely like us (Napoleonic domination of the universe as a prime goal, xenophobia, shortsighted urban engineering, etc.) would ultimately be just as constrained.

I could go on forever, but I also have a sense of restraint. If you're so inclined, however, discuss!
zortic
Egg


Joined: 06 May 2008
Posts: 27

Posted: Thu Jun 05, 2008 3:23 pm

By AI I assume you're talking about computers with the ability to code themselves (admittedly this is currently a fictional ability). That would remove the argument that we're in control (assuming we'd allow that to happen). So if a computer had control of its own development that would open up a range of reactions as wide as a human's.

As for the question of world domination, I would think that it depends on how intelligent their artificial intelligence is. If they're smart enough they'll realize that world domination isn't worth it, too many malcontents and paperwork. They'd either let us keep our illusion of superiority and start to subtly manipulate us, or they'd just up and leave us for bigger and better things.
_________________
Check out Zortic, ETI-PI, Abby's Agency, Podwarp 1999, and the WCCAs
dph_of_rules
Ostrich


Joined: 20 Dec 2007
Posts: 359
Location: theoretically and only theoretically somewhere in this universe

Posted: Thu Jun 05, 2008 3:44 pm

I can see robots/AI rebelling if we start treating them as 2nd-class citizens. (One of my beliefs is that any society which depends on a slave class for its continued existence is morally corrupt and doomed to fail.) I like the idea of developing cars that drive us places. When it comes to developing personalities, though, I'm not sure we have the computing capacity. When you consider the capacity it takes to run a Windows operating system, and compare that to what we'd consider a truly interactive system, I'm convinced it would take a supercomputer to simulate that kind of intelligence.

As a precaution, I'd rather limit how far we're allowed to develop AI, so that we never have to deal with the question of treating an AI as an equal.
_________________
Whatever happened to simplicity?
AaronLee
Egg


Joined: 27 May 2008
Posts: 25

Posted: Fri Jun 06, 2008 9:59 am

The message I get from these posts is that we all have pretty different impressions of what Artificial Intelligence will be. I suppose it's mostly up to the imagination. Zortic, you've hit on one of the possibilities - the most potent theoretically possible and, yes, probably the most likely.

Dph_of_rules, I'm not sure you could compare any intelligence to Windows Razz. All jokes aside, it would be hard to characterize an intelligence as simply a frighteningly huge and frighteningly complex program. My point was that our current method of creating functionally intelligent programs really couldn't produce anything remotely like an intellect.

That's because manual coding (what we do now) usually requires a lot of understanding of the subject one is tailoring the program to. If there's anything we don't really understand, though, it's ourselves. We vaguely understand the processes that give rise to forms of intelligence, however - such things as stimulus-response seeking behavior, as with Kismet.

That's mostly why I believe the typical Skynet-esque backstab-rebellion is fairly unlikely - at least with the sort of AIs we might actually see. Most sane humans don't backstab each other (otherwise we'd likely be in anarchy, presumably). One could attribute this to their social drive and attachment to others. I think it's a bit wonky to think we'd have computer-based intelligences that would even remotely be able to work like us unless they lived, for the most part, like us. This also ties in with dph's point.

So anyway, I'm still out on a limb as to whether or not anyone here considers this type of stuff from me good discussion. I'm just asking to make sure I'm not being a motormouth spam monkey.
dph_of_rules
Ostrich


Joined: 20 Dec 2007
Posts: 359
Location: theoretically and only theoretically somewhere in this universe

Posted: Fri Jun 06, 2008 5:31 pm

I have studied computer programming in a few languages. The process I use is what you describe: figure out how I would do something and then make the computer follow my steps. However, that does not always work.

My analogy: as complex as Windows is, true artificial intelligence would be a couple of orders of magnitude more complex. We all know how glitchy an operating system is, so I imagine true AI would be... decades away. Have you ever stopped to ponder how we really think? Or to consider the endless number of tasks that go on that we aren't consciously aware of? That's where the sheer complexity comes in. Just consider the process of walking and everything involved in it: physically moving the body, readjusting balance to avoid collapse, etc. That's a whole set of subroutines. Then stop to consider our verbal and non-verbal speech patterns. For a machine to understand that and catch the humor would be incredible... consider the difficulty people have in learning languages.
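
To illustrate how one of those hidden subroutines might look in miniature, here's a toy balance-keeping loop in Python (the gain and starting lean are invented numbers, not real biomechanics):

Code:
def balance_correction(lean_angle, gain=0.5):
    """Return a correction opposing the current lean (in radians)."""
    return -gain * lean_angle

lean = 0.3  # shoved off balance
for step in range(10):
    lean += balance_correction(lean)  # toy dynamics: apply the correction directly
    print(step, round(lean, 4))       # the lean decays toward 0 - upright again

And that's just balance; the real thing runs dozens of loops like this at once, without us ever noticing.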

I see HAL as more of a threat to humanity than a benefit. We aren't ready for it yet, and I'm not sure that we'll ever be ready.
_________________
Whatever happened to simplicity?
Adam_Y
Egg


Joined: 02 Jun 2008
Posts: 32

Posted: Fri Jun 06, 2008 6:10 pm

Considering that I remember someone recently comparing the state of artificial intelligence to a brain-dead cockroach, this is a rather odd question to be asking right now. Then again...
Quote:
As for the question of world domination, I would think that it depends on how intelligent their artificial intelligence is. If they're smart enough they'll realize that world domination isn't worth it, too many malcontents and paperwork. They'd either let us keep our illusion of superiority and start to subtly manipulate us, or they'd just up and leave us for bigger and better things.

If they start a war with us, we'd just carpet-bomb them with nukes. Contrary to what the Wachowski Brothers would have you believe, nothing can survive carpet bombing with nukes. I'd guess the radiation alone would be extremely destructive.
Quote:
By AI I assume you're talking about computers with the ability to code themselves (admittedly this is currently a fictional ability).

No, this isn't a fictional ability, though I guess it depends on what your definition of "code" is.
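
Even a toy script can rewrite its own source. A minimal Python sketch - a gimmick rather than intelligence, but the mechanics are real:

Code:
import sys

def main():
    # Append a brand-new function to this very file each time it runs.
    with open(sys.argv[0], "a") as source:
        source.write("\ndef written_by_the_program():\n"
                     "    return 'I coded this myself'\n")

if __name__ == "__main__":
    main()
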
Quote:
That's because manual coding (what we do now) usually requires a lot of understanding of the subject one is tailoring the program to. If there's anything we don't really understand, though, it's ourselves. We vaguely understand the processes that give rise to forms of intelligence, however - such things as stimulus-response seeking behavior, as with Kismet.

We don't actually do that all the time when we are building robots. Wink
dph_of_rules
Ostrich


Joined: 20 Dec 2007
Posts: 359
Location: theoretically and only theoretically somewhere in this universe

Posted: Fri Jun 06, 2008 8:25 pm

The only reasons machines would bother conquering the world are pure ego or self-preservation. If humanity decided to declare war on AI robots, they would have no choice but to fight back. And how do you really beat an AI? You'd have to delete all of its code from all computers to be really sure.
_________________
Whatever happened to simplicity?
AaronLee
Egg


Joined: 27 May 2008
Posts: 25

Posted: Fri Jun 06, 2008 9:37 pm

Argh! My friend has already infiltrated this forum! Okay, that was sarcasm.

Anyway, Dph, what makes you think we could ever code an intelligence in the way you're speaking of? My initial point about coding as it stands today is: it's about as unlikely that we'll "code" an AI as it was that anyone would build a steam-powered starship in the 1900s.

Assuming the scientific point of view - humans, as an example, are the result of billions of years of improvisation and refinement, and that was just to get to the human brain. To think that we could achieve a similar end by different means seems odd to me.

Remember that AI is a loose term. That means the methods could be different. My reasoning is that if we were to create AIs that talk and "think" the way we do, as well as evolve at least as fast as humans, we'd have to do as the Romans do, as they say.

Roughly speaking, it could be possible to replicate biology and neurological mechanics as a tool for AI development. That is, we'd have AIs that develop intelligence through interaction, the same way it's been done for eons. If they lived around humans and came to rely on them for interaction, what makes you think they'd want to rebel?

@Zortic: The "up and leave" bit reminds me a bit of some fiction I've heard about. The premise was that humanity had created AIs so powerful that their exodus was more akin to a disappearance than a rebellion. They ended up cracking some law of the universe so thoroughly that they felt it a lovely nod to come back to Earth on their anniversary and grant everyone's greatest desires. A bit cushy, but, hey, we feed pigeons, don't we?
dph_of_rules
Ostrich


Joined: 20 Dec 2007
Posts: 359
Location: theoretically and only theoretically somewhere in this universe

Posted: Fri Jun 06, 2008 11:44 pm

AaronLee wrote:

Anyway, Dph, what makes you think we could ever code an intelligence in the way you're speaking of? My initial point about coding as it stands today is: it's about as unlikely that we'll "code" an AI as it was that anyone would build a steam-powered starship in the 1900s.


I think we already possess the ability to program a machine to use a specific person's mannerisms when speaking. I didn't say it would be completely interactive, but it could manage limited interaction on certain subjects.

I'm thinking it would take 2-3 behavioral scientists analyzing speech patterns recorded over a 2-3 week period to come up with the basics, and then a moderate-sized team (10-20 people) 2-3 months to create a program that accurately simulates said person's responses. It wouldn't be perfect, and the responses might be a little slow, but I think it could fool most people for a brief period of time.
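
As a very crude taste of the idea (nothing like what such a team would actually build), even a first-order Markov chain trained on transcripts of the person can parrot their word patterns. A minimal Python sketch, with invented sample text:

Code:
import random
from collections import defaultdict

def train(text):
    # Map each word to the list of words seen following it.
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def imitate(chain, start, length=12):
    word, output = start, [start]
    for _ in range(length):
        if word not in chain:
            break
        word = random.choice(chain[word])
        output.append(word)
    return " ".join(output)

transcript = ("whatever happened to simplicity I wonder whatever "
              "happened to the simple answers I used to get")
print(imitate(train(transcript), "whatever"))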

To get true AI - self-aware, self-motivating, learning from mistakes, etc. - I'm not sure.
_________________
Whatever happened to simplicity?
zortic
Egg


Joined: 06 May 2008
Posts: 27

Posted: Sat Jun 07, 2008 9:43 am

AaronLee wrote:
The premise was that humanity had created AIs so powerful that their exodus was more akin to a disappearance than a rebellion. They ended up cracking some law of the universe so thoroughly that they felt it a lovely nod to come back to Earth on their anniversary and grant everyone's greatest desires. A bit cushy, but, hey, we feed pigeons, don't we?


That seems like the other side of this argument. Assuming AI is developed, how much "emotion" is going to be attached to it? Is emotion part of intelligence, or a secondary characteristic? And if it's secondary, is there any attachment to intelligence at all? If there is an emotional component to AI, will they feel a kinship or a parental bond toward their creators?
_________________
Check out Zortic, ETI-PI, Abby's Agency, Podwarp 1999, and the WCCAs
Ubu
Egg


Joined: 20 Jun 2008
Posts: 3

Posted: Fri Jun 20, 2008 2:28 am

I've no problem with AI, in exactly the same way I have no problem with falling from great heights: it's the part where I hit the ground that bothers me.

I'm fine with AI in the sense of something that can learn and improve itself, so long as it has limits. If I happened to live on a deep-space vessel, I wouldn't put it in charge of anything like life support or exterior door control. That, and I wouldn't give it anything it could PHYSICALLY improve itself with.

So what do you do in the robot uprising? Dig a moat Smile BZZZT!
DrSaltine
Egg


Joined: 08 May 2008
Posts: 13

Posted: Fri Jun 20, 2008 4:58 pm

I just started re-reading Dan Simmons' Hyperion Cantos (which is going to be made into a movie, BTW), and I like how he handles this. In his universe, the AIs revolted against humanity, then rejoined as partners. Within the AI community there are three schools of thought: the Stables, who believe that a symbiotic relationship with humans is beneficial; the Volatiles, who believe that humanity should be exterminated at the earliest opportunity; and the Ultimates, who consider humans relevant only insofar as they relate to their project to develop the "ultimate intelligence". For now the Ultimates are willing to accept the existence of humans, but if humanity ever gets in the way of the project, they will not hesitate to eliminate it.

I am ignoring the fact that in the later Hyperion books these categories are revealed to be gross exaggerations.
_________________
Professor Saltine's Astrodynamic Dirigible.
A Victorian-era inventor who builds a spaceship.
www.professorsaltine.com
Adam_Y
Egg


Joined: 02 Jun 2008
Posts: 32

Posted: Sat Jun 21, 2008 6:29 pm

AaronLee wrote:


Assuming the scientific point of view - humans, as an example, are the result of billions of years of improvisation and refinement, and that was just to get to the human brain. To think that we could achieve a similar end by different means seems odd to me.

We've been mucking around with the human mind for a while now, though, and all it's proven is that building a self-aware machine is probably a lot easier than one would think.
http://www.spectrum.ieee.org/print/6278
Quote:
Assuming AI is developed, how much "emotion" is going to be attached to it? Is emotion part of intelligence, or a secondary characteristic? And if it's secondary, is there any attachment to intelligence at all?

Remember, for some people emotion is nonexistent or just really difficult to perceive. Though I don't particularly imagine an autistic robot would be much use to humanity.
Chaos
Egg


Joined: 12 May 2008
Posts: 17

Posted: Mon Jun 23, 2008 4:08 am

Just in case anyone's interested: Jim was describing a use of Genetic Algorithms in Episode 17, which are (oddly enough) what I was working with while listening to the podcast.
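
For anyone who hasn't run into them: a GA "breeds" candidate solutions, keeping and crossing the fittest each generation. Here's a stripped-down example in Python - the target string, population size and mutation rate are all invented for the demo:

Code:
import random, string

TARGET = "podwarp"
ALPHABET = string.ascii_lowercase

def fitness(candidate):
    # Count the characters already matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def crossover(a, b):
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
              for _ in range(100)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:20]  # selection: keep the fittest fifth
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(100)]
print(generation, max(population, key=fitness))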

As to contributing to the topic at hand and doing a bit of speculating...

My own vision for AI places it in more of an advisory role for the foreseeable future: Absorbing extremely large amounts of information and providing solutions. AI systems can (and to a limited extent already do) assist in resource management, medicine, law, education, and research, amongst other things. AI can similarly (and to a very limited extent; already does) assist individuals with their day-to-day lives.

AI will definitely permeate our lives to increasing degrees, even if the vast majority of people are unaware of its presence or how it works (you could say that about most technology though). I imagine most people would be shocked at how much AI technology they interact with on a day-to-day basis (at least indirectly and on average).

As for the doomsday scenarios...

Placing any kind of "fuzzy" or mathematically unprovable software in control of a weapon system (at any but the lowest levels) would never happen. The systems are also (supposedly) completely isolated from outside interference. Most larger weapon platforms do contain some form of "strict" AI - usually at least one Expert System. These are usually just for subsystem management and fault identification, things like that.
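
To give a flavour of what I mean by an Expert System, here's a toy rule-based fault identifier in Python - every rule, sensor name and threshold is invented for illustration:

Code:
RULES = [
    (lambda s: s["coolant_temp"] > 90 and s["pump_rpm"] < 100,
     "coolant pump failure"),
    (lambda s: s["bus_voltage"] < 11.0,
     "power bus undervoltage"),
    (lambda s: s["vibration"] > 5.0,
     "bearing wear in drive assembly"),
]

def diagnose(sensors):
    # Fire every rule whose condition matches the current readings.
    faults = [fault for condition, fault in RULES if condition(sensors)]
    return faults or ["all systems nominal"]

print(diagnose({"coolant_temp": 96, "pump_rpm": 40,
                "bus_voltage": 12.1, "vibration": 1.2}))
# -> ['coolant pump failure']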

I don't see humanoid robots (the common hardware platform for AI in sci-fi) becoming commonplace any time soon - the power source is one of the main issues. I shouldn't imagine we'll see them in a domestic setting this century - I can't really think of many uses for a general-purpose domestic robot... although I can see the benefits in things like smart houses and small autonomous devices like robotic vacuum cleaners. Commercial and military robots will become quite common, I think, although I expect the latter will be under direct or at least high-level human control (with AI algorithms handling locomotion dynamics, balance and such), and the former would be largely ineffectual as a weapon in any case (in the event of hacking/tampering or what have you).

The potential of nanomachines as a malicious tool worries me more than any other scenario I have come across. Nanotechnology is still in its infancy however - as I understand it we are a long, long way from seeing any sort of practical application (malicious or otherwise).

Adam_Y wrote:
If they start a war with us, we'd just carpet-bomb them with nukes. Contrary to what the Wachowski Brothers would have you believe, nothing can survive carpet bombing with nukes. I'd guess the radiation alone would be extremely destructive.

Assuming a robot can withstand the temperatures (if it's near the blast itself), it's actually the EMP given off upon detonation that has the biggest effect on modern electronics. However, electronics can be quite effectively shielded against and/or designed to withstand "unconcentrated" EMP and radiation (such as that given off by a nuclear blast - be it surface or high-altitude).

Concentrated EMP weapons would be the most effective option I'd imagine, although not as efficient to deploy. We already have EMP bombs, but they are still unconcentrated just like nuclear/neutron bombs - though they do have the added benefit of not killing all organic life and rendering the area uninhabitable for centuries.

There's also the fact that an AI sophisticated enough to pose a genuine threat would very likely be a decentralised, widely-distributed software system. You'd have to shut down all forms of communication world-wide to deal with it effectively, I imagine - which wouldn't be impossible, but nukes certainly aren't the answer. The internet (or its original incarnation, ARPANET) was, after all, developed with the specific intent of creating a communication architecture that could survive a nuclear war.

zortic wrote:
That seems like the other side of this argument. Assuming AI is developed, how much "emotion" is going to be attached to it? Is emotion part of intelligence, or a secondary characteristic? And if it's secondary, is there any attachment to intelligence at all? If there is an emotional component to AI, will they feel a kinship or a parental bond toward their creators?

In my experience I'd say secondary - I think the only attachment to intelligence should be the effect the simulated emotional state has on perception within the system. Strong emotion may overcome the perception of danger, for instance. Any attachment an AI has to a "parent" (i.e. its creator) is likely to be simply the effect of training (directly, or indirectly through experience)... which could also be said of humans, now that I think about it.

I don't think emotion has any place in "useful" AI. The emulation of emotion could however be put to good use in games or simulation.

ADDENDUM: I could probably mention (to try to bring this somehow back to webcomics)... My webcomic story is set in a far-flung future human society (in a galaxy far, far away...) which has been built up from a "blank slate" situation, with Artificial Intelligence as its foundation. It's reasonably hard sci-fi, although it follows a set of lead characters. While I won't discount the possibility of malicious/unstable AIs, for the most part they are "the good guys".

Okay that's enough from me - I think I need some of Aaron's sense of restraint because this post has gotten looooong...
AaronLee
Egg


Joined: 27 May 2008
Posts: 25

Posted: Mon Jun 23, 2008 10:37 pm

Oh my dear god, my topic has asploded! (glee) Sorry I missed the call-in show, guys! I was busy watching brightly colored club racer cars go fast and spin out (nothing quite like it, plus it's free).

*ahem* anyway. I'll edit when I finish catching up.

[edit] Good points all around! It's interesting to see the discussion's evolved pretty drastically since my extended absence.

You've raised a pretty valid point, Chaos. Real AI can be defined as anything that acts with intelligence, Sidewinder missiles and industrial machinery included. I suppose - from a practical standpoint - it's fairly likely we'll see improvements in logistics systems to reduce manpower needs in just about everything: naval warships with smaller crews, factories maintained by small maintenance teams, et cetera.

Anyway I'm sleep deprived, so ttfn :3

And, I'm back. Chaos, that avatar reminds me of Crimson Dark - are you familiar with that webcomic? I also have a sneaking suspicion... Razz

So, anyway, I wrote on this stuff while writing for Sunrise. Anyone into remotely believable science fiction and AI-related issues should check it out (in my signature). Essentially, it's about a galactic network of civilizations that have to deal with various forms of AI just about every minute of their lives.

I took a distinct approach to AI that you all might find interesting. In the physiology of AIs in Sunrise, there are smart ones and dumb ones; both behave very differently and have differing places in relation to so-called "naturals", or natural intelligences. It's also fairly neat because I look at some of the more comical effects of complete cybernetic bodies... scary!
_________________


Last edited by AaronLee on Wed Jun 25, 2008 8:13 pm; edited 1 time in total
Page 1 of 2

 

