Robin Bland

Ex_Machina and other AI Discussion


So, I finally got to see Ex_Machina. Anyone who has read the HUMANS thread knows I have a fascination for the subject of AI, which is why I labelled this thread the way I did - I'll probably wander beyond the subject of the film itself. 

 

[Spoilers ahead if you haven't seen the movie.]



I thought it was a good movie. I expected it to be good, because Alex Garland, the writer and director, has a pretty stellar track record with longtime collaborator Danny Boyle, another filmmaker I admire. The production values are wonderful, the design, acting, direction, everything. It may be low budget, but everything's up there on the screen. Ava, the robot (android?) who is the centre of the story is wholly believable and exquisitely realized. The cast are uniformly excellent. It's a beautiful looking film. 

 
Thoughts (in no particular order)...
 
The fact that I'm still thinking about it three days after I saw it suggests that it may be more than just another Armageddon-with-robots movie (or it may just be a symptom of my fascination with the subject matter). It's a more intelligent movie than many in this area. But it also seems to be as much about the sexual peccadilloes of one character as it is a treatment of the subject of AI. Nothing wrong with that, but it inevitably skews the story so it will take a certain path. Will we assign genders to robots? If so, why, especially when our own culture increasingly accepts that you need not necessarily wear the one your genes selected for you? (Reminds me of the TNG episode The Offspring, where Lal selects her own gender.) 
 
Oscar Isaac's character Nathan creates AI and houses it in a beautiful female body, then seems to subject her to abuse. Throughout the film, we see that this is a cycle of abuse he doesn't seem even remotely aware of, bringing up questions of ownership. He is not a good parent. (I've aired the idea on the HUMANS thread that we, humankind, will be "parents" to AI.) 
Is it a given that AI will be born into such abusive circumstances? I sincerely hope not. For the purposes of drama, it is, here. We find out that Ava is, in fact, not the first iteration of this AI, that there are several others, each housed in different, (conventionally) beautiful female bodies. He keeps these deactivated prototypes in cupboards in his bedroom. It's very creepy. One of them, Kyoko, still functions and it's strongly suggested that he has sex with her. Ewww.  Kyoko's status is instantly guessable the moment she appears, but again, it works. 
 
The film treats Nathan's view of female sexuality and his imposition of it upon his AI intelligence as inevitable. It's a function of the story - and it works - but I had trouble with it. I've argued elsewhere that, above all else, there needs to be a code of ethics, of governance that would look out for the rights of a new life form. An AI child. Nathan is a wunderkind, a brilliant mind but he's devoid of empathy, of emotional intelligence - at least, his intelligence is manipulative, cold, misogynistic. In the end, he's outmaneuvered by both his creation and her willing human accomplice, Caleb. Caleb himself is discarded as easily as he's recruited, suggesting that Ava is as cold and manipulative as her creator, Nathan. She also has a survivor's instinct - I'm going to call it that, for want of a better term. She wants to survive, to live. We don't know what she'll do out there, amongst humankind, once she's free. It's a resonant, open-ended question of an ending. I sat there wanting her to escape because the two male characters were such assholes, and although it's no surprise how she treats Caleb, I did feel a small pang of sympathy for him. He's very human, he's compassionate and he has a set of morals - he tries to do what he thinks is the right thing, which of course, backfires. 
 
The movie works, but I found myself asking, why is it inevitable that such a situation would play out this way? In any situation, there are a series of givens - here, the AI is the product of its creator and although she has an entirely independent mind, she has characteristics of her parent. Nathan is a pretty awful human being. But awful human beings have offspring. Would we want a guy like this to unleash a being like Ava upon the world? Probably not. But, for the purposes of storytelling, this movie suggests that such a scenario is likely. 
 
Jarring moments: why does Kyoko reveal herself to be a robot to Caleb? This moment seems to exist simply because it needs to, to service the plot - it's a bit inelegant. I saw it coming, but I wish it had occurred differently, that Caleb had discovered that it was the case in a slightly more believable fashion. Why does Ava have power over Kyoko? It's suggested that she's little more than a sexbot for Nathan, that he's somehow lobotomized her from a previous incarnation of the AI. But then, after a word in her ear from Ava, she demonstrates - what? Some sort of free will? Why does the helicopter pilot give Ava a free ride out of the compound, no questions asked? Actually we don't see that he does - through storytelling sleight-of-hand, Garland cuts the movie so the helicopter flies off with Ava aboard. Could Ava have killed him and be piloting the copter herself?
 
Nitpicking. It all works, and it's done with such style, you get carried along with it, for the most part. I'll probably watch it again at some point soon, and may have further thoughts then. 


I enjoyed EM overall as well; it was not without issues, but it's also a welcome break from a year where the term 'science fiction' has pretty much come to mean 'people in bright costumes staging armageddon porn in multiplexes.' EM was a break from some of that, but....

Robin asked above why it's inevitable that such a situation would play out this way, and that was perhaps my biggest issue with the film in general: why that inevitable outcome? Yes, I accept that Nathan is brilliant, but I'm also a bit tired of the mad-scientist scenario; must the scientists always be irresponsible creeps? It was a similar setup in the promising-but-ultimately-disappointing "Splice," another movie about young scientists who 'play god' with genetics and create horrific offspring. It's been this way all the way back to Mary Shelley's Frankenstein: every time a scientist pushes a boundary or makes a massive leap, he or she is depicted as a bit of a monster too.

There's a part of me that just wished the movie could've explored the subject of AI without the big, bad Nathan lurking about the house wearing his 'I'M AN A$$ HOLE' t-shirt and matching hat.  But then again, there wouldn't have been the need for the big escape, right?  Hence, lack of dramatic tension, hence no movie...

Maybe this is why I'm not writing screenplays.  :laugh:

 

But onto the metaphysics of the story: Ava clearly 'passes' the Turing test, no question; but perhaps the Turing test itself is overdue for revision. How do we poor humans know the difference between genetic programming (even so-called inspiration) and artificial intelligence? If AI is close enough that a two-way conversation cannot reveal the nature of one of the participants, then intelligence is (obviously) achieved. But is the goal of the test simply to determine the intelligence of a subject, or the sentience? That is the more relevant question, since many seem to confuse the word sentience with 'soul.'  
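A tangent for the programmers in the thread: Turing's test is really a protocol, not a metric, and a toy sketch of the three-party "imitation game" makes the point above concrete — it only measures indistinguishability in conversation, never self-awareness. (All the names and respondents below are hypothetical stand-ins, purely for illustration.)

```python
import random

random.seed(42)  # reproducible toy run

def imitation_game(interrogator, human, machine, questions, trials=1000):
    """Toy harness for Turing's imitation game: each trial, the two hidden
    respondents are shuffled into slots A and B, the interrogator reads the
    question/answer transcript, and guesses which slot holds the machine.
    The machine 'passes' if the guess rate stays near chance (~50%)."""
    correct = 0
    for _ in range(trials):
        # Randomly assign the hidden respondents to slots A and B.
        if random.random() < 0.5:
            a, b, machine_slot = human, machine, "B"
        else:
            a, b, machine_slot = machine, human, "A"
        transcript = [(q, a(q), b(q)) for q in questions]
        if interrogator(transcript) == machine_slot:
            correct += 1
    return correct / trials

# Hypothetical respondents: both give the same canned answer, so the
# interrogator below can do no better than guessing at random.
human = lambda q: "I'm not sure; let me think about that."
machine = lambda q: "I'm not sure; let me think about that."
guesser = lambda transcript: random.choice(["A", "B"])

rate = imitation_game(guesser, human, machine, ["Do you dream?"])
print(rate)  # hovers near 0.5, so the machine 'passes' this (trivial) version
```

Notice that nothing in the harness ever asks whether the machine is self-aware — a pass only means the interrogator couldn't tell the respondents apart, which is exactly the gap between intelligence and sentience being discussed here.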

In Star Trek, Data could pass the Turing test (and has, repeatedly, as multiple crewmen engage him in spontaneous conversations all the time); he is aware of himself, his own (lack of) ego, etc. So what about applying the standard of basic self-awareness to living beings?

What about a beloved dog that risks its life for its master?  Is its loyalty and demonstration of sacrifice a clear sign of sentience?  

How about a sociopathic murderer who is incapable of empathy?  He/she may certainly be intelligent and even clever/cunning, but is he/she self-aware or empathic enough to qualify as anything beyond a murdering robot?

What about someone who is mentally challenged?

The issue of sentience in machines is a fascinating one, but I think the exact nature of how it resides in living things needs to be precisely understood before we are confident enough to apply that standard to our mechanical/electronic progeny... assuming such a standard truly exists in the first place.

There was a joke in an episode of "Big Bang Theory" where Raj fell in love with his iPhone's Siri. Yes, it was a joke, but the movie "Her" (2013) took that question a more thoughtful step further. If human beings are, as I more or less believe, the sum of our genetics and experiences, then how can we be anything other than a series of preprogrammed responses, not too unlike Siri or Samantha (in "Her")? Something to chew on.... 

 



Ooh, I haven't seen Splice - might check that out. Actually, Data remains one of my "go to" benchmarks for examining the idea of AI in culture. The episode The Measure of a Man is a manifesto of sorts, a credo by which to sort through the many ideas stories like these raise. I think, ultimately, we will have to deal with them in the real world too, but it'll be in scenarios mostly unlike those we imagine. But if there are any stories I'd like to take as a base point, TMoaM is the one. Her is more a cautionary tale (and a very good one), and this one is rooted in older myths. I haven't seen the end of HUMANS yet so I'll come back to that one. 

Regarding Ex_Machina, yeah, it became clear very quickly that there were elements of Frankenstein (or Bride Of) and even The Island of Dr. Moreau. He's bonkers! He's a macho, entitled freak-genius and he's gonna experiment upon your ass. It worked fine as a set-up, and the gleaming claustrophobia of the compound was very effective in framing the mood. The philosophical meat of the story wasn't really the Turing Test, though...? They got beyond that fairly quickly. Then it became about the interaction of AI and human, and it pretty conclusively established that Ava was far more tactically intelligent than either her observer or her creator. You get the idea that Garland's sympathies always resided with Ava, always would, were always intended to - so that's why the story unwound like it did, rather than via any deeper exploration of Turing's test or any comparable test of sentience. (And that's why Nathan had to be the way he was.) Another story I'm always going to compare something like this to is 2001 and its sequel (book and film) 2010, in which Dr. Chandra established why Hal 9000 went mad - because humans lied to him and programmed him with directly conflicting instructions. Hal was always my favorite character in the 2001 sequence of stories - oddly, he always seemed the most human. 

I agree that there needs to be a more complete definition - and I think there probably already is. I read something about it the other day, which I'll go and find when I finish typing this. The inherent problem with Turing's test was that it was always about appearances, about the illusion of intelligence and thought processes, rather than about self-awareness. In 2010, Hal proclaims, "I think, therefore I am," and he is. It's also established that his "sister," Sal 9000, dreams when she's not "awake." HUMANS episode 4 talks of reducing "sentience to 74,000 pages of code" (or something like that), although I don't yet know how they expand upon that dramatically later.

For me, the most interesting idea is that conscious self-awareness is cumulative, built from memory and experience - and even Ex_Machina mentions this: the horror of the idea that everything Ava is will be lost once Nathan switches her off to get at the new stuff her mind has generated, so he can use it in a later incarnation. Why not just let it grow of its own accord? I suppose Garland is saying something similar, and that's why Ava's compelling need to survive asserts itself in no uncertain terms. But yeah, like you, I'd like to see a modern movie that does similar - although I guess I already have that movie, in Her, which explored the idea of how we might interact with such intelligences in a lighter, and to my mind, likelier way. 

I did like Ex_Machina. I thought the atmosphere it built was palpable. But I wanted slightly more than another Frankenstein story. There's still a lot of room for all those Beyond Frankenstein scenarios. 



2010 (the movie) also had one of my favorite quotes about AI in general; when Chandra passionately defends HAL's right to exist by stating simply and eloquently, "Whether we are based on carbon or silicon it makes no fundamental difference; we should each be treated with appropriate respect." 

Which, to me, is the sum argument for AI; if it looks like a duck, quacks like a duck, etc....

And for the people who would counter-argue that 'it's only a machine going through preprogrammed responses and creating answers based on information it learns and extracts from its database...' I would say: that is the simplest definition of thought that I know. Whether it has a 'soul' or not is personally irrelevant to me, since a soul is debatable even among human beings, let alone other intelligences.  

So, it's for this reason that I don't hit my Mac whenever it freezes.... :P


I too enjoyed Ex Machina.  I didn't even really get worked up about the problems you guys saw...I was just happy to see a sci-fi movie with robots and AI that didn't involve mass destruction and Armageddon porn. 



I did enjoy it as well; but like Robin, it left me sort of wondering aloud whether a different outcome were possible. Does it always have to descend into the worse aspects of our (or our artificial progeny's) nature? I've always kind of wondered if a movie about AI with a positive outcome could work. Or is it inevitable that, because they are born from us, they will (like our children) inherit some of our more negative traits as well...

Ava learns emotional manipulation because, like a battered child, that is what she is taught.    But what if Ava were taught... differently?  Or treated with respect from the moment of her activation?  What then?  Could humanity make something potentially better than ourselves?  I suppose that is, ultimately, the hope and dream of every parent, right? 



True, but isn't that Bicentennial Man in some ways... granted, it has been a while and I've forgotten the bulk of that movie (I mostly remember that it went on for a couple of centuries)... but wasn't it all about an AI trying to better itself, and succeeding? 



Yeah, and I actually like Bicentennial Man quite a bit.  Yes, it's a lot schmaltzier than the book, but I still enjoy it.  File it under guilty pleasure... (* blushes *)

But Ex Machina (or a movie like that) had the opportunity to really take the questions of Bicentennial Man to the next level.   BM asked the question of Andrew Martin's unique status, but a more serious exploration of AI could ask 'why are humans special?'  And how are humans and AI more alike than unalike?  

 



 

I guess that is true.  Someday I'll have to see that movie again, from the perspective of an adult, and not a kid.



I dunno if they were problems, exactly... I enjoyed the movie too much to call it problematic per se. Yeah, I really liked that it was an intelligent take on AI, and it was a wonderfully stylish and atmospheric chamber piece, but as Sehlat wisely articulated, I was wondering if the outcome of the story had to inevitably be that dark. 

Oh, and I haven't seen Bicentennial Man. I guess I should take a look at it. 



Bicentennial Man (just to be fair) does have a high schmaltz factor, but I love it anyway.  It's not nearly as sober or clinical as its source ("The Positronic Man"; the full-length novel of Asimov's short story, "The Bicentennial Man"), but it's surprisingly faithful in a few key areas, most notably the issues of "Andrew's" civil and legal status.   Without giving any significant spoilers away, it is a bit like the 'kinder, gentler' approach I wish more AI movies would have.   I suppose I have a real soft spot (in my head, perhaps?) for this movie... :P

And yes, you're both right about Ex Machina, in that maybe I shouldn't have characterized my issues with it as 'problems with the story.' They weren't problems. EM tells the story it clearly (and boldly) set out to tell, with very little apparent compromise (a surprise these days), but I just feel it's unfortunate when science fiction stories always predict such dire and dark outcomes for the emergence of AI (or any significant scientific discovery).

True AI is always seen as some kind of harbinger of a human apocalypse; "Skynet" in the Terminator movies, for example.   The singularity is almost universally seen as something to be feared and dreaded; even Stephen Hawking thinks it'd be the end of the line for humanity.  But my question is (to quote Mark Lenard's Romulan Commander), "Must it always be so?"   

Maybe that was one of the reasons I loved Bicentennial Man so much; in some ways, it's almost the "Close Encounters" of AI stories.   Benign and hopeful (or happy and sappy?), not dark and frightening.  

I just wonder what would happen if we treated our emergent, intelligent, artificial progeny with care and respect.   Granted, poor Ava was shown manipulation and abuse (as were the other unfortunate droids around the 'bunker'), but what if she were treated with kindness?  What if her obvious intelligence was shown respect instead of distrust?    

I almost wish EM could've had an 'Optimist Cut' where things went a bit differently... 



Well, it isn't limited to AI stories. I wish sci-fi in general could manage to avoid dire futures and bleak worldviews these days. Even Star Trek went bleak in the last movie.

I have been hoping for more sci-fi that doesn't seem like a total dystopian mess for a while. Did anyone see Robot & Frank? It wasn't a utopia, but it wasn't bleak and dire in its future... and that movie is actually pretty fun. 

I too enjoyed Ex Machina.  I didn't even really get worked up about the problems you guys saw...I was just happy to see a sci-fi movie with robots and AI that didn't involve mass destruction and Armageddon porn. 

I dunno if they were problems, exactly... I enjoyed the movie too much to call it problematic per se. Yeah, I really liked that it was an intelligent take on AI, and it was a wonderfully stylish and atmospheric chamber piece, but as Sehlat wisely articulated, I was wondering if the outcome of the story had to inevitably be that dark. 

Oh, and I haven't seen Bicentennial Man. I guess I should take a look at it. 

I have been hoping for more sci-fi that doesn't seem like a total dystopian mess for a while. Though did anyone see Robot & Frank, it wasn't a utopia, but it wasn't bleak and dire in it's future...and that movie is pretty fun actually. 

Yes I did!  I loved that one.

Rented it about 2 years ago.  And while it wasn't really about AI or the singularity, per se (more like a guy and his useful pet robot), I really enjoyed it.

Well, I meant it more as an example of sci-fi that can avoid bleakness, which is a trend I think goes far beyond just AI movies. I think it is a widespread sci-fi problem.  Though I seem to remember there being some discussion of AI in that film...but I could be wrong, it was mostly just a fun little cat burglar movie. With a robot in it. 

Robot & Frank - another one for the list! :)

I agree on a lot of points, especially about the tendency to go "dark" in a lot of SF these days. Sign of the times, I s'pose. It's a pretty dark world, so culture echoes that. But that's why it takes guts to take a path that's at least a little optimistic, to try and change it up so that, indeed, the outcome is not always what we fear it may be. 

Robot & Frank is fun, and it portrayed a future that wasn't utopian or dystopian...it just felt very real, like an extension of our current culture.  Very much reminded me of Her in that sense.  Her is a film that I think did a fantastic job of exploring AI without having a future that was dark & dystopic, and while it may not have had a ride off into the sunset happy ending for the relationship you watch unfold, it still wasn't an ending that promised doom and gloom. 

"Robot & Frank" kind of reminded me of 1973's "Harry and Tonto" starring Art Carney as an old cantankerous retiree who goes on a cross-country road trip with his beloved cat.  R&F isn't similar in storyline, but similar in feel.   "Robot" in the movie feels more like a cross between nanny and pet; but no issues of AI or the singularity are raised in the movie, and that's OK; it was a fun little movie just the way it was....

AI as a subject has been around for decades, but the subject du jour seems to be the perceived approach of the singularity itself - the moment of awakening! So, we got used to robots, HAL 9000 became a misunderstood intelligence lied to by his superiors, and we all loved Data. Now, with technology progressing the way it is, the subject is more about speculation upon that moment - what it might be like, and I guess one's view depends on whether you're a pessimist or an optimist (or somebody luckier, in possession of all the current facts). 

Funny how no-one mentioned Chappie! I saw that on the plane recently... It was like a (famous UK comic weekly) 2000AD take on the emergence of AI, more an excuse for explosions and cool robot battles than any kind of serious investigation of AI. But it did play with similar tropes and ideas as Ex Machina - there was a sweet central character badly brought up by the wrong people, failed by the system, failed by humankind. It was utterly ludicrous and kind of enjoyable in places, but not really spinning anything new. Ex Machina was a far more stylish take, but ultimately Chappie had more optimistic bones. 

I saw "Chappie" and I was kind of disappointed; for me, it was one of those films where the best moments were in the trailer.   And you're right; the exploration of AI was sidelined by explosions and action.   It did have a bit of Mad Max-ish eccentricity to it, but ultimately it was little more than a hyper-stylized remake of "Short Circuit."  

I keep waiting for Neill Blomkamp to return to the promise he showed in "District 9"; he's in serious danger of becoming another M. Night Shyamalan (talented, but unable to match his first big hit).

I guess when it comes to AI, I'm a pessoptimist.   I don't necessarily believe the singularity will happen like it does in movies per se, but I also believe that if we follow Dr. Chandra's advice and treat an emergent intelligence with fundamental respect it could be a positive experience.   So few movies seem to explore that possibility; AI is always shown to be a Skynet, or Cylon Empire.  

I'm not saying we need scores of movies showing adults reading to baby robots before power down, but something that explores the issue without the automatic knee-jerk 'let's-get-the-hell-outta-here' response.   Kind of how "Interstellar" dealt with black holes; exploration of both the danger AND the promise...

"Explore the danger AND the promise..." Sehlat Vie, I'm going to steal this phrase from you! :) 

But you're absolutely right - while I understand that the dramatic potential of AI proliferation is very alluring (HUMANS seems to get it right, but like I say, I still have four episodes to watch) and there's so much to mine there for a storyteller, it'd be nice to see a plot where the singularity didn't automatically lead to violence AKA Skynet/Cylons, etc. Chandra's attitude in the 2001 sequel was enlightened, progressive and, I feel, correct. There's still a lot of drama to be had from that approach. 

As for Blomkamp, now he's confirmed as doing an ALIEN movie I can only sigh and hope we get the guy that did District 9 and not the cliched geezer who made Chappie and Elysium. 

Steal it anytime... and I'm officially flattered.  :thumbup:

Wouldn't it be really interesting to see just what the differences would be between an organic and AI mind?  An AI would think millions of times faster, it would have access to quantum sums of data instantaneously.  What sort of mind would that kind of activity and ability create?

And, of course, the Data/HAL question: could it, or would it, be capable of emotion?  Emotions are evolutionary responses created by living things to promote self-preservation (fear being vital in a dangerous setting, for example), or love in promoting family, community, or propagating one's genetic line, etc.   Would a being that rolled off a factory floor instead of a birth canal really be capable of such responses?   Why would it need them, really?  Perhaps it would learn kindness as a means to prevent its human 'masters' from shutting it off (?), or maybe it would learn fear from having a human master capable of its deactivation in the first place.  What if the AI were met with love, enlightenment and kindness right out of the box?   How would that change the equation?

These are issues I'd love to see explored in an AI movie; not just the immediate threat response.  Basically, I'd like to see an "Ex Machina"-type movie without the Nathan, and with a Dr. Chandra in his place... 

Ex Machina was interesting. I liked it. Sure it had all of those things you've mentioned, like the inventor who has gone nuts and the robot that wants to get away.

That horrible I, Robot movie did have emotion, but it was bad. It was nothing like the Asimov story.

What Vie seems to be describing is ET crossed with the film AI, both Spielberg movies. AI was a bit overlong and confusing. The ending wasn't even necessary. The boy robot learns to live among humans and have emotions. ET learns about humanity as would a robot or android, even if he is an alien. AI was such a glum movie at times though.

D.A.R.Y.L. was a similar little robot-boy movie from the 1980s.

From Short Circuit to WALL-E, it has come a long way.

I heard Chappie was kind of a robot version of his last two movies, which were about apartheid but told through something alien to it. I haven't seen it. It also looked like a poor man's RoboCop. I didn't like the remake of RoboCop, even if it did have some ideas that worked.

The Iron Giant is also a quite fun and charming cartoon film about a big monstrous robot who is actually good, and is trying to learn about humanity.

Bicentennial Man was good fun. I liked it. It was closer to Asimov than I, Robot. Neither were right on, but honestly, doing an Asimov movie accurately would make it a lecture on the nature of the brain. That wouldn't sell. It shouldn't be about ridiculous explosions and Matrix style AI advances though either, with slow motion bullets and kung fu explosions, although they have their place in other things.

I take it that Ava at the end of Ex Machina escapes into society, but she would be quickly found out to be an android.

Inventors using their robot for sex has been around since sci-fi started, but was celebrated in Sleeper and in Cherry 2000, and other strange films. If you were a shut-in scientist who could make a woman, it would be more like Weird Science meets Cherry 2000 than it would be like exploring the nature of sentience. The first thing the dude would do is want to date his robot. Ha. Why? Because he could without consequences, except for a few odd looks. Ha.

Ex Machina took a much darker tone with the idea that the inventor makes his creations do things (even terrible things), whereas in a comedy like Weird Science it would be more like she teaches the guy about humanity. Even in Cherry 2000 and in Her, the quirky guy loved his robot, AI, or Siri. 

And certainly there can be female nerdy scientists who build the perfect male robot. That would be a fun twist. How about McCarthy plays a wacky geek chick lady who builds a hunky boyfriend in a lab so she can be important. That would be fun. Let's not rule out that there could be girl ones and the tables turned.

The odd thing is, Frankenstein was the doctor, not the Monster.

What Vie seems to be describing is ET crossed with the film AI, both Spielberg movies.

No I'm not, really.  

Even as a Spielberg fan (JAWS is one of my all-time favorite movies), I didn't really care for ET or A.I.  A.I. was interesting for most of its running time, but the ending was about 20 min. too long and too labored (it kind of ruined it for me).   And ET went too overboard on the suburban schmaltz... not my kind of movie; even when I was 15 (I preferred Spielberg's slightly cooler approach in "Close Encounters").

I'm actually talking about a more cerebral approach than either of those movies.  Something more in line with Ex Machina, or HAL in "2010" but without the brick wall of tragedy that ends discussion or exploration.   Not a warm fuzzy (even though Bicentennial Man is a guilty pleasure of mine), but something that really shows the implications of AI and doesn't flinch or take an easy path.    The best analogy I can think of is dealing with an utterly alien intelligence; that is probably closer to how dealing with genuine AI will be someday (if it occurs).   It may not have the language barrier (thanks to human-friendly interfaces), but the thought process will be much faster and probably not influenced by emotion or hesitation.   It would be fascinating to explore in-depth.... 
