Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Nadira Anjum

Pages: 1 [2]
Great Information..:)

The idea of a man-made being that eventually surpasses its creator was the subject of at least four movies last year — and comes up again in Morgan, a sci-fi thriller opening Friday.
What’s behind our fixation with robots?
And why are they so often the bad guys?
Perhaps stories about robots and Artificial Intelligence reflect our own ideas about consciousness or free will.
Or what it means to be human.
Maybe A.I. represents one way to see human-like beings behave in inhuman ways — a sort of 21st century puppet show. Then again, humans might well feel like God when they help create a new life form, especially a robotic life form that can represent a type of immortality.
And the ongoing problem is this: How do you make sure that machines smarter than people don’t figure out ways to seize power away from their creators?
In a 1942 story, Isaac Asimov spelled out the three laws of robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
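As a toy illustration (not from Asimov or the article), the strict precedence among the three laws can be sketched as an ordered check, where a higher law always overrides a lower one. The boolean fields describing an action's predicted consequences are invented for this sketch.

```python
# Toy sketch of Asimov's Three Laws as a strict priority ordering.
# The action is a dict of hypothetical booleans about its consequences;
# real robots, of course, have no such perfect oracle.

def permitted(action):
    # First Law: never injure a human, by action or inaction.
    if action["harms_human"]:
        return False
    # Second Law: obey human orders, unless obeying would violate the First Law.
    if action["disobeys_order"] and not action["order_would_harm_human"]:
        return False
    # Third Law: preserve itself, unless self-risk is needed for the first two.
    if action["endangers_self"] and not action["risk_needed_for_higher_law"]:
        return False
    return True
```

Note that refusing an order that would harm a human is permitted: the First Law's check comes first, so the Second Law's obedience clause never fires in that case.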
Interesting — but science fiction is full of stories about robotic creations rebelling against those very laws, unwittingly or otherwise. And nobody loves a recalcitrant robot more than Hollywood.

Here are the 10 best movies about AI.

ROBOCOP (1987)
An indestructible cyborg (Peter Weller), part machine and part the consciousness of a hero cop, appears to be the future of police work. It doesn’t quite work out that way. This action drama/social commentary is directed by Paul Verhoeven.

BLADE RUNNER (1982)
An ex-cop (Harrison Ford) working as a blade runner — an assassin of rogue androids — grapples with his feelings about the human-like replicants. Ridley Scott directs Sean Young, Rutger Hauer, M. Emmet Walsh, Daryl Hannah.

AVENGERS: AGE OF ULTRON (2015)
Evil A.I. Ultron (James Spader) dukes it out with Iron Man, Hulk, Captain America, Black Widow et al. in this Marvel adventure.

EX MACHINA (2014)
Alicia Vikander is terrific as the A.I. who may or may not have her own ideas about becoming part of the human race. This sci-fi thriller also stars Domhnall Gleeson and Oscar Isaac.

THE MATRIX (1999)
Keanu Reeves, Laurence Fishburne and Carrie-Anne Moss star in this futuristic thriller (from the Wachowskis) as rebels who dare go up against the machines that have imprisoned human minds. Can they escape the artificial reality created to lull them into cooperating?

A.I. ARTIFICIAL INTELLIGENCE (2001)
A robotic boy (Haley Joel Osment) wants his human mother to love him. Then her human son comes home. Steven Spielberg directs Frances O’Connor, Jude Law, William Hurt.

THE TERMINATOR (1984)
Arnold Schwarzenegger stars as a cyborg sent from the future to kill a woman (Linda Hamilton) in the present. The idea is to prevent the birth of the woman’s son — who will grow up to one day lead a rebellion against the machines. James Cameron directs.

MOON (2009)
Astronaut Sam Bell (Sam Rockwell) has a solo stint on the moon, where isolation is either causing him to go mad or keeping him from finding out the truth about his origins. Duncan Jones co-wrote and directs.

HER (2013)
Joaquin Phoenix stars as a writer who falls in love with the A.I. operating system that runs his life. Well — the machine does have a fabulous human voice (courtesy of Scarlett Johansson). Spike Jonze directs; also with Amy Adams.

ROBOT & FRANK (2012)
Frank, an aging jewel thief (Frank Langella), is given a robot butler by his son (James Marsden) to help around the house. Frank retrains his robot to assist in heists. This is sci-fi for the non-sci-fi fan; also with Susan Sarandon and Liv Tyler, and that’s Peter Sarsgaard as the voice of the robot.

Smartphone apps could eventually predict arguments among couples and help nip them in the bud before they blow up. For the first time outside the lab, artificial intelligence has helped researchers begin looking for patterns in couples’ language and physiological signs that could help predict conflicts in relationships.

Most conflict-monitoring experiments with real-life couples have previously taken place in the controlled settings of psychology labs. Researchers with the Couple Mobile Sensing Project at the University of Southern California, in Los Angeles, took a different approach: they studied couples in their normal living conditions, using wearable devices and smartphones to collect data. Their early field trial with 34 couples suggests that the combination of wearable devices and machine learning could lead to future smartphone apps that act as relationship counselors.

“In our current models, we can detect when conflict is occurring, but we haven't yet predicted conflict before it happens,” says Adela Timmons, a doctoral candidate in clinical and quantitative psychology at the University of Southern California (USC). “In our next steps, we hope to predict conflict episodes and to also send real-time prompts, for example prompting couples to take a break or do a meditation exercise, to see if we can prevent or deescalate conflict cycles in couples.”

Trying to predict something as complex as conflict among couples is no easy task in the real world. In that sense, machine learning algorithms that can automatically begin identifying patterns in data could help researchers sift through the language of couples and their different physiological indicators—such as heart rate or skin conductance response—to more accurately identify signs of brewing conflict. The USC team detailed its approach in IEEE Computer.

Before turning their off-the-shelf machine learning algorithm loose on the data, researchers had to identify which key features they should focus on during the experiment to get the best possible predictors of conflict. Past psychology studies have shown that conflict between couples is associated with physiological arousal signs such as raised heart rate and skin conductance level. Arguing couples also tend to use certain wording, such as more second-person pronouns (“you”), more negative emotion words, and more certainty words such as “always” or “never,” Timmons explains.
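The language cues Timmons describes can be turned into simple numeric features. The sketch below is illustrative only: the word lists are tiny invented samples, not the study's actual lexicons.

```python
import re

# Invented sample lexicons for illustration; the USC study's real word
# lists (e.g., from validated emotion dictionaries) are far larger.
SECOND_PERSON = {"you", "your", "yours"}
NEGATIVE = {"angry", "hate", "annoyed", "upset"}
CERTAINTY = {"always", "never", "definitely"}

def language_features(utterance):
    """Return per-word rates of the conflict-associated cues in an utterance."""
    words = re.findall(r"[a-z']+", utterance.lower())
    total = max(len(words), 1)  # avoid division by zero on empty input
    return {
        "second_person_rate": sum(w in SECOND_PERSON for w in words) / total,
        "negative_rate": sum(w in NEGATIVE for w in words) / total,
        "certainty_rate": sum(w in CERTAINTY for w in words) / total,
    }

feats = language_features("You never listen and you always get angry")
```

For the sample sentence above (8 words), two are second-person pronouns, two are certainty words, and one is a negative emotion word, so the rates come out to 0.25, 0.25, and 0.125 respectively.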

The 34 couples who participated in the day-long trial were given wearable devices such as a wristband sensor to measure skin conductance, body temperature, and physical activity. A separate sensor worn on the chest measured heart rate. Each romantic partner also received a smartphone to collect audio recordings of their conversations and to allow for GPS tracking. To verify that a conflict had taken place, the smartphone would prompt couples to report whenever they had in fact been arguing. (Of the 34 couples, 19 ended up reporting a conflict during the experiment.)
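A hypothetical record layout for the kind of field data described above might look like the following. The field names are invented for illustration, not the USC project's actual schema.

```python
from dataclasses import dataclass

# Hypothetical data records for the field trial; names are illustrative.

@dataclass
class SensorSample:
    timestamp: float          # seconds since start of the trial day
    skin_conductance: float   # from the wristband sensor
    body_temp_c: float        # from the wristband sensor
    heart_rate_bpm: float     # from the chest-worn sensor
    lat: float                # smartphone GPS
    lon: float                # smartphone GPS

@dataclass
class ConflictReport:
    timestamp: float
    had_conflict: bool        # self-reported via smartphone prompt

sample = SensorSample(timestamp=3600.0, skin_conductance=4.2,
                      body_temp_c=36.6, heart_rate_bpm=88.0,
                      lat=34.02, lon=-118.28)
report = ConflictReport(timestamp=3900.0, had_conflict=True)
```

The self-reports serve as ground-truth labels: each labeled window of sensor and audio data becomes one training example for the classifier.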

Early results with the small sample size were promising. The findings generally matched with what past psychological studies and theories had suggested about conflict in relationships. For example, negative emotion expressed in language was associated with conflict at an accuracy rate of 62.3 percent. When the machine learning algorithm analyzed all the data from many different indicators in addition to negative emotion, it accurately identified conflict 79.3 percent of the time.
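The jump from 62.3 percent (language alone) to 79.3 percent (all indicators) reflects a general pattern: pooling several noisy indicators usually beats any single one. The toy example below uses invented data, not the study's, and a plain majority vote rather than the study's learned model.

```python
# Invented toy data: three binary indicators per sample, plus the true label.
samples = [
    # (negative_language, elevated_heart_rate, high_skin_conductance, conflict?)
    (1, 1, 1, True),
    (1, 1, 0, True),
    (0, 1, 1, True),
    (1, 0, 1, True),
    (0, 0, 0, False),
    (1, 0, 0, False),
    (0, 1, 0, False),
    (0, 0, 1, False),
]

def accuracy(predict):
    """Fraction of samples where predict(features) matches the true label."""
    return sum(predict(s[:3]) == s[3] for s in samples) / len(samples)

single = accuracy(lambda f: f[0] == 1)      # language indicator alone
combined = accuracy(lambda f: sum(f) >= 2)  # majority vote over all three
```

On this toy data the single indicator scores 0.75 while the majority vote scores 1.0; each indicator sometimes fires spuriously, but rarely do two fire spuriously at once.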

“These models rely on machine learning,” Timmons says. “To be able to do classification experiments and say with reasonable accuracy whether conflict is occurring or not occurring really requires big data.”

That 79.3 percent accuracy is still below what might be expected in a future smartphone app providing active counseling or similar interventions for real couples. Incorrect identification of conflict could potentially cause unnecessary alarm, says Theodora Chaspari, a doctoral candidate in the Signal Analysis and Interpretation Laboratory (SAIL) at USC and coauthor on the study. But the higher accuracy from combining data from many different features seems to confirm the general approach of using many different measures to help infer the mental state of couples in conflict.

The researchers also face challenges in cleaning up the real-life data collected from the couples, which was much messier than the data that can be collected in the controlled confines of a lab. They sometimes encountered missing data segments, such as when some couples turned off their smartphone audio recordings at certain times for privacy. Still, Chaspari expects a larger dataset collected from additional couples to help the machine learning algorithm smooth out some of these wrinkles.

Eventually, the USC team hopes to use their system to collect enough data on individual couples to identify the personal quirks in their conflict patterns—something that could go a long way toward boosting the system’s accuracy in identifying conflicts for each couple. “We now have a generalized system that works, but the challenge is how to make the system specific for a couple or certain clusters of couples,” Chaspari says.

Accurate identification of conflicts could eventually enable the algorithm to predict conflicts before couples are even aware they have begun fighting. The USC team’s next steps will involve collecting additional data to boost the accuracy of their current algorithm. For example, wearable and smartphone technologies could help researchers collect data on a variety of other factors, such as phone usage, time on the internet, or how much light exposure couples receive during the course of their day — all theoretically plausible, if subtle, predictors of conflict (light exposure, for example, can affect individual mood).

“Part of what helps these models work well and to have high classification accuracy is to have a lot of data and a lot of features,” Timmons says. “In our next steps we’re going to be including more predictors of conflict.”

Common Forum/Request/Suggestions / Future of Artificial Intelligence
« on: April 20, 2017, 05:45:02 PM »
The field of artificial intelligence may not be able to create a robotic vacuum cleaner that never knocks over a vase, at least not within a couple of years, but intelligent machines will increasingly replace knowledge workers in the near future, a group of AI experts predicted.

An AI machine that can learn the same way humans do, and has the equivalent processing power of a human brain, is still a few years off, the experts said. But AI programs that can reliably assist with medical diagnosis and offer sound investing advice are on the near horizon, said Andrew McAfee, co-founder of the Initiative on the Digital Economy at the Massachusetts Institute of Technology.

For decades, Luddites have mistakenly predicted that automation will create large unemployment problems, but those predictions may finally come true as AI matures in the next few years, McAfee said Monday during a discussion on the future of AI at the Council on Foreign Relations in Washington, D.C.

Innovative companies will increasingly combine human knowledge with AI knowledge to refine results, McAfee said. “What smart companies are doing is buttressing a few brains with a ton of processing power and data,” he said. “The economic consequences of that are going to be profound and are going to come sooner than a lot of us think.”

Rote work will be replaced by machines

Many knowledge workers today get paid to do things that computers will soon be able to do, McAfee predicted. “I don’t think a lot of employers are going to be willing to pay a lot of people for what they’re currently doing,” he said.

Software has already replaced human payroll processors, and AI will increasingly move up the skill ladder to replace U.S. middle-class workers, he said. He used the field of financial advising as an example.

It’s a “bad joke” that humans almost exclusively produce financial advice today, he said. “There’s no way a human can keep on top of all possible financial instruments, analyze their performance in any rigorous way, and assemble them in a portfolio that makes sense for where you are in your life.”

But AI still has many limitations, with AI scientists still not able to “solve the problem of common sense, of endowing a computer with the knowledge that every 5-year-old has,” said Paul Cohen, program manager in the Information Innovation Office at the U.S. Defense Advanced Research Projects Agency (DARPA) and founding director of the University of Arizona School of Information’s science, technology and arts program.

There is, however, a class of problems where AI will do “magnificent things,” by pulling information out of huge data sets to make increasingly specific distinctions, he added. IBM’s recent decision to focus its Watson AI computer on medical diagnostics is a potential “game changer,” he said.

Matching data to profiles

“Medical diagnosis is about making finer and finer distinctions,” he said. “Online marketing is about making finer and finer distinctions. If you think about it, much of the technology humans interact with is about putting you in a particular bucket.”

McAfee agreed with Cohen about the potential of AI for medical diagnosis. “I have a very good primary care physician, but there’s no possible way he can stay on top of all the relevant medical knowledge he would have to read,” he said. “Human computers are amazing, but they have lots of glitches. They have all kinds of flaws and biases.”

AI machines are still a ways off from equalling the processing power of the human brain, but that’s largely a problem with hardware resources, said Peter Bock, an emeritus professor of engineering at George Washington University. Scientists should be able to build an AI device that matches the processing power of a human brain within 12 years, he predicted.

That AI device would then take several years to learn the information it needs to function like a human brain, just as a child needs years to develop, he said.

One audience member asked the AI experts if the technology will ever replace computer programmers.

If scientists are eventually able to build an AI machine that has the processing power of a human brain, that machine “could become a programmer,” Bock said. “She might become an actress. Why not? They can be anything they want.”

DARPA now has a project that focuses on using software to assemble code by pulling from code that someone has already written, Cohen said. Many programmers today focus more on assembling code from existing resources than on re-creating code that already exists, he said, and DARPA has automated that process.

Humans still have to tell the assembling program what they want the final code to do, he noted.

At some point, an AI program may be able to write code, but that’s still years off, McAfee said. To believe that it could never happen, “you’d have to believe there’s something ineffable about the human brain, that there’s some kind of spark of a soul or something that could never be understood,” he said. “I don’t believe that.”

There are things humans can still do, however, that have “proved really, really resistant to understanding, let alone automation,” McAfee added. “I think of programming as long-form creative work. I’ve never seen a long-form creative output from a machine that is anything except a joke or mishmash.”

Teaching & Research Forum / Re: How to Analyse Questions
« on: March 14, 2016, 12:12:29 PM »
Very helpful for students...

Teaching & Research Forum / Re: Writing the Statement of Purpose
« on: March 14, 2016, 11:59:24 AM »
Nice Post..Thanks for sharing... :)
