Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - Naznin Sultana

1
Bad News: Artificial Intelligence Is Racist, Too

When Microsoft released an artificially intelligent chatbot named Tay on Twitter last March, things took a predictably disastrous turn. Within 24 hours, the bot was spewing racist, neo-Nazi rants, many of which it picked up by incorporating the language of Twitter users who interacted with it.

Unfortunately, new research finds that Twitter trolls aren't the only way that AI devices can learn racist language. In fact, any artificial intelligence that learns from human language is likely to come away biased in the same ways that humans are, according to the scientists.

The researchers experimented with a widely used machine-learning system called Global Vectors for Word Representation (GloVe) and found that every sort of human bias they tested showed up in the artificial system. GloVe is a tool used to extract associations from texts — in this case, a standard corpus of language pulled from the World Wide Web.

Psychologists have long known that the human brain makes associations between words based on their underlying meanings. A tool called the Implicit Association Test uses reaction times to demonstrate these associations: People see a word like "daffodil" alongside pleasant or unpleasant concepts like "pain" or "beauty" and have to quickly associate the terms using a key press. Unsurprisingly, flowers are more quickly associated with positive concepts, while weapons, for example, are more quickly associated with negative concepts.

The IAT can be used to reveal unconscious associations people make about social or demographic groups, as well. For example, some IATs that are available on the Project Implicit website find that people are more likely to automatically associate weapons with black Americans and harmless objects with white Americans.
Aylin Caliskan, a computer scientist at Princeton University, and her colleagues developed an IAT for computers, which they dubbed the WEAT, for Word-Embedding Association Test. This test measured the strength of associations between words as represented by GloVe, much as the IAT measures the strength of word associations in the human brain.
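As a rough illustration of how such a test works, here is a minimal WEAT-style sketch in Python. The word vectors below are tiny, made-up 3-dimensional examples, not real GloVe embeddings (which have hundreds of dimensions); the scoring follows the standard WEAT recipe of comparing average cosine similarities between target sets and attribute sets.

```python
# Sketch of a WEAT-style association test (not the study's actual code).
# Vectors are hypothetical 3-d toys standing in for real GloVe embeddings.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def association(w, A, B, vec):
    """Mean similarity of word w to attribute set A minus attribute set B."""
    return (sum(cosine(vec[w], vec[a]) for a in A) / len(A)
            - sum(cosine(vec[w], vec[b]) for b in B) / len(B))

def weat_effect_size(X, Y, A, B, vec):
    """Standardized difference in association between target sets X and Y."""
    sx = [association(w, A, B, vec) for w in X]
    sy = [association(w, A, B, vec) for w in Y]
    all_s = sx + sy
    mean = sum(all_s) / len(all_s)
    sd = math.sqrt(sum((s - mean) ** 2 for s in all_s) / (len(all_s) - 1))
    return (sum(sx) / len(sx) - sum(sy) / len(sy)) / sd

# Hypothetical embeddings: flowers sit near "pleasant" words,
# insects near "unpleasant" ones.
vec = {
    "rose":   (0.9, 0.1, 0.0), "tulip": (0.8, 0.2, 0.1),
    "wasp":   (0.1, 0.9, 0.0), "beetle": (0.2, 0.8, 0.1),
    "love":   (0.9, 0.0, 0.1), "peace": (0.8, 0.1, 0.2),
    "hate":   (0.0, 0.9, 0.1), "ugly":  (0.1, 0.8, 0.2),
}
d = weat_effect_size(["rose", "tulip"], ["wasp", "beetle"],
                     ["love", "peace"], ["hate", "ugly"], vec)
print(round(d, 2))  # positive: flowers more "pleasant" than insects
```

A positive effect size means the first target set (flowers) leans toward the first attribute set (pleasant words), mirroring what the IAT measures in people via reaction times.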

For every association and stereotype tested, the WEAT returned the same results as the IAT. The machine-learning tool reproduced human associations between flowers and pleasant words; insects and unpleasant words; musical instruments and pleasant words; and weapons and unpleasant words. In a more troubling finding, it saw European-American names as more pleasant than African-American names. It also associated male names more readily with career words, and female names more readily with family words. Men were more closely associated with math and science, and women with the arts. Names associated with old people were more unpleasant than names associated with young people.

"We were quite surprised that we were able to replicate every single IAT that was performed in the past by millions," Caliskan said.

Using a second, similar method, the researchers also found that the machine-learning tool was able to accurately represent facts about the world from its semantic associations. Comparing the GloVe word-embedding results with real U.S. Bureau of Labor Statistics data on the percentage of women in various occupations, Caliskan found a 90 percent correlation between the professions that GloVe saw as "female" and the actual percentage of women in those professions.
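The comparison described here boils down to a Pearson correlation between two lists of numbers. The sketch below uses invented occupation figures purely for illustration; the study used real GloVe association scores and real BLS data.

```python
# Hypothetical illustration of the embedding-vs-census comparison.
# All values are invented; the study used real GloVe scores and BLS data.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# "Femaleness" score per occupation from word embeddings, alongside
# the (made-up) percentage of women actually in that occupation.
embedding_score = [0.8, 0.5, 0.1, -0.3, -0.6]
percent_women   = [91, 62, 47, 28, 9]
print(round(pearson_r(embedding_score, percent_women), 2))
```

A value near 1.0 would correspond to the strong correlation the researchers report between embedding-derived gender associations and real occupational statistics.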

In other words, programs that learn from human language do get "a very accurate representation of the world and culture," Caliskan said, even if that culture — like stereotypes and prejudice — is problematic. The AI is also bad at understanding context that humans grasp easily. For example, an article about Martin Luther King Jr. being jailed for civil rights protests in Birmingham, Alabama, in 1963 would likely associate a lot of negative words with African-Americans. A human would reasonably interpret the story as one of righteous protest by an American hero; a computer would add another tally to its "black=jail" category.

Retaining accuracy while getting AI tools to understand fairness is a big challenge, Caliskan said.
The new study, published online today (April 12) in the journal Science, is not surprising, said Sorelle Friedler, a computer scientist at Haverford College who was not involved in the research. It is, however, important, she said.

"This is using a standard underlying method that many systems are then built off of," Friedler told Live Science. In other words, biases are likely to infiltrate any AI that uses GloVe, or that learns from human language in general. 

Friedler is involved in an emerging field of research called Fairness, Accountability and Transparency in Machine Learning. There are no easy ways to solve these problems, she said. In some cases, programmers might be able to explicitly tell the system to automatically disregard specific stereotypes, she said. In any case involving nuance, humans may need to be looped in to make sure the machine doesn't run amok. The solutions will likely vary, depending on what the AI is designed to do, Caliskan said — are they for search applications, for decision making or for something else?

In humans, implicit attitudes actually don't correlate very strongly with explicit attitudes about social groups. Psychologists have argued about why this is: Are people just keeping mum about their prejudices to avoid stigma? Does the IAT not actually measure prejudice that well? Either way, it appears that people at least have the ability to reason about right and wrong despite their biased associations, Caliskan said. She and her colleagues think humans will need to be involved, and programming code will need to be transparent, so that people can make value judgments about the fairness of machines.

"In a biased situation, we know how to make the right decision," Caliskan said, "but unfortunately, machines are not self-aware."

2
Just Add Heat: New 4D-Printed Objects Morph on Cue

Objects that can change shape within seconds after being exposed to heat demonstrate a novel 4D-printing technique that could one day be used to create medical devices that unfurl on their own in the body during surgical procedures.
To demonstrate this method of 4D printing, engineers created a 3D-printed plastic lattice that quickly expands when submerged in hot water, as well as an artificial flower that can close its petals much the way plants do in nature.
The new technique significantly simplifies the process of "teaching" 3D-printed materials to change their shape when triggered to do so, said study co-author Jerry Qi, a professor in the George W. Woodruff School of Mechanical Engineering at the Georgia Institute of Technology in Atlanta.
"Previously, we had to train and program the material after we 3D-printed it," Qi told Live Science. "We had to heat it up and stretch it and then cool it down again for the material to learn the new form. It was relatively tedious. With this new approach, we do all the programming already in the printer."
The researchers combine two types of materials in the 3D-printed structure to create the desired shape-shifting effect. A soft polymer stores the energy that drives the shape change, but in the cool state that energy is contained by a second, glass-like stiff material. The stiff material softens when exposed to heat, allowing the soft polymer to take over. The structure is designed to remember this second shape and default to it whenever it's heated.
"You can heat it up and deform the structure into a new, third shape and it will keep that shape until you heat it up again," Qi said. "Then it transforms back into the second shape."
Previous 4D-printing techniques were able to create materials that change their shape only temporarily, and then after a while, return to the original printed shape.
In the new study, the researchers used a material that changes shape when it is heated to about 122 degrees Fahrenheit (50 degrees Celsius), but Qi said that by engineering the characteristics of the stiff material, the researchers can choose the temperature at which the object transforms.
"It promises to enable myriad applications across biomedical devices, 3D electronics and consumer products," said Martin Dunn, a professor of mechanical engineering at Singapore University of Technology and Design, who worked with the Georgia team.
For example, electronic components could be printed in a flat form and then, once they are assembled into devices, "inflate" into their useful 3D shapes.
"It even opens the door to a new paradigm in product design, where components are designed from the outset to inhabit multiple configurations during service," Dunn said in a statement.
Qi thinks biomedical devices such as stents, the tiny tubes used to widen clogged arteries and prevent strokes, could be created using the technique. These 4D-printed stents would expand inside a blood vessel, triggered automatically by the heat of the human body. Currently, surgeons have to inflate stents with balloons attached to the end of the catheter through which the device is inserted.
Qi said the new technique is more suitable for practical applications than approaches that rely on hydrogels. The objects described in the new study could transform completely in less than 10 seconds, compared to about 7 minutes required for a hydrogel-based material that was presented a few years ago by a team of researchers from MIT.
Hydrogel-based 4D printing relies on the combination of hydrogels and non-swelling polymer filaments. When immersed in water, the hydrogel swells, forcing the filaments into a new shape.
"In hydrogel-based materials, the shape-change is driven by the absorption of water," Qi said. "But that's a relatively slow process. It takes time, especially if you have large structures."
Engineers from China's Xi'an Jiaotong University also collaborated on the study, which was funded by the U.S. Air Force Office of Scientific Research, the U.S. National Science Foundation and the Singapore National Research Foundation.

The study was published online April 12 in the journal Science Advances.

3
AI system learns like a human, stores info like a computer

A GPS app can plan the best route between two subway stops if it has been specifically programmed for the task. But a new artificial intelligence system can figure out how to do so on the fly by learning general rules from specific examples, researchers report October 12 in Nature.

Artificial neural networks, computer programs that mimic the human brain, are great at learning patterns and sequences, but so far they’ve been limited in their ability to solve complex reasoning problems that require storing and manipulating lots of data. The new hybrid computer links a neural network to an external memory source that works somewhat like RAM in a regular computer.

Scientists trained the computer by giving it solved examples of reasoning problems, like finding the shortest distance between two points on a randomly generated map. Then, the computer could generalize that knowledge to solve new problems, like planning the shortest route between stops on the London Underground. Rather than being programmed, the neural network, like the human brain, responds to training: It can continually integrate new information and change its response accordingly.
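For contrast, the conventional, explicitly programmed way to solve the kind of task described above is a graph search such as breadth-first search. The toy map below uses made-up stop names and is only a sketch of the task, not of the DeepMind system itself:

```python
# Conventional, explicitly programmed shortest-path search (BFS) on a
# toy transit map with made-up stops -- the kind of task the learned
# system generalizes to without being programmed for it.
from collections import deque

def shortest_route(graph, start, goal):
    """Return the fewest-hop route from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

stops = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
print(shortest_route(stops, "A", "E"))
```

The difference the researchers highlight is that their hybrid system was never given such an algorithm; it inferred a general routing strategy from solved examples alone.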

The development comes from Google DeepMind, the same team behind the AlphaGo computer program that beat a world champion at the logic-based board game Go.

4
« on: April 20, 2017, 03:56:25 PM »
Artificial intelligence needs smart senses to be useful

True intelligence, Meghan Rosen notes in this issue’s cover story "Robot awakening" (SN: 11/12/16, p. 18), lies in the body as well as the brain. And building machines with the physical intelligence that even the clumsiest human takes for granted — the ability to sense, respond to and move through the world — has long been a stumbling block for artificial intelligence research. While more sophisticated software and ultrafast computers have led to machine “brains” that can beat a person at chess or Go, building a robot that can move the pieces, fetch an iced tea or notice if the chessboard has turned into Candy Land has been difficult.
Rosen explores several examples of how roboticists are embodying smarts in their creations, crucial steps in creating the autonomous machines most of us imagine when we hear “robot.” Of course, we are already, if unwittingly, living in a world of robots. As AI researcher Richard Vaughan of Simon Fraser University in Burnaby, Canada, pointed out to me recently, once a machine becomes part of everyday life, most people stop thinking of it as a robot. “Driverless cars are robots. Your dishwasher is a robot. Drones are extremely cheap flying robots.”
In fact, Vaughan says, in the last few decades, robots’ intelligence and skills have grown dramatically. Those advances were made possible by major developments in probabilistic state estimation — which allows robots to figure out where they are and what’s going on around them — and machine learning software.
Probabilistic state estimation has enabled better integration of information from a robot’s sensors. Using the math of Bayesian reasoning, robots can compare sensor data against a model of the world, and interpret their likelihood of being right. For example, a robot in a building can use its laser sensors to assess the space around it, compare that with its inner map of the building and determine that it’s not in Hall A but has equal chances of being in Hall B or C.
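The hall example can be written as a single Bayesian update over a handful of hypotheses. The numbers below are invented for illustration and stand in for a real sensor model:

```python
# Toy one-step Bayesian localization: a uniform prior belief over halls
# is updated by how well the laser reading matches each hall's map.
def bayes_update(prior, likelihood):
    """Posterior is proportional to prior x likelihood, normalized."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

prior = {"Hall A": 1/3, "Hall B": 1/3, "Hall C": 1/3}  # no idea where we are
# Hypothetical sensor model: the laser scan is very unlike Hall A's map
# and equally consistent with Halls B and C.
likelihood = {"Hall A": 0.05, "Hall B": 0.6, "Hall C": 0.6}
posterior = bayes_update(prior, likelihood)
print({h: round(p, 2) for h, p in posterior.items()})
```

After the update, the robot has effectively ruled out Hall A and assigns equal probability to Halls B and C, exactly the situation described above; a further sensor reading would sharpen the estimate again.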
Robots could do that in the 1990s. Scientists then asked a tougher question: How do you know where you are if you have no map? In two dimensions, researchers solved that by integrating sensory information with a set of all possible maps. But only recently was the problem solved in three dimensions, and challenges still remain for robots in less-structured or harsh environments.
Machine learning advances have aided aspects of AI such as computer vision, much improved by work done on boosting search engines’ ability to identify images (so you can search “birthday party” to find images of candled cakes, for example). This research has helped to make robot senses smarter.
Progress is swift, as Rosen makes clear in her story, but many challenges remain. Roboticists still struggle with hardware, especially for humanoid robots, which remain rather clunky. Walking, climbing stairs, picking things up and getting back up after a fall are still hard. Providing independent power sources is also a big deal — batteries aren’t yet good enough. But to build the robots that can do all that people want them to do, whether that’s driving us to work, helping the elderly up from a chair or collaborating safely with human workers in factories or warehouses, will take even better senses. Intelligence is not simply processing information or even learning new information. It’s also about noticing what’s going on around you and how to best respond.

5
Closer look at brain circuits reveals important role of genetics

Scientists at The Scripps Research Institute (TSRI) in La Jolla have revealed new clues to the wiring of the brain. A team led by Associate Professor Anton Maximov found that neurons in brain regions that store memory can form networks in the absence of synaptic activity.

"Our results imply that assembly of neural circuits in areas required for cognition is largely controlled by intrinsic genetic programs that operate independently of the external world," Maximov explained. A similar phenomenon was observed by the group of Professor Nils Brose at the Max Planck Institute for Experimental Medicine in Germany. The two complementary studies were co-published as cover stories in the April 19, 2017, issue of the journal Neuron.

Experience makes every brain unique by changing the patterns and properties of neuronal connections. Vision, hearing, smell, taste and touch play particularly important roles during early postnatal life, when the majority of synapses are formed. New synapses also appear in the adult brain during learning. These activity-dependent changes in neuronal wiring are driven by chemical neurotransmitters that relay signals from one neuron to another. Yet animals and humans have innate behaviors whose features are consistent across generations, suggesting that some synaptic connections are genetically predetermined.
The notion that neurons do not need to communicate to develop networks has also been supported by earlier discoveries of synapses in mice that lacked transmitter secretion in the entire brain. These studies were performed in the laboratory of Professor Thomas Südhof, who won the 2013 Nobel Prize in Physiology or Medicine.
"We thought these experiments were quite intriguing," Maximov said, "but they also had a major limitation: mice with completely disabled nervous systems became paralyzed and died shortly after birth, when circuitry in the brain is still rudimental."
The TSRI team set out to investigate if neurons can form and maintain connections with appropriate partners in genetically engineered animals that live into adulthood with virtually no synaptic activity in the hippocampus, a brain region that is critical for learning and memory storage. "While the idea may sound crazy at the first glance," Maximov continued, "several observations hinted that this task is technically feasible." Indeed, mammals can survive with injuries and developmental abnormalities that result in a massive loss of brain tissue.
Inspired by these examples, Richard Sando, a graduate student in the Maximov lab, generated mice whose hippocampus permanently lacked secretion of glutamate, a neurotransmitter that activates neurons when a memory is formed. Despite an apparent inability to learn and remember, these animals could eat, walk around, groom, and even engage in rudimentary social interactions.
Working closely with Professor Mark Ellisman, who directs the National Center for Microscopy and Imaging Research at the University of California, San Diego, Sando and his co-workers then examined the connectivity in permanently inactive areas. Combining contemporary genetic and imaging tools was fruitful: the collaborative team found that several key stages of neural circuit development widely believed to require synaptic activity were remarkably unaffected in their mouse model.
The outcomes of ultra-structural analyses were particularly surprising: it turns out that neurotransmission is unnecessary for assembly of basic building blocks of single synaptic connections, including so-called dendritic spines that recruit signaling complexes that enable neurons to sense glutamate.
Maximov emphasized that the mice could not function normally. In a way, their hippocampus can be compared to a computer that goes through the assembly line but never gets plugged into a power source and loaded with software. As a next step, the team aims to exploit new chemical-genetic approaches to test whether intrinsically formed networks can support learning.
Story Source:
Materials provided by The Scripps Research Institute.

6
We'll Probably Have to Genetically Augment Our Bodies to Survive on Mars

When it comes to space travel, there's no shortage of enthusiasm to get humans to Mars, with Space X's Elon Musk saying his company could take passengers to the Red Planet by 2025, and NASA being asked by Congress to achieve the mission by 2033.
But while making the trip could be technologically feasible in the next decade or two, are humans really physically and psychologically ready to abandon Earth and begin colonising the Red Planet?
Konrad Szocik, a cognitive scientist at the University of Information Technology and Management in Rzeszów, Poland, argues that no amount of year-long Martian simulations on Earth or long-duration stays aboard the International Space Station (ISS) could prepare human astronauts for the challenges that Mars colonisation would present.
"We cannot simulate the same physical and environmental conditions to reconstruct the Martian environment, I mean such traits like Martian microgravitation or radiation exposure," Szocik told Elizabeth Howell at Seeker.
In a recent article, Szocik and his co-authors discussed some of the political, cultural, and personal challenges Mars colonists would face, and in a nutshell, the team doesn't think human beings could cut it on the Red Planet – not without making changes to our bodies to help us more easily adapt to the Martian environment.
"Consequently, some particular physiological and psychological challenges during [the] journey and then during living on Mars probably will be too difficult for human beings to survive," they write.
NASA astronaut Scott Kelly and Russian cosmonaut Mikhail Kornienko famously spent a year on the ISS, but the ordeal was not without significant physiological effects and pains resulting from so much time living in space.
But those hardships would be much less severe than what travellers to Mars would experience, making a far longer journey and not knowing when, or if, they could ever return to Earth.
"These first astronauts will be aware that after the almost one-year journey, they will have to live on Mars for at least several years or probably their entire lives due to the fact that their return will most likely be technologically impossible," the authors explain.
The researchers acknowledge that inducing travellers into a coma-like state might make the voyage itself more bearable, but once they've arrived, colonists will be faced with an environment where artificial life support is a constant requirement – that is, until some far-off, future terraforming technology can make Mars' arid and freezing environment hospitable.
Until that happens, the researchers think that humanity's best prospects for living on Mars would involve some kind of body or genetic altering that might give us a fighting chance of survival on a planet we've never had to evolve on.
"We claim that human beings are not evolutionally adapted to colonise cosmic environments," the authors explain.
"We suggest that the best solution could be the artificial acceleration of the biological evolution of the astronauts before they start their deep space mission."

But while there is room for serious discussion about what it will take for humans to adapt to Mars' environment, once talk turns to genetics, you run into a minefield of other potential issues.
"Already, people have suggested selecting astronauts for genetic predisposition for such things as radiation resistance," says Mark Shelhamer, former chief scientist of NASA's Human Research Program.
"Of course, this idea is fraught with problems. For one, it's illegal to make employment decisions based on genetic information. For another, there are usually unintended consequences when making manipulations like this, and who knows what might get worse if we pick and choose what we think needs to be made better."
Those sound like pretty fair points – especially considering Szocik goes as far as to suggest that "human cloning or other similar methods" might ultimately be necessary to sustain colony populations over generations without running the risk of in-breeding between too few colonists.
Clearly, there's a lot to work out here, and while some of the researchers' ideas are definitely a bit out there, we're going to need to think outside the box if we want to inhabit a planet that at its closest is about 54.6 million km (33.9 million miles) away.
For his part, Shelhamer is confident that the right kind of training will equip human travellers for the ordeals of their Mars journey – and if current estimates on when we can expect to see this happen are correct, we won't have too long to wait to see if he's right.
The research is published in Space Policy.

7
Scientists Say This Rogue Planet Contradicts Existing Models of Planetary Formation

A giant rogue world once described as the "planet that shouldn't be there" looks like it actually formed out in deep space, far from its host star and the cosmic material that usually births planets, according to new research.
The anomaly, called HD 106906b, is a young planet located approximately 300 light-years from Earth in the constellation Crux. The planet was discovered in 2013, and what makes it so unique is how distantly it orbits its host star: at 650 astronomical units (AU), or 650 times the distance from Earth to the Sun.
That epic stretch gives HD 106906b the record for the largest orbit around a single, Sun-like star, an orbit that takes the planet about 1,500 years to complete.
The most puzzling thing about HD 106906b's aloofness is that its distant orbit places it well beyond the disk of cosmic debris surrounding HD 106906 – the dust and gas from which planets usually form.
Since HD 106906b was first discovered, scientists have been trying to explain how the planet could have ended up so far removed from HD 106906, since the vast majority of exoplanets are thought to be located inside debris disks.
And the same holds true closer to home in our own Solar System, with all of the planets orbiting the Sun falling inside the Kuiper belt – the circumstellar disk that extends beyond Neptune, encompassing dwarf planets and several other smaller remnants left over from the formation of our Solar System.
Previous research had suggested that HD 106906b might have formed inside the debris disk before gravitational interactions ejected the planet into its far-off exile, but the team behind the new research, which includes UCLA astrophysicist Smadar Naoz, doesn't think that's the case.
One of the researchers – Erika Nesvold from the Carnegie Institution for Science – created a computer model called Superparticle-Method Algorithm for Collisions in Kuiper belts and debris disks (SMACK), which suggests that the planet formed outside the debris disk.
SMACK took the known data about the HD 106906 system and calculated how an outside planet like HD 106906b would affect the structure of the star's debris disk.
It's not known if the HD 106906 system contains any other planets, but the model suggests that the shape of the elliptical debris disk as it currently exists is compatible with the lone orbit of HD 106906b.
"We were able to create the known shape of HD 106906's debris disk without adding another planet into the system, as some had suggested was necessary to achieve the observed architecture," Nesvold says.
The model also indicates that HD 106906b most likely formed outside of the disk – if it initially formed inside and then later moved outward, gravitational effects would mean that the disk would hold a different shape to the one it has now.
While this means we still can't exactly explain how HD 106906b took shape so far from the dust and gas that gives birth to most planets, at least we've narrowed down the planet's origin story a little.
And if we can find more rogue outliers like HD 106906b, the SMACK model could help us learn more about how these planets could be possible.
"Other debris disks that are shaped by the influence of distant giant planets are probably likely," Nesvold says.
"My modelling tool can help recreate and visualise how the various features of these disks came to be and improve our understanding of planetary system evolution overall."
The findings are reported in The Astrophysical Journal Letters.
