
Topics - Abdus Sattar

Giving robots a better feel for object manipulation
Model improves a robot’s ability to mold materials into shapes and interact with liquids and solid objects.
Rob Matheson | MIT News Office
April 16, 2019

A new learning system developed by MIT researchers improves robots’ abilities to mold materials into target shapes and make predictions about interacting with solid objects and liquids. The system, known as a learning-based particle simulator, could give industrial robots a more refined touch — and it may have fun applications in personal robotics, such as modeling clay shapes or rolling sticky rice for sushi.

In robotic planning, physical simulators are models that capture how different materials respond to force. Robots are “trained” using the models, to predict the outcomes of their interactions with objects, such as pushing a solid box or poking deformable clay. But traditional learning-based simulators mainly focus on rigid objects and are unable to handle fluids or softer objects. Some more accurate physics-based simulators can handle diverse materials, but rely heavily on approximation techniques that introduce errors when robots interact with objects in the real world.

In a paper being presented at the International Conference on Learning Representations in May, the researchers describe a new model that learns to capture how small portions of different materials — “particles” — interact when they’re poked and prodded. The model directly learns from data in cases where the underlying physics of the movements are uncertain or unknown. Robots can then use the model as a guide to predict how liquids, as well as rigid and deformable materials, will react to the force of their touch. As the robot handles the objects, the model also helps to further refine the robot’s control.

In experiments, a robotic hand with two fingers, called “RiceGrip,” accurately shaped a deformable foam to a desired configuration — such as a “T” shape — that serves as a proxy for sushi rice. In short, the researchers’ model serves as a type of “intuitive physics” brain that robots can leverage to reconstruct three-dimensional objects somewhat similarly to how humans do.

“Humans have an intuitive physics model in our heads, where we can imagine how an object will behave if we push or squeeze it. Based on this intuitive model, humans can accomplish amazing manipulation tasks that are far beyond the reach of current robots,” says first author Yunzhu Li, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We want to build this type of intuitive model for robots to enable them to do what humans can do.”

“When children are 5 months old, they already have different expectations for solids and liquids,” adds co-author Jiajun Wu, a CSAIL graduate student. “That’s something we know at an early age, so maybe that’s something we should try to model for robots.”

Joining Li and Wu on the paper are: Russ Tedrake, a CSAIL researcher and a professor in the Department of Electrical Engineering and Computer Science (EECS); Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL and the Center for Brains, Minds, and Machines (CBMM); and Antonio Torralba, a professor in EECS and director of the MIT-IBM Watson AI Lab.

Dynamic graphs

A key innovation behind the model, called the “dynamic particle interaction network” (DPI-Nets), was creating dynamic interaction graphs, which consist of thousands of nodes and edges that can capture complex behaviors of so-called particles. In the graphs, each node represents a particle. Neighboring nodes are connected to each other with directed edges, which represent the interaction passing from one particle to the other. In the simulator, particles are represented as hundreds of small spheres combined to make up a liquid or a deformable object.
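
The graph construction described above can be sketched in a few lines. This is an illustrative assumption, not a detail from the paper: here, neighbors are simply any two particles within a cutoff radius, and each neighboring pair gets directed edges in both directions.

```python
# Hypothetical sketch of a dynamic interaction graph: particles are nodes,
# and directed edges connect each ordered pair of neighbors within a cutoff.
# The function name and the cutoff rule are invented for illustration.
import itertools
import math

def build_interaction_graph(positions, cutoff):
    """Return directed edges (i, j) for particles closer than `cutoff`."""
    edges = []
    for i, j in itertools.permutations(range(len(positions)), 2):
        if math.dist(positions[i], positions[j]) < cutoff:
            edges.append((i, j))
    return edges

# Four particles: three clustered together, one far away.
positions = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
edges = build_interaction_graph(positions, cutoff=0.5)
# The cluster is fully connected (6 directed edges); the isolated
# particle (index 3) has no edges at all.
```

Because the edges are recomputed from current positions, the graph is "dynamic": as particles move, connections can appear and disappear, which is what lets the same machinery cover rigid bodies, foams, and liquids.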

The graphs are constructed as the basis for a machine-learning system called a graph neural network. In training, the model over time learns how particles in different materials react and reshape. It does so by implicitly calculating various properties for each particle — such as its mass and elasticity — to predict if and where the particle will move in the graph when perturbed.

The model then leverages a “propagation” technique, which instantaneously spreads a signal throughout the graph. The researchers customized the technique for each type of material — rigid, deformable, and liquid — to shoot a signal that predicts particles’ positions at certain incremental time steps. At each step, it moves and reconnects particles, if needed.
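
The propagation step can be illustrated with a toy message-passing loop. The averaging rule below is an invented stand-in, far simpler than the learned propagation in DPI-Nets, but it shows the mechanism: at each step, every particle blends its own signal with the mean signal of its in-neighbors, so a perturbation travels along the edges.

```python
# Toy message passing over a directed particle graph (illustrative only).
def propagate(signal, edges, steps=1):
    """Spread each particle's signal to its out-neighbors for `steps` rounds."""
    for _ in range(steps):
        incoming = {i: [] for i in range(len(signal))}
        for src, dst in edges:
            incoming[dst].append(signal[src])
        # Each particle keeps half its own signal and takes half from neighbors.
        signal = [
            0.5 * signal[i] + 0.5 * sum(incoming[i]) / len(incoming[i])
            if incoming[i] else signal[i]
            for i in range(len(signal))
        ]
    return signal

# Chain 0 -> 1 -> 2; perturb particle 0 and watch the signal travel.
edges = [(0, 1), (1, 2)]
signal = propagate([1.0, 0.0, 0.0], edges, steps=2)
# After two steps the perturbation has reached particle 2.
```

In the real model this blending is replaced by learned neural functions, and multiple propagation rounds per time step let effects cross the whole graph at once.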

For example, if a solid box is pushed, perturbed particles will be moved forward. Because all particles inside the box are rigidly connected with each other, every other particle in the object undergoes the same calculated translation and rotation. Particle connections remain intact and the box moves as a single unit. But if an area of deformable foam is indented, the effect will be different. Perturbed particles move forward a lot, surrounding particles move forward only slightly, and particles farther away won’t move at all. With liquids being sloshed around in a cup, particles may completely jump from one end of the graph to the other. The graph must learn to predict where and how much all affected particles move, which is computationally complex.
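
The rigid-versus-deformable contrast can be made concrete with two toy update rules. The exponential falloff is an invented stand-in for what the network actually learns: a rigid body shifts every particle by the same vector, while a deformable one attenuates the displacement with distance from the poke.

```python
# Illustrative contrast between material types (formulas are assumptions).
import math

def move_rigid(positions, push):
    """Every particle translates by the same push vector."""
    return [(x + push[0], y + push[1]) for x, y in positions]

def move_deformable(positions, push, poke_at, falloff=1.0):
    """Displacement decays exponentially with distance from the poke point."""
    moved = []
    for x, y in positions:
        w = math.exp(-falloff * math.dist((x, y), poke_at))
        moved.append((x + w * push[0], y + w * push[1]))
    return moved

positions = [(0.0, 0.0), (2.0, 0.0)]
rigid = move_rigid(positions, push=(1.0, 0.0))
soft = move_deformable(positions, push=(1.0, 0.0), poke_at=(0.0, 0.0))
# Rigid: both particles advance by 1.0.
# Deformable: the particle at the poke moves fully; the far one barely moves.
```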

Shaping and adapting

In their paper, the researchers demonstrate the model by tasking the two-fingered RiceGrip robot with clamping target shapes out of deformable foam. The robot first uses a depth-sensing camera and object-recognition techniques to identify the foam. The researchers randomly select particles inside the perceived shape to initialize the position of the particles. Then, the model adds edges between particles and reconstructs the foam into a dynamic graph customized for deformable materials.

Because of the learned simulations, the robot already has a good idea of how each touch, given a certain amount of force, will affect each of the particles in the graph. As the robot starts indenting the foam, it iteratively matches the real-world position of the particles to the targeted position of the particles. Whenever the particles don’t align, it sends an error signal to the model. That signal tweaks the model to better match the real-world physics of the material.
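
The predict-compare-tweak loop can be sketched as a toy parameter-adaptation routine. The one-parameter linear "foam" below is a made-up stand-in for the learned simulator; the point is the feedback structure: predict an outcome, observe the real one, and use the error to nudge the model toward the real material's physics.

```python
# Toy adapt-while-acting loop (illustrative; not the paper's algorithm).
def adapt(model_stiffness, true_stiffness, force=1.0, lr=0.5, steps=20):
    """Refine a stiffness estimate from repeated prediction errors."""
    for _ in range(steps):
        predicted = model_stiffness * force   # model's guess at the deformation
        observed = true_stiffness * force     # what the real foam actually did
        error = observed - predicted          # the error signal sent back
        model_stiffness += lr * error * force # tweak the model toward reality
    return model_stiffness

learned = adapt(model_stiffness=0.2, true_stiffness=0.8)
# The stiffness estimate converges toward the real value, 0.8.
```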

Next, the researchers aim to improve the model to help robots better predict interactions with partially observable scenarios, such as knowing how a pile of boxes will move when pushed, even if only the boxes at the surface are visible and most of the other boxes are hidden.

The researchers are also exploring ways to combine the model with an end-to-end perception module by operating directly on images. This will be a joint project with Dan Yamins’s group; Yamins recently completed his postdoc at MIT and is now an assistant professor at Stanford University. “You’re dealing with these cases all the time where there’s only partial information,” Wu says. “We’re extending our model to learn the dynamics of all particles, while only seeing a small portion.”


Teaching & Research Forum / Can science writing be automated?
« on: April 20, 2019, 01:11:59 PM »
Can science writing be automated?
A neural network can read scientific papers and render a plain-English summary.

David L. Chandler | MIT News Office
April 17, 2019

The work of a science writer, including this one, includes reading journal papers filled with specialized technical terminology, and figuring out how to explain their contents in language that readers without a scientific background can understand.

Now, a team of scientists at MIT and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two.

Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists scan a large number of papers to get a preliminary sense of what they’re about. But the approach the team developed could also find applications in a variety of other areas besides language processing, including machine translation and speech recognition.

The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a senior scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.

From AI for physics to natural language

The work came about as a result of an unrelated project, which involved developing new artificial intelligence approaches based on neural networks, aimed at tackling certain thorny problems in physics. However, the researchers soon realized that the same approach could be used to address other difficult computational problems, including natural language processing, in ways that might outperform existing neural network systems.

“We have been doing various kinds of work in AI for a few years now,” Soljačić says. “We use AI to help with our research, basically to do physics better. And as we got to be more familiar with AI, we would notice that every once in a while there is an opportunity to add to the field of AI because of something that we know from physics — a certain mathematical construct or a certain law in physics. We noticed that hey, if we use that, it could actually help with this or that particular AI algorithm.”

This approach could be useful in a variety of specific kinds of tasks, he says, but not all. “We can’t say this is useful for all of AI, but there are instances where we can use an insight from physics to improve on a given AI algorithm.”

Neural networks in general are an attempt to mimic the way humans learn certain new things: The computer examines many different examples and “learns” what the key underlying patterns are. Such systems are widely used for pattern recognition, such as learning to identify objects depicted in photos.

But neural networks in general have difficulty correlating information from a long string of data, such as is required in interpreting a research paper. Various tricks have been used to improve this capability, including techniques known as long short-term memory (LSTM) and gated recurrent units (GRU), but these still fall well short of what’s needed for real natural-language processing, the researchers say.

The team came up with an alternative system, which instead of being based on the multiplication of matrices, as most conventional neural networks are, is based on vectors rotating in a multidimensional space. The key concept is something they call a rotational unit of memory (RUM).

Essentially, the system represents each word in the text by a vector in multidimensional space — a line of a certain length pointing in a particular direction. Each subsequent word swings this vector in some direction, represented in a theoretical space that can ultimately have thousands of dimensions. At the end of the process, the final vector or set of vectors is translated back into its corresponding string of words.
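
The core intuition, words as rotations of a state vector, can be shown in two dimensions. This is a drastic simplification of the actual RUM unit: here each "word" is just an angle, and the useful property on display is that rotations preserve the vector's length, one reason rotation-based memory resists the fading caused by repeated multiplication with arbitrary matrices.

```python
# 2-D toy version of rotating a memory vector (the real RUM operates in
# high-dimensional space with learned rotations; this is an illustration).
import math

def rotate(vec, angle):
    """Rotate a 2-D vector counterclockwise by `angle` radians."""
    x, y = vec
    return (x * math.cos(angle) - y * math.sin(angle),
            x * math.sin(angle) + y * math.cos(angle))

state = (1.0, 0.0)
for angle in [0.3, -0.1, 0.5]:   # each "word" swings the state vector
    state = rotate(state, angle)

norm = math.hypot(*state)
# Rotations compose (total angle 0.7 here) and the norm stays exactly 1.0,
# however many words are processed.
```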

“RUM helps neural networks to do two things very well,” Nakov says. “It helps them to remember better, and it enables them to recall information more accurately.”

After developing the RUM system to help with certain tough physics problems such as the behavior of light in complex engineered materials, “we realized one of the places where we thought this approach could be useful would be natural language processing,” says Soljačić,  recalling a conversation with Tatalović, who noted that such a tool would be useful for his work as an editor trying to decide which papers to write about. Tatalović was at the time exploring AI in science journalism as his Knight fellowship project.

“And so we tried a few natural language processing tasks on it,” Soljačić says. “One that we tried was summarizing articles, and that seems to be working quite well.”

The proof is in the reading

As an example, they fed the same research paper through a conventional LSTM-based neural network and through their RUM-based system. The resulting summaries were dramatically different.

The LSTM system yielded this highly repetitive and fairly technical summary: “Baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat.

Based on the same paper, the RUM system produced a much more readable summary, and one that did not include the needless repetition of phrases: Urban raccoons may infect people more than previously assumed. 7 percent of surveyed individuals tested positive for raccoon roundworm antibodies. Over 90 percent of raccoons in Santa Barbara play host to this parasite.

Already, the RUM-based system has been expanded so it can “read” through entire research papers, not just the abstracts, to produce a summary of their contents. The researchers have even tried using the system on their own research paper describing these findings — the paper that this news story is attempting to summarize.

Here is the new neural network’s summary: Researchers have developed a new representation process on the rotational unit of RUM, a recurrent memory that can be used to solve a broad spectrum of the neural revolution in natural language processing.

It may not be elegant prose, but it does at least hit the key points of information.

Çağlar Gülçehre, a research scientist at the British AI company DeepMind Technologies, who was not involved in this work, says this research tackles an important problem in neural networks, having to do with relating pieces of information that are widely separated in time or space. “This problem has been a very fundamental issue in AI due to the necessity to do reasoning over long time-delays in sequence-prediction tasks,” he says. “Although I do not think this paper completely solves this problem, it shows promising results on the long-term dependency tasks such as question-answering, text summarization, and associative recall.”

Gülçehre adds, “Since the experiments conducted and model proposed in this paper are released as open-source on GitHub, as a result many researchers will be interested in trying it on their own tasks. … To be more specific, potentially the approach proposed in this paper can have very high impact on the fields of natural language processing and reinforcement learning, where the long-term dependencies are very crucial.”

The research received support from the Army Research Office, the National Science Foundation, the MIT-SenseTime Alliance on Artificial Intelligence, and the Semiconductor Research Corporation. The team also had help from the Science Daily website, whose articles were used in training some of the AI models in this research.


Faculty Sections / In China, a link between happiness and air quality
« on: January 24, 2019, 11:25:01 AM »
In China, a link between happiness and air quality
Moods expressed on social media tend to decline when air pollution gets worse, study finds.

Helen Knight | MIT News correspondent
January 21, 2019

For many years, China has been struggling to tackle high pollution levels that are crippling its major cities. Indeed, a recent study by researchers at the Chinese University of Hong Kong has found that air pollution in the country causes an average of 1.1 million premature deaths each year and costs its economy $38 billion.

Now researchers at MIT have discovered that air pollution in China’s cities may be contributing to low levels of happiness amongst the country’s urban population.

In a paper published today in the journal Nature Human Behaviour, a research team led by Siqi Zheng, the Samuel Tak Lee Associate Professor in MIT’s Department of Urban Studies and Planning and Center for Real Estate, and the Faculty Director of MIT China Future City Lab, reveals that higher levels of pollution are associated with a decrease in people’s happiness levels.

The paper also includes co-first author Jianghao Wang of the Chinese Academy of Sciences, Matthew Kahn of the University of Southern California, Cong Sun of the Shanghai University of Finance and Economics, and Xiaonan Zhang of Tsinghua University in Beijing.

Despite an annual economic growth rate of 8 percent, satisfaction levels amongst China’s urban population have not risen as much as would be expected.

Alongside inadequate public services, soaring house prices, and concerns over food safety, air pollution — caused by the country’s industrialization, coal burning, and increasing use of cars — has had a significant impact on quality of life in urban areas.

Research has previously shown that air pollution is damaging to health, cognitive performance, labor productivity, and educational outcomes. But air pollution also has a broader impact on people’s social lives and behavior, according to Zheng.

To avoid high levels of air pollution, for example, people may move to cleaner cities or green buildings, buy protective equipment such as face masks and air purifiers, and spend less time outdoors.

“Pollution also has an emotional cost,” Zheng says. “People are unhappy, and that means they may make irrational decisions.”

On polluted days, people have been shown to be more likely to engage in impulsive and risky behavior that they may later regret, possibly as a result of short-term depression and anxiety, according to Zheng.

“So we wanted to explore a broader range of effects of air pollution on people’s daily lives in highly polluted Chinese cities,” she says.

To this end, the researchers used real-time data from social media to track how changing daily pollution levels impact people’s happiness in 144 Chinese cities.

In the past, happiness levels have typically been measured using questionnaires. However, such surveys provide only a single snapshot; people’s responses tend to reflect their overall feeling of well-being, rather than their happiness on particular days.

“Social media gives a real-time measure of people’s happiness levels and also provides a huge amount of data, across a lot of different cities,” Zheng says.

The researchers used information on urban levels of ultrafine particulate matter — PM2.5 concentration — from the daily air quality readings released by China’s Ministry of Environmental Protection. Airborne particulate matter has become the primary air pollutant in Chinese cities in recent years, and PM2.5 particles, which measure less than 2.5 microns in diameter, are particularly dangerous to people’s lungs.

To measure daily happiness levels for each city, the team applied a machine-learning algorithm to analyze 210 million geotagged tweets from China’s largest microblogging platform, Sina Weibo.

The tweets cover a period from March to November 2014. For each tweet, the researchers applied the machine-trained sentiment analysis algorithm to measure the sentiment of the post. They then calculated the median value for that city and day, the so-called expressed happiness index, ranging from 0 to 100, with 0 indicating a very negative mood, and 100 a very positive one.
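
The index computation as described, per-tweet sentiment scores reduced to a per-city, per-day median on a 0-to-100 scale, can be sketched as follows. The scores here are invented; in the study they came from a learned sentiment classifier.

```python
# Sketch of the "expressed happiness index": median tweet sentiment per
# city and day, scaled to 0-100. Sentiment scores below are made up.
from collections import defaultdict
from statistics import median

tweets = [
    ("Beijing",  "2014-03-01", 0.30),
    ("Beijing",  "2014-03-01", 0.40),
    ("Beijing",  "2014-03-01", 0.80),
    ("Shanghai", "2014-03-01", 0.60),
    ("Shanghai", "2014-03-01", 0.70),
]

# Group sentiment scores by (city, day).
by_city_day = defaultdict(list)
for city, day, score in tweets:
    by_city_day[(city, day)].append(score)

# Median per group, scaled so 0 = very negative mood, 100 = very positive.
happiness_index = {key: 100 * median(scores)
                   for key, scores in by_city_day.items()}
# Beijing 2014-03-01 -> 40.0; Shanghai 2014-03-01 -> 65.0
```

Using the median rather than the mean keeps a handful of extreme tweets from dominating a city's daily score.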

Finally, the researchers merged this index with the daily PM2.5 concentration and weather data.

They found a significantly negative correlation between pollution and happiness levels. What’s more, women were more sensitive to higher pollution levels than men, as were those on higher incomes.
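
The headline correlation can be illustrated with a small Pearson computation. The numbers are invented to mimic the reported sign of the relationship, not the study's data.

```python
# Pearson correlation between daily PM2.5 and the happiness index
# (toy data; illustrative of the negative relationship only).
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

pm25 = [20, 45, 80, 120, 160]      # invented daily PM2.5 readings
happiness = [72, 66, 58, 51, 44]   # invented expressed-happiness values
r = pearson(pm25, happiness)
# r is strongly negative: expressed happiness falls as pollution rises.
```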

When the researchers looked at the type of cities that the tweets originated from, they found that people from the very cleanest and very dirtiest cities were the most severely affected by pollution levels.

This may be because those people who are particularly concerned about their health and air quality tend to move to clean cities, while those in very dirty cities are more aware of the damage to their health from long-term exposure to pollutants, Zheng says.

“Through a creative use of social media data, the authors convincingly demonstrate a strong relationship between air quality and expressed happiness, a subjective measure of well-being,” says Shanjun Li, a professor of environmental economics at Cornell University, who was not involved in the research.

“The study adds to the growing scientific knowledge on the social cost of air pollution by focusing on the cost borne by the ‘silent majority’ who do not typically show up in the studies based on morbidity and mortality outcomes,” Li says.

Zheng now hopes to continue her research into the impact of pollution on people’s behavior, and to investigate how China’s politicians will respond to the increasing public demand for cleaner air.


Faculty Sections / Merging engineering and education
« on: January 24, 2019, 11:23:48 AM »
Merging engineering and education
Senior and first-generation student Nikayah Etienne aims to incorporate hands-on science in under-resourced classrooms.

Gina Vitale | MIT News correspondent
January 23, 2019

Nikayah Etienne’s mother, an immigrant from the Caribbean island of Dominica, was passionate about her daughter’s education. At her mother’s insistence, Etienne spent her Saturday mornings in the classroom for additional schooling throughout middle school. She wasn’t a fan of the extra education at the time — but looking back, she thinks it paved the way for her to become the first person in her family to earn a bachelor’s degree.

Etienne grew up in a largely Caribbean neighborhood in Brooklyn, New York, and attended one of the city’s magnet high schools, where she excelled in the STEM fields. At first, it seemed everyone was telling her to become a doctor — but her calculus teacher recognized her talent for math and science, and encouraged her to consider engineering. That’s when Etienne started looking into MIT.

After her junior year of high school, she participated in the intensive Minority Introduction to Engineering and Science (MITES) program at MIT, where she stayed in Simmons Hall and attended six weeks of classes. It was there that she took her first real engineering class — underwater robotics.

“MITES definitely solidified the fact that I should pursue engineering, especially since I’m the kind of person that likes hands-on learning instead of lecture-style learning,” she says.

Now a senior, Etienne is majoring in mechanical engineering with a concentration in education. Her aim is to identify ways to merge the two.

“Right now, I’m trying to focus on learning about different equitable teaching practices and different education technology platforms, in an effort to see the overlap between engineering and education and how it can be improved in under-resourced communities,” she says.

Researching teaching

During Etienne’s first research project at MIT, she was on a team of four in the Teaching Systems Lab analyzing online learning platforms. Basically, a group of students would join an online forum and participate in discussions. Each of these students had identified their political affinity — say, Democrat or Republican — to the researchers, but not to each other. Etienne and her team then analyzed how those students interacted.

This past January, Etienne shifted her research to be more hands-on. Through a grant from the Priscilla King Gray Public Service Center, she traveled to Dominica, her mother’s original home, to see how Hurricane Maria had impacted education there.

She spent six weeks in a school helping with a class of third graders, while observing the teaching style of the instructors, the learning style of the students, and other dynamics of the classroom. The second and third grade classes shared a UNICEF tent and were separated with a divider. Books had been lost in the hurricane, and donations had not fully restored the inventory. The instructor taught on a very small whiteboard, and there was no digital technology to speak of. Etienne noticed there weren’t a lot of hands-on learning opportunities for the students.

“That experience made me realize that I do want to introduce engineering or STEM in general to underrepresented communities, because I really saw the challenges that occurred from not having resources in schools,” she says.

Currently, Etienne has returned to the Teaching Systems Lab to work on its equity team. On one project, she assists in developing educational interfaces that train community members, law enforcement and criminal justice officials, and educators to recognize their biases and to better contextualize social media posts by young people of color experiencing violence. In her second project, she focuses on designing multimedia “practice spaces,” or immersive simulations, for teachers in training to develop equitable teaching approaches and mindsets.

Service in the school and the city

Community service plays a large role in Etienne’s life outside of her studies. In the fall of her first year, she became a counselor for Camp Kesem, a summer camp for kids whose parents are affected by cancer. In her sophomore year, she served as the treasurer for the Black Women’s Alliance, helping to plan their 50-year reunion.

Starting in the spring of her second year, she also worked for three semesters as a lab assistant for 2.678 (Electronics for Mechanical Systems), helping teach students how to build electronic systems. Additionally, Etienne served as a STEM mentor for the Office of Engineering Outreach Programs, working with a middle schooler every other Saturday on little engineering projects.

In the fall of her junior year, Etienne became an admissions ambassador, helping to identify minority students that would be great fits for the MIT community. In the spring, she was elected to be the social chair of MIT Class Awareness Support and Equality (CASE), and she currently serves as vice president of the student group.

“Now, students have a say in what initiatives get implemented for low income students. MIT’s efforts in trying to better the college experience of low income students at MIT is very transformative,” she says. “[It’s] something that’s really never been done before, or hasn’t been done to the extent that CASE has been doing, so I’m glad I’m helping with those efforts.”

This fall, Etienne joined the citywide chapter of public service sorority Delta Sigma Theta Sorority, which spans five local colleges and universities. The organization’s focus is to provide service to the black community around Cambridge, both on college campuses and to the general public. It has held food and clothing drives, educated students about AIDS on World AIDS Day, and is working on holding a black-owned business pop-up shop in the future. She currently serves as the second vice president of her chapter. Etienne’s favorite part of being in this sorority is the strong sisterhood that she knows will continue to support her throughout her lifetime.

“Now I’m serving an area that is way bigger than just the realm of MIT,” she says. “So I’m really glad that I’m able to provide programming to our service schools and serve underrepresented communities around Cambridge.”

Fun and future

For fun, Etienne loves spending time in the city. Her preferred activities downtown include escape rooms, paint bars, and sports tournaments. An enthusiast of music and dancing, she also frequents concerts, performances, and workshops led by professional dancers. And she enjoys checking out new restaurants — her current favorite is La Fabrica in Central Square.

Whether or not she remains in the city she’s had these adventures in, Etienne aims to get some experience in the engineering industry before continuing her education.

“For right now I want the engineering experience to develop my engineering skills and mindset,” she says.

After that, she wants to go to graduate school, although she hasn’t decided exactly what type of program to apply for.


Latest Technology / New Boeing Business to Tackle AI, Advanced Computing
« on: November 04, 2018, 08:53:12 PM »
New Boeing Business to Tackle AI, Advanced Computing
By Nick Zazulia | October 18, 2018   

Boeing has launched a new business tasked with research and development of solutions in artificial intelligence (AI), secure communications and complex systems optimization for commercial and government applications.

Called Disruptive Computing and Networks (DC&N), the new group will operate out of Southern California as part of Boeing Engineering, Test & Technology under the direction of VP and General Manager Charles Toups. Toups formerly headed Boeing Research & Technology (BR&T), where he will be replaced by current VP of Aeromechanics Technology Naveed Hussain.

Boeing said that DC&N will work closely with the company's investment arm, HorizonX, to "identify external partners for collaboration to accelerate growth," though a company spokesman told Avionics International that the relationship would not change operating procedure for either group or investment plans for HorizonX. Both organizations are overseen by CTO Greg Hyslop.

"Advanced computing and communications technologies are increasingly at the core of all aerospace innovation," said Hyslop. "We're excited to stand up the Disruptive Computing and Networks organization because it will help us develop new businesses and partnerships in this rapidly expanding field."

Boeing said that DC&N's initial products would be in the area of computing and neuromorphic processors to aid in the quick solution of complex problems and pattern detection. A representative said that the company would look for applications as research evolves, but an example would be the real-time analysis and optimization of global air traffic routes for both manned and unmanned vehicles.

The company will staff DC&N with a combination of incumbent Boeing employees, largely from BR&T, and new hires. Over the next five years, it tentatively expects to hit an employment figure of about 500 at the shop, according to a spokesman. BR&T, for comparison, comprises nearly 4,000 engineers, scientists and technicians across 11 worldwide facilities.


5 Technologies Airbus is Working on as Faury Becomes CEO
By Woodrow Bellamy III | October 9, 2018   

Guillaume Faury has been selected by the Airbus board of directors to succeed Tom Enders as the next CEO of Airbus. Faury currently serves as the president of the commercial aircraft division of Airbus and will officially take over as CEO April 10, 2019.

Faury began his career with Airbus when the company’s helicopter division was still branded as Eurocopter, where he served in various senior management roles before eventually becoming CEO of Airbus Helicopters in 2013. In early 2018, he transitioned to the head of Airbus Commercial Aircraft and is now preparing to become CEO of Airbus.

Here are five future facing technologies that Airbus is researching and developing across its various divisions that could become a reality under Faury.

E-Fan X
The E-Fan X is a near-term electric-hybrid propulsion technology demonstrator aircraft being developed under a three-way partnership between Airbus, Rolls-Royce and Siemens. Airbus released the latest program update on the E-Fan X at the 2018 Farnborough Air Show, where the company noted that the demonstrator will test a two-megawatt hybrid-electric propulsion system.

The E-Fan X. Image courtesy of Airbus.

According to Siemens, the electric propulsion system's generator is powered by a turbine in the fuselage. E-Fan X will also feature lithium-ion batteries with 700 kilowatts of power. Airbus is providing the overall integration of the hybrid-electric propulsion system and batteries, the control architecture, and the integration with aircraft flight controls.

Airbus plans to begin parts manufacturing for the E-Fan X in 2019, followed by ground testing and a planned first flight by the end of 2020. The demonstrator will provide key insights and data for Airbus in terms of eventually integrating electric-hybrid propulsion technology into future passenger airframe designs.

Vahana
Vahana, the self-piloted electric-vertical-takeoff-and-landing (EVTOL) aircraft being developed by Airbus' Silicon Valley-based A3, took its first flight Jan. 31, 2018, reaching a height of 16 feet and remaining airborne for 53 seconds. Powered by eight propellers with a primary and backup battery system, Vahana is envisioned to be Airbus' offering for the future urban air mobility market to reduce congestion in traffic-choked cities.

Airbus A3 Vahana. Photo courtesy of Airbus

Dennis Muilenburg, CEO of Airbus rival Boeing, recently told analysts attending the Morgan Stanley Laguna Conference that he sees major potential in the electric air taxi prototype Boeing is developing. Uber has been one of the leading major technology companies driving interest in the development of a future urban air taxi with its Uber Elevate division. The company wants to start its flying taxi service by 2023.

An August 2018 blog post published by the Vahana project team predicts Vahana will eventually see demand for “millions of flight hours a year,” but gives no timeline on when the aircraft could become a reality.

OneWeb Satellite Constellation
Airbus Defense and Space has been tasked with designing and building more than 900 satellites to provide high-speed global internet access for the OneWeb constellation. Satellite operator OneWeb wants to use the ambitious constellation to dramatically lower the cost of access to high-speed satellite-based internet.

While the future goals of OneWeb are a disruptor for the satellite communications industry, it could also hold major implications for the commercial airline industry. In February, Airbus, Delta Air Lines, OneWeb, Sprint and Bharti Airtel announced the formation of the Seamless Air Alliance in an effort to give mobile members the ability to extend their services to commercial airlines.

There have been relatively few announcements from OneWeb recently, other than a note on its website stating that initial production and launch into low-Earth orbit of its satellites is slated to begin in 2018.

OneWeb concept of operations for satellite constellation.

Project ICARO-EU
A major goal for the commercial aircraft division of Airbus is to establish more flexibility in the air-to-ground and satellite connectivity architectures featured on its airframes. One initiative the company has undertaken in its efforts toward more flexible connected aircraft is “Project ICARO-EU.” The goal of the project is to create a gate-to-gate direct air-to-ground communications system to enable connectivity on aircraft flying in European airspace. As part of the project, Airbus has partnered with Swedish engineering and technology university KTH Sweden, Italian telecommunications research center Create-Net and U.S.-based telecommunications company Ericsson.

Ericsson, as the telecommunications vendor, will provide radio connectivity equipment to Airbus, which will coordinate the integration of the ICARO-EU system and provide it to airline companies. The system also integrates machine-type communications to support various wireless onboard applications and use cases.

One of the components of the project is to enable license-assisted access (LAA) inside an aircraft. This access uses cellular communications within unlicensed frequency bands to bring passengers more network capacity.

Another goal of the project is to provide improved connectivity for transportation safety boards and European flight movement tracking agencies through enriched monitoring and management approaches to include live cockpit video streams and cloud storage of flight data information.

Future Combat Air System (FCAS)
In April, Airbus announced a new partnership with Dassault Aviation to develop Europe’s Future Combat Air System (FCAS), which is slated to complement and eventually replace the current generation of Eurofighter and Rafale fighter aircraft between 2035 and 2040. The two companies previously collaborated on the development of Europe’s medium-altitude, long-endurance, new-generation drone program. Now they’re working on demonstrators to include both manned and unmanned concepts for the FCAS that could be ready to fly by 2025.

In September, Airbus performed a manned-unmanned teaming test over the Baltic Sea. The trial flights included demonstrations with five Airbus-built Do-DT25 target drones controlled by a mission group commander airborne in a manned command-and-control (C2) aircraft. As part of the demonstration, Airbus developed an advanced flight management control system for the unmanned aircraft, combining fully automatic guidance, navigation and control with intelligent swarming.

Results from the test will be used by Airbus engineers in their continued development of next-generation technologies for the European FCAS.


How to Visualize Your Qualitative User Research Results for Maximum Impact

When thinking about visualization of research results, many people will automatically have an image of a graph in mind. Do you have that image, too? You would be right in thinking that many research results benefit from a graph-like visualization, showing trends and anomalies. But this is mainly true for results from quantitative user research. Graphs are often not the best way to communicate the results from qualitative user research methods such as interviews or observations. Frequently, the number of participants in these types of studies is too low to create meaningful graphs. Moreover, the insights you will want to communicate sometimes don’t translate to a clean number. Let’s show you how to visualize more subjective and fuzzy data from qualitative user research methods, in a way that communicates the essential insights to other stakeholders, so they don’t have to plow through voluminous research reports.

“The purpose of visualization is insight, not pictures.”

— Ben Shneiderman, Distinguished University Professor in Computer Science
When you’re sharing results from qualitative user research efforts, you’re most likely focusing on creating an understanding for the lives people lead, the tasks that they need to fulfill, and the interactions they must carry out to achieve what they need or want to do. This holds true whether you’re using the research in the beginning phases of a design process (getting to know what to design), or using it in the final stages (understanding how well a design is meeting its targets). Depending on the people you’re communicating with (such as your design team or a client) and the type of understanding you need them to have (in other words, a deep empathy for the user needs or a global feeling for the context in which a product will be used), you need to determine what type of visualization suits your results best.

Imagine that you’ve conducted several interviews with people from your target group: overworked and worried informal caregivers of seniors with early signs of dementia. They have shared some essential information with you regarding the fears they have about a new product that’s supposed to help them be more independent in the care they provide to their loved ones. You used a thematic analysis technique with lots of Post-it notes to make sense of the data, and you found four categories of fears that are relevant to consider when designing the new product: changes in the relationship, a constant feeling of worrying, lack of competencies, and lack of personal time. You need to share your insights with your design team—so that everyone is on the same page and continues the design process with the same level of empathy for this fragile target group. Also, you need to communicate these insights to your clients: the management team of a healthcare organization. They are hoping to engage informal caregivers more in the care process, since they need to reorganize their budgets and unburden their employees. How would you go about communicating the results that you found? Would you simply give them that short list of four fears? Would you give them a pie chart, showing how often a certain category of fears was mentioned in the interviews? We would argue that this does not lead to the deep understanding you’re aiming for. A list is not immersive enough to trigger any type of empathy. Here, we’ll show you three ways of visualizing your results that are much more effective.

Affinity Diagram
By using Post-it notes for the thematic analysis technique to come to your conclusions on the four main fears that your target group struggles with, you’ve already used a visualization method that we would recommend: an affinity diagram. You have taken quotes and notes from the interviews and have written each of them on a separate Post-it. Then, you started to reorganize them according to similarities, creating themes as you went along. There’s a tremendous amount of information present in the diagram you’ve created as an analysis tool. However, you will need to clean up this diagram so that it better reflects the insights you want to communicate.

You can quickly decide that the categories should reflect the four main fears that you discovered. You then need to ask yourself what pieces of information will help your fellow designers and your client understand what these fears entail. What impact do they have on your users’ lives? When is this fear most prominent? What triggers this fear? Do you have some insight into what can reduce this fear? All this information will already be present in the Post-it notes you collected within a theme. Now you simply have to select the most important ones, and present them in a clear and visually appealing way to accommodate the people you’re communicating this to. You can use quotes or keywords, and—if you happen to have made some observations as well—illustrate them with pictures or drawings. The image below shows what an affinity diagram for this purpose could look like.
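If your notes are already captured digitally, the selection step described above can be roughed out in code. The sketch below is a hypothetical example (the notes, quotes, and theme names are invented for illustration): it groups notes by theme and keeps the most frequently mentioned quotes in each theme, which is a reasonable first pass before you curate the diagram by hand.

```python
from collections import Counter, defaultdict

# Each note is (theme, quote) — the raw output of a thematic analysis session.
# These example notes are invented for illustration.
notes = [
    ("changes in the relationship", "I feel like her parent now, not her son"),
    ("changes in the relationship", "We argue about small things much more"),
    ("constant worrying", "I check my phone every few minutes"),
    ("constant worrying", "I check my phone every few minutes"),
    ("lack of competencies", "I don't know how to react when she's confused"),
    ("lack of personal time", "I haven't seen my friends in months"),
]

def top_quotes_per_theme(notes, k=2):
    """Group notes by theme and keep the k most frequently mentioned quotes."""
    themes = defaultdict(Counter)
    for theme, quote in notes:
        themes[theme][quote] += 1
    return {theme: [quote for quote, _ in counts.most_common(k)]
            for theme, counts in themes.items()}

for theme, quotes in top_quotes_per_theme(notes).items():
    print(theme, "->", quotes)
```

The frequency count is only a heuristic: a quote mentioned once can still be the most revealing one, so treat the output as a starting point for the curated diagram, not the diagram itself.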


What The 4th Industrial Revolution Will Mean For Your Career
Published on October 25, 2018

We are at the beginnings of a new industrial revolution that will not only fundamentally transform most industries and businesses but will also have a wide-reaching impact on our careers. This new 4th industrial revolution will bring change and innovation at an unprecedented rate, fuelled by technological advancements like artificial intelligence, machine learning, big data, the internet of things and more.

Businesses need to be agile to react to rapidly changing technology and ways of doing business, so who they need on their team today might be drastically different to who they will need in even the very near future. This has led to a more flexible approach to workforce planning where businesses dynamically create teams with the talent they need for a period of time, often composed of a flexible internal workforce and contractors who come in for one project and then leave again. This gives businesses the flexibility they need to adapt to changing business needs, expand and contract as business dictates, and recruit individuals with the particular skill-set and technical know-how they require for a project. We often refer to this new environment as the gig economy and I believe we will see a shift from long-term employment to shorter-term more flexible gigs. 

According to a report from LinkedIn, it’s likely that millennials will hold twice as many jobs by the time they are 30 years old as their colleagues who are 10 years older. Regardless of whether you are a freelancer or a full-time employee of an organization, I believe that as a result of this change, individuals must treat their careers more like a business of which they are the CEO. The professionals who behave like any effective CEO and prepare and train to be able to supply the skills and resources customers and businesses seek are going to be in command of their professional futures. These employees are those that have an eye on the horizon for upcoming opportunities.

This new reality is enabled by the explosion of internet-based resources that make it easy for anyone to acquire skills in any discipline quickly and cheaply. It means more people have access to essential training than ever before, and those who want to strike out on their own can make it happen with the help of abundant free (or cheap) education. It’s those workers who take advantage of these resources who will set themselves up for success. Plus, data-powered career and employment sites fueled by rich data streams make it easier to know and compare the opportunities and rewards available to us.

How committed are you to upgrading your skills and knowledge? There are several learning opportunities that you can take advantage of now to enhance your skill-set to keep you marketable for the changing economy. Here are a few:

On-the-Job Training from Top Colleges

Corporations who want to invest in flexible continuing education for their workers from colleges find value with online educational programs such as those offered from ExecOnline. This type of service opens the door to professional development programs from top colleges, but doesn’t require people to leave the office for extended periods of time.

Network for Top-Tier Talent

If the shift for businesses to rely more on contractors rather than full-time personnel is going to work out, there will need to be changes in the way highly skilled talent and those businesses who need them find each other. That’s where businesses such as Toptal come in. This is a network of top-tier talent—developers, designers and financial experts—all screened and verified with a language and personality test, test screenings and test projects. This gives companies a chance to work with freelancers and contractors without any risk at all.

Sometimes it Takes a Crowd

With the power of more than 1 million coders, designers, data scientists and algorithmists at your disposal through the services of Topcoder, organizations have the ability to deliver faster solutions to their tech issues. Through its crowdsourcing marketplace, Topcoder has helped develop and design apps, cognitive solutions, analytics and more quickly and effectively to help businesses get ready for the challenges of tomorrow.

Expand your Knowledge with Massive Open Online Courses (MOOCs)

When you consider all the learning you can access from some of the world’s top universities with just an internet connection, it’s simply incredible. These are free college-level courses on sites such as Coursera, edX and Khan Academy. Although at the end of the courses you will receive certificates of achievement rather than course credits or grades, this is still marketable education and HR executives and recruiters see these certificates as proof of your commitment to upgrading skills and knowledge.

Take charge of your professional development like you’re the CEO of your professional future, because you are. Professionals who show that they are committed to skills acquisition and personal development will be those poised to be the resources needed by tomorrow’s businesses.


We need to talk about the mental health of content moderators
September 27, 2018 3.13pm AEST

Selena Scola worked as a public content contractor, or content moderator, for Facebook in its Silicon Valley offices. She left the company in March after less than a year.

In documents filed last week in California, Scola alleges unsafe work practices led her to develop post-traumatic stress disorder (PTSD) from witnessing “thousands of acts of extreme and graphic violence”.

Facebook acknowledged the work of moderation is not easy in a blog post published in July. In the same post, Facebook’s Vice President of Operations Ellen Silver outlined some of the ways the company supports their moderators:

All content reviewers — whether full-time employees, contractors, or those employed by partner companies — have access to mental health resources, including trained professionals onsite for both individual and group counselling.

But Scola claims Facebook fails to practice what it preaches. Previous reports about its workplace conditions also suggest the support it provides to moderators isn’t enough.

It’s not the first time
Scola’s legal action is not the first of its kind. Microsoft has been involved in a similar case since December 2016, brought by two employees who worked in its child safety team.

In both cases the plaintiffs allege their employer failed to provide sufficient support, despite knowing the psychological dangers of the work.

Both Microsoft and Facebook dispute the claims.

How moderating can affect your mental health
Facebook moderators sift through hundreds of examples of distressing content during each eight-hour shift.

They assess posts including, but not limited to, depictions of violent death – including suicide and murder – self-harm, assault, violence against animals, hate speech and sexualised violence.

Studies in areas such as child protection, journalism and law enforcement show repeated exposure to these types of content has serious consequences. That includes the development of PTSD. Workers also experience higher rates of burnout, relationship breakdown and, in some instances, suicide.

Aren’t there workplace guidelines?
Industries including journalism, law and policing have invested a significant amount of thought and money into best practice policies designed to protect workers.

In Australia, for example, those working in child safety opt in to the work rather than having cases assigned to them. They are then required to undertake rigorous psychological testing to assess whether they are able to emotionally compartmentalise the work effectively. Once working, they have regular mandated counselling sessions and are routinely reassigned to other areas of investigation to limit the amount of exposure.

The tech industry has similar guidelines. In fact, Facebook helped create the Technology Coalition, which aims to eradicate online child sexual exploitation. In 2015, the coalition released its Employee Resilience Guidebook, which outlines occupational health and safety measures for workers routinely viewing distressing materials. While these guidelines are couched as specific to workers viewing child pornography, they are also applicable to all types of distressing imagery.

The guidelines include “providing mandatory group and individual counselling sessions” with a trauma specialist, and “permitting moderators to opt-out” of viewing child pornography.

The guidelines also recommend limiting exposure to disturbing materials to four hours, encouraging workers to switch to other projects to get relief, and allowing workers time off to recover from trauma.

But it’s not just about guidelines
Having support available doesn’t necessarily mean staff feel like they can actually access it. Most of Facebook’s moderators, including Scola, work in precarious employment conditions as outside contractors employed through third party companies.

Working under these conditions has been shown to have a detrimental impact on employee well-being. That’s because these kinds of employees are not only less likely to be able to access support mechanisms, they often feel doing so will risk them losing their job. In addition, low pay can lead to employees being unable to take time off to recover from trauma.

Insecure work can also affect one’s sense of control. As I’ve previously discussed, moderators have little to no control over their workflow. They do not control the type of content that pops up on their screen. They have limited time to make decisions, often with little or no context. And they have no personal say in how those decisions are made.

According to both the filing and media reports on Facebook’s moderator employment conditions, employees are under immense pressure from the company to get through thousands of posts per day. They are also regularly audited, which adds to the stress.

Where to from here?
Adequate workplace support is essential for moderators. Some sections of the industry provide us with best case examples. In particular, the support provided to those who work in online mental health communities, such as Beyond Blue in Australia, is exemplary and provides a good blueprint.

We also need to address the ongoing issue of precarity in an industry that asks people to put their mental health at risk on a daily basis. This requires good industry governance and representation. To this end, Australian Community Managers have recently partnered with the MEAA to push for better conditions for everyone in the industry, including moderators.

As for Facebook, Scola’s suit is a class action. If it’s successful, Facebook could find itself compensating hundreds of moderators employed in California over the past three years. It could also set an industry-wide precedent, opening the door to complaints from thousands of moderators employed across a range of tech and media industries.


Apple shows off three new iPhones and smartwatch to detect heart problems
Tim Cook, Apple’s CEO, at the product launch in Cupertino.

Tech giant unveils new gadgets at California bash, including biggest iPhone yet and Apple Watch with range of health features.

Apple has launched a trio of iPhones and upgraded smartwatches that can detect heart problems at its annual product launch event in Cupertino on Wednesday.

The tech company unveiled a new iteration of the flagship iPhone X with a 5.8in screen, dubbed the iPhone XS – pronounced “ten S” rather than “excess” – as well as the iPhone XS Max, which comes with a larger 6.5in screen.

Both feature an upgraded version of the iPhone X’s dual-lens camera – one of the device’s best features – which allows for better portraits and higher-quality videos even in low light. They also come with Face ID facial recognition for unlocking the device and a richer display.

The company is also launching a lower-cost 6.1in version, the iPhone XR. The iPhone XR comes in a range of candy colours, is made from aluminium rather than stainless steel, has a single camera system and trades the OLED screen of its more expensive counterparts for an edge-to-edge LCD screen.

The last time Apple launched a lower-cost version of the iPhone was the iPhone 5c in 2013, when it took the innards of the previous generation’s iPhone 5 and wrapped it in a plastic body.

All three devices also come with a more powerful ‘A12 Bionic’ processor which allows developers to build more sophisticated video games, augmented reality experiences and other apps with real-time features that couldn’t previously run on a mobile device. Apple demonstrated one app, HomeCourt, that uses the camera viewfinder to track and analyse basketball shots to provide real-time feedback that can help players improve their technique.

“No other chip in the world would allow us to do this,” said Apple’s Kaiann Drance, on stage at the Steve Jobs Theater at Apple Park, the company’s new “spaceship” campus.

“With the new iPhone X line-up and its pricing repositioning, Apple demonstrates once again it excels at extending the lifecycle of its product portfolio through incremental innovation, impressive technical specifications boosting the performance of its devices and smart marketing,” said Forrester analyst Thomas Husson.

The iPhone XR starts at $749 for 64GB of storage; the iPhone XS at $999; and the iPhone XS Max at $1,099. The iPhone XS and XS Max can be pre-ordered from 14 September, and the iPhone XR from 19 October; each will start shipping the following week.

Apple also unveiled the Apple Watch Series 4, with improved cellular connectivity, a larger screen, a more powerful processor and a range of additional health-tracking features.

The smartwatches now feature an electrocardiogram (ECG) sensor which can measure not only the heart’s rate but its rhythm. Such a sensor can detect heart rhythm disorders such as atrial fibrillation, a common cause of strokes. The watches, which have been endorsed by the American Heart Association and have clearance from the FDA, still have the original optical sensors, which flash pulsed light through the skin to detect heart rate.

The device has also been fitted with an improved accelerometer and gyroscope that can detect if someone has fallen over, a feature that could be useful for older people. If a fall is detected and the person remains immobile for more than a minute, the device automatically calls their emergency contact.
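Apple hasn't published its algorithm, but the decision flow described above (treat a hard impact as a fall, then place the call only if the wearer stays immobile for about a minute) can be sketched roughly as follows. The threshold values and function names are illustrative assumptions, not Apple's actual parameters.

```python
FALL_IMPACT_G = 3.0      # illustrative impact threshold, in g (assumed, not Apple's value)
IMMOBILE_SECONDS = 60    # call for help after roughly one minute without movement

def is_hard_fall(accel_g):
    """Treat an acceleration spike above the threshold as a fall."""
    return accel_g >= FALL_IMPACT_G

def should_call_emergency(fall_time, movement_times, now):
    """Call the emergency contact only if a full minute has passed since
    the fall and no movement has been sensed in the meantime."""
    moved_since_fall = any(t > fall_time for t in movement_times)
    return (now - fall_time) >= IMMOBILE_SECONDS and not moved_since_fall

# A fall at t=0 with no movement for 61 seconds triggers the call;
# any detected movement after the fall cancels it.
print(should_call_emergency(0.0, [], 61.0))      # True
print(should_call_emergency(0.0, [30.0], 61.0))  # False
```

The real system presumably fuses accelerometer and gyroscope data and prompts the wearer before dialling; this sketch only captures the two-stage "impact, then immobility" logic the article describes.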

“Fall detection is a feature we hope you never need, but it’s really nice to know it’s there,” said Apple chief operating officer Jeff Williams.

At 40mm and 44mm, the two variants are slightly larger, but thinner, than the previous iterations that measured 38mm and 42mm. The watches can be pre-ordered from 14 September with cellular connectivity, starting at $499, or just GPS, starting at $399.

The older Series 3 watches will remain on sale from $279.




Blockchain technology and cryptocurrency are both significantly affecting the gaming industry, which is expected to become a $143 billion worldwide industry by 2020. That is why the gaming business can be called one of the fastest-growing tech industries in the world. Blockchain gaming is already turning the gaming sector on its head, and this is expected to continue as an avalanche of enthusiast players and investors discover the huge number of opportunities blockchain tech provides.

The blockchain and gaming industries have always had a large demographic overlap. Gamers are accustomed to digital economies and are quick to adopt new technology. The case for combining gaming and blockchain quickly became apparent, creating a new wave of innovations that had never before been considered, yet felt natural and ultimately self-evident. Mobile game companies are already creating digital assets, so the blockchain may be a natural step in their development.

There are a number of mobile games, developed by leading mobile game development companies, that are based on bitcoin technology or offer similar functionality. The combination of blockchain and gaming lets developers create unique approaches to monetization. Organizations will also be able to save millions in payment processing, as fees become negligible through the use of digital currencies. This enables distributors to explore new adoption models using micro-transactions.

Digital Assets
Virtual items such as weapons, tools, accessories and costumes will hold greater value as blockchain infrastructure facilitates their purchase, sharing, gifting and trade. Platforms can also be set up to enable users to sell in-game items, thereby giving those items real monetary value.

One such case is Gods Unchained, the world’s first blockchain-based eSport, made by Fuel Games. It uses the technology to give players trustless ownership of the items they buy or earn in games. This newfound ownership adds an entirely new dimension to gaming, and makes it easy to see why the decentralization of in-game economies matters.
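As a toy model of what "trustless ownership" of in-game items means, the sketch below keeps an append-only ledger of item transfers in plain Python. All names here are hypothetical; real systems such as ERC-721 tokens on Ethereum add cryptographic signatures and distributed consensus on top of this basic idea.

```python
class ItemLedger:
    """Append-only record of who owns each in-game item (toy model only)."""

    def __init__(self):
        self.transfers = []   # full history: (item_id, sender, recipient)
        self.owner_of = {}    # current owner per item id

    def mint(self, item_id, owner):
        """Create a new item and assign its first owner."""
        if item_id in self.owner_of:
            raise ValueError("item already exists")
        self.transfers.append((item_id, None, owner))
        self.owner_of[item_id] = owner

    def transfer(self, item_id, sender, recipient):
        """Only the current owner can give an item away."""
        if self.owner_of.get(item_id) != sender:
            raise PermissionError("sender does not own this item")
        self.transfers.append((item_id, sender, recipient))
        self.owner_of[item_id] = recipient

ledger = ItemLedger()
ledger.mint("sword-042", "alice")
ledger.transfer("sword-042", "alice", "bob")
print(ledger.owner_of["sword-042"])  # bob
```

The key property the article points at is that ownership follows from the transfer history itself rather than from a game publisher's private database; on a real chain that history is replicated and tamper-evident, so no single party can rewrite it.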

Future of the blockchain-enabled gaming industry
*Gamers are flocking to blockchain-enabled games because of the ability to pay for everything from crypto-collectibles to game upgrades with cryptocurrencies like Bitcoin. Even as the value of cryptocurrencies falls and rises, gamers still appreciate the flexibility that comes from paying for their gaming purchases with an assortment of cryptocurrencies.

*Blockchain technology will make it possible to use the same gaming characters across multiple games. When each of your actions and data is verified through smart contracts, why confine a character's game play to only one game? Simply imagine the possibilities of building a reputation for your gaming character and using that character across a large number of gaming systems when all your game play is stored via blockchain technology.

*With blockchain-enabled gaming, hacking becomes far harder to pull off. Blockchain technology will decrease the danger of hacking for gamers.

*Game developers will be able to incorporate blockchain nodes directly into the design and play structure of their games. Blockchain technology will change the way data is stored for gamers, and it will also change the manner in which the games themselves are developed.

These are only a few of numerous ways blockchain will change the gaming industry.

Blockchain technology serves game developers well because it helps them develop new business processes, while at the same time giving users more secure apps.

FuGenX, one of the best mobile game development companies in Dubai, is capable of building apps for the gaming industry and other major industries.


How Virtual Reality is augmenting offline business
Last Published: Mon, Aug 27 2018, 09:26 AM IST
Abhijit Ahaskar

Earlier restricted to building games and multimedia experiences by putting users into them, Virtual Reality (VR) is now increasingly being used in retail, hospitals, educational institutions and design and manufacturing.

New Delhi: A month back, Emirates Airlines began allowing people to take a virtual walk through its luxurious Airbus A380 and book seats of their choice. Dubai-based Emirates Airlines thus became one of the many companies that have begun using Virtual Reality (VR) and Augmented Reality (AR) to provide a first-hand experience of their products to customers.

VR works by immersing users in a virtual world while AR adds virtual objects in the real world to augment that experience. Earlier restricted to building games and multimedia experiences by putting users into them, VR is now increasingly being used in retail, hospitals, educational institutions and design and manufacturing. For instance, Bengaluru-based firm Livspace is using VR to let interior designers create a design blueprint according to a customer’s requirement in 3D or VR. Once the design is shared with customers, they can provide their feedback and suggest further changes. Livspace then creates a shopping cart and provides an estimate of the total cost of redesigning. “Getting a home designed has been a major challenge for customers. With VR, Livspace has managed to address those concerns. This has helped us grow 5x on the consumer side in the last one year. We receive 90 to 100 registrations every day. We have become a major destination for customers who want their home redesigned,” says Ramakant Sharma, co-founder and chief technology officer, Livspace.

Noida-based SmartVizx’s Trezi is another platform that uses VR to help designers visualise in advance how a room would look for their customers. “Physical catalogues and mock-ups, multiple customer iterations, last minute changes—we have long endured these pain points and still do. Trezi has helped us move towards being sustainable and has helped in controlling costs which arise due to multiple and distributed stakeholders. We can now virtually decide what works and prevent real, cost-inducing errors,” says Praveen Rawal, managing director, India Steelcase.

Hospitals are using VR for complicated procedures which require detailed planning before the actual operation is carried out. In 2017, Hyderabad-based Global Hospitals organised VR-based surgical training sessions for 700 doctors, allowing them to witness a laparoscopy surgery live from the operation theatre through a VR headset.

In the automotive sector, VR is being used to provide consumers virtual test drives and a first-hand experience of the interiors of new and upcoming models. Lexus uses Facebook’s Oculus Rift while Volvo uses Google’s Cardboard to offer these experiences to customers through their respective websites and apps.

Meanwhile, the proliferation of AR platforms such as Apple’s ARKit and Google’s ARCore, and the availability of smartphones with built-in AR sensors, has made accessing AR content a lot easier. UK-based AR start-up Blippar has built an indoor visual positioning system which can be leveraged by retailers at airports, malls and stadiums to help customers find their way to their stores. Travel-booking platform Ixigo recently added a feature called AR Coach Position in its iOS app.


The science of superstition – and why people believe in the unbelievable

The number 13, black cats, breaking mirrors and walking under ladders may all be things you actively avoid – if you’re anything like the 25% of people in the US who consider themselves superstitious.

Even if you don’t consider yourself a particularly superstitious person, you probably say “bless you” when someone sneezes, just in case the devil should decide to steal their soul – as our ancestors thought possible during a sneeze.

Superstition also explains why many buildings do not have a 13th floor – preferring to label it 14, 14A, 12B or M (the 13th letter of the alphabet) on elevator button panels because of concerns about superstitious tenants. Indeed, 13% of people in one survey indicated that staying on the 13th floor of a hotel would bother them – and 9% said they would ask for a different room.

On top of this, some airlines such as Air France and Lufthansa, do not have a 13th row. Lufthansa also has no 17th row – because in some countries – such as Italy and Brazil – the typical unlucky number is 17 and not 13.

What is superstition?
Although there is no single definition of superstition, it generally means a belief in supernatural forces – such as fate – the desire to influence unpredictable factors and a need to resolve uncertainty. In this way then, individual beliefs and experiences drive superstitions, which explains why they are generally irrational and often defy current scientific wisdom.

Psychologists who have investigated what role superstitions play, have found that they derive from the assumption that a connection exists between co-occurring, non-related events. For instance, the notion that charms promote good luck, or protect you from bad luck.

Black cats are less likely to be adopted. Does superstition play a part? Shutterstock
For many people, engaging with superstitious behaviours provides a sense of control and reduces anxiety – which is why levels of superstition increase at times of stress and angst. This is particularly the case during times of economic crisis and social uncertainty – notably wars and conflicts. Indeed, researchers have observed how in Germany between 1918 and 1940 measures of economic threat correlated directly with measures of superstition.

Touch wood
Superstitious beliefs have been shown to help promote a positive mental attitude, although they can lead to irrational decisions, such as trusting in the merits of good luck and destiny rather than sound decision making.

Carrying charms, wearing certain clothes, visiting places associated with good fortune, preferring specific colours and using particular numbers are all elements of superstition. And although these behaviours and actions can appear trivial, for some people, they can often affect choices made in the real world.

Lucky horseshoes. Shutterstock
Superstitions can also give rise to the notion that objects and places are cursed – such as Annabelle the doll, which featured in The Conjuring and two other movies and is said to be inhabited by the spirit of a dead girl. A more traditional illustration is the Curse of the Pharaohs, which is said to be cast upon any person who disturbs the mummy of an Ancient Egyptian person – especially a pharaoh.

Numbers themselves can also often be associated with curses. For example, the figure 666 in a licence plate is often featured in stories of misfortune. The most famous case was the numberplate “ARK 666Y”, which is believed to have caused mysterious vehicle fires and “bad vibes” for passengers.

Sporting superstitions
Superstition is also highly prevalent within sport – especially in highly competitive situations. Four out of five professional athletes report engaging with at least one superstitious behaviour prior to performance. Within sport, superstitions have been shown to reduce tension and provide a sense of control over unpredictable, chance factors.

Superstitious practices tend to vary across sports, but there are similarities. Within football, gymnastics and athletics, for example, competitors reported praying for success, checking their appearance in a mirror and dressing well to feel better prepared. Players and athletes also engage with personalised actions and behaviours – such as wearing lucky clothes, kit and charms.

Dayton baseball players try to bring good luck by twirling their fingers. Shutterstock
Famous sportspeople often display superstitious behaviours. Notably, basketball legend Michael Jordan concealed his lucky North Carolina shorts under his Chicago Bulls team kit. Similarly, the tennis legend Björn Borg reportedly wore the same brand of shirt when preparing for Wimbledon.

Rafael Nadal has an array of rituals that he performs each time he plays. These include the manner in which he places his water bottles and taking freezing cold showers. Nadal believes these rituals help him to find focus, flow and perform well.

Walking under ladders
What all this shows is that superstitions can provide reassurance and can help to reduce anxiety in some people. But while this may well be true, research has shown that actions associated with superstitions can also become self-reinforcing – in that the behaviour develops into a habit and failure to perform the ritual can actually result in anxiety.

This is even though the actual outcome of an event or situation is still dependent on known factors – rather than unknown supernatural forces. A notion consistent with the often quoted maxim, “the harder you work (practice) the luckier you get”.

So the next time you break a mirror, see a black cat or encounter the number 13 – don’t worry too much about “bad luck”, as it’s most likely just a trick of the mind.


How the latest tech and some healthy activism can curb fake news
The term “fake news” has become ubiquitous over the past two years. The Cambridge English dictionary defines it as “false stories that appear to be news, spread on the internet or using other media, usually created to influence political views or as a joke”.

As part of a global push to curb the spread of deliberate misinformation, researchers are trying to understand what drives people to share fake news and how its endorsement can propagate through a social network.

But humans are complex social animals, and technology misses the richness of human learning and interactions.

That’s why we decided to take a different approach in our research. We used the latest techniques from artificial intelligence to study how support for – or opposition to – a piece of fake news can spread within a social network. We believe our model is more realistic than previous approaches because individuals in our model learn endogenously from their interactions with the environment, rather than just following prescribed rules. Our novel approach allowed us to learn a number of new things about how fake news is spread.

The main take away from our research is that when it comes to preventing the spread of fake news, privacy is key. It is important to keep your personal data to yourself and be cautious when providing information to large social media websites or search engines.

The most recent wave of technological innovations has brought us the data-centric web 2.0 and with it a number of fundamental challenges to user privacy and the integrity of news shared in social networks. But as our research shows, there’s reason to be optimistic that technology, paired with a healthy dose of individual activism, might also provide solutions to the scourge of fake news.

Modelling human behaviour
Existing literature models the spread of fake news in a social network in one of two ways.

In the first instance, you could model what happens when people observe what their neighbours do and then use this information in a complicated calculation to optimally update their beliefs about the world.

The second approach assumes that people follow a simple majority rule: everyone does what most of their neighbours do.
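That simple majority rule is easy to sketch in code. The ring network, synchronous updates and tie-breaking choice below are illustrative assumptions for exposition – not the learning model from our research:

```python
def majority_step(beliefs, neighbours):
    """One synchronous update: each agent adopts whichever view
    (1 = shares the story, 0 = rejects it) the majority of its
    neighbours currently holds, keeping its own view on a tie."""
    updated = {}
    for agent, current in beliefs.items():
        votes = [beliefs[n] for n in neighbours[agent]]
        share = sum(votes) / len(votes)  # fraction of neighbours sharing
        if share > 0.5:
            updated[agent] = 1
        elif share < 0.5:
            updated[agent] = 0
        else:
            updated[agent] = current
    return updated

# Five agents in a ring; only agent 0 initially shares the fake story.
neighbours = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
beliefs = {i: 0 for i in range(5)}
beliefs[0] = 1
print(majority_step(beliefs, neighbours))  # the lone believer is overruled
```

Under this rule an isolated believer is overruled in a single step, while a connected cluster of believers can persist – exactly the kind of coarse-grained dynamic that our learning agents replace with strategies learned from experience.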

But both approaches have their shortcomings. They cannot mimic what happens when someone’s mind is changed after several conversations or interactions.

Our research differed. We modelled humans as agents who develop their own strategies on how to update their views on a piece of news given their neighbours’ actions. We then introduced an adversary that tried to spread fake news and compared how efficient the adversary was when he had knowledge about the strength of other agents’ beliefs compared to when he didn’t.

So in a real world example, an adversary determined to spread fake news might first read your Facebook profile and see what you believe, then tailor his disinformation to try and match your beliefs to increase the likelihood that you share the fake news he sent to you.

We learnt a few new things about how fake news is spread. For example, we show that providing feedback about news that’s been shared makes it easier for people to detect fake news.

Our work also suggests that artificially injecting a certain amount of fake news into a social network can train users to better spot fake news.

Crucially, we can also use models like ours to come up with strategies on how to curb the spread of fake news.

There are three things we have learned from this research about what everyone can do to stop fake news.

Fighting fake news
Because humans learn from their neighbours, who learn from their neighbours, and so on, everybody who detects and flags fake news can help prevent the spread of fake news on the network. When we modelled how the spread of fake news can be prevented, we found the single best way was to allow users to provide feedback to their friends about a piece of news they shared.

Beyond pointing out fake news, you can also praise a friend when they share a well researched and balanced piece of quality journalism. Importantly, this praise can happen even when you disagree with the conclusion or political point of view expressed in the article. Studies in human psychology and reinforcement learning show that people adapt their behaviour in response to negative and positive feedback – particularly when this feedback comes from within their social circle.

The second big lesson was: keep your data to yourself.

The web 2.0 was built on the premise that companies offer free services in exchange for users’ data. Billions followed the siren’s call, turning Facebook, Google, Twitter, and LinkedIn into multi-billion dollar behemoths. But as these companies grew, more and more data was collected. Some estimate that as much as 90% of all the world’s data was created in the past few years alone.

Do not give your personal information away easily or freely. Whenever possible, use tools that are fully encrypted and that collect very little information about you online. There is a more secure and more privacy-focused alternative for most applications, from search engines to messaging apps.

Social media sites don’t yet have privacy-focused alternatives. Luckily the emergence of blockchain has provided a new technology that could solve the privacy-profitability paradox. Instead of having to trust Facebook to keep your data secure, you can now put it on a decentralised blockchain that was designed to operate as a trustless environment.


Key differences between Python 2 and 3: How to navigate change
 August 22, 2018  Vinodh Kumar

As every programming language evolves, there are big changes between each major release. In this article, Vinodh Kumar explains some of the big differences between Python 2 and Python 3 with examples to help illustrate how the language has changed.

This tutorial will cover the following topics:

Expressions
Print options
Unequal operations
Range
Automated migration
Performance
Some major housekeeping changes
Having problems?

1. Expressions
An expression represents a value – a number, a string, or an instance of a class. Any value is an expression! Anything that does something is a statement: an assignment to a variable or a function call is a statement. Any value contained in that statement is an expression.

Python 2 has two input functions. raw_input() returns whatever you type as a string:

x = raw_input("enter some values")

while input() evaluates what you type as a Python expression. In Python 3, raw_input() is gone and input() always returns a string:

x = input("enter some values")

So, whatever we enter is assigned to the variable x in both versions – but if you enter 2*6, Python 2’s input() gives the evaluated value 12, while Python 3’s input() gives the string "2*6".

Then, how can we get the evaluated expression in Python 3? Use the built-in function eval: wrapping the input in eval turns the entered expression into an evaluated value.

x = eval(input("enter some values"))  # entering 2*6 assigns 12 to x
Detailed expression examples:

Here’s what reading and greeting a user looks like in Python 2:

name = raw_input("What is your name? ")
print "Hello, %s." % name

Here’s how it looks in Python 3:

name = input("What is your name? ")
print("Hello, %s." % name)

As you can clearly see, there is very little difference between the two: raw_input() becomes input(), and print gains parentheses.

2. Print options
In Python 2, print is a statement and does not need parentheses. In Python 3, print is a function, and its arguments need to be written in parentheses.

Python 2


print "hello world"


Python 3


print("hello world")

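Because print is now an ordinary function in Python 3, it also accepts keyword arguments such as sep and end, and it can be assigned to a name like any other object – a small illustration:

```python
# sep joins the arguments; end replaces the trailing newline.
print("hello", "world", sep=", ", end="!\n")   # hello, world!

# As a function, print can be bound to another name and called through it.
show = print
show("hello world")                            # hello world
```

Neither line is possible with the Python 2 print statement, which is part of the grammar rather than an object.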

3. Unequal operations
Let’s move on to the third difference. Python 2 accepts two “not equal” operators: the angle-bracket form <> as well as !=. Python 3 drops <> and keeps only !=, written as an exclamation mark followed by an equals sign.

Python 2 – the <> operator (as well as !=) is used for not equal
Python 3 – only the != operator is used for not equal

Python 2


print 1 <> 1.0    # False, because 1 == 1.0


Python 3


print(1 != 1.0)   # False, because 1 == 1.0


4. Range
Now, let’s turn to ranges. What is a range?

A range generates a sequence of numbers and is most commonly used to iterate over with for loops.

In Python 2, if we write x = range(10) and check the type of x, it returns list. This means that in Python 2, range produces a list: printing x gives [0, 1, 2, 3, 4, 5, 6, 7, 8, 9].

In Python 3, if we write x = range(10) and check the type of x, it returns range. This means that in Python 3, range returns a range object, which produces its numbers lazily as you iterate.

Python 2


print range(0, 10, 1)         # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]


Python 3


print(range(0, 10, 1))        # range(0, 10)
print(list(range(0, 10, 1)))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

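One practical consequence of this change is memory use: a Python 3 range object stores only its start, stop and step, so it stays tiny no matter how many numbers it represents. A quick check (object sizes shown reflect CPython behaviour):

```python
import sys

small = range(10)
big = range(10**6)

# Both range objects have the same fixed size, because values are
# generated on demand rather than stored in a list.
print(sys.getsizeof(big) == sys.getsizeof(small))  # True

# Ranges still support indexing and fast membership tests.
print(big[999999])     # 999999
print(999999 in big)   # True
```

In Python 2, building range(10**6) allocated a million-element list up front; xrange() offered the lazy behaviour that Python 3’s range now provides by default.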

5. Automated migration
So, how do we automate the migration of code from Python 2 to 3?

Here, we can test with a simple program that adds two numbers in Python.

Python 2


n1 = 1
n2 = 2
add = float(n1) + float(n2)
print 'sum of {0} and {1} is {2}'.format(n1, n2, add)

Now, using the 2to3 migration tool, we can convert the above code:


n1 = 1
n2 = 2
add = float(n1) + float(n2)
print('sum of {0} and {1} is {2}'.format(n1, n2, add))

So here we see the code can be converted to Python 3 by running 2to3 on the command line.

Python provides its own tool, called 2to3, which runs a bunch of scripts to translate your Python 2 code into Python 3. While it’s not perfect, it does an amazing job overall. After converting any code, you should go in and manually fix up any remaining problems.

6. Performance
Most of the performance issues have been fixed in this upgrade! When comparing benchmarks between the two versions, the differences are almost negligible.

7. Some major housekeeping changes
Python 2

print parentheses optional.
Prefix a string with u to make it a unicode string.
Division of integers always returns an integer – 5/2=2.
raw_input() reads a string.
input() evaluates the data read.
Generators use .next().
Python 3

print parentheses compulsory.
Strings are unicode by default.
Division of integers may result in a float – 5/2=2.5.
raw_input() is no longer available.
input() always reads a string.
next(generator) replaces generator.next().
2to3 utility for porting Python 2 code.
Dictionary .keys() and .values() return a view, not a list.
Comparison operators can no longer be used between non-comparable types – e.g. None < None raises a TypeError instead of returning False.
The percent (%) string formatting operator is discouraged; use the .format() method or concatenation instead.
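Several of the Python 3 items above are easy to verify with a short script (the behaviour shown is standard CPython 3):

```python
# True division vs floor division in Python 3.
print(5 / 2)     # 2.5
print(5 // 2)    # 2

# dict.keys() returns a live view, not a list: it reflects later changes.
d = {"a": 1}
keys = d.keys()
d["b"] = 2
print(sorted(keys))   # ['a', 'b']

# next() is a built-in that works on any iterator or generator.
gen = (n * n for n in range(3))
print(next(gen))      # 0

# Ordering comparisons between non-comparable types raise TypeError.
try:
    None < None
except TypeError:
    print("TypeError")

# str.format() in place of the % operator.
print("sum of {0} and {1} is {2}".format(1, 2, 3.0))  # sum of 1 and 2 is 3.0
```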
8. Having problems?
You may encounter an error here and there if you have been working in Python 2.x for some time. That’s fine! Just search for the problem online – it’s almost certain that someone else hit it too when migrating.

