Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - tany

Pages: 1 ... 3 4 [5] 6 7 ... 26
61
Faculty Forum / Bridging the gap between human and machine vision
« on: February 19, 2020, 11:34:12 AM »
Suppose you look briefly from a few feet away at a person you have never met before. Step back a few paces and look again. Will you be able to recognize her face? "Yes, of course," you probably are thinking. If this is true, it would mean that our visual system, having seen a single image of an object such as a specific face, recognizes it robustly despite changes to the object's position and scale, for example. On the other hand, we know that state-of-the-art classifiers, such as vanilla deep networks, will fail this simple test.

In order to recognize a specific face under a range of transformations, neural networks need to be trained with many examples of the face under the different conditions. In other words, they can achieve invariance through memorization, but cannot do it if only one image is available. Thus, understanding how human vision can pull off this remarkable feat is relevant for engineers aiming to improve their existing classifiers. It also is important for neuroscientists modeling the primate visual system with deep networks. In particular, it is possible that the invariance with one-shot learning exhibited by biological vision requires a rather different computational strategy than that of deep networks.

A new paper in Nature Scientific Reports by Yena Han, an MIT Ph.D. candidate in electrical engineering and computer science, and colleagues, titled "Scale and translation-invariance for novel objects in human vision," studies this phenomenon more carefully in order to create novel, biologically inspired networks.

"Humans can learn from very few examples, unlike deep networks. This is a huge difference with vast implications for engineering of vision systems and for understanding how human vision really works," states co-author Tomaso Poggio—director of the Center for Brains, Minds and Machines (CBMM) and the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT. "A key reason for this difference is the relative invariance of the primate visual system to scale, shift, and other transformations. Strangely, this has been mostly neglected in the AI community, in part because the psychophysical data were so far less than clear-cut. Han's work has now established solid measurements of basic invariances of human vision."

To differentiate invariance arising from intrinsic computation from invariance gained through experience and memorization, the new study measured the range of invariance in one-shot learning. A one-shot learning task was performed by presenting Korean letter stimuli to human subjects who were unfamiliar with the language. These letters were initially presented a single time under one specific condition and then tested at different scales or positions than in the original condition. The first experimental result is that—just as you guessed—humans showed significant scale-invariant recognition after only a single exposure to these novel objects. The second result is that the range of position-invariance is limited, depending on the size and placement of the objects.

Next, Han and her colleagues performed a comparable experiment in deep neural networks designed to reproduce this human performance. The results suggest that to explain invariant recognition of objects by humans, neural network models should explicitly incorporate built-in scale-invariance. In addition, limited position-invariance of human vision is better replicated in the network by having the model neurons' receptive fields increase as they are further from the center of the visual field. This architecture is different from commonly used neural network models, where an image is processed under uniform resolution with the same shared filters.
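
For readers who want a concrete feel for what "built-in scale-invariance" means, here is a minimal, hypothetical sketch in Python (not the authors' model): a single stored template is compared with an image at several scales and the responses are pooled, so recognition survives a change in object size even though only one training view was ever seen. The scales, template size and normalization below are all illustrative assumptions:

    import numpy as np
    from scipy.ndimage import zoom, correlate

    def scale_pooled_response(image, template, scales=(0.5, 0.71, 1.0, 1.41, 2.0)):
        # Correlate the single stored template with the image at several scales
        # and max-pool over scales; the pooling step is what gives scale tolerance.
        best = -np.inf
        for s in scales:
            w = zoom(template, s, order=1)            # rescaled copy of the template
            w = w - w.mean()                          # crude normalization
            r = correlate(image, w, mode='constant')  # response map at this scale
            best = max(best, r.max())
        return best

A network without the pooling step, seeing the template at only one scale, would need many rescaled training examples to achieve the same tolerance, which is the contrast the study draws with vanilla deep networks.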

"Our work provides a new understanding of the brain representation of objects under different viewpoints. It also has implications for AI, as the results provide new insights into what is a good architectural design for deep neural networks," remarks Han, CBMM researcher and lead author of the study.
source: tech xplore

62
Faculty Forum / Facial expressions don't tell the whole story of emotion
« on: February 19, 2020, 11:32:20 AM »
Interacting with other people is almost always a game of reading cues and volleying back. We think a smile conveys happiness, so we offer a smile in return. We think a frown shows sadness, and maybe we attempt to cheer that person up.

Some businesses are even working on technology to determine customer satisfaction through facial expressions.

But facial expressions might not be reliable indicators of emotion, new research suggests. In fact, it might be more accurate to say we should never trust a person's face.

"The question we really asked is: 'Can we truly detect emotion from facial articulations?'" said Aleix Martinez, a professor of electrical and computer engineering at The Ohio State University.

"And the basic conclusion is, no, you can't."

Martinez, whose work has focused on building computer algorithms that analyze facial expressions, and his colleagues presented their findings today (Feb. 16, 2020) at the annual meeting of the American Association for the Advancement of Science in Seattle.

The researchers analyzed the kinetics of muscle movement in the human face and compared those muscle movements with a person's emotions. They found that attempts to detect or define emotions based on a person's facial expressions were almost always wrong.

"Everyone makes different facial expressions based on context and cultural background," Martinez said. "And it's important to realize that not everyone who smiles is happy. Not everyone who is happy smiles. I would even go to the extreme of saying most people who do not smile are not necessarily unhappy. And if you are happy for a whole day, you don't go walking down the street with a smile on your face. You're just happy."

It is also true, Martinez said, that sometimes, people smile out of an obligation to the social norms. This would not inherently be a problem, he said—people are certainly entitled to put on a smile for the rest of the world—but some companies have begun developing technology to recognize facial muscle movements and assign emotion or intent to those movements.

The research group that presented at AAAS analyzed some of those technologies and, Martinez said, largely found them lacking.

"Some claim they can detect whether someone is guilty of a crime or not, or whether a student is paying attention in class, or whether a customer is satisfied after a purchase," he said. "What our research showed is that those claims are complete baloney. There's no way you can determine those things. And worse, it can be dangerous."

The danger, Martinez said, lies in the possibility of missing the real emotion or intent in another person, and then making decisions about that person's future or abilities.

For example, consider a classroom environment, and a teacher who assumes that a student is not paying attention because of the expression on the student's face. The teacher might expect the student to smile and nod along if the student is paying attention. But maybe that student, for reasons the teacher doesn't understand—cultural reasons, perhaps, or contextual ones—is listening intently, but not smiling at all. It would be, Martinez argues, wrong for the teacher to dismiss that student because of the student's facial expressions.

After analyzing data about facial expressions and emotion, the research team—which included scientists from Northeastern University, the California Institute of Technology and the University of Wisconsin—concluded that it takes more than expressions to correctly detect emotion.

Facial color, for example, can help provide clues.

"What we showed is that when you experience emotion, your brain releases peptides—mostly hormones—that change the blood flow and blood composition, and because the face is inundated with these peptides, it changes color," Martinez said.

The human body offers other hints, too, he said: body posture, for example. And context plays a crucial role as well.

In one experiment, Martinez showed study participants a picture cropped to display just a man's face. The man's mouth is open in an apparent scream; his face is bright red.

"When people looked at it, they would think, wow, this guy is super annoyed, or really mad at something, that he's angry and shouting," Martinez said. "But when participants saw the whole image, they saw that it was a soccer player who was celebrating a goal."

In context, it's clear the man is very happy. But isolate his face, Martinez said, and he appears almost dangerous.

Cultural biases play a role, too.

"In the U.S., we tend to smile a lot," Martinez said. "We are just being friendly. But in other cultures, that means different things—in some cultures, if you walked around the supermarket smiling at everyone, you might get smacked."

Martinez said the research group's findings could indicate that people—from hiring managers to professors to criminal justice experts—should consider more than just a facial expression when they evaluate another person.

And while Martinez said he is "a big believer" in developing computer algorithms that try to understand social cues and the intent of a person, he added that two things are important to know about that technology.

"One is you are never going to get 100 percent accuracy," he said. "And the second is that deciphering a person's intent goes beyond their facial expression, and it's important that people—and the computer algorithms they create—understand that."
source: tech xplore

63
Despite billions of dollars spent and decades of research, computation in the human brain remains largely a mystery. Meanwhile, we have made great strides in the development of artificial neural networks, which are designed to loosely mimic how brains compute. We have learned a lot about the nature of neural computation from these artificial brains and it's time to take what we've learned and apply it back to the biological ones.

Neurological diseases are on the rise worldwide, making a better understanding of computation in the brain a pressing problem. Given the ability of modern artificial neural networks to solve complex problems, a framework for neuroscience guided by machine learning insights may unlock valuable secrets about our own brains and how they can malfunction.

Our thoughts and behaviors are generated by computations that take place in our brains. To effectively treat neurological disorders that alter our thoughts and behaviors, like schizophrenia or depression, we likely have to understand how the computations in the brain go wrong.

However, understanding neural computation has proven to be an immensely difficult challenge. When neuroscientists record activity in the brain, it is often indecipherable.

In a paper published in Nature Neuroscience, my co-authors and I argue that the lessons we have learned from artificial neural networks can guide us down the right path of understanding the brain as a computational system rather than as a collection of indecipherable cells.

Brain network models

Artificial neural networks are computational models that loosely mimic the integration and activation properties of real neurons. They have become ubiquitous in the field of artificial intelligence.

To construct artificial neural networks, you start by designing the network architecture: how the different components of the network are connected to one another. Then, you define the learning goal for the architecture, such as "learn to predict what you're going to see next." Finally, you define a rule that tells the network how to change in order to achieve that goal using the data it receives.

What you do not do is specify how each neuron in the network is going to function. You leave it up to the network to determine how each neuron should function to best accomplish the task. I believe the development of the brain is probably the product of a similar process, both on an evolutionary timescale and at the timescale of an individual learning within their lifetime.
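
As a toy illustration of that recipe (architecture, goal, rule) with everything else left to the optimization, here is a hedged numpy sketch; the network size, task and learning rate are arbitrary assumptions, not anything from the paper:

    import numpy as np

    rng = np.random.default_rng(0)

    # 1) Architecture: 2 inputs -> 16 hidden units -> 1 output.
    W1, b1 = rng.normal(0, 0.5, (2, 16)), np.zeros(16)
    W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)

    # Toy data for an assumed goal: predict y = sin(x0) + cos(x1).
    X = rng.uniform(-2, 2, (256, 2))
    y = (np.sin(X[:, 0]) + np.cos(X[:, 1]))[:, None]

    lr = 0.05
    for step in range(2000):
        h = np.tanh(X @ W1 + b1)          # forward pass through the architecture
        pred = h @ W2 + b2

        # 2) Goal: mean squared error between prediction and target.
        loss = np.mean((pred - y) ** 2)

        # 3) Rule: gradient descent on the goal (backprop written out by hand).
        g_pred = 2 * (pred - y) / len(X)
        gW2, gb2 = h.T @ g_pred, g_pred.sum(0)
        g_h = (g_pred @ W2.T) * (1 - h ** 2)
        gW1, gb1 = X.T @ g_h, g_h.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    # No hidden unit's "function" was ever specified; whatever each unit ends up
    # computing is a by-product of architecture + goal + rule.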

Assigning neuron roles

This calls into question the usefulness of trying to determine the functions of individual neurons in the brain, when it is possible that these neurons are the result of an optimization process much like what we see with artificial neural networks.

The different components of artificial neural networks are often very hard to understand. There's no simple verbal or simple mathematical description that explains exactly what they do.

In our paper, we propose that the same holds true for the brain, and so we have to move away from trying to understand the role of each neuron in the brain and instead look at the brain's architecture, that is, its network structure; the optimization goals, either at the evolutionary timescale or within the person's lifetime; and the rules by which the brain updates itself—either over generations or within a lifetime—to meet those goals. By defining these three components, we may get a much better understanding of how the brain works than by trying to state what each neuron does.

Optimizing frameworks

One successful application of this approach has shown that the dopamine-releasing neurons in the brain appear to encode information about unexpected rewards, e.g. the unexpected delivery of some food. This sort of signal, called a reward prediction error, is often used to train artificial neural networks to maximize the rewards they get.

For example, by programming an artificial neural network to interpret points received in a video game as a reward, you can use reward prediction errors to train the network to play the video game. In the real brain, as in artificial neural networks, even if we don't understand what each individual signal means, we can understand the role of these neurons, and of the neurons that receive their signals, in relation to the learning goal of maximizing rewards.
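
To make the reward-prediction-error idea concrete, here is a minimal tabular temporal-difference sketch in Python (an illustration of the general principle, not a model from the paper): the only learning signal is the mismatch between the reward received and the reward predicted, the same kind of signal the dopamine neurons appear to carry.

    import numpy as np

    n_states, gamma, alpha = 5, 0.9, 0.1   # toy 5-state chain, discount, learning rate
    V = np.zeros(n_states)                 # learned value (reward prediction) per state

    for episode in range(500):
        s = 0
        while s < n_states - 1:
            s_next = s + 1
            reward = 1.0 if s_next == n_states - 1 else 0.0  # reward only at the end
            # Reward prediction error: what happened vs. what was predicted.
            delta = reward + gamma * V[s_next] - V[s]
            V[s] += alpha * delta          # the update uses only the prediction error
            s = s_next

    print(np.round(V, 2))  # predictions propagate backwards from the rewarded state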

While current theories in systems neuroscience are beautiful and insightful, I believe a cohesive framework founded in the way in which evolution and learning shape our brain could fill in a lot of the blanks we have been struggling with.

To make progress in systems neuroscience, it will take both bottom-up descriptive work, such as tracing out the connections and gene expression patterns of cells in the brain, and top-down theoretical work, using artificial neural networks to understand learning goals and learning rules.

Given the ability of modern artificial neural networks to solve complex problems, a framework for systems neuroscience guided by machine learning insights may unlock valuable secrets about the human brain.
Source: tech xplore

64
Faculty Forum / Neuroscience opens the black box of artificial intelligence
« on: February 19, 2020, 11:29:14 AM »
Computer scientists at Otto von Guericke University Magdeburg are aiming to use the findings and established methods of brain research to better understand the way in which artificial intelligence works.

As part of a research project, the scientists led by Professor Dr.-Ing. Sebastian Stober from the Artificial Intelligence Lab at the University of Magdeburg will apply methods from cognitive neuroscience to analyze artificial neural networks and better understand the way they work.

The Cognitive neuroscience inspired techniques for explainable AI research project, or CogXAI for short, which will run for three years, will receive over a million euros of funding from the Federal Ministry of Education and Research of Germany.

Artificial neural networks, or ANNs for short, are self-learning intelligent systems that are inspired by the structure of natural brains. They are—like biological nervous systems—able to learn by example in order to independently solve complex problems.

"Whereas in our brains these networks consist of millions of nerve cells communicating with one another by means of chemical and electrical signals, artificial neural networks can be understood as computer programs," says Professor Stober. "Thanks to their strong capacity for learning and their flexibility, in recent years artificial neural networks have, under the term 'deep learning,' established themselves as a popular choice for the development of intelligent systems."

Stober and his team research how to find different regions in an artificial neural network, which—like in biological brains—are responsible for certain functions. As with the recording of a brain scan in a magnetic resonance imaging scanner (MRI), the AI experts aim to identify certain areas of the ANNs in order to better understand the way in which they work.
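
As a rough, purely illustrative analogue of that idea (not the CogXAI team's actual method), one can treat the units of a network layer like voxels in a brain scan and look for "regions" that respond preferentially to one class of stimuli. Everything below, from the random weights to the synthetic stimulus classes, is an assumption made for the sake of the sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    relu = lambda x: np.maximum(x, 0)

    # Stand-in for one layer of a trained network: 64 inputs -> 128 hidden units.
    W = rng.normal(size=(64, 128))

    # Two synthetic stimulus classes (a real study would use actual inputs).
    class_a = rng.normal(loc=+0.5, size=(200, 64))
    class_b = rng.normal(loc=-0.5, size=(200, 64))

    act_a = relu(class_a @ W)   # unit activations, analogous to voxel responses
    act_b = relu(class_b @ W)

    # "Contrast map": which units respond more strongly to class A than class B?
    contrast = act_a.mean(axis=0) - act_b.mean(axis=0)
    print("most A-selective units:", np.argsort(contrast)[-10:])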

Furthermore, brain research also provides important findings about the learning behavior of the human brain. The computer scientists are using this wealth of experience to enable the artificial neural networks to acquire fast and effective learning behavior. By transferring concepts of human perception and signal processing to artificial neural networks, they intend to discover how these self-learning systems make predictions and/or why they make mistakes.

"Natural brains have been researched for over 50 years," explains Professor Stober. "However, at present this potential is barely used in the development of AI architectures. By transferring neuroscientific methods to the study of artificial neural networks, their learning processes will also become more transparent and easier to understand. In this way it will be possible to identify malfunctions of artificial neurons at an early stage during the learning process and correct them during training."

According to Stober, the development of artificial neural networks is progressing rapidly. "Through the use of high-performance computers, increasing numbers of artificial neurons can be used for learning. However, the growing complexity of these networks makes it harder even for experts to understand their internal processes and decision-making," explains the computer scientist and leader of the CogXAI project. "However, if we want to be able to make safe use of Artificial Intelligence in future, it is essential to fully understand how it works."
source: tech xplore

65
The year was 2016, and the headlines talked about something called cyborg insects and reflected on a branch of technology called biorobotics. Washington University in St. Louis made news with its research efforts to use cyborg insects as biorobotic sensing machines. Translation: university engineers wanted to see if they could capitalize on the sense of smell in locusts for sensing systems that could be used by such departments as homeland security.

Barani Raman, an associate professor of biomedical engineering at Washington University, and his team have been studying how sensory signals are received and processed in locusts' brains. Fundamental olfactory processing in grasshoppers was in the spotlight; Raman focused on how these signals are handled in the insects' relatively simple brains, and his team fashioned a cyborg sniffer.

Fast forward from 2016 to Monday. New Scientist reported that cyborg grasshoppers have been engineered to sniff out explosives.

How the system works: Bomb-sniffing grasshoppers are kitted out with backpacks. They are engineered to transmit data to reveal explosive chemicals. The signals are transmitted wirelessly to a computer from their attached backpacks.

Again, it was Prof. Raman and colleagues at Washington University in St. Louis, featured this time for having tapped into "the olfactory senses of the Schistocerca americana" to create bomb sniffers, uniting the sensors of a grasshopper with electronics. Donna Lu reported in New Scientist that the tiny lightweight sensor backpacks fitted to the grasshoppers "were able to record and wirelessly transmit the electrical activity almost instantaneously to a computer."

What gives insects a special edge in sniffing out dangerous substances? New Scientist: Consider the olfactory receptor neurons in the antennae. They pick up on chemical odors in the air and send electrical signals to a part of the insect brain known as the antennal lobe. Each grasshopper antenna has approximately 50,000 of these neurons.

In their testing, the team implanted tiny electrodes into the insects' antennal lobes and puffed in vapors of different materials. The non-explosive controls were hot air and benzaldehyde; the explosive vapors puffed onto the antennae included TNT and DNT.

"The last step was to fit grasshoppers with a sensor 'backpack' which would record and transmit their neural activity in real-time to a computer, where it would be interpreted," said ZME Science.

What were the test results? Recordings of neural activity from seven grasshoppers were around 80 percent accurate.

"The grasshoppers' brains continued to successfully detect explosives up to seven hours after the researchers implanted the electrodes, before they became fatigued and ultimately died," said Lu.

Not only that, but also impressive: "The grasshoppers were able to detect where the highest concentration of explosives was when the team moved the platform to different locations," said New Scientist.

The paper "Explosive sensing with insect-based biorobots" is up on the preprint server bioRxiv. The authors stated that "We demonstrate a bio-robotic chemical sensing approach where signals from an insect brain are directly utilized to detect and distinguish various explosive chemical vapors."

They said in their paper that they believed their approach was not that different from the 'canary in a coal mine' approach, "where the viability of the entire organism is used as an indicator of absence/presence of toxic gases."

66
More portable, fully wireless smart home setups. Lower power wearables. Batteryless smart devices. These could all be made possible thanks to a new ultra-low power Wi-Fi radio developed by electrical engineers at the University of California San Diego.

The device, which is housed in a chip smaller than a grain of rice, enables Internet of Things (IoT) devices to communicate with existing Wi-Fi networks using 5,000 times less power than today's Wi-Fi radios. It consumes just 28 microwatts of power. And it does so while transmitting data at a rate of 2 megabits per second (a connection fast enough to stream music and most YouTube videos) over a range of up to 21 meters.

The team will present their work at the ISSCC 2020 conference Feb. 16 to 20 in San Francisco.

"You can connect your phone, your smart devices, even small cameras or various sensors to this chip, and it can directly send data from these devices to a Wi-Fi access point near you. You don't need to buy anything else. And it could last for years on a single coin cell battery," said Dinesh Bharadia, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering.

Commercial Wi-Fi radios typically consume hundreds of milliwatts to connect IoT devices with Wi-Fi transceivers. As a result, Wi-Fi compatible devices need either large batteries, frequent recharging or other external power sources to run.

"This Wi-Fi radio is low enough power that we can now start thinking about new application spaces where you no longer need to plug IoT devices into the wall. This could unleash smaller, fully wireless IoT setups," said UC San Diego electrical and computer engineering professor Patrick Mercier, who co-led the work with Bharadia.

Think of a portable Google Home device that you can take around the house and that can last for years, instead of just hours, when unplugged.

"It could also allow you to connect devices that are not currently connected—things that cannot meet the power demands of current Wi-Fi radios, like a smoke alarm—and not have a huge burden on battery replacement," Mercier said.The Wi-Fi radio runs on extremely low power by transmitting data via a technique called backscattering. It takes incoming Wi-Fi signals from a nearby device (like a smartphone) or Wi-Fi access point, modifies the signals and encodes its own data onto them, and then reflects the new signals onto a different Wi-Fi channel to another device or access point.This work builds on low-power Wi-Fi radio technology that Bharadia helped develop as a Ph.D. student at Stanford. In this project, he teamed up with Mercier to develop an even lower-power Wi-Fi radio. They accomplished this by building in a component called a wake-up receiver. This "wakes up" the Wi-Fi radio only when it needs to communicate with Wi-Fi signals, so it can stay in low-power sleep mode the rest of the time, during which it consumes only 3 microwatts of power.The UC San Diego team's improvements to the technology also feature a custom integrated circuit for backscattering data, which makes the whole system smaller and more efficient, and thus enables their Wi-Fi radio to operate over longer communication range (21 meters). This is a practical distance for operating in a smart home environment, the researchers said.

"Here, we demonstrate the first pragmatic chip design that can actually be deployed in a small, low-power device," Mercier said.

The paper is titled "A 28µW IoT Tag that can Communicate with Commodity WiFi Transceivers via a Single-Side-Band QPSK Backscatter Communication Technique."


67
A Washington State University research team has developed a way to address a major safety issue with lithium metal batteries—an innovation that could make high-energy batteries more viable for next-generation energy storage.

The researchers used a formulation for their batteries that led to the formation of a unique, protective layer around their lithium anode, protecting the batteries from degradation and allowing them to work longer under typical conditions. Led by Min-Kyu Song, assistant professor in the WSU School of Mechanical and Materials Engineering, the researchers report on the work in the journal Nano Energy.

Lithium metal is considered the "dream material" for batteries, Song said. That's because among known solid materials, it has the highest energy density, meaning that batteries could run twice as long and hold more energy than the ubiquitous lithium-ion batteries that power most modern-day electronics. While lithium-ion batteries work by passing lithium ions between a graphite anode and a lithium cobalt oxide cathode, the anode in a lithium-metal battery is made of the high-energy lithium metal.

"If we can directly use lithium metal, we can improve the energy density of batteries dramatically," Song said.

While the advantages of lithium metal have been known for decades, researchers have never been able to make these batteries work safely. As electrons travel between the anode and cathode through the external circuit to power a device, Christmas-tree-like dendrites begin to form on the lithium metal. The dendrites grow until they cause electric shorts, fires, or explosions. Even if they don't catch fire, lithium metal batteries also very rapidly lose their ability to charge.

The WSU research team developed a battery in which they packed selenium disulfide, a non-toxic chemical used in dandruff shampoo, into a porous carbon structure for their cathode. They added two additives to the liquid electrolytes that are typically explored in next-generation lithium batteries.

The two additives worked synergistically and formed a protective layer on the lithium metal surface that was dense, conductive, and robust enough to suppress the growth of dendrites while allowing good cycling stability, Song said. When tested at typical current densities people would use for electronics, the protected lithium metal anode was able to re-charge 500 times and retained high efficiency.

"Such a unique protective layer led to little morphological changes of the lithium anode over cycling and effectively mitigated the growth of lithium dendrites and unwanted side reactions," he said.

The researchers believe their technology can be scalable and cost-effective.

"If commercialized, this novel formulation has real potential," Song said. "Compared to solid-state batteries which are still years away, you don't have to change the manufacturing procedures, and this would be applicable to real industry much sooner, opening up a promising route toward the development of high-energy lithium metal batteries with a long cycle life."

The researchers are continuing to work on the battery, developing a separator that will further protect the battery materials from deterioration and enhance safety without compromising performance.
Source: Tech xplore

68
Faculty Forum / Snakes help engineers design search and rescue robots
« on: February 19, 2020, 11:21:05 AM »
Snakes live in diverse environments ranging from unbearably hot deserts to lush tropical forests, where they slither up trees, rocks and shrubbery every day. By studying how these serpents move, Johns Hopkins engineers have created a snake robot that can nimbly and stably climb large steps.

The team's new findings, published in Journal of Experimental Biology and Royal Society Open Science, advance the creation of search and rescue robots that can successfully navigate treacherous terrain.

"We look to these creepy creatures for movement inspiration because they're already so adept at stably scaling obstacles in their day-to-day lives. Hopefully our robot can learn how to bob and weave across surfaces just like snakes," says Chen Li, an assistant professor of mechanical engineering at The Johns Hopkins University and the papers' senior author.Previous studies had mainly observed snake movements on flat surfaces, but rarely in 3-D terrain except for on trees, says Li, and don't account for real-life large obstacles such as rubble and debris that search and rescue robots would have to climb over.

Li's team first studied how the variable kingsnake, a snake that can commonly be found living in both deserts and pine-oak forests, climbed steps in Li's Terradynamics Lab. Li's lab melds the fields of robotics, biology and physics together to study animal movements for tips and tricks to build more versatile robots.

"These snakes have to regularly travel across boulders and fallen trees; they're the masters of movement and there's much we can learn from them," says Li.

Li and his team ran a series of experiments, changing step height and the steps' surface friction to observe just how the snakes contorted their bodies in response to these barriers.

Video: Snake robots may someday help us explore inaccessible terrain like building rubble after an earthquake. Researchers at Johns Hopkins University are moving closer to that goal, having developed a snake robot that can traverse large obstacles better than previous designs. Credit: Patrick Ridgely, Dave Schmelick, Len Turner/Johns Hopkins University Office of Communications

They found that snakes partitioned their bodies into three sections: their front and rear body wriggled back and forth on the horizontal steps like a wave while their middle body section remained stiff, hovering just so, to bridge the large step. The wriggling portions, they noticed, provided stability to keep the snake from tipping over.

As the snakes got closer and moved up onto the step, the three body sections traveled down the body. As more and more of the snake reached the step, its front body section would get longer and its rear section would get shorter, while the middle body section remained roughly the same length, suspended vertically above the two steps.

If the steps got taller and more slippery, the snakes would move more slowly and wriggle their front and rear body less to maintain stability.
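
A crude way to picture that three-section behaviour in code (a toy gait generator, not the Hopkins team's controller): the front and rear thirds of a chain of joints follow a travelling wave while the middle third is commanded stiff so it can cantilever across the step. Joint count, amplitude and wavelength are made-up parameters.

    import numpy as np

    def joint_angles(t, n_joints=24, amp=0.5, wavelength=8.0, speed=2.0):
        i = np.arange(n_joints)
        # Travelling wave for propulsion and stability in the front and rear thirds.
        wave = amp * np.sin(2 * np.pi * i / wavelength - speed * t)
        # Hold the middle third stiff so it bridges the step like a cantilever.
        middle = (i >= n_joints // 3) & (i < 2 * n_joints // 3)
        wave[middle] = 0.0
        return wave

    print(np.round(joint_angles(1.2), 2))   # angles commanded at t = 1.2 s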

After analyzing their videos and noting how snakes climbed steps in the lab, Qiyuan Fu, a graduate student in Li's lab, created a robot to mimic the animals' movements.

At first, the robot snake had difficulty staying stable on large steps and often wobbled and flipped over or got stuck on the steps. To address these issues, the researchers inserted a suspension system (like that in your car) into each body segment so it could compress against the surface when needed. After this, the snake robot was less wobbly, more stable and climbed steps as high as 38% of its body length with a nearly 100% success rate.

Compared to snake robots from other studies, Li's snake robot was speedier and more stable than all but one, and even came close to mimicking the actual snake's speed. One downside of the added body suspension system, however, was that the robot used more electricity.

"The animal is still far more superior, but these results are promising for the field of robots that can travel across large obstacles," adds Li.

Next, the team will test and improve the snake robot for even more complex 3-D terrain with more unstructured large obstacles.
Source: Tech xplore

69
Faculty Forum / Survey: Singapore has the world's fastest 4G internet
« on: February 19, 2020, 11:01:00 AM »
Bangladesh was not part of the survey


According to a recent survey of regional wireless internet services across 88 countries by OpenSignal, a United Kingdom-based wireless technology company, Singapore has the fastest 4G internet provider with 44.3Mbps speed.

Singapore is followed by South Korea and Australia.

Although Vietnam and Myanmar launched their 4G networks recently, they have managed to provide connectivity that challenges the speeds of Taiwan and Japan. India, Sri Lanka, Cambodia, and Pakistan fared poorly in the report: their 4G internet provides speeds below 15Mbps to their customers.

Bangladesh was not part of the survey.

Asia News Network reports that the industry seems to have reached a limit to what the current technology, spectral bandwidth and mobile economics can support on a nationwide level.


70
Faculty Forum / 5G is magic
« on: February 19, 2020, 10:58:59 AM »
The new generation network will completely transform the flow of information, causing a paradigm shift.

5G will enable lightning-fast movie downloads and internet browsing. But to think of 5G simply in those terms would be underestimating its enormous power.

Some of the real transformation will take place in how machines interact with people and with each other. It will open up new economic opportunities by transforming cities and organizations, making services possible that were unthinkable before.

The ability to transmit massive amounts of data at incredible speed means automated procedures will be extremely effective, allowing for unprecedented remote medical procedures, remote control of security-related tasks, and so on.

A complete implementation of 5G will finally make possible the smart cities that now exist only in science fiction. The concept of smart cities is based on the implementation of the Internet of Things, or IoT, which is basically the idea that all devices can communicate and share data in a seamless and meaningful fashion.

Imagine all cars in Dhaka communicating with each other and with the central traffic system, and then deciding among themselves which routes are best for each to achieve optimum traffic flow. While it sounds magical, it is certainly within the scope of a powerful IoT system. And IoT at that level requires the transmission of unthinkable amounts of data every second. That's where 5G comes in.

To get a more concrete picture of 5G's data transmission capability, it's best to look at the numbers. The initial speed after the 5G launch will be 20 times faster than what is possible with 4G, clocking in at 1.4 gigabits per second. This means you can download a full-length feature film in about 17 seconds.
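
A quick back-of-the-envelope check of those figures (the ~3 GB movie size is an assumption; the 1.4 gigabit per second launch speed and the 20x factor over 4G come from the article):

    movie_bytes = 3e9              # assumed ~3 GB full-length film
    speed_5g = 1.4e9 / 8           # 1.4 gigabits per second, in bytes per second
    speed_4g = speed_5g / 20       # article: 5G launch speed is about 20x 4G

    print(f"5G download: {movie_bytes / speed_5g:.0f} s")         # ~17 s
    print(f"4G download: {movie_bytes / speed_4g / 60:.1f} min")  # ~5.7 min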

However, it has been estimated that the optimum speed 5G can achieve is even faster. Leading wireless chip maker Qualcomm said it had demonstrated peak 5G download speeds of 4.5 gigabits per second.

Bangladesh is expected to get speeds of over one gigabit per second after the launch of 5G, as demonstrated in the Robi-Huawei 5G testing last year. However, it is not clear at this point whether the peak speeds that developed countries will likely enjoy will be available in countries like Bangladesh.

5G service in Bangladesh from 2021

Sajeeb Wazed, ICT Advisor to the Prime Minister, said he wants to make Bangladesh among the first countries to deploy 5G. Wazed is hopeful that the country will enter the 5G era along with the developed countries.

Telecom Minister Mustafa Jabbar made a bold proclamation, saying Teletalk, the state-owned telecom company, will be the first to roll out 5G. He said that the government plans to offer 5G service in the country by 2021 and has started preparations for the rollout.

Globally, the new network will be available from this year in many countries. The United States and South Korea have already launched 5G, somewhat hastily.

Network that will breathe new life into technology

The most significant impact of 5G lies far beyond movie downloads. The difference the new network will create, when fully implemented, will be similar to, as Masayoshi Son put it, 'the Cambrian explosion,' a biological event 500 million years ago that gave rise to complex animals, different from earlier life made up of simple individual cells.

Masayoshi Son, the chairman of global business mammoth SoftBank, said that the emergence of 5G will mark a turning point comparable to the ecological paradigm shift created by the Cambrian Explosion and give birth to "a super-intelligence that will contribute to humanity."

Among the most talked-about impacts that 5G will have immediately is the ability to deploy driverless cars.

Automobile manufacturers have made big advancements in driverless technologies, making self-driving cars entirely possible, save for the fact that limitations in wireless communications have kept the otherwise sound technology risky.

Latency problems with 4G technology remain a big hole in the otherwise fully constructed ship. With a lag in communicating with their control centres, self-driving cars can crash and cause severe damage to life and property.

5G solves this. With virtually no latency, cars and data centers can communicate without delay, making driverless cars finally possible.

In healthcare, professionals need to wait to perform a procedure if the right person is not physically available. With latency gone, doctors will be able to carry out remote procedures that weren't possible before, despite the existence of the remote technology.

But robotic help will be used most in manufacturing, allowing for faster production, likely at lower cost, which can eventually strengthen the economy.

5G speeds are likely to push virtual reality technology past its current barriers and make VR projections seamless. In entertainment, this will be felt particularly in multiplayer VR games. But it can open up all sorts of new possibilities, in education for example, that are not possible now.
Source: Dhaka Tribune

71
Faculty Forum / Fear induced modern consumerism
« on: February 19, 2020, 10:55:56 AM »
Digital device: a basic idea


Bryant McGill, a human potential thought leader, international bestselling author, activist, and social entrepreneur, says in his book Voice of Reason, “In consumer life we become what we consume - disposable junk to be used and thrown away.” 

Living in a modern world full of digital devices and exposure to all sorts of information that the World Wide Web carries, we often forget how to live life as it should be. We often forget to earth ourselves with others in the most natural way possible. And our dependency on digital devices is actually quite harmful to our own wellbeing. What is slowly fading, as we live through digital screens, is the peace and tranquility of connecting with ourselves and with others in more meaningful ways. And we often forget this because digital devices serve our reward system far more immediately than a normal meaningful conversation does. We are clearly contradicting what we desire: each one of us is striving for that meaningful connection, but instead we are becoming puppets of digital devices.

Back in the day, when the internet was not widely popular, its use was specific: sharing information for the purpose of learning. The internet was a mesh created to share academic research between universities and governments. Although many have found their dependency on the web to be positive in their lives, they don't seem to understand how these devices carry curated information that affects our minds and livelihoods. There is an underlying factor, often unseen, that governs how we react to certain information and how this information changes our behaviour in many different ways. One of the vital signs is consuming the unnecessary. Modern science has discovered a clear relationship between fear and consumption; acknowledging this is just the start.

Fear leads to consumption. It can be any sort of fear. Fear of being left out of a peer group is the fear of missing out, or FOMO. The term, popularized by YouTuber Casey Neistat in one of his recent videos, in which he described swiping through the noise of Facebook and Instagram for hours, names something we should all consider harmful.

Fear of getting excluded is one form of fear that psychologists and neuroscientists are studying every day. As digital media and digital devices become our staple needs, we constantly seek inclusion and acceptance. We seek to be included among our peers. But when we don't receive such inclusivity, we experience acute depression, social anxiety and many other forms of short-term mental illness. Short-term may sound like we are safe, but it can flip any day, causing long-term ailments. One of the many symptoms of depression and social anxiety is a willingness to consume more than necessary (almost anything, from materials to services). And this has become proverbial in the realm of medical science, which predicts that in the next two decades, cases of depression and anxiety leading to formidable diseases will be widely common all over the world.

So, what is this fear that these scientists are talking about? How does it lead to consuming goods and services that we don’t actually need?

The very basic step to understanding fear starts from understanding how our brains work. Simply put, our association with the brain is deeply rooted in our senses, namely our eyes and ears. We use these senses to put more emphasis on things we can relate to over things we don't relate to. We all have a certain level of curiosity. Some may have more than others, but it is generally present in our consciousness. Digital devices keep our curiosity on the clock. Information of any sort that ignites fear in us can disrupt our brain's chemistry, depending on the length of exposure. And the exponential growth of digital dependency over the last five years clearly shows where relentless hours of swiping up and down, left and right, have brought us.

Having said that, we all know a vast amount of information is curated and is widely present in our web world. But we don't stop to ask questions like why am I being fed this news, or why is this relevant for me to know. Thus comes the culture of inducing fear through digital devices. As we swipe and slide through our devices, we find some information is just blunt and unnecessary. Wars in the Middle East that will never end, Donald Trump and his nonsensical governance, crimes all around the country, fires burning buildings and homes: these ignite sparks inside our amygdala at a barely conscious level, without even signaling how the information affects us. In a world where hopelessness floats around so easily, fear takes over our brains as we run through our digital devices, affecting us in a subliminal way.

Fear of not being included can cause disharmony in our brain chemistry. Fear of losing a person we care for can also cause disharmony. Fear in general causes disharmony. But what is shocking is how digital devices can control our sense of fear by manipulating the information we swipe through on e-media, thus turning us into prey for the corporations that control the overall economy.

It's not abnormal to have certain phobias. Nobody is perfect. A person CAN have a phobia that doesn't disrupt day-to-day life. But when it comes to a fear that is controlled by social, political and corporate agendas, our amygdala (the inner part of our brain that reacts to fear and emotion) experiences a constant dissonance that translates to "anything might happen which may disharmonize our normality"; thus, we become prone to consuming the "good life" by buying materials and services that are often unnecessary. And it feels good to own, as ownership gives us comfort.

Modern day consumerism starts with fear, and as obnoxious as it sounds, we have all given enough to support this vicious agenda. Our dependency on digital devices has given way to e-commerce, e-news, e-books and e-lies that we feed off of, and this normality has hidden an ugly truth: that we are getting sucked into capitalism in its best form. What is real seems like a distant planet that never shows up in the night sky. Our lenses have changed without giving us the slightest scope to understand what was coming, as we are made to "Agree to the Terms and Conditions" while installing all those applications.

Stressful news of crime, murder, terrorism, and fascism is exposed to us every day via digital devices. And we learn to empathize with these issues by soaking them in for hours and hours, without knowing how our impulses and consciousness are being affected. The moderation of useless information is madly uncanny and unjust, as we are bombarded with curated information that serves a purpose. Yes, your friend posting a picture of a perfect meal does qualify as curated information. Slightly unnecessary, but the picture serves a purpose: consumerism.

I wouldn't be surprised if there were another fire in an office building and, a few days later, thousands of people rushed into Burger King to ease their depression, without acknowledging how acute their symptoms were and how ignorant we have all become. Because we have agreed to stay afloat in the same boat, mammals entangled in the need for acceptance and inclusion.

“A person needs ideas in order to survive. A person whom ascribes to a philosophy for living and is dedicated to constant learning will find that ordinary life is enough without living in a zone of consumer consumption and media devices.”  ― Kilroy J. Oldster, Dead Toad Scrolls.
source: Dhaka Tribune

72
Faculty Forum / Microsoft unveils surface duo folding smartphone
« on: February 19, 2020, 10:53:39 AM »
In its latest gamble to take on Apple and Samsung, Microsoft is entering the market with the Surface Duo


With the foldable Surface Duo smartphone, Microsoft is once again attempting to gain a foothold in the multibillion-dollar smartphone market

The Surface Duo was announced at an event in New York City on Wednesday alongside the Surface Laptop 3 and Surface Pro 7, as well as a dual-screen foldable tablet called the Surface Neo.

It is Microsoft's first phone in more than three years, and comes after previous ventures failed to compete with the likes of Apple and Samsung, reports The Independent.

A brand new operating system, Windows 10X, was also announced. It is designed to work with dual-screen devices.

However, unusually for Microsoft, it has used a rival's software for the Surface Duo phone, in the form of Google's Android mobile operating system.

“This product brings together the absolute best of Microsoft, and we’re partnering with Google to bring the absolute best of Android in one product,” said Microsoft product chief Panos Panay. “This is industry-pushing technology.”

The Duo will feature two 5.6-inch displays that fold out to form an 8.3-inch device.

Microsoft has said its release date will come in time for "Holiday 2020", so consumers will have to wait at least a year to try the Surface Duo for themselves.

Microsoft had to redesign its operating system to fit its new foldable tablet, but that software will not be used for the Surface Duo.

Folding phones will represent the next great market opportunity within the smartphone industry, Microsoft is betting, though it is a relatively new and untested category.

By the time the Surface Duo launches it will face competition from the Samsung Galaxy Fold and Huawei Mate X, as well as other folding smartphones set to be released by other manufacturers over the next year.

Industry analysts said it was too soon to tell whether this gamble will pay off, but called it a bold statement in an already over-crowded market.

 “Microsoft is making a play for the mass market of devices,” JP Gownder, principal analyst at technology insight firm Forrester, told The Independent.

“The success of these new devices will fully depend on the availability of software to light up experiences that make the form factors more than just pretty premium hardware.

“The pay-off could be big if the user experience and developer ecosystem come together - we will need to wait and see.”

source: Dhaka Tribune

73
Faculty Forum / Despite robot efficiency, human skills still matter at work
« on: February 19, 2020, 10:52:07 AM »
Artificial intelligence is approaching critical mass at the office, but humans are still likely to be necessary, according to a new study by executive development firm Future Workplace, in partnership with Oracle


Future Workplace found an 18% jump over last year in the number of workers who use AI in some facet of their jobs, representing more than half of those surveyed.

Reuters spoke with Dan Schawbel, the research director at Future Workplace and bestselling author of “Back to Human,” about the study’s key findings and the future of work.

You found that 64% of people trust a robot more than their manager. What can robots do better than managers and what can managers do better than robots?

What managers can do better are soft skills: understanding employees’ feelings, coaching employees, creating a work culture -- things that are hard to measure, but affect someone’s workday.

The things robots can do better are hard skills: providing unbiased information, maintaining work schedules, problem solving and maintaining a budget.

Is AI advancing to take over soft skills?

Right now, we’re not seeing that. I think the future of work is that human resources is going to be managing the human workforce, whereas information technology is going to be managing the robot workforce. There is no doubt that humans and robots will be working side by side.

Are we properly preparing the next generation to work alongside AI?

I think technology is making people more antisocial as they grow up because they’re getting it earlier. Yet the demand right now is for a lot of hard skills that are going to be automated. So eventually, when the hard skills are automated and the soft skills are more in demand, the next generation is in big trouble.

Which countries are using AI the most?

India and China, and then Singapore. The countries that are gaining more power and prominence in the world are using AI at work.

If AI does all the easy tasks, will managers be mentally drained with only difficult tasks left to do?

I think it’s very possible. I always do tasks that require the most thought in the beginning of my day. After 5 or 6pm, I’m exhausted mentally. But if administrative tasks are automated, potentially, the work day becomes consolidated.

That would free us to do more personal things. We have to see if our workday gets shorter if AI eliminates those tasks. If it doesn’t, the burnout culture will increase dramatically.

70% of your survey respondents were concerned about AI collecting data on them at work. Is that concern legitimate?

Yes. You’re seeing more and more technology vendors enabling companies to monitor employees’ use of their computers.

If we collect data on employees in the workplace and make the employees suffer consequences for not being focused for eight hours a day, that’s going to be a huge problem. No one can focus for that long. It’s going to accelerate our burnout epidemic.

How is AI changing hiring practices?

One example is Unilever. The first half of their entry-level recruiting process is really AI-centric. You do a video interview and the AI collects data on you and matches it against successful employees. That lowers the pool of candidates. Then candidates spend a day at Unilever doing interviews, and a percentage get a job offer. That’s machines and humans working side-by-side.
source: Dhaka tribune

74
Google claims to have demonstrated something called “quantum supremacy”, in a paper published in Nature


This would mark a significant milestone in the development of a new type of computer, known as a quantum computer, that could perform very difficult calculations much faster than anything possible on conventional “classical” computers. But a team from IBM has published their own paper claiming they can reproduce the Google result on existing supercomputers.

While Google versus IBM might make a good story, this disagreement between two of the world’s biggest technology companies rather distracts from the real scientific and technological progress behind both teams’ work. Despite how it might sound, even exceeding the milestone of quantum supremacy wouldn’t mean quantum computers are about to take over. On the other hand, just approaching this point has exciting implications for the future of the technology.

Quantum computers represent a new way of processing data. Instead of storing information in “bits” as 0s or 1s like classical computers do, quantum computers use the principles of quantum physics to store information in “qubits” that can also be in states of 0 and 1 at the same time. In theory, this allows quantum machines to perform certain calculations much faster than classical computers.

In 2012, Professor John Preskill coined the term “quantum supremacy” to describe the point when quantum computers become powerful enough to perform some computational task that classical computers could not do in a reasonable timeframe. He deliberately didn’t require the computational task to be a useful one. Quantum supremacy is an intermediate milestone, something to aim for long before it is possible to build large, general-purpose quantum computers.

In its quantum supremacy experiment, the Google team performed one of these difficult but useless calculations, sampling the output of randomly chosen quantum circuits. They also carried out computations on the world’s most powerful classical supercomputer, Summit, and estimated it would take 10,000 years to fully simulate this quantum computation. IBM’s team have proposed a method for simulating Google’s experiment on the Summit computer, which they estimated would take only two days rather than 10,000 years.

Random circuit sampling has no known practical use, but there are very good mathematical and empirical reasons to believe it is very hard to replicate on classical computers. More precisely, for every additional qubit the quantum computer uses to perform the calculation, a classical computer would need to double its computation time to do the same.

The IBM paper does not challenge this exponential growth. What the IBM team did was find a way of trading increased memory usage for faster computation time. They used this to show how it might be possible to squeeze a simulation of the Google experiment onto the Summit supercomputer, by exploiting the vast memory resources of that machine. (They estimate simulating the Google experiment would require memory equivalent to about 10m regular hard drives.)
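
The exponential growth mentioned above is easy to see with a naive estimate: simulating n qubits by storing the full state vector takes 2^n complex amplitudes, so every extra qubit doubles the memory. (This is only the brute-force baseline; IBM's proposal avoids holding it all in RAM by spilling onto Summit's disk storage, so these figures are not IBM's actual numbers.)

    # Naive state-vector cost: 2**n amplitudes, 16 bytes each at double precision.
    def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
        return (2 ** n_qubits) * bytes_per_amplitude

    for n in (30, 40, 53, 55):
        print(n, "qubits:", state_vector_bytes(n) / 1e15, "PB")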

The 53-qubit Google experiment is right at the limit of what can be simulated classically. IBM’s new algorithm might just bring the calculation within reach of the world’s biggest supercomputer. But add a couple more qubits, and the calculation will be beyond reach again. The Google paper anticipates this, stating: “We expect that lower simulation costs than reported here will eventually be achieved, but we also expect that they will be consistently outpaced by hardware improvements on larger quantum processors.”

Whether this experiment is just within reach of the world’s most powerful classical supercomputer, or just beyond, isn’t really the point. The term “supremacy” is somewhat misleading in that it suggests a point when quantum computers can outperform classical computers at everything. In reality, it just means they can outperform classical computers at something. And that something might be an artificial demonstration with no practical applications. In retrospect, the choice of terminology was perhaps unfortunate (though Preskill recently wrote a reasoned defence of it).

Impressive science

Yet Google’s work is a significant milestone. With quantum hardware reaching the limits of what can be matched classically, it opens up the intriguing possibility that these devices -- or devices only slightly larger -- could have practical applications that cannot be done on classical supercomputers. On the other hand, we don’t know of any such applications yet, even for devices with a few hundred qubits. It’s a very interesting and challenging scientific question, and an extremely active area of research.

As such, the Google results are an impressive piece of experimental science. They do not imply that quantum computers are about to revolutionize computing overnight (and the Google paper never claims this). Nor are these useless results that achieve nothing new (and the IBM paper doesn’t claim this). The truth is somewhere in between. These new results undoubtedly move the technology forward, just as it has been steadily progressing for the last couple of decades.

As quantum computing technology develops, it is also pushing the design of new classical algorithms to simulate larger quantum systems than were previously possible. IBM’s paper is an example of that. This is also useful science. Not only in ensuring quantum computing progress is continually being fairly benchmarked against the best classical techniques, but also because simulating quantum systems is itself an important scientific computing application.

This is how science and technology progresses. Not in one dramatic and revolutionary breakthrough, but in a whole series of small breakthroughs, with the academic community carefully scrutinizing, criticizing and refining each step along the way. Only a few of these advances and debates hit the headlines. The reality is both less dramatic and more interesting.

source: Dhaka Tribune

75
It also highlighted a squeeze on the middle class and a widespread dissatisfaction in rich countries


The Organisation for Economic Co-operation and Development (OECD) has warned governments to act fast to counter the effects of automation and globalization.

Automation, robots and globalization are rapidly changing the workplace, and these technologies could wipe out or radically alter almost half of all jobs in the next two decades, the OECD said, reports Bloomberg.

According to OECD Labor Director Stefano Scarpetta, the pace of change will be “startling.”

The OECD warned that governments should act fast and decisively to counter the effects of automation, robots and globalization, or face a worsening of social and economic tensions.

While some workers will benefit as technology opens new markets and increases productivity, young, low-skilled, part-time and gig-economy workers will be vulnerable, as the safety nets and training systems built up over decades to protect workers struggle to keep up with the "megatrends" changing the nature of work, the OECD said.

 Mentioning that automation may be the most important issue for labour markets in the near future, Scarpetta said: “Deep and rapid structural changes are on the horizon, bringing with them major new opportunities but also greater uncertainty among those who are not well equipped to grasp them.”

The employment report is the latest OECD warning about risks to governments in advanced economies, which have already manifested themselves in a surge of support for populist political leaders. The organization has highlighted a squeeze on the middle classes, future jobs losses from technology and a widespread dissatisfaction in rich countries.

Changes in employment will hit some workers more than others -- particularly young people with lower levels of education and women who are more likely to be under-employed and working in low paid jobs, the OECD said.

The organization recommends more training and urges governments to extend protections to workers in the “grey zone,” where a blurring of employment and self-employment often comes with a lack of rights.

It also said that union membership has fallen by almost half in the past three decades, that one in seven workers globally is self-employed, and that six out of ten workers lack basic IT skills.
source: Dhaka tribune
