Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Abdus Sattar

Pages: [1] 2 3 ... 25
1
I read this the other day, but the new solution is not easy to understand.

2
Faculty Sections / Re: Magic of the Maldives ...
« on: December 11, 2019, 08:03:37 PM »
Seeing the pics has made me eager to go there.

4
Pharmacy / Re: Eat ginger, stay healthy
« on: June 23, 2019, 12:09:42 PM »
Thanks for sharing

7
A helpful post.

8
A very useful post for students who want to go abroad for their studies.

9
Latest Technology / User Experience in Artificial Intelligence
« on: May 28, 2019, 01:03:32 PM »
User Experience in Artificial Intelligence

Two years ago, Toyota offered us a glimpse into their version of the future where, surprisingly, driving is still fun. Concept-i is the star of an autonomous future where people are still driving. And in Toyota's case, it's so much fun because they're cruising along with their buddy Yui, an AI personality that helps them navigate, communicate and even contributes to their discussions.

Yui is all over the car, controlling every function and even taking the wheel when required. It's definitely an exciting future where the machine sounds and “feels” like a human, even exhibiting empathetic behaviour.


That's the kind of future I'd imagine awaits user experience (UX) in the world of AI. A time when the human-AI connection is so deep that some experts say there will be “no interface.” But currently, UX depends on an interface. It requires screens, for instance, and they don't do it much justice. Integrating AI into the process will mean a better experience all around.

From websites to homes and cars, here's how AI could help patch the holes and bring UX closer to maximum potential.

1. Complex data analysis.
Until now, to improve user engagement in their products, UX teams have turned to tools and metrics such as usability tests, A/B tests, heat maps and usage data. However, these methods are soon to be eclipsed by AI. It's not so much because AI can collect more data -- it's how much you can do with it.

Using AI, an ecommerce store can track user behaviour across various platforms and provide the owner with tips on improving the purchasing experience, eventually leading to more sales. AI can also tailor the design to each user’s specifications, based on analysis of the collected data.

All this is achieved through the application of deep learning that combines large data sets to make inferences. Additionally, these systems can learn from the data and adjust their behaviour accordingly, in real time. Thus, designers applying AI in their work are likely to create better UIs at a faster rate.


2. Deeper human connection.
By analysing the vast amount of data collected, AI systems can create a deeper connection with humans, enhancing their relationship. This is already happening in a couple of industries. When you think of Siri, you see a friendly-voiced (digital) personal assistant. When Amazon first introduced Alexa, it took the market by storm. But its usefulness could only be proven over time. And it was. Smart-home owners are using it to do a million things, including scouring the internet for recipes, scheduling meetings and shopping. It's also being used in ambulances. Even Netflix’s highly predictive recommendation algorithm is another example of AI in use.

Toyota says Concept-i isn't just a car, but a partner. From the simulation video, you can see that Yui connects with the family on a level that current UX doesn't reach.

By using the function over and over, consumers end up establishing an interdependent relationship with the system. That's exactly how AI is designed to work. You use the system; it collects data; it uses the data to learn; it becomes more useful; it gives a better user experience; you use it more, so it collects more data, learns and becomes still more useful; and the cycle continues. You don't even see it coming -- and before you know it, you're deeply connected.

3. More control by the user.

A common concern about the adoption of AI into everyday life is whether the machines might eventually rise and take over the world. In other words, users are concerned about losing control over the systems. It's a legitimate concern, with autonomous cars, robot guards and smart homes expected to become commonplace.

This lack of control is mirrored in skepticism about the future, but it can also be seen in commerce and other areas where user experience is of great importance. For instance, a user will be more likely to enter their card information into a system if they feel they control when money is transferred and to whom it goes, and that they can retrieve it in case something goes wrong.

As AI develops, users will gain more control over the system, gradually improving trust, which will lead to more usage.

How AI Could Enhance Your Company's UX

UX design is about a designer trying to communicate a machine's model to the user. In other words, the designer is trying to show the user, from the designer's point of view, how the machine works and the kind of benefits they can get from it.

Traditionally, this involved following certain rules, and designers understood them very well. A designer knows how to create a web page by following certain rules that they can probably manipulate. With AI, however, the design is dependent on a complex analysis of data instead of following sets of rules. To be able to design using AI, designers will have to really understand the technology behind it.

Mixing UX and AI, as we have seen with “AIBO”



Artificial intelligence
Artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks - as, for example, discovering proofs for mathematical theorems or playing chess - with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.

What Is Intelligence?
All but the simplest human behaviour is ascribed to intelligence, while even the most complicated insect behaviour is never taken as an indication of intelligence. What is the difference? Consider the behaviour of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The real nature of the wasp’s instinctual behaviour is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced.


Fixing the AI in real time
Problem solving, particularly in artificial intelligence, may be characterized as a systematic search through a range of possible actions in order to reach some predefined goal or solution. Problem-solving methods divide into special purpose and general purpose. A special-purpose method is tailor-made for a particular problem and often exploits very specific features of the situation in which the problem is embedded. In contrast, a general-purpose method is applicable to a wide variety of problems. One general-purpose technique used in AI is means-end analysis—a step-by-step, or incremental, reduction of the difference between the current state and the final goal. The program selects actions from a list of means—in the case of a simple robot, this might consist of PICKUP, PUTDOWN, MOVEFORWARD, MOVEBACK, MOVELEFT, and MOVERIGHT—until the goal is reached.
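As a minimal sketch of means-end analysis, the loop below repeatedly picks whichever action most reduces the difference between the current state and the goal. The grid state, goal, and action set are invented here for illustration; real planners are far more elaborate:

```python
def means_end_analysis(start, goal, actions):
    """Greedy means-end analysis: pick the action that most shrinks the
    gap between the current state and the goal, until the goal is reached.
    actions maps an action name to a function state -> new state."""

    def gap(s):
        # Difference measure: Manhattan distance to the goal.
        return abs(s[0] - goal[0]) + abs(s[1] - goal[1])

    state, plan = start, []
    while state != goal:
        # Try every available means and keep the one that closes the gap most.
        name, result = min(
            ((n, f(state)) for n, f in actions.items()),
            key=lambda pair: gap(pair[1]),
        )
        if gap(result) >= gap(state):
            break  # no action reduces the difference; give up
        state, plan = result, plan + [name]
    return plan

# A toy robot on a 2-D grid with the move set named in the text.
ACTIONS = {
    "MOVEFORWARD": lambda s: (s[0], s[1] + 1),
    "MOVEBACK":    lambda s: (s[0], s[1] - 1),
    "MOVELEFT":    lambda s: (s[0] - 1, s[1]),
    "MOVERIGHT":   lambda s: (s[0] + 1, s[1]),
}

plan = means_end_analysis((0, 0), (2, 1), ACTIONS)  # a short plan toward (2, 1)
```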

Many diverse problems have been solved by artificial intelligence programs. Some examples are finding the winning move (or sequence of moves) in a board game, devising mathematical proofs, and manipulating “virtual objects” in a computer-generated world.
About Author
Jagannathan Kannan
UX Lead Designer @ Verizon Wireless


Source: http://forum.daffodilvarsity.edu.bd/index.php?action=post;board=382.0

11
Informative post.

12
Heart / Re: Symptoms of heart block
« on: May 19, 2019, 02:36:46 PM »
Informative post.

13
Good information; I didn't know many of these facts. Thank you.

14
Giving robots a better feel for object manipulation
Model improves a robot’s ability to mold materials into shapes and interact with liquids and solid objects.
Rob Matheson | MIT News Office
April 16, 2019


A new learning system developed by MIT researchers improves robots’ abilities to mold materials into target shapes and make predictions about interacting with solid objects and liquids. The system, known as a learning-based particle simulator, could give industrial robots a more refined touch — and it may have fun applications in personal robotics, such as modelling clay shapes or rolling sticky rice for sushi.

In robotic planning, physical simulators are models that capture how different materials respond to force. Robots are “trained” using the models, to predict the outcomes of their interactions with objects, such as pushing a solid box or poking deformable clay. But traditional learning-based simulators mainly focus on rigid objects and are unable to handle fluids or softer objects. Some more accurate physics-based simulators can handle diverse materials, but rely heavily on approximation techniques that introduce errors when robots interact with objects in the real world.

In a paper being presented at the International Conference on Learning Representations in May, the researchers describe a new model that learns to capture how small portions of different materials — “particles” — interact when they’re poked and prodded. The model directly learns from data in cases where the underlying physics of the movements are uncertain or unknown. Robots can then use the model as a guide to predict how liquids, as well as rigid and deformable materials, will react to the force of its touch. As the robot handles the objects, the model also helps to further refine the robot’s control.

In experiments, a robotic hand with two fingers, called “RiceGrip,” accurately shaped a deformable foam to a desired configuration — such as a “T” shape — that serves as a proxy for sushi rice. In short, the researchers’ model serves as a type of “intuitive physics” brain that robots can leverage to reconstruct three-dimensional objects somewhat similarly to how humans do.

“Humans have an intuitive physics model in our heads, where we can imagine how an object will behave if we push or squeeze it. Based on this intuitive model, humans can accomplish amazing manipulation tasks that are far beyond the reach of current robots,” says first author Yunzhu Li, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We want to build this type of intuitive model for robots to enable them to do what humans can do.”

“When children are 5 months old, they already have different expectations for solids and liquids,” adds co-author Jiajun Wu, a CSAIL graduate student. “That’s something we know at an early age, so maybe that’s something we should try to model for robots.”

Joining Li and Wu on the paper are: Russ Tedrake, a CSAIL researcher and a professor in the Department of Electrical Engineering and Computer Science (EECS); Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL and the Center for Brains, Minds, and Machines (CBMM); and Antonio Torralba, a professor in EECS and director of the MIT-IBM Watson AI Lab.

Dynamic graphs

A key innovation behind the model, called the “particle interaction network” (DPI-Nets), was creating dynamic interaction graphs, which consist of thousands of nodes and edges that can capture complex behaviors of so-called particles. In the graphs, each node represents a particle. Neighboring nodes are connected with each other using directed edges, which represent the interaction passing from one particle to the other. In the simulator, particles are hundreds of small spheres combined to make up some liquid or a deformable object.
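As a rough illustration of the idea (not the actual DPI-Nets code), such a graph can be rebuilt at every time step by adding a directed edge between any two particles that lie within an interaction radius; the positions and radius below are made up:

```python
import math

def build_interaction_graph(positions, radius):
    """positions: list of (x, y, z) particle centres.
    Returns directed edges (i, j) for every pair of distinct particles
    closer than `radius`; the graph is rebuilt each step as particles move."""
    edges = []
    for i, pi in enumerate(positions):
        for j, pj in enumerate(positions):
            if i != j and math.dist(pi, pj) <= radius:
                edges.append((i, j))  # interaction passes from particle i to j
    return edges
```

Because edges depend only on current positions, neighbourhoods change as the material deforms, which is what makes the graph "dynamic."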


The graphs are constructed as the basis for a machine-learning system called a graph neural network. In training, the model over time learns how particles in different materials react and reshape. It does so by implicitly calculating various properties for each particle — such as its mass and elasticity — to predict if and where the particle will move in the graph when perturbed.

The model then leverages a “propagation” technique, which instantaneously spreads a signal throughout the graph. The researchers customized the technique for each type of material — rigid, deformable, and liquid — to shoot a signal that predicts particles’ positions at certain incremental time steps. At each step, it moves and reconnects particles, if needed.
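A toy version of one propagation step might look like the following, where each node's scalar state is blended with the mean of its incoming messages. The update rule here is invented for illustration; in the real model the message and update functions are learned per material:

```python
def propagate(states, edges):
    """One message-passing step over a directed graph.
    states: list of scalar node states; edges: (i, j) pairs meaning
    node i sends its state along an edge to node j."""
    n = len(states)
    incoming = [0.0] * n
    counts = [0] * n
    for i, j in edges:
        incoming[j] += states[i]  # accumulate messages arriving at j
        counts[j] += 1
    # New state: average of the old state and the mean incoming message
    # (nodes with no incoming edges keep their state unchanged).
    return [
        (states[j] + incoming[j] / counts[j]) / 2 if counts[j] else states[j]
        for j in range(n)
    ]
```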

For example, if a solid box is pushed, perturbed particles will be moved forward. Because all particles inside the box are rigidly connected with each other, every other particle in the object moves the same calculated distance, rotation, and any other dimension. Particle connections remain intact and the box moves as a single unit. But if an area of deformable foam is indented, the effect will be different. Perturbed particles move forward a lot, surrounding particles move forward only slightly, and particles farther away won’t move at all. With liquids being sloshed around in a cup, particles may completely jump from one end of the graph to the other. The graph must learn to predict where and how much all affected particles move, which is computationally complex.

Shaping and adapting

In their paper, the researchers demonstrate the model by tasking the two-fingered RiceGrip robot with clamping target shapes out of deformable foam. The robot first uses a depth-sensing camera and object-recognition techniques to identify the foam. The researchers randomly select particles inside the perceived shape to initialize the position of the particles. Then, the model adds edges between particles and reconstructs the foam into a dynamic graph customized for deformable materials.

Because of the learned simulations, the robot already has a good idea of how each touch, given a certain amount of force, will affect each of the particles in the graph. As the robot starts indenting the foam, it iteratively matches the real-world position of the particles to the targeted position of the particles. Whenever the particles don’t align, it sends an error signal to the model. That signal tweaks the model to better match the real-world physics of the material.
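That correction loop can be caricatured as a simple feedback update. The one-parameter `refine` function, the linear `predict` model, and the learning rate below are hypothetical stand-ins for the paper's learned simulator, shown only to make the shape of the loop concrete:

```python
def refine(predict, observe, param, lr=0.1, steps=50):
    """Repeatedly compare a model's predicted position with the observed
    one and nudge a single model parameter to shrink the error."""
    for _ in range(steps):
        error = observe() - predict(param)  # mismatch = error signal
        param += lr * error                 # proportional correction
    return param

# Toy example: the "true" material displaces twice the parameter value,
# and the sensor observes a displacement of 6.0, so param should reach 3.0.
fitted = refine(lambda p: 2 * p, lambda: 6.0, 0.0)
```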

Next, the researchers aim to improve the model to help robots better predict interactions with partially observable scenarios, such as knowing how a pile of boxes will move when pushed, even if only the boxes at the surface are visible and most of the other boxes are hidden.

The researchers are also exploring ways to combine the model with an end-to-end perception module by operating directly on images. This will be a joint project with Dan Yamins’s group; Yamins recently completed his postdoc at MIT and is now an assistant professor at Stanford University. “You’re dealing with these cases all the time where there’s only partial information,” Wu says. “We’re extending our model to learn the dynamics of all particles, while only seeing a small portion.”

Source: http://news.mit.edu/2019/robots-object-manipulation-particle-simulator-0417


15
Teaching & Research Forum / Can science writing be automated?
« on: April 20, 2019, 01:11:59 PM »
Can science writing be automated?
A neural network can read scientific papers and render a plain-English summary.

David L. Chandler | MIT News Office
April 17, 2019


The work of a science writer, including this one, includes reading journal papers filled with specialized technical terminology, and figuring out how to explain their contents in language that readers without a scientific background can understand.

Now, a team of scientists at MIT and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two.

Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists scan a large number of papers to get a preliminary sense of what they’re about. But the approach the team developed could also find applications in a variety of other areas besides language processing, including machine translation and speech recognition.

The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a senior scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.

From AI for physics to natural language

The work came about as a result of an unrelated project, which involved developing new artificial intelligence approaches based on neural networks, aimed at tackling certain thorny problems in physics. However, the researchers soon realized that the same approach could be used to address other difficult computational problems, including natural language processing, in ways that might outperform existing neural network systems.

“We have been doing various kinds of work in AI for a few years now,” Soljačić says. “We use AI to help with our research, basically to do physics better. And as we got to be more familiar with AI, we would notice that every once in a while there is an opportunity to add to the field of AI because of something that we know from physics — a certain mathematical construct or a certain law in physics. We noticed that hey, if we use that, it could actually help with this or that particular AI algorithm.”

This approach could be useful in a variety of specific kinds of tasks, he says, but not all. “We can’t say this is useful for all of AI, but there are instances where we can use an insight from physics to improve on a given AI algorithm.”

Neural networks in general are an attempt to mimic the way humans learn certain new things: The computer examines many different examples and “learns” what the key underlying patterns are. Such systems are widely used for pattern recognition, such as learning to identify objects depicted in photos.

But neural networks in general have difficulty correlating information from a long string of data, such as is required in interpreting a research paper. Various tricks have been used to improve this capability, including techniques known as long short-term memory (LSTM) and gated recurrent units (GRU), but these still fall well short of what’s needed for real natural-language processing, the researchers say.

The team came up with an alternative system, which instead of being based on the multiplication of matrices, as most conventional neural networks are, is based on vectors rotating in a multidimensional space. The key concept is something they call a rotational unit of memory (RUM).

Essentially, the system represents each word in the text by a vector in multidimensional space — a line of a certain length pointing in a particular direction. Each subsequent word swings this vector in some direction, represented in a theoretical space that can ultimately have thousands of dimensions. At the end of the process, the final vector or set of vectors is translated back into its corresponding string of words.
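The geometric idea can be illustrated with plain 2-D rotations (the real RUM operates in a learned, high-dimensional space; the `angle_of` mapping below is a made-up stand-in for the learned per-word update):

```python
import math

def rotate(vec, angle):
    """Rotate a 2-D vector by `angle` radians."""
    x, y = vec
    return (x * math.cos(angle) - y * math.sin(angle),
            x * math.sin(angle) + y * math.cos(angle))

def encode(words, angle_of):
    """Fold a sequence of words into a single state vector: each word
    swings the state by its own rotation. Rotations preserve length,
    which is one reason they help the network remember."""
    state = (1.0, 0.0)  # start from a unit vector
    for w in words:
        state = rotate(state, angle_of(w))
    return state

# Two words, each rotating the state by 45 degrees, for a total of 90.
final = encode(["hello", "world"], lambda w: math.pi / 4)
```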

“RUM helps neural networks to do two things very well,” Nakov says. “It helps them to remember better, and it enables them to recall information more accurately.”

After developing the RUM system to help with certain tough physics problems such as the behavior of light in complex engineered materials, “we realized one of the places where we thought this approach could be useful would be natural language processing,” says Soljačić,  recalling a conversation with Tatalović, who noted that such a tool would be useful for his work as an editor trying to decide which papers to write about. Tatalović was at the time exploring AI in science journalism as his Knight fellowship project.

“And so we tried a few natural language processing tasks on it,” Soljačić says. “One that we tried was summarizing articles, and that seems to be working quite well.”

The proof is in the reading

As an example, they fed the same research paper through a conventional LSTM-based neural network and through their RUM-based system. The resulting summaries were dramatically different.

The LSTM system yielded this highly repetitive and fairly technical summary: “Baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat.

Based on the same paper, the RUM system produced a much more readable summary, and one that did not include the needless repetition of phrases: Urban raccoons may infect people more than previously assumed. 7 percent of surveyed individuals tested positive for raccoon roundworm antibodies. Over 90 percent of raccoons in Santa Barbara play host to this parasite.

Already, the RUM-based system has been expanded so it can “read” through entire research papers, not just the abstracts, to produce a summary of their contents. The researchers have even tried using the system on their own research paper describing these findings — the paper that this news story is attempting to summarize.

Here is the new neural network’s summary: Researchers have developed a new representation process on the rotational unit of RUM, a recurrent memory that can be used to solve a broad spectrum of the neural revolution in natural language processing.

It may not be elegant prose, but it does at least hit the key points of information.

Çağlar Gülçehre, a research scientist at the British AI company Deepmind Technologies, who was not involved in this work, says this research tackles an important problem in neural networks, having to do with relating pieces of information that are widely separated in time or space. “This problem has been a very fundamental issue in AI due to the necessity to do reasoning over long time-delays in sequence-prediction tasks,” he says. “Although I do not think this paper completely solves this problem, it shows promising results on the long-term dependency tasks such as question-answering, text summarization, and associative recall.”

Gülçehre adds, “Since the experiments conducted and model proposed in this paper are released as open-source on Github, as a result many researchers will be interested in trying it on their own tasks. … To be more specific, potentially the approach proposed in this paper can have very high impact on the fields of natural language processing and reinforcement learning, where the long-term dependencies are very crucial.”

The research received support from the Army Research Office, the National Science Foundation, the MIT-SenseTime Alliance on Artificial Intelligence, and the Semiconductor Research Corporation. The team also had help from the Science Daily website, whose articles were used in training some of the AI models in this research.

Source: http://news.mit.edu/2019/can-science-writing-be-automated-ai-0418
