Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - motiur.swe

Pages: 1 ... 3 4 [5]
61
Thank you, sir, for sharing this post. It will help us increase our knowledge.

Welcome

62
Great and thanks.

63
X-ray vision has long seemed like a far-fetched sci-fi fantasy, but over the last decade a team led by Professor Dina Katabi from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has continually gotten us closer to seeing through walls.

Their latest project, “RF-Pose,” uses artificial intelligence (AI) to teach wireless devices to sense people’s postures and movement, even from the other side of a wall.

The researchers use a neural network to analyze radio signals that bounce off people’s bodies, and can then create a dynamic stick figure that walks, stops, sits, and moves its limbs as the person performs those actions.


The team says that RF-Pose could be used to monitor diseases like Parkinson’s, multiple sclerosis (MS), and muscular dystrophy, providing a better understanding of disease progression and allowing doctors to adjust medications accordingly. It could also help elderly people live more independently, while providing the added security of monitoring for falls, injuries and changes in activity patterns. The team is currently working with doctors to explore RF-Pose’s applications in health care.

All data the team collected was gathered with subjects’ consent and is anonymized and encrypted to protect user privacy. For future real-world applications, the team plans to implement a “consent mechanism” in which the person who installs the device is cued to perform a specific set of movements in order for it to begin monitoring the environment.

“We’ve seen that monitoring patients’ walking speed and ability to do basic activities on their own gives health care providers a window into their lives that they didn’t have before, which could be meaningful for a whole range of diseases,” says Katabi, who co-wrote a new paper about the project. “A key advantage of our approach is that patients do not have to wear sensors or remember to charge their devices.”

Besides health care, the team says that RF-Pose could also be used for new classes of video games where players move around the house, or even in search-and-rescue missions to help locate survivors.

Katabi co-wrote the new paper with PhD student and lead author Mingmin Zhao, MIT Professor Antonio Torralba, postdoc Mohammad Abu Alsheikh, graduate student Tianhong Li, and PhD students Yonglong Tian and Hang Zhao. They will present it later this month at the Conference on Computer Vision and Pattern Recognition (CVPR) in Salt Lake City, Utah.

One challenge the researchers had to address is that most neural networks are trained using data labeled by hand. A neural network trained to identify cats, for example, requires that people look at a big dataset of images and label each one as either “cat” or “not cat.” Radio signals, meanwhile, can’t be easily labeled by humans.

To address this, the researchers collected examples using both their wireless device and a camera. They gathered thousands of images of people doing activities like walking, talking, sitting, opening doors and waiting for elevators.

They then used these images from the camera to extract the stick figures, which they showed to the neural network along with the corresponding radio signal. This combination of examples enabled the system to learn the association between the radio signal and the stick figures of the people in the scene.
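
To make the idea concrete, here is a minimal sketch of that cross-modal training loop in PyTorch. This is not the authors' code: the names, the tiny architecture, and the two-channel radio input are all assumptions for illustration.

[code]
# Sketch: camera-derived stick-figure heatmaps supervise a radio-only network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RadioPoseNet(nn.Module):
    """Hypothetical network mapping a radio frame to per-keypoint heatmaps."""
    def __init__(self, n_keypoints=14):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),   # 2 channels: assumed I/Q radio input
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_keypoints, 1),               # one heatmap per body keypoint
        )

    def forward(self, rf_frame):
        return self.encoder(rf_frame)

def train_step(model, optimizer, rf_frame, camera_keypoint_heatmaps):
    """One step of cross-modal supervision: the vision system is the 'teacher',
    the radio-only network is the 'student'."""
    optimizer.zero_grad()
    predicted = model(rf_frame)
    loss = F.mse_loss(predicted, camera_keypoint_heatmaps)
    loss.backward()
    optimizer.step()
    return loss.item()
[/code]

At inference time only the radio frame is needed, which is why the camera can be dropped after training.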

Post-training, RF-Pose was able to estimate a person’s posture and movements without cameras, using only the wireless reflections that bounce off people’s bodies.

Since cameras can’t see through walls, the network was never explicitly trained on data from the other side of a wall – which is what made it particularly surprising to the MIT team that the network could generalize its knowledge to be able to handle through-wall movement.

“If you think of the computer vision system as the teacher, this is a truly fascinating example of the student outperforming the teacher,” says Torralba.

Besides sensing movement, the authors also showed that they could use wireless signals to accurately identify somebody 83 percent of the time out of a line-up of 100 individuals. This ability could be particularly useful in search-and-rescue operations, where it may be helpful to know the identity of specific people.

For this paper, the model outputs a 2-D stick figure, but the team is also working to create 3-D representations that would be able to reflect even smaller micromovements. For example, it might be able to see if an older person’s hands are shaking regularly enough that they may want to get a check-up.

“By using this combination of visual data and AI to see through walls, we can enable better scene understanding and smarter environments to live safer, more productive lives,” says Zhao.

Source: MIT News

64
Getting robots to do things isn’t easy: Usually, scientists have to either explicitly program them or get them to understand how humans communicate via language.

But what if we could control robots more intuitively, using just hand gestures and brainwaves?

A new system spearheaded by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aims to do exactly that, allowing users to instantly correct robot mistakes with nothing more than brain signals and the flick of a finger.

Building off the team’s past work focused on simple binary-choice activities, the new work expands the scope to multiple-choice tasks, opening up new possibilities for how human workers could manage teams of robots.

By monitoring brain activity, the system can detect in real-time if a person notices an error as a robot does a task. Using an interface that measures muscle activity, the person can then make hand gestures to scroll through and select the correct option for the robot to execute.

The team demonstrated the system on a task in which a robot moves a power drill to one of three possible targets on the body of a mock plane. Importantly, they showed that the system works on people it’s never seen before, meaning that organizations could deploy it in real-world settings without needing to train it on users.

“This work combining EEG and EMG feedback enables natural human-robot interactions for a broader set of applications than we've been able to do before using only EEG feedback,” says CSAIL Director Daniela Rus, who supervised the work. “By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity.”

PhD candidate Joseph DelPreto was lead author on a paper about the project alongside Rus, former CSAIL postdoc Andres F. Salazar-Gomez, former CSAIL research scientist Stephanie Gil, research scholar Ramin M. Hasani, and Boston University Professor Frank H. Guenther. The paper will be presented at the Robotics: Science and Systems (RSS) conference taking place in Pittsburgh next week.

In most previous work, systems could generally only recognize brain signals when people trained themselves to “think” in very specific but arbitrary ways and when the system was trained on such signals. For instance, a human operator might have to look at different light displays that correspond to different robot tasks during a training session.

Not surprisingly, such approaches are difficult for people to handle reliably, especially if they work in fields like construction or navigation that already require intense concentration.

Meanwhile, Rus’ team harnessed the power of brain signals called “error-related potentials” (ErrPs), which researchers have found to naturally occur when people notice mistakes. If there’s an ErrP, the system stops so the user can correct it; if not, it carries on.
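
As a control-flow sketch of that idea (the robot, EEG, and EMG interfaces below are placeholders, not a real API), the ErrP acts as a gate on the robot's plan:

[code]
# Sketch of the ErrP-gated supervision loop: the EEG says *whether* the robot
# erred, and EMG gestures say *which* option to pick instead.
def supervise_robot(robot, eeg, emg, errp_detector, gesture_decoder):
    while not robot.task_done():
        robot.execute_next_action()

        # No error-related potential detected: the robot simply carries on.
        if not errp_detector.detect(eeg.latest_window()):
            continue

        # ErrP detected: pause and let the human scroll/select with gestures.
        robot.pause()
        while True:
            gesture = gesture_decoder.decode(emg.latest_window())
            if gesture == "scroll":
                robot.highlight_next_option()
            elif gesture == "select":
                robot.execute(robot.highlighted_option())
                break
[/code]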

“What’s great about this approach is that there’s no need to train users to think in a prescribed way,” says DelPreto. “The machine adapts to you, and not the other way around.”

For the project the team used “Baxter,” a humanoid robot from Rethink Robotics. With human supervision, the robot went from choosing the correct target 70 percent of the time to more than 97 percent of the time.

To create the system the team harnessed the power of electroencephalography (EEG) for brain activity and electromyography (EMG) for muscle activity, putting a series of electrodes on the users’ scalp and forearm.

Both metrics have some individual shortcomings: EEG signals are not always reliably detectable, while EMG signals can sometimes be difficult to map to motions that are any more specific than “move left or right.” Merging the two, however, allows for more robust bio-sensing and makes it possible for the system to work on new users without training.
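
A toy fusion rule along these lines (purely illustrative, not the paper's classifier) shows why the two signals complement each other:

[code]
# Hypothetical decision rule: EEG flags that something is wrong, EMG says what
# to do about it; neither signal alone is enough.
def fused_decision(errp_probability, gesture_label, errp_threshold=0.5):
    if errp_probability < errp_threshold:
        return "continue"                      # no evidence of an error
    if gesture_label in ("left", "center", "right"):
        return "retarget_" + gesture_label     # error flagged and a target chosen
    return "pause"                             # error flagged, waiting for a gesture
[/code]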

“By looking at both muscle and brain signals, we can start to pick up on a person's natural gestures along with their snap decisions about whether something is going wrong,” says DelPreto. “This helps make communicating with a robot more like communicating with another person.”

The team says that they could imagine the system one day being useful for the elderly, or workers with language disorders or limited mobility.

“We’d like to move away from a world where people have to adapt to the constraints of machines,” says Rus. “Approaches like this show that it’s very much possible to develop robotic systems that are a more natural and intuitive extension of us.”

Source: MIT News

65
Software Engineering / Faster analysis of medical images
« on: July 03, 2018, 05:34:24 PM »
Medical image registration is a common technique that involves overlaying two images, such as magnetic resonance imaging (MRI) scans, to compare and analyze anatomical differences in great detail. If a patient has a brain tumor, for instance, doctors can overlap a brain scan from several months ago onto a more recent scan to analyze small changes in the tumor’s progress.

This process, however, can often take two hours or more, as traditional systems meticulously align each of potentially a million pixels in the combined scans. In a pair of upcoming conference papers, MIT researchers describe a machine-learning algorithm that can register brain scans and other 3-D images more than 1,000 times more quickly using novel learning techniques.

The algorithm works by “learning” while registering thousands of pairs of images. In doing so, it acquires information about how to align images and estimates some optimal alignment parameters. After training, it uses those parameters to map all pixels of one image to another, all at once. This reduces registration time to a minute or two using a normal computer, or less than a second using a GPU, with accuracy comparable to state-of-the-art systems.

“The tasks of aligning a brain MRI shouldn’t be that different when you’re aligning one pair of brain MRIs or another,” says co-author on both papers Guha Balakrishnan, a graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Department of Electrical Engineering and Computer Science (EECS). “There is information you should be able to carry over in how you do the alignment. If you’re able to learn something from previous image registration, you can do a new task much faster and with the same accuracy.”

The papers are being presented at the Conference on Computer Vision and Pattern Recognition (CVPR), held this week, and at the Medical Image Computing and Computer Assisted Interventions Conference (MICCAI), held in September. Co-authors are: Adrian Dalca, a postdoc at Massachusetts General Hospital and CSAIL; Amy Zhao, a graduate student in CSAIL; Mert R. Sabuncu, a former CSAIL postdoc and now a professor at Cornell University; and John Guttag, the Dugald C. Jackson Professor in Electrical Engineering at MIT.

Retaining information

MRI scans are basically hundreds of stacked 2-D images that form massive 3-D images, called “volumes,” containing a million or more 3-D pixels, called “voxels.” Therefore, it’s very time-consuming to align all voxels in the first volume with those in the second. Moreover, scans can come from different machines and have different spatial orientations, meaning matching voxels is even more computationally complex.

“You have two different images of two different brains, put them on top of each other, and you start wiggling one until one fits the other. Mathematically, this optimization procedure takes a long time,” says Dalca, senior author on the CVPR paper and lead author on the MICCAI paper.

This process becomes particularly slow when analyzing scans from large populations. For neuroscientists analyzing variations in brain structures across hundreds of patients with a particular disease or condition, for instance, it could potentially take hundreds of hours.

That’s because those algorithms have one major flaw: They never learn. After each registration, they dismiss all data pertaining to voxel location. “Essentially, they start from scratch given a new pair of images,” Balakrishnan says. “After 100 registrations, you should have learned something from the alignment. That’s what we leverage.”

The researchers’ algorithm, called “VoxelMorph,” is powered by a convolutional neural network (CNN), a machine-learning approach commonly used for image processing. These networks consist of many nodes that process image and other information across several layers of computation.

In the CVPR paper, the researchers trained their algorithm on 7,000 publicly available MRI brain scans and then tested it on 250 additional scans.

During training, brain scans were fed into the algorithm in pairs. Using a CNN and modified computation layer called a spatial transformer, the method captures similarities of voxels in one MRI scan with voxels in the other scan. In doing so, the algorithm learns information about groups of voxels — such as anatomical shapes common to both scans — which it uses to calculate optimized parameters that can be applied to any scan pair.

When fed two new scans, a simple mathematical “function” uses those optimized parameters to rapidly calculate the exact alignment of every voxel in both scans. In short, the algorithm’s CNN component gains all necessary information during training so that, during each new registration, the entire registration can be executed using one, easily computable function evaluation.
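
A much-simplified 2-D sketch of this pipeline is below (illustrative PyTorch only; the real method operates on 3-D volumes, and the architecture and loss weights here are assumptions):

[code]
# Sketch: a CNN predicts a dense displacement field from the stacked fixed and
# moving scans; a spatial-transformer step warps the moving scan accordingly.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegistrationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),        # per-pixel (dx, dy) displacement
        )

    def forward(self, fixed, moving):
        flow = self.net(torch.cat([fixed, moving], dim=1))
        return warp(moving, flow), flow

def warp(image, flow):
    """Warp `image` by the displacement field `flow` (the spatial transformer)."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).float().unsqueeze(0)   # (1, H, W, 2) pixel grid
    grid = base + flow.permute(0, 2, 3, 1)                      # add predicted offsets
    gx = 2.0 * grid[..., 0] / (w - 1) - 1.0                     # normalize to [-1, 1]
    gy = 2.0 * grid[..., 1] / (h - 1) - 1.0
    return F.grid_sample(image, torch.stack([gx, gy], dim=-1), align_corners=True)

def unsupervised_loss(fixed, warped, flow, smooth_weight=0.01):
    """Image similarity plus a smoothness penalty on the displacement field."""
    similarity = F.mse_loss(warped, fixed)
    dx = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean()
    dy = (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
    return similarity + smooth_weight * (dx + dy)
[/code]

Training minimizes the unsupervised loss over many scan pairs; at test time a single forward pass through the network replaces the slow iterative optimization.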

The researchers found their algorithm could accurately register all of their 250 test brain scans — those registered after the training set — within two minutes using a traditional central processing unit, and in under one second using a graphics processing unit.

Importantly, the algorithm is “unsupervised,” meaning it doesn’t require additional information beyond image data. Some registration algorithms incorporate CNN models but require a “ground truth,” meaning another traditional algorithm is first run to compute accurate registrations. The researchers’ algorithm maintains its accuracy without that data.

The MICCAI paper develops a refined VoxelMorph algorithm that “says how sure we are about each registration,” Balakrishnan says. It also guarantees the registration “smoothness,” meaning it doesn’t produce folds, holes, or general distortions in the composite image. The paper presents a mathematical model that validates the algorithm’s accuracy using something called a Dice score, a standard metric to evaluate the accuracy of overlapped images. Across 17 brain regions, the refined VoxelMorph algorithm scored the same accuracy as a commonly used state-of-the-art registration algorithm, while providing runtime and methodological improvements.
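
For reference, the Dice score is a standard overlap metric: for two binary masks A and B it equals 2|A ∩ B| / (|A| + |B|), with 1.0 meaning perfect overlap. A minimal implementation:

[code]
import numpy as np

def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total > 0 else 1.0
[/code]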

Beyond brain scans

The speedy algorithm has a wide range of potential applications in addition to analyzing brain scans, the researchers say. MIT colleagues, for instance, are currently running the algorithm on lung images.

The algorithm could also pave the way for image registration during operations. Various scans of different qualities and speeds are currently used before or during some surgeries. But those images are not registered until after the operation. When resecting a brain tumor, for instance, surgeons sometimes scan a patient’s brain before and after surgery to see if they’ve removed all the tumor. If any bit remains, they’re back in the operating room.

With the new algorithm, Dalca says, surgeons could potentially register scans in near real-time, getting a much clearer picture of their progress. “Today, they can’t really overlap the images during surgery, because it will take two hours, and the surgery is ongoing,” he says. “However, if it only takes a second, you can imagine that it could be feasible.”

"There is a ton of work using existing deep learning frameworks/loss functions with little creativity or imagination. This work departs from that mass of research with a very clever formulation of nonlinear warping as a learning problem ... [where] learning takes hours, but applying the network takes seconds," says Bruce Fischl, a professor in radiology at Harvard Medical School and a neuroscientist at Massachusetts General Hospital. "This is a case where a big enough quantitative change [of image registration] — from hours to seconds — becomes a qualitative one, opening up new possibilities such as running the algorithm during a scan session while a patient is still in the scanner, enabling clinical decision making about what types of data needs to be acquired and where in the brain it should be focused without forcing the patient to come back days or weeks later."

Fischl adds that his lab, which develops open-source software tools for neuroimaging analysis, hopes to use the algorithm soon. "Our biggest drawback is the length of time it takes us to analyze a dataset, and by far the more computationally intensive portion of that analysis is nonlinear warping, so these tools are of great interest to me," he says.

Source: MIT News

66
@Ishak

I won't say it's compulsory, but it will definitely add extra value in front of an interview board or on your CV; for big tech companies like Google, Facebook, etc., it's practically necessary.

My point is not really the online contests themselves; I am trying to get you to focus on problem-solving capacity.

That said, if you attend online contests you will get inspired, much like in a game. An online contest is a game in which participants from around the world compete to win the race, so you will have fun and feel motivated to beat them.

So my main point is problem-solving capacity, and online contests are a platform where you can apply and test that capacity against programmers worldwide.

If you start competing, you will try to come first in the contest, and if you manage it you will feel encouraged; there is also a chance to get connected with programmers around the world, and with big tech companies as well.

Finally, the key term is "problem-solving capacity", and "online contests" are how you test yourself. Now tell me: if you can improve your problem-solving capacity and dare to take on anyone in a contest, then:

"Won't it add any value to you or to your CV?"

In industry, too, you need to prove yourself by solving complex problems efficiently, so obviously this will increase your chances of getting the job.

I hope that makes my point clear.

67
I am getting new ideas from you all.

Thanks to all for sharing your thoughts.

68
@Dipto_Paul, great observation, and your story is also inspiring for new students. Please share your story and thoughts with your juniors so that they can find the right way to start their journey as software engineering students.

Let's work on "Self Development" by sharing our thoughts with each other and helping each other make the right decisions.

69
Software Engineering / Programming Languages for ACM-ICPC Problem Solving
« on: November 02, 2017, 03:00:14 PM »
Which programming language do you prefer for the ACM-ICPC contest?
Please indicate your preference by voting in the poll.

Thanks.

70
Software Engineering / Multiple Programming Languages or Problem Solving
« on: November 01, 2017, 10:53:33 PM »
First of all, I would like to say: "A programming language is nothing but syntax and a set of rules; logical thinking and problem-solving capacity should be our goal."

When students come to me for career advice (as I have industry experience), I suggest they start with C from a book (a hard copy, not a soft copy), rather than from online resources first. If students begin by searching the web, they tend to get confused and cannot settle on a starting point.

So my suggestion is to encourage students to develop their logic using C from any good Bangla book rather than the internet. After successfully completing the book, they can start solving problems step by step, from ad hoc to complex.

From our department's side, we need to develop a lab facility where students can work on their skills under proper guidance. Also, as discussed in the last meeting, we need to ensure that every advisor encourages their students to develop logical thinking capacity using C.

I believe that those who develop their problem-solving capacity using C can easily move to any other language when needed, because a programming language is just syntax. So, first of all, we need to develop a problem-solving mentality and capacity among the students, which is the most important part of being a software engineering student.

I would like opinions from the experts.

Thanks.

72
Teaching & Research Forum / Re: Ten simple rules for structuring paper
« on: November 01, 2017, 09:58:42 PM »
A great and informative resource...
