
Topics - Tasnim_Katha

Kurt Luther, Virginia Tech assistant professor of computer science, has developed a free software platform that uses crowdsourcing to significantly increase the ability of algorithms to identify faces in photos.

Through the software platform, called Photo Sleuth, Luther seeks to uncover the mysteries of the nearly 4 million Civil War-era photographs that may exist in the historical record.

Luther will present his research surrounding the Photo Sleuth platform on March 19 at the Association for Computing Machinery's Intelligent User Interfaces conference in Los Angeles, California. He will also demonstrate Photo Sleuth at the grand opening of the expanded American Civil War Museum, in Richmond, Virginia, on May 4, 2019.

Luther, a history buff himself, was inspired to develop the software for Civil War Photo Sleuth in 2013 while visiting the Heinz History Center's exhibit called "Pennsylvania's Civil War" in Pittsburgh, Pennsylvania. There he stumbled upon a Civil War-era portrait of Oliver Croxton, his great-great-great uncle who served in Company E of the 134th Pennsylvania, clad in a corporal's uniform.

"Seeing my distant relative staring back at me was like traveling through time," said Luther. "Historical photos can tell us a lot about not only our own familial history but also inform the historical record of the time more broadly than just reading about the event in a history book."

The Civil War Photo Sleuth project, funded primarily by the National Science Foundation, was officially launched as a web-based platform at the National Archives in Washington, D.C., on Aug. 1, 2018, and allows users to upload photos, tag them with visual cues, and connect them to profiles of Civil War soldiers with detailed records of military history. Photo Sleuth's initial reference database contained more than 15,000 identified Civil War soldier portraits from public domain sources like the U.S. Military History Institute and other private collections.

Prior to the project's official launch in August, Luther and his team -- which includes academic and historical collaborators, the Virginia Center for Civil War Studies, and Military Images magazine -- won the $25,000 Microsoft Cloud AI Research Challenge and the Best Demo Award at the Human Computation and Crowdsourcing 2018 conference in Zurich, Switzerland.

According to Luther, the key to the site's post-launch success has been the ability to build a strong user community. More than 600 users contributed more than 2,000 Civil War photos to the website in the first month after the launch, and roughly half of those photos were unidentified. Over 100 of these unknown photos were linked to specific soldiers, and an expert analysis found that over 85 percent of these proposed identifications were probably or definitely correct. Presently, the database has grown to over 4,000 registered users and more than 8,000 photos.

"Typically, crowdsourced research such as this is challenging for novices if users don't have specific knowledge of the subject area," said Luther. "The step-by-step process of tagging visual clues and applying search filters linked to military service records makes this detective work more accessible, even for those that may not have a deeper knowledge of Civil War military history."

Person identification tasks become more challenging as the candidate pool grows, because larger pools carry a greater risk of false positives. The novel approach behind Civil War Photo Sleuth is based on the analogy of finding a needle in a haystack. The data pipeline has three components: building the haystack, narrowing down the haystack, and finding the needle in the haystack. Combined, these steps let users identify unknown soldiers while reducing the risk of false positives.

Building the haystack is done by incentivizing users to upload scanned images of the fronts and backs of Civil War photos. Any time a user uploads a photo to identify it, the photo gets added to the site's digital archive or "haystack," making it available for future searches.

Following upload, the user tags metadata related to the photograph such as photo format or inscriptions, as well as visual clues, such as coat color, chevrons, shoulder straps, collar insignia, and hat insignia. These tags are linked to search filters to prioritize the most likely matches. For example, a soldier tagged with the "hunting horn" hat insignia would suggest potential matches who served in the infantry, while hiding results from the cavalry or artillery. Next, the site uses state-of-the-art face recognition technology to eliminate very different-looking faces and sort the remaining ones by similarity. Both the tagging and face recognition steps narrow down the haystack.

Finally, users find the needle in the haystack by exploring the highest-probability matches in more detail. A comparison tool with pan and zoom controls helps users carefully inspect a possible match and, if they decide it's a match, link the previously unknown photo to its new identity and biographical details.
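The narrow-then-rank pipeline described above can be sketched in code. Everything below is illustrative: the record fields, the tag vocabulary, and the stand-in similarity function are hypothetical, not Photo Sleuth's actual schema or face-recognition model.

```python
# Hypothetical sketch of a narrow-then-rank identification pipeline.
# Field names, tags, and the similarity stand-in are illustrative only.

def narrow_haystack(candidates, tags):
    """Keep only candidates whose service record is consistent with
    every visual clue tagged on the unknown photo."""
    # e.g. a "hunting horn" hat insignia implies infantry service
    branch_by_insignia = {"hunting horn": "infantry",
                          "crossed sabers": "cavalry",
                          "crossed cannons": "artillery"}
    required = {branch_by_insignia[t] for t in tags if t in branch_by_insignia}
    return [c for c in candidates if not required or c["branch"] in required]

def find_needle(candidates, face_similarity, unknown, top_k=5):
    """Rank the narrowed pool by similarity to the unknown face."""
    ranked = sorted(candidates,
                    key=lambda c: face_similarity(unknown, c),
                    reverse=True)
    return ranked[:top_k]

# Toy data; a real system would use face embeddings, not string overlap.
pool = [{"name": "A. Smith", "branch": "infantry"},
        {"name": "B. Jones", "branch": "cavalry"},
        {"name": "C. Croxton", "branch": "infantry"}]
sim = lambda u, c: len(set(u) & set(c["name"]))  # stand-in for face recognition
narrowed = narrow_haystack(pool, ["hunting horn"])
matches = find_needle(narrowed, sim, unknown="Croxton")
```

Tagging eliminates the cavalry candidate before face comparison ever runs, which is how the real system keeps false positives down in a large archive.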

The military records used by the filters come from myriad public sources, including the National Park Service Soldiers and Sailors Database.

The approach behind Photo Sleuth has broad applications beyond identifying historical photos. The software suggests new ways to build person identification systems that look beyond face recognition and leverage the complementary strengths of human and artificial intelligence.

The artificial intelligence software, created by researchers at Imperial College London and the University of Melbourne, has been able to predict the prognosis of patients with ovarian cancer more accurately than current methods. It can also predict what treatment would be most effective for patients following diagnosis.

The trial, published in Nature Communications, took place at Hammersmith Hospital, part of Imperial College Healthcare NHS Trust.

Researchers say that this new technology could help clinicians administer the best treatments to patients more quickly and paves the way for more personalised medicine. They hope that the technology can be used to stratify ovarian cancer patients into groups based on the subtle differences in the texture of their cancer on CT scans rather than classification based on what type of cancer they have, or how advanced it is.

Professor Eric Aboagye, lead author and Professor of Cancer Pharmacology and Molecular Imaging, at Imperial College London, said:

"The long-term survival rates for patients with advanced ovarian cancer are poor despite the advancements made in cancer treatments. There is an urgent need to find new ways to treat the disease. Our technology is able to give clinicians more detailed and accurate information on how patients are likely to respond to different treatments, which could enable them to make better and more targeted treatment decisions."

Professor Andrea Rockall, co-author and Honorary Consultant Radiologist, at Imperial College Healthcare NHS Trust, added:

"Artificial intelligence has the potential to transform the way healthcare is delivered and improve patient outcomes. Our software is an example of this and we hope that it can be used as a tool to help clinicians with how to best manage and treat patients with ovarian cancer."

Ovarian cancer is the sixth most common cancer in women and usually affects women after the menopause or those with a family history of the disease. There are 6,000 new cases of ovarian cancer a year in the UK but the long-term survival rate is just 35-40 per cent as the disease is often diagnosed at a much later stage once symptoms such as bloating are noticeable. Early detection of the disease could improve survival rates.

Doctors diagnose ovarian cancer in a number of ways including a blood test to look for a substance called CA125 -- an indication of cancer -- followed by a CT scan that uses x-rays and a computer to create detailed pictures of the ovarian tumour. This helps clinicians know how far the disease has spread and determines the type of treatment patients receive, such as surgery and chemotherapy.

However, the scans can't give clinicians detailed insight into patients' likely overall outcomes or on the likely effect of a therapeutic intervention.

Researchers used a mathematical software tool called TEXLab to identify the aggressiveness of tumours in CT scans and tissue samples from 364 women with ovarian cancer between 2004 and 2015.

The software examined four biological characteristics of the tumours which significantly influence overall survival -- structure, shape, size and genetic makeup -- to assess the patients' prognosis. The patients were then given a score known as Radiomic Prognostic Vector (RPV) which indicates how severe the disease is, ranging from mild to severe.
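As a rough illustration of how several tumour features can be summarized into one prognostic score, here is a hypothetical radiomic score: a weighted combination of standardized features squashed onto a 0-to-1 severity scale. The feature names and weights are invented for illustration and are not TEXLab's actual model.

```python
import math

# Hypothetical radiomic risk score: a linear combination of standardized
# tumour features mapped to (0, 1), higher meaning worse prognosis.
# Feature names and weights are illustrative assumptions, not TEXLab's.

def radiomic_score(features, weights):
    z = sum(weights[k] * features[k] for k in weights)
    return 1 / (1 + math.exp(-z))  # logistic squashing to (0, 1)

weights = {"texture_entropy": 1.2, "shape_irregularity": 0.8,
           "volume_z": 0.5, "genetic_signature": 0.9}
mild = {"texture_entropy": -1.0, "shape_irregularity": -0.5,
        "volume_z": -0.3, "genetic_signature": -0.8}
severe = {k: -v for k, v in mild.items()}  # mirror-image severe profile
```

A real prognostic vector would be fit to survival data (for example with a Cox proportional hazards model) rather than hand-weighted like this.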

The researchers compared the results with blood tests and current prognostic scores used by doctors to estimate survival. They found that the software was up to four times more accurate for predicting deaths from ovarian cancer than standard methods.

The team also found that the five per cent of patients with the highest RPV scores survived less than two years. High RPV was also associated with chemotherapy resistance and poor surgical outcomes, suggesting that RPV could serve as a biomarker to predict how patients would respond to treatments.

Professor Aboagye suggests that this technology can be used to identify patients who are unlikely to respond to standard treatments and offer them alternative treatments.

The researchers will carry out a larger study to see how accurately the software can predict the outcomes of surgery and/or drug therapies for individual patients.

The study was funded by the NIHR Imperial Biomedical Research Centre, the Imperial College Experimental Cancer Medicine Centre and Imperial College London Tissue Bank.

This research is an example of the work carried out by Imperial College Academic Health Science Centre, a joint initiative between Imperial College London and three NHS hospital trusts. It aims to transform healthcare by turning scientific discoveries into medical advances to benefit local, national and global populations in as fast a timeframe as possible.

Researchers have created new AI software that can identify cardiac rhythm devices in x-rays more accurately and quickly than current methods.

The team believes this software could speed up the diagnosis and treatment of patients with faulty devices in an emergency setting.

The software, created by researchers at Imperial College London, has been able to identify the make and model of different cardiac rhythm devices, such as pacemakers and defibrillators, within seconds. The study, published in JACC: Clinical Electrophysiology, took place at Hammersmith Hospital, part of Imperial College Healthcare NHS Trust.

Dr James Howard, Clinical Research Fellow at Imperial College London and lead author of the study, said: "Pacemakers and defibrillators have improved the lives of millions of patients from around the world. However, in some rare cases these devices can fail and patients can deteriorate as a result. In these situations, clinicians must quickly identify the type of device a patient has so they can provide treatment such as changing the device's settings or replacing the leads. Unfortunately, current methods are slow and out-dated and there is a real need to find new and improved ways of identifying devices during emergency settings. Our new software could be a solution as it can identify devices accurately and instantly. This could help clinicians make the best decisions for treating patients."

More than one million people around the world undergo implantation of a cardiac rhythm device each year, with over 50,000 being implanted per year in the UK. These devices are placed under the patients' skin to either help the heart's electrical system function properly or measure heart rhythm. Pacemakers treat slow heart rhythms by 'pacing' the heart to beat faster, whilst defibrillators treat fast heart rhythms by delivering electric shocks to reset the heartbeat back to a normal rhythm.

However, in some rare cases these devices can lose their ability to control the heartbeat, either because the device malfunctions or the wires connecting it to the heart move out of the correct position. When this happens, patients may experience palpitations, loss of consciousness or inappropriate electric shocks.

In these situations, clinicians need to determine the model of a device to investigate why it has failed. Unless they have access to the records of the hospital where implantation took place, or the patient can tell them, staff must use a flowchart algorithm to identify pacemakers by a process of elimination. The flowchart contains a series of shapes and circuit-board components of different pacemakers, designed to help clinicians identify the make and model of a patient's device. Not only is this time-consuming, but these flowcharts are now outdated and therefore inaccurate. This can result in delays to delivering care to patients, who are often in critical condition.

In the new study, researchers trained a software program known as a neural network on images of more than 1,600 different cardiac devices from patients.

To use the neural network, the clinician uploads the X-ray image containing the device into a computer and the software reads the image to give a result on the make and model of the device within seconds.
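That workflow can be sketched as follows. The classifier is stubbed with a stand-in callable, and the device labels are illustrative examples rather than the study's actual label set.

```python
import math

# Sketch of the identification workflow: a radiograph is passed to a
# trained classifier, which returns per-class scores; the top class is
# reported with its confidence. The "model" here is a stand-in callable;
# the study used a convolutional neural network, and these labels are
# illustrative, not the study's actual classes.

LABELS = ["Device A (pacemaker)", "Device B (defibrillator)", "Device C (CRT)"]

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def identify_device(image, model):
    """Return the top label and its confidence for one X-ray image."""
    probs = softmax(model(image))
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

# Stand-in "model": any callable mapping an image to per-class logits.
fake_model = lambda img: [0.2, 3.1, 0.4]
label, conf = identify_device("xray.png", fake_model)
```

The clinically useful property is that inference is near-instant: once trained, a forward pass replaces minutes of flowchart elimination.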

The team used the programme to see if it could identify devices in radiographic images from more than 1,500 patients treated at Hammersmith Hospital between 1998 and 2018. They then compared its results with those of five cardiologists who used the current flowchart algorithm to identify the devices.

The team found that the software outperformed current methods. It was 99 per cent accurate in identifying the manufacturer of a device, compared with only 72 per cent accuracy for the flowchart. The team suggests the software could greatly speed up the care of patients with heart rhythm device problems.

The researchers will aim to carry out a further trial to validate the results in a larger group of patients and investigate ways to create a more portable device that can be used on hospital wards.

The research was funded by NIHR Imperial Biomedical Research Centre, the Medical Research Council, the Wellcome Trust and the British Heart Foundation.

Software Engineering / Safeguarding hardware from cyberattack
« on: May 06, 2019, 05:55:04 PM »
Researchers have developed an algorithm that safeguards hardware from attacks that aim to steal data. In these attacks, hackers detect variations in power consumption and electromagnetic radiation in electronic devices' hardware and use those variations to steal encrypted information.

Researchers with the University of Wyoming and the University of Cincinnati recently published their work in an Institution of Engineering and Technology (IET) journal.

Electronic devices appear more secure than ever before. Devices that used to rely on passwords now use Touch ID, or even face recognition software. Unlocking our phones is like entering a 21st century Batcave, with high-tech security measures guarding the entry.

But protecting software is only one part of electronic security. Hardware is also susceptible to attacks.

"In general, we believe that because we write secure software, we can secure everything," said University of Wyoming assistant professor Mike Borowczak, Ph.D., who graduated from UC. He and his advisor, UC professor Ranga Vemuri, Ph.D., led the project.

"Regardless of how secure you can make your software, if your hardware leaks information, you can basically bypass all those security mechanisms," Borowczak said.

Devices such as remote car keys, cable boxes and even credit card chips are all vulnerable to hardware attacks, typically because of their design. These devices are small and lightweight and operate on minimal power. Engineers optimize designs so the devices can work within these low-power constraints.

"The problem is if you try to absolutely minimize all the time, you're basically selectively optimizing," Borowczak said. "You're optimizing for speed, power, area and cost, but you're taking a hit on security."

When something like a cable box first turns on, it's decoding and encoding specific manufacturer information tied to its security. This decoding and encoding process draws more power and emits more electromagnetic radiation than when all of the other functions are on. Over time, these variations in power and radiation create a pattern unique to that cable box, and that unique signature is exactly what hackers are looking for.

"If you could steal information from something like a DVR early on, you could basically use it to reverse engineer and figure out how the decryption was happening," Borowczak said.

Hackers don't need physical access to a device to take this information. Attackers can remotely detect frequencies in car keys and break into a car from more than 100 yards away.

To secure the hardware in these devices, Vemuri and Borowczak went back to square one: these devices' designs.

Borowczak and Vemuri aim to restructure the design and code devices in a way that doesn't leak any information. To do this, they developed an algorithm that provides more secure hardware.

"You take the design specification and restructure it at an algorithmic level, so that the algorithm, no matter how it is implemented, draws the same amount of power in every cycle," Vemuri said. "We've basically equalized the amount of power consumed across all the cycles, whereby even if attackers have power measurements, they can't do anything with that information."
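The same principle is familiar from constant-time programming. As a software analogy for the hardware restructuring described here (not the authors' actual algorithm), compare a data-dependent branch with a branch-free selection that performs identical operations regardless of the secret bit:

```python
# Software analogy for power-balanced design: do the same work every
# cycle regardless of secret data, so the observable trace carries no
# information. This is an illustration of the principle, not the
# authors' hardware-level algorithm.

def leaky_select(secret_bit, a, b):
    # Data-dependent branch: the two paths do different work, so the
    # timing/power profile varies with the secret bit.
    if secret_bit:
        return a
    return b

def balanced_select(secret_bit, a, b):
    # Branch-free: identical operations on every call, independent of
    # the secret bit (mask is all-ones for 1, all-zeros for 0).
    mask = -secret_bit
    return (a & mask) | (b & ~mask)
```

Both functions compute the same result; only the balanced version keeps its operation sequence, and hence its power draw, independent of the secret.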

What's left is a more secure device with a more automated design. Rather than manually securing each hardware component, the algorithm automates the process. On top of that, a device created using this algorithm only uses about 5 percent more power than an insecure device, making the work commercially viable.

Software and hardware security is an ongoing game of cat and mouse: As security technologies improve, hackers eventually find ways around these barriers. Hardware security is further complicated by the expanding network of devices and their interactivity, also known as the Internet of Things.

Innovative research like the work by Vemuri and Borowczak can give people an extra layer of safety and security in a world of connected devices.

Tracking the firings of individual neurons is like trying to discern who is saying what in a football stadium full of screaming fans. Until recently, neuroscientists have had to tediously track each neuron by hand.

"People spent more time analyzing their data to extract activity traces than actually collecting it," says Dmitri Chklovskii, who leads the neuroscience group at the Center for Computational Biology (CCB) at the Flatiron Institute in New York City.

A breakthrough software tool called CaImAn automates this arduous process using a combination of standard computational methods and machine-learning techniques. In a paper published in the journal eLife in January, the software's creators demonstrate that CaImAn achieves near-human accuracy in detecting the locations of active neurons based on calcium imaging data.

CaImAn (an abbreviation of calcium imaging analysis) has been freely available for a few years and has already proved invaluable to the calcium imaging community, with more than 100 labs using the software. The latest iteration of CaImAn can run on a standard laptop and analyze data in real time, meaning scientists can analyze data as they run experiments. "My lab is excited about being able to use a tool like this," says Duke University neuroscientist John Pearson, who was not involved in the software's development.

CaImAn is the product of an effort initiated by Chklovskii within his group at CCB. He brought on Eftychios Pnevmatikakis and later Andrea Giovannucci to spearhead the project. Their aim was to help tackle the enormous datasets produced by a method called calcium imaging.

That technique involves adding a special dye to brain tissue or to neurons in a dish. The dye binds to the calcium ions responsible for activating neurons. Under ultraviolet light, the dye lights up. Fluorescence only occurs when the dye binds to a calcium ion, allowing researchers to visually track a neuron's activity.

Analyzing the data gathered via calcium imaging poses a significant challenge. The process generates a flood of data -- up to 1 terabyte an hour of flickering movies -- that rapidly becomes overwhelming. "One experimenter can fill up the largest commercially available hard drive in one day," says Michael Häusser, a neuroscientist at University College London whose team tested CaImAn.

The data are also noisy. Much like mingling voices, fluorescent signals from different neurons often overlap, making it difficult to pick out individual neurons. Moreover, brain tissue jiggles, adding to the challenge of tracking the same neuron over time.
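The demixing problem can be written as a matrix factorization: the movie Y (pixels x frames) is modeled as spatial footprints A (pixels x neurons) times temporal traces C (neurons x frames) plus noise. CaImAn fits A and C jointly under constraints (constrained nonnegative matrix factorization); the toy sketch below, built on synthetic data, only illustrates the easier half of that problem, recovering the traces by least squares when the footprints are known.

```python
import numpy as np

# Toy illustration of the demixing model behind tools like CaImAn:
#   Y (pixels x frames) ~= A (pixels x neurons) @ C (neurons x frames) + noise
# CaImAn estimates A and C jointly; here we assume A is known and
# recover C by least squares, using synthetic data.

rng = np.random.default_rng(0)
n_pixels, n_neurons, n_frames = 50, 3, 200
A = np.abs(rng.normal(size=(n_pixels, n_neurons)))       # spatial footprints
C_true = np.abs(rng.normal(size=(n_neurons, n_frames)))  # activity traces
Y = A @ C_true + 0.01 * rng.normal(size=(n_pixels, n_frames))  # noisy movie

# Demix: project the movie onto the known footprints.
C_hat, *_ = np.linalg.lstsq(A, Y, rcond=None)
```

Even with overlapping footprints, the least-squares projection recovers each neuron's trace; the hard part CaImAn solves is estimating the footprints themselves from jittery, noisy data.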

Pnevmatikakis, now a research scientist at the Flatiron Institute's Center for Computational Mathematics, first began developing the basic algorithm underlying CaImAn as a postdoc in Liam Paninski's lab at Columbia University.

"It was elegant mathematically and did a decent job, but we realized it didn't generalize well to different datasets," Pnevmatikakis says. "We wanted to transform it into a software suite that the community can use." That was partly why he was drawn to the neuroscience group at Flatiron, which develops new tools for analyzing large datasets.

Pnevmatikakis later began working with Giovannucci, then a postdoc at Princeton University, on applying the algorithm to tracking the activity of cerebellar granule cells, a densely packed, rapid-firing group of neurons. "Existing analysis tools were not powerful enough to disentangle the activity of this population of neurons and implied that they were all doing the same thing," says Giovannucci, who joined the CCB neuroscience group for three years to help develop the software for broader use. "The algorithm subtracts the background voices and focuses on a few," revealing that individual granule cells do indeed have distinct activity patterns.

Further work at the Flatiron Institute honed CaImAn's abilities and made the software easier for researchers to use for a variety of experiments without extensive customization.

The researchers recently tested CaImAn's accuracy by comparing its results with a human-generated dataset. The comparison showed that the software is nearly as accurate as humans at identifying active neurons but much more efficient. Its speediness allows researchers to adapt their experiments on the fly, improving studies of how specific bundles of neurons contribute to different behaviors. The human dataset also revealed high variability from person to person, highlighting the benefit of having a standardized tool for analyzing imaging data.

In addition to benchmarking accuracy, the researchers used the human-annotated results as a training dataset, developing machine-learning-based tools to enhance the CaImAn package. They have since made this dataset public, so that the community can use it to further extend CaImAn or to create new tools.

MIT researchers have designed a novel flash-storage system that could cut in half the energy and physical space required for one of the most expensive components of data centers: data storage.

Data centers are server farms that facilitate communication between users and web services, and are some of the most energy-consuming facilities in the world. In them, thousands of power-hungry servers store user data, and separate servers run app services that access that data. Other servers sometimes facilitate the computation between those two server clusters.

Most storage servers today use solid-state drives (SSDs), which use flash storage -- electronically programmable and erasable memory microchips with no moving parts -- to handle high-throughput data requests at high speeds. In a paper being presented at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems, the researchers describe a new system called LightStore that modifies SSDs to connect directly to a data center's network -- without needing any other components -- and to support computationally simpler and more efficient data-storage operations. Further software and hardware innovations seamlessly integrate the system into existing data center infrastructure.

In experiments, the researchers found a cluster of four LightStore units, called storage nodes, ran twice as efficiently as traditional storage servers, measured by the power consumption needed to field data requests. The cluster also required less than half the physical space occupied by existing servers.

The researchers broke down energy savings by individual data-storage operations to better capture the system's full energy savings. For "random write" operations, for instance, the most computationally intensive in flash memory, LightStore operated nearly eight times more efficiently than traditional servers.

The hope is that, one day, LightStore nodes could replace power-hungry servers in data centers. "We are replacing this architecture with a simpler, cheaper storage solution ... that's going to take half as much space and half the power, yet provide the same throughput capacity performance," says co-author Arvind, the Johnson Professor in Computer Science and Engineering and a researcher in the Computer Science and Artificial Intelligence Laboratory. "That will help you in operational expenditure, as it consumes less power, and capital expenditure, because energy savings in data centers translate directly to money savings."

Joining Arvind on the paper are: first author Chanwoo Chung, a graduate student in the Department of Electrical Engineering and Computer Science; and graduate students Jinhyung Koo and Junsu Im, and Professor Sungjin Lee, all of the Daegu Gyeongbuk Institute of Science and Technology (DGIST).

Adding "value" to flash

A major efficiency issue with today's data centers is that the architecture hasn't changed to accommodate flash storage. Years ago, data-storage servers consisted of relatively slow hard disks, along with lots of dynamic random-access memory circuits (DRAM) and central processing units (CPU) that help quickly process all the data pouring in from the app servers.

Today, however, hard disks have mostly been replaced with much faster flash drives. "People just plugged flash into where the hard disks used to be, without changing anything else," Chung says. "If you can just connect flash drives directly to a network, you won't need these expensive storage servers at all."

For LightStore, the researchers first modified SSDs to be accessed in terms of "key-value pairs," a very simple and efficient protocol for retrieving data. Basically, user requests appear as keys, like a string of numbers. Keys are sent to a server, which releases the data (value) associated with that key.

The concept is simple, but keys can be extremely large, so searching and inserting them solely on an SSD requires a lot of computing power, much of which is consumed by the traditional "flash translation layer" -- fairly complex software that runs on a separate module on a flash drive to manage and move data around. The researchers used data-structuring techniques to run this flash-management software with only a fraction of that computing power, allowing them to offload it entirely onto a tiny circuit in the flash drive that runs far more efficiently.

That offloading frees up separate CPUs already on the drive -- which are designed to simplify and more quickly execute computation -- to run custom LightStore software. This software uses data-structuring techniques to efficiently process key-value pair requests. Essentially, without changing the architecture, the researchers converted a traditional flash drive into a key-value drive. "So, we are adding this new feature for flash -- but we are really adding nothing at all," Arvind says.
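The key-value interface described above is far simpler than a full storage-server stack. This sketch shows only the shape of that protocol as a Python class; in the real system, these operations are handled inside the drive's flash-management hardware, not by a dictionary in host memory.

```python
from typing import Optional

# Sketch of a key-value drive interface of the kind LightStore exposes.
# The in-memory dict is a stand-in: the real system serves these
# operations from flash via its management circuit.

class KeyValueDrive:
    def __init__(self) -> None:
        self._store: dict = {}

    def put(self, key: bytes, value: bytes) -> None:
        self._store[key] = value

    def get(self, key: bytes) -> Optional[bytes]:
        return self._store.get(key)

    def delete(self, key: bytes) -> None:
        self._store.pop(key, None)

drive = KeyValueDrive()
drive.put(b"user:42:avatar", b"\x89PNG...")
value = drive.get(b"user:42:avatar")
```

Reducing storage to get/put/delete on opaque keys is what lets the heavy protocol logic move out of the storage server and into lightweight adapters on the app servers.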

Adapting and scaling

The challenge was then ensuring app servers could access data in LightStore nodes. In data centers, apps access data through a variety of structural protocols, such as file systems, databases, and other formats. Traditional storage servers run sophisticated software that provides the app servers access via all of these protocols. But this uses a good amount of computation energy and isn't suitable to run on LightStore, which relies on limited computational resources.

The researchers designed very computationally light software, called an "adapter," which translates all user requests from app services into key-value pairs. The adapters use mathematical functions to convert information about the requested data -- such as commands from the specific protocols and identification numbers of the app server -- into a key. It then sends that key to the appropriate LightStore node, which finds and releases the paired data. Because this software is computationally simpler, it can be installed directly onto app servers.
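The adapter idea can be sketched as a deterministic mapping from a protocol-specific request to a fixed-size key, plus a routing function that picks the responsible node. The field layout and the use of SHA-256 here are assumptions for illustration, not LightStore's actual key scheme.

```python
import hashlib

# Hypothetical adapter sketch: deterministically map a protocol-specific
# request (protocol name, app-server id, resource identifier) to a
# fixed-size key, then route that key to a storage node. The canonical
# format and hash choice are illustrative assumptions.

def request_to_key(protocol: str, app_id: int, resource: str) -> bytes:
    canonical = f"{protocol}|{app_id}|{resource}".encode()
    return hashlib.sha256(canonical).digest()  # 32-byte key

def route(key: bytes, n_nodes: int) -> int:
    """Pick the node responsible for this key (simple modulo sharding)."""
    return int.from_bytes(key[:8], "big") % n_nodes
```

Because the mapping is a pure function, any app server computes the same key for the same request, so no central lookup service is needed.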

"Whatever data you access, we do some translation that tells me the key and the value associated with it. In doing so, I'm also taking some complexity away from the storage servers," Arvind says.

One final innovation is that adding LightStore nodes to a cluster scales data throughput -- the rate at which data can be processed -- linearly. Traditionally, people stack SSDs in data centers to achieve higher throughput, but while storage capacity grows, throughput plateaus after only a few additional drives. In experiments, the researchers found that a cluster of four LightStore nodes sustained higher throughput than the same number of stacked SSDs.

Software Engineering / Machine learning for measuring roots
« on: May 06, 2019, 05:53:43 PM »
Researchers from the Centre for Research in Agricultural Genomics (CRAG) and La Salle-Ramon Llull University, both in Barcelona, Spain, have developed software that, through image processing and machine learning, lets researchers semi-automate the analysis of root growth of Arabidopsis thaliana seedlings growing directly on agar plates. The software, named MyRoot, has been made available to the research community free of charge. CRAG researchers have already saved significant labor and time using MyRoot, and its high efficiency and accuracy are demonstrated in an article recently published in The Plant Journal.

The root: a key element for agriculture

The root, which is responsible for anchoring the plant to the soil, is an essential organ for overall plant growth and development. Roots provide the necessary structural and functional support for the incorporation of nutrients and water from the soil. Characterization of different root traits is therefore important not only for understanding organ growth, but also for evaluating the impact of roots in agriculture. At CRAG, the research group led by Ana I. Caño-Delgado studies steroid hormone signaling effects on root development, using the small model plant Arabidopsis thaliana. To do so, researchers at Caño-Delgado's laboratory must measure the root length of a large number of arabidopsis seedlings carrying different genetic modifications and exposed to different conditions. Thanks to these investigations, they recently discovered how to create drought-resistant plants without affecting their growth.

Isabel Betegón-Putze has spent three years doing her doctoral thesis at CRAG, and during this time she has spent many hours measuring arabidopsis roots in the photos she takes. "I had tried some semi-automatic analysis tools, but they were not accurate enough and were very difficult to use," explains Betegón-Putze. Her thesis director, Ana I. Caño-Delgado, proposed a collaboration with the engineer Xavier Sevillano, from the Research Group in Media Technologies at La Salle, to develop new software that would streamline this process.

MyRoot: artificial intelligence to save time

Following Caño-Delgado's indications, Isabel Betegón-Putze worked hand in hand with the computer engineer Alejandro González, from Sevillano's team, to design a user-friendly software tool. "One of the strengths of MyRoot is that, apart from being very precise, it is very usable for the end user. For this, it has been key to include plant biologists like Isabel in the development process, taking into account their opinions and needs," explains Alejandro González.

One of the challenges the new software had to overcome was matching a trained researcher's ability to tell stems from roots in small arabidopsis seedlings, since at this stage the two look very similar. To do this, the researchers used machine learning techniques, training the algorithm with seedlings of different ages and characteristics. "We have trained the system by exposing it to many different situations," explains Xavier Sevillano. "Thanks to the use of machine learning in MyRoot, the software is very accurate when measuring roots," he adds.
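The stem-versus-root separation described above can be sketched as a small supervised classifier. The snippet below is a minimal illustration in Python, not the actual MyRoot code: the two features (segment width and intensity), all the numbers, and the nearest-centroid rule are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: each seedling segment is described by two
# hypothetical features -- mean width (px) and mean intensity. In this
# toy setup, roots are thinner and paler than stems (hypocotyls).
root_feats = rng.normal([2.0, 0.2], 0.3, size=(50, 2))
stem_feats = rng.normal([5.0, 0.7], 0.3, size=(50, 2))

# "Train" a nearest-centroid classifier: class 0 = root, class 1 = stem.
centroids = np.vstack([root_feats.mean(axis=0), stem_feats.mean(axis=0)])

def classify(segment_features):
    # Assign each segment to the class with the nearest centroid.
    d = np.linalg.norm(segment_features[:, None, :] - centroids[None], axis=2)
    return d.argmin(axis=1)

# Measure: sum the pixel lengths of segments labelled as root.
segments = np.array([[1.8, 0.25], [5.2, 0.65], [2.3, 0.15]])
lengths = np.array([120.0, 40.0, 95.0])
labels = classify(segments)
root_length_px = lengths[labels == 0].sum()
```

In the real tool a model trained on many labelled seedlings plays this role; the sketch only shows why labelled examples of both tissue types are needed before the software can measure roots on its own.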

In the article published in The Plant Journal, the CRAG and La Salle team compared the time needed for manual measurements with that of MyRoot, demonstrating that MyRoot reduces the time required to measure one plate by approximately half. MyRoot also gives the most precise root-length measurements when compared with similar software, suggesting that it could become a widely used tool for the research community to perform high-throughput experiments in a less time-consuming manner.

The future: a tool at the service of agriculture

"We are very satisfied with the results obtained thanks to this collaboration with La Salle engineers, and we are already thinking of extending the project and taking it beyond the academic sphere," explains Ana I. Caño-Delgado. In fact, the researchers are already thinking of further automating the process by building a robot, which would further reduce researcher intervention and allow a large number of samples to be analyzed in a short time. "The next thing we want to do is to add hardware to the designed software," says Caño-Delgado. "If we also expand and train the software to be used with roots from different plant species, it could also be a very useful tool in the agricultural field," she adds.

A specially designed computer program can help diagnose post-traumatic stress disorder (PTSD) in veterans by analyzing their voices, a new study finds.

Published online April 22 in the journal Depression and Anxiety, the study found that an artificial intelligence tool can distinguish -- with 89 percent accuracy -- between the voices of those with or without PTSD.

"Our findings suggest that speech-based characteristics can be used to diagnose this disease, and with further refinement and validation, may be employed in the clinic in the near future," says senior study author Charles R. Marmar, MD, the Lucius N. Littauer Professor and chair of the Department of Psychiatry at NYU School of Medicine.

More than 70 percent of adults worldwide experience a traumatic event at some point in their lives, with up to 12 percent of people in some struggling countries suffering from PTSD. Those with the condition experience strong, persistent distress when reminded of a triggering event.

The study authors say that a PTSD diagnosis is most often determined by clinical interview or a self-report assessment, both inherently prone to biases. This has led to efforts to develop objective, measurable, physical markers of PTSD progression, much like laboratory values for medical conditions, but progress has been slow.

Learning How to Learn

In the current study, the research team used a statistical/machine learning technique, called random forests, that has the ability to "learn" how to classify individuals based on examples. Such AI programs build "decision" rules and mathematical models that enable decision-making with increasing accuracy as the amount of training data grows.

The researchers first recorded standard, hours-long diagnostic interviews, called the Clinician-Administered PTSD Scale, or CAPS, of 53 Iraq and Afghanistan veterans with military-service-related PTSD, as well as those of 78 veterans without the disease. The recordings were then fed into voice software from SRI International -- the institute that also invented Siri -- to yield a total of 40,526 speech-based features captured in short spurts of talk, which the team's AI program sifted through for patterns.
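As a rough sketch of the random-forest step, the snippet below trains scikit-learn's RandomForestClassifier on synthetic data shaped like the study's cohort (53 PTSD cases, 78 controls). The feature values are random stand-ins, not real speech features, and the feature count is cut down from 40,526 for speed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for the speech features: 131 "veterans"
# (53 PTSD, 78 controls) x 20 made-up acoustic features, with a
# small mean shift so the classes are separable.
n_ptsd, n_ctrl, n_feats = 53, 78, 20
X = np.vstack([
    rng.normal(0.5, 1.0, size=(n_ptsd, n_feats)),   # PTSD group
    rng.normal(0.0, 1.0, size=(n_ctrl, n_feats)),   # control group
])
y = np.array([1] * n_ptsd + [0] * n_ctrl)

# Hold out a stratified test set, then fit an ensemble of decision trees.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

The ensemble of randomized decision trees is what lets the method sift tens of thousands of features without hand-picking which ones matter; the study's reported 89 percent accuracy comes from the real CAPS data, not from a toy setup like this one.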

The random forest program linked patterns of specific voice features with PTSD, including less clear speech and a lifeless, metallic tone, both of which had long been reported anecdotally as helpful in diagnosis. While the current study did not explore the disease mechanisms behind PTSD, the theory is that traumatic events change brain circuits that process emotion and muscle tone, which affects a person's voice.

Moving forward, the research team plans to train the AI voice tool with more data, further validate it on an independent sample, and apply for government approval to use the tool clinically.

"Speech is an attractive candidate for use in an automated diagnostic system, perhaps as part of a future PTSD smartphone app, because it can be measured cheaply, remotely, and non-intrusively," says lead author Adam Brown, PhD, adjunct assistant professor in the Department of Psychiatry at NYU School of Medicine.

"The speech analysis technology used in the current study on PTSD detection falls into the range of capabilities included in our speech analytics platform called SenSay Analytics™," says Dimitra Vergyri, director of SRI International's Speech Technology and Research (STAR) Laboratory. "The software analyzes words -- in combination with frequency, rhythm, tone, and articulatory characteristics of speech -- to infer the state of the speaker, including emotion, sentiment, cognition, health, mental health and communication quality. The technology has been involved in a series of industry applications visible in startups like Oto, Ambit and Decoded Health."

Researchers have discovered a simple, cost-effective, and accurate new method for equipping self-driving cars with the tools needed to perceive 3D objects in their path.

The laser sensors currently used to detect 3D objects in the paths of autonomous cars are bulky, ugly, expensive, energy-inefficient -- and highly accurate.

These Light Detection and Ranging (LiDAR) sensors are affixed to cars' roofs, where they increase wind drag, a particular disadvantage for electric cars. They can add around $10,000 to a car's cost. Despite their drawbacks, most experts have considered LiDAR sensors the only plausible way for self-driving vehicles to safely perceive pedestrians, cars and other hazards on the road.

Now, Cornell researchers have discovered that a simpler method, using two inexpensive cameras on either side of the windshield, can detect objects with nearly LiDAR's accuracy and at a fraction of the cost. The researchers found that analyzing the captured images from a bird's eye view rather than the more traditional frontal view more than tripled their accuracy, making stereo cameras a viable and low-cost alternative to LiDAR.
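The bird's-eye-view idea can be sketched in two geometric steps: each pixel of a stereo-estimated depth map is back-projected into a 3D point (a "pseudo-LiDAR" point cloud), and the points are then flattened into a top-down occupancy grid for the detector. The snippet below is a minimal numpy illustration of those steps only; the camera intrinsics, ranges, and cell size are arbitrary example values, and the paper's full pipeline (a stereo depth network feeding a 3D object detector) is far more involved.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres) into camera-frame 3D points
    using the pinhole model: x right, y down, z forward."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def birds_eye_grid(points, x_range=(-10.0, 10.0), z_range=(0.0, 20.0), cell=0.5):
    """Top-down occupancy grid: columns of 3D points collapse into cells,
    discarding height, which is the 'bird's eye view' of the scene."""
    x, z = points[:, 0], points[:, 2]
    keep = (x >= x_range[0]) & (x < x_range[1]) & (z >= z_range[0]) & (z < z_range[1])
    xi = ((x[keep] - x_range[0]) / cell).astype(int)
    zi = ((z[keep] - z_range[0]) / cell).astype(int)
    grid = np.zeros((int((z_range[1] - z_range[0]) / cell),
                     int((x_range[1] - x_range[0]) / cell)))
    grid[zi, xi] = 1.0
    return grid

# Toy example: a tiny 4x4 depth map of a flat surface 5 m ahead.
depth = np.full((4, 4), 5.0)
points = depth_to_pseudo_lidar(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
grid = birds_eye_grid(points)
```

The intuition from the paper is that objects that smear across depth in a frontal view become compact blobs in this top-down grid, which is what makes detection from stereo depth so much more accurate.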

"One of the essential problems in self-driving cars is to identify objects around them -- obviously that's crucial for a car to navigate its environment," said Kilian Weinberger, associate professor of computer science and senior author of the paper, which will be presented at the 2019 Conference on Computer Vision and Pattern Recognition, June 15-21 in Long Beach, California.

"The common belief is that you couldn't make self-driving cars without LiDARs," Weinberger said. "We've shown, at least in principle, that it's possible."

The first author of the paper is Yan Wang, doctoral student in computer science.

Ultimately, Weinberger said, stereo cameras could potentially be used as the primary way of identifying objects in lower-cost cars, or as a backup method in higher-end cars that are also equipped with LiDAR.

The research was partly supported by grants from the National Science Foundation, the Office of Naval Research and the Bill and Melinda Gates Foundation.

Tapping into the unique nature of DNA, Cornell engineers have created simple machines constructed of biomaterials with properties of living things.

Using what they call DASH (DNA-based Assembly and Synthesis of Hierarchical) materials, engineers constructed a DNA material with capabilities of metabolism, in addition to self-assembly and organization -- three key traits of life.

"We are introducing a brand-new, lifelike material concept powered by its very own artificial metabolism. We are not making something that's alive, but we are creating materials that are much more lifelike than have ever been seen before," said Dan Luo, professor of biological and environmental engineering.

The paper published in Science Robotics.

For any living organism to maintain itself, there must be a system to manage change. New cells must be generated; old cells and waste must be swept away. Biosynthesis and biodegradation are key elements of self-sustainability, and metabolism is required to maintain an organism's form and functions.

Through this system, DNA molecules are synthesized and assembled into patterns in a hierarchical way, resulting in something that can perpetuate a dynamic, autonomous process of growth and decay.

Using DASH, the Cornell engineers created a biomaterial that can autonomously emerge from its nanoscale building blocks and arrange itself -- first into polymers and eventually mesoscale shapes. Starting from a 55-nucleotide base seed sequence, the DNA molecules were multiplied hundreds of thousands of times, creating chains of repeating DNA a few millimeters in size. The reaction solution was then injected into a microfluidic device that provided a liquid flow of energy and the necessary building blocks for biosynthesis.

As the flow washed over the material, the DNA synthesized its own new strands, with the front end of the material growing and the tail end degrading in optimized balance. In this way, it made its own locomotion, creeping forward, against the flow, in a way similar to how slime molds move.

The locomotive ability allowed the researchers to pit sets of the material against one another in competitive races. Due to randomness in the environment, one body would eventually gain an advantage over the other, allowing one to cross a finish line first.

"The designs are still primitive, but they showed a new route to create dynamic machines from biomolecules. We are at a first step of building lifelike robots by artificial metabolism," said Shogo Hamada, lecturer and research associate in the Luo lab, and lead and co-corresponding author of the paper. "Even from a simple design, we were able to create sophisticated behaviors like racing. Artificial metabolism could open a new frontier in robotics."

Facial recognition technology works even when only half a face is visible, researchers from the University of Bradford have found.

Using artificial intelligence techniques, the team achieved 100 per cent recognition rates for both three-quarter and half faces. The study, published in Future Generation Computer Systems, is the first to use machine learning to test the recognition rates for different parts of the face.

Lead researcher, Professor Hassan Ugail from the University of Bradford said: "The ability humans have to recognise faces is amazing, but research has shown it starts to falter when we can only see parts of a face. Computers can already perform better than humans in recognising one face from a large number, so we wanted to see if they would be better at partial facial recognition as well."

The team used a machine learning technique known as a 'convolutional neural network', drawing on a feature extraction model called VGG -- one of the most popular and widely used for facial recognition.

They worked with a dataset containing multiple photos -- 2800 in total -- of 200 students and staff from FEI University in Brazil, with equal numbers of men and women.

For the first experiment, the team trained the model using only full facial images. They then ran an experiment to see how well the computer was able to recognise faces, even when shown only part of them. The computer recognised full faces 100 per cent of the time, but the team also had 100 per cent success with three-quarter faces and with the top or right half of the face. However, the bottom half of the face was only correctly recognised 60 per cent of the time and eyes and nose on their own, just 40 per cent.
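A toy version of this matching setup can be written with a fixed random projection standing in for the trained VGG feature extractor: each image is mapped to a unit-length descriptor, and a probe (here, a face with its bottom half masked) is matched to the gallery identity with the highest cosine similarity. Everything below is illustrative; the random arrays and the 128-dimension projection are stand-ins, not the study's model or data.

```python
import numpy as np

rng = np.random.default_rng(7)

def embed(image):
    # Stand-in for a VGG-style feature extractor: a fixed random
    # projection of the flattened image to a 128-d unit descriptor.
    # (The study used a trained convolutional network, not this.)
    flat = image.reshape(-1)
    proj = np.random.default_rng(0).standard_normal((128, flat.size))
    v = proj @ flat
    return v / np.linalg.norm(v)

# Two "enrolled" gallery identities (random arrays as image placeholders).
face_a = rng.standard_normal((32, 32))
face_b = rng.standard_normal((32, 32))
gallery = [embed(face_a), embed(face_b)]

# Probe: face_a with its bottom half masked out, as in the partial-face tests.
probe = face_a.copy()
probe[16:] = 0.0
emb = embed(probe)

# Match by cosine similarity; the surviving top half still points to face_a.
scores = [float(emb @ g) for g in gallery]
best = int(np.argmax(scores))
```

This also hints at why retraining on partial faces helped in the second experiment: a descriptor learned only from full faces has no guarantee that any individual region carries enough identity signal on its own.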

They then ran the experiment again, after training the model using partial facial images as well. This time, the scores significantly improved for the bottom half of the face, for eyes and nose on their own, and even for faces with no eyes and nose visible, achieving around 90 per cent correct identification.

Individual facial parts, such as the nose, cheek, forehead or mouth had low recognition rates in both experiments.

The results are promising, according to Professor Ugail:

"We've now shown that it's possible to have very accurate facial recognition from images that only show part of a face and we've identified which parts are most useful. This opens up greater possibilities for the use of the technology for security or crime prevention.

"Our experiments now need validating on a much larger dataset. However, in the future it's likely that image databases used for facial recognition will need to include partial images as well, so that the models can be trained correctly to recognise a face even when not all of it is visible."

Mobile-phone technology has changed the way humans understand and interact with the world and with each other. It’s hard to think of a technology that has more strongly shaped 21st-century living.

The latest technology—the fifth generation of mobile standards, or 5G—is currently being deployed in select locations around the world. And that raises an obvious question. What factors will drive the development of the sixth generation of mobile technology? How will 6G differ from 5G, and what kinds of interactions and activity will it allow that won’t be possible with 5G?

Today, we get an answer of sorts, thanks to the work of Razvan-Andrei Stoica and Giuseppe Abreu at Jacobs University Bremen in Germany. These guys have mapped out the limitations of 5G and the factors they think will drive the development of 6G. Their conclusion is that artificial intelligence will be the main driver of mobile technology and that 6G will be the enabling force behind an entirely new generation of applications for machine intelligence.
