Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Tasnim_Katha

Pages: 1 [2] 3 4 ... 8
16
Thanks for sharing with us.

17
Thanks for sharing with us.

18
Thanks for sharing with us.

19
Kurt Luther, Virginia Tech assistant professor of computer science, has developed a free software platform that uses crowdsourcing to significantly increase the ability of algorithms to identify faces in photos.

Through the software platform, called Photo Sleuth, Luther seeks to uncover the mysteries of the nearly 4 million Civil War-era photographs that may survive in the historical record.

Luther will present his research surrounding the Photo Sleuth platform on March 19 at the Association for Computing Machinery's Intelligent User Interfaces conference in Los Angeles, California. He will also demonstrate Photo Sleuth at the grand opening of the expanded American Civil War Museum, in Richmond, Virginia, on May 4, 2019.

Luther, a history buff himself, was inspired to develop the software for Civil War Photo Sleuth in 2013 while visiting the Heinz History Center's exhibit called "Pennsylvania's Civil War" in Pittsburgh, Pennsylvania. There he stumbled upon a Civil War-era portrait of his great-great-great-uncle Oliver Croxton, clad in a corporal's uniform, who served in Company E of the 134th Pennsylvania.

"Seeing my distant relative staring back at me was like traveling through time," said Luther. "Historical photos can tell us a lot about not only our own familial history but also inform the historical record of the time more broadly than just reading about the event in a history book."

The Civil War Photo Sleuth project, funded primarily by the National Science Foundation, officially launched as a web-based platform at the National Archives in Washington, D.C., on Aug. 1, 2018. The site allows users to upload photos, tag them with visual cues, and connect them to profiles of Civil War soldiers with detailed military service records. Photo Sleuth's initial reference database contained more than 15,000 identified Civil War soldier portraits drawn from public-domain sources such as the U.S. Military History Institute as well as from private collections.

Prior to the project's official launch in August, the platform earned Luther and his team the $25,000 Microsoft Cloud AI Research Challenge prize and the Best Demo Award at the Human Computation and Crowdsourcing 2018 conference in Zurich, Switzerland. The team includes academic and historical collaborators, the Virginia Center for Civil War Studies, and Military Images magazine.

According to Luther, the key to the site's post-launch success has been building a strong user community. More than 600 users contributed more than 2,000 Civil War photos to the website in the first month after launch, and roughly half of those photos were unidentified. Over 100 of these unknown photos were linked to specific soldiers, and an expert analysis found that over 85 percent of these proposed identifications were probably or definitely correct. The site has since grown to more than 4,000 registered users and more than 8,000 photos.

"Typically, crowdsourced research such as this is challenging for novices if users don't have specific knowledge of the subject area," said Luther. "The step-by-step process of tagging visual clues and applying search filters linked to military service records makes this detective work more accessible, even for those that may not have a deeper knowledge of Civil War military history."

Person identification becomes more challenging as the candidate pool grows, because the risk of false positives increases. The novel approach behind Civil War Photo Sleuth is based on the analogy of finding a needle in a haystack. The data pipeline has three components: building the haystack, narrowing down the haystack, and finding the needle in it. Together, they allow users to identify unknown soldiers while reducing the risk of false positives.

Building the haystack is done by incentivizing users to upload scanned images of the fronts and backs of Civil War photos. Any time a user uploads a photo to identify it, the photo gets added to the site's digital archive or "haystack," making it available for future searches.

Following upload, the user tags metadata related to the photograph such as photo format or inscriptions, as well as visual clues, such as coat color, chevrons, shoulder straps, collar insignia, and hat insignia. These tags are linked to search filters to prioritize the most likely matches. For example, a soldier tagged with the "hunting horn" hat insignia would suggest potential matches who served in the infantry, while hiding results from the cavalry or artillery. Next, the site uses state-of-the-art face recognition technology to eliminate very different-looking faces and sort the remaining ones by similarity. Both the tagging and face recognition steps narrow down the haystack.
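As a rough illustration of these two narrowing steps, here is a minimal Python sketch. It is hypothetical: the ReferencePhoto fields, the clue-to-branch mapping, and the narrow_haystack function are illustrative assumptions rather than Photo Sleuth's actual code, and the face embeddings are assumed to come from an off-the-shelf face-recognition model.

# Hypothetical sketch of "narrowing the haystack": filter the reference
# archive by tagged visual clues mapped to service records, then rank the
# survivors by face-embedding similarity. All names and fields are invented.
from dataclasses import dataclass
import numpy as np

@dataclass
class ReferencePhoto:
    soldier_name: str
    branch: str                 # e.g. "infantry", "cavalry", "artillery"
    face_embedding: np.ndarray  # produced offline by a face-recognition model

# Tagged visual clues imply filters on the linked military records.
CLUE_TO_BRANCH = {
    "hunting horn insignia": "infantry",
    "crossed sabers insignia": "cavalry",
    "crossed cannons insignia": "artillery",
}

def narrow_haystack(query_embedding, clues, archive, top_k=20):
    branches = {CLUE_TO_BRANCH[c] for c in clues if c in CLUE_TO_BRANCH}
    candidates = [p for p in archive if not branches or p.branch in branches]
    def similarity(p):  # cosine similarity between the unknown face and a candidate
        a, b = query_embedding, p.face_embedding
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(candidates, key=similarity, reverse=True)[:top_k]

A high similarity score only surfaces a candidate; as described next, the final identification decision is left to the user.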

Finally, users find the needle in the haystack by exploring the highest-probability matches in more detail. A comparison tool with pan and zoom controls helps users carefully inspect a possible match and, if they decide it's a match, link the previously unknown photo to its new identity and biographical details.

The military records used by the filters come from myriad public sources, including the National Park Service Soldiers and Sailors Database.

The approach behind facial recognition software like Photo Sleuth has broad applications beyond identifying historical photos. The software points to new ways of building person identification systems that look beyond face recognition alone and leverage the complementary strengths of both human and artificial intelligence.


20
The artificial intelligence software, created by researchers at Imperial College London and the University of Melbourne, has been able to predict the prognosis of patients with ovarian cancer more accurately than current methods. It can also predict what treatment would be most effective for patients following diagnosis.

The trial, published in Nature Communications, took place at Hammersmith Hospital, part of Imperial College Healthcare NHS Trust.

Researchers say that this new technology could help clinicians administer the best treatments to patients more quickly and paves the way for more personalised medicine. They hope that the technology can be used to stratify ovarian cancer patients into groups based on the subtle differences in the texture of their cancer on CT scans rather than classification based on what type of cancer they have, or how advanced it is.

Professor Eric Aboagye, lead author and Professor of Cancer Pharmacology and Molecular Imaging at Imperial College London, said:

"The long-term survival rates for patients with advanced ovarian cancer are poor despite the advancements made in cancer treatments. There is an urgent need to find new ways to treat the disease. Our technology is able to give clinicians more detailed and accurate information on the how patients are likely to respond to different treatments, which could enable them to make better and more targeted treatment decisions."

Professor Andrea Rockall, co-author and Honorary Consultant Radiologist at Imperial College Healthcare NHS Trust, added:

"Artificial intelligence has the potential to transform the way healthcare is delivered and improve patient outcomes. Our software is an example of this and we hope that it can be used as a tool to help clinicians with how to best manage and treat patients with ovarian cancer."

Ovarian cancer is the sixth most common cancer in women and usually affects women after the menopause or those with a family history of the disease. There are 6,000 new cases of ovarian cancer a year in the UK, but the long-term survival rate is just 35-40 per cent, as the disease is often diagnosed at a much later stage, once symptoms such as bloating are noticeable. Early detection of the disease could improve survival rates.

Doctors diagnose ovarian cancer in a number of ways including a blood test to look for a substance called CA125 -- an indication of cancer -- followed by a CT scan that uses x-rays and a computer to create detailed pictures of the ovarian tumour. This helps clinicians know how far the disease has spread and determines the type of treatment patients receive, such as surgery and chemotherapy.

However, the scans can't give clinicians detailed insight into patients' likely overall outcomes or on the likely effect of a therapeutic intervention.

Researchers used a mathematical software tool called TEXLab to identify the aggressiveness of tumours in CT scans and tissue samples from 364 women with ovarian cancer between 2004 and 2015.

The software examined four biological characteristics of the tumours that significantly influence overall survival -- structure, shape, size and genetic makeup -- to assess the patients' prognosis. Each patient was then given a score, known as the Radiomic Prognostic Vector (RPV), which indicates how severe the disease is, from mild to severe.
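The article does not spell out how the four characteristics are combined, so the Python fragment below is only a rough, hypothetical illustration of forming a radiomic prognostic score as a weighted combination of standardised tumour features; the feature names, weights, and numbers are invented and do not reproduce TEXLab or the published RPV model.

# Illustrative only: a radiomic score built as a weighted combination of
# standardised tumour features. Feature names, weights, and values are
# hypothetical and do not reproduce TEXLab or the published RPV.
import numpy as np

FEATURES = ["texture_entropy", "shape_irregularity", "volume_cm3", "genomic_risk"]
WEIGHTS = np.array([0.8, 0.5, 0.3, 0.6])  # in practice fitted to survival data

def radiomic_score(patient_features, cohort_mean, cohort_std):
    """Standardise each feature against the cohort, then combine linearly."""
    z = (np.asarray(patient_features, dtype=float) - cohort_mean) / cohort_std
    return float(WEIGHTS @ z)  # higher score = worse predicted prognosis

# Example with invented cohort statistics and one invented patient.
cohort_mean = np.array([4.1, 0.30, 95.0, 0.20])
cohort_std = np.array([0.9, 0.10, 40.0, 0.15])
print(radiomic_score([5.2, 0.45, 160.0, 0.50], cohort_mean, cohort_std))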

The researchers compared the results with blood tests and current prognostic scores used by doctors to estimate survival. They found that the software was up to four times more accurate for predicting deaths from ovarian cancer than standard methods.

The team also found that the five per cent of patients with high RPV scores survived for less than two years. High RPV was also associated with chemotherapy resistance and poor surgical outcomes, suggesting that RPV could be used as a biomarker to predict how patients would respond to treatments.

Professor Aboagye suggests that this technology can be used to identify patients who are unlikely to respond to standard treatments and offer them alternative treatments.

The researchers will carry out a larger study to see how accurately the software can predict the outcomes of surgery and/or drug therapies for individual patients.

The study was funded by the NIHR Imperial Biomedical Research Centre, the Imperial College Experimental Cancer Medicine Centre and Imperial College London Tissue Bank.

This research is an example of the work carried out by Imperial College Academic Health Science Centre, a joint initiative between Imperial College London and three NHS hospital trusts. It aims to transform healthcare by turning scientific discoveries into medical advances to benefit local, national and global populations in as fast a timeframe as possible.

21
Researchers have created new AI software that can identify cardiac rhythm devices in x-rays more accurately and quickly than current methods.

The team believes this software could speed up the diagnosis and treatment of patients with faulty devices in an emergency setting.

The software, created by researchers at Imperial College London, has been able to identify the make and model of different cardiac rhythm devices, such as pacemakers and defibrillators, within seconds. The study, published in JACC: Clinical Electrophysiology, took place at Hammersmith Hospital, part of Imperial College Healthcare NHS Trust.

Dr James Howard, Clinical Research Fellow at Imperial College London and lead author of the study, said: "Pacemakers and defibrillators have improved the lives of millions of patients from around the world. However, in some rare cases these devices can fail and patients can deteriorate as a result. In these situations, clinicians must quickly identify the type of device a patient has so they can provide treatment such as changing the device's settings or replacing the leads. Unfortunately, current methods are slow and out-dated and there is a real need to find new and improved ways of identifying devices during emergency settings. Our new software could be a solution as it can identify devices accurately and instantly. This could help clinicians make the best decisions for treating patients."

More than one million people around the world undergo implantation of a cardiac rhythm device each year, with over 50,000 implanted in the UK. These devices are placed under the patient's skin to either help the heart's electrical system function properly or measure heart rhythm. Pacemakers treat slow heart rhythms by 'pacing' the heart to beat faster, whilst defibrillators treat fast heart rhythms by delivering electric shocks to reset the heartbeat back to a normal rhythm.

However, in some rare cases these devices can lose their ability to control the heartbeat, either because the device malfunctions or the wires connecting it to the heart move out of the correct position. When this happens, patients may experience palpitations, loss of consciousness or inappropriate electric shocks.

In these situations, clinicians need to determine the model of a device to investigate why it has failed. Unless they have access to the records from where the device was implanted, or the patient can tell them, staff must identify the pacemaker by a process of elimination using a flowchart algorithm. The flowchart depicts the shapes and circuit-board components of different pacemakers to help clinicians work out the make and model of a patient's device. Not only is this time-consuming, but these flowcharts are now outdated and therefore inaccurate, which can delay care for patients who are often in a critical condition.

In the new study, researchers trained a software program known as a neural network to identify cardiac devices, using X-ray images of more than 1,600 different devices from patients.

To use the neural network, the clinician uploads the X-ray image containing the device, and the software reads the image and reports the make and model of the device within seconds.
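The article does not describe the network itself, so the PyTorch sketch below is only a hypothetical illustration of this kind of classifier: a small convolutional network that maps a pre-processed X-ray crop of the device to one of a fixed set of make/model classes. The architecture, input size, and class count are assumptions for illustration, not the published model.

# Hypothetical sketch of an X-ray device classifier: a small convolutional
# network mapping a greyscale device crop to one of num_models classes.
# Layer sizes and the class count are illustrative assumptions.
import torch
import torch.nn as nn

class DeviceClassifier(nn.Module):
    def __init__(self, num_models: int = 45):   # class count is invented
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_models)

    def forward(self, x):                  # x: (batch, 1, 224, 224) greyscale crop
        h = self.features(x).flatten(1)
        return self.classifier(h)          # raw scores; softmax gives probabilities

# At inference time the uploaded X-ray is cropped, resized, and scored:
model = DeviceClassifier()
logits = model(torch.randn(1, 1, 224, 224))      # stand-in for a real crop
predicted = logits.argmax(dim=1)                 # index of the most likely make/model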

The team used the programme to see whether it could identify the devices in radiographic images of more than 1,500 patients at Hammersmith Hospital between 1998 and 2018. They then compared its results with those of five cardiologists who used the current flowchart algorithm to identify the same devices.

The team found that the software outperformed current methods. It was 99 per cent accurate in identifying the manufacturer of a device, compared with only 72 per cent accuracy for the flowchart. The team suggests the software could greatly speed up the care of patients with heart rhythm device problems.

The researchers aim to carry out a further trial to validate the results in a larger group of patients and to investigate ways to create a more portable device that can be used on hospital wards.

The research was funded by NIHR Imperial Biomedical Research Centre, the Medical Research Council, the Wellcome Trust and the British Heart Foundation.

22
Software Engineering / Re: Java Abstraction
« on: May 06, 2019, 06:02:52 PM »
Thanks for sharing this information  :)

23
Software Engineering / Re: Data mining
« on: May 06, 2019, 06:02:41 PM »
Thanks for sharing this information  :)

24
Thanks for sharing this information  :)

25
Thanks for sharing this information  :)

26
Thanks for sharing this information  :)

27
Software Engineering / Re: Access Modifer in Java
« on: May 06, 2019, 06:02:01 PM »
Thanks for sharing this information  :)

28
Thanks for sharing this information  :)

29
Software Engineering / Re: Class and Object Concept in Java
« on: May 06, 2019, 06:01:26 PM »
Thanks for sharing this information  :)

30
Software Engineering / Re: Java Abstraction
« on: May 06, 2019, 06:01:20 PM »
Thanks for sharing this information  :)
