Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - s.arman

16
No human, or team of humans, could possibly keep up with the avalanche of information produced by many of today’s physics and astronomy experiments. Some of them record terabytes of data every day — and the torrent is only increasing. The Square Kilometer Array, a radio telescope slated to switch on in the mid-2020s, will generate about as much data traffic each year as the entire internet.

The deluge has many scientists turning to artificial intelligence for help. With minimal human input, AI systems such as artificial neural networks — computer-simulated networks of neurons that mimic the function of brains — can plow through mountains of data, highlighting anomalies and detecting patterns that humans could never have spotted.

Of course, the use of computers to aid in scientific research goes back about 75 years, and the method of manually poring over data in search of meaningful patterns originated millennia earlier. But some scientists are arguing that the latest techniques in machine learning and AI represent a fundamentally new way of doing science. One such approach, known as generative modeling, can help identify the most plausible theory among competing explanations for observational data, based solely on the data, and, importantly, without any preprogrammed knowledge of what physical processes might be at work in the system under study. Proponents of generative modeling see it as novel enough to be considered a potential “third way” of learning about the universe.
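As a rough illustration of that model-comparison idea, here is a minimal sketch (not the astronomers' actual method; the data are simulated): several candidate generative models are fit to the same observations and ranked purely by how well they explain the data, using Gaussian mixtures and the Bayesian information criterion from scikit-learn.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy "observations": one measured property of 1,000 simulated objects drawn
# from two hidden populations.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 0.5, 600),
                       rng.normal(1.5, 1.0, 400)]).reshape(-1, 1)

# Competing generative "theories": the data come from 1, 2 or 3 populations.
for k in (1, 2, 3):
    model = GaussianMixture(n_components=k, random_state=0).fit(data)
    # Lower BIC = the model explains the data better without extra complexity.
    print(f"{k} population(s): BIC = {model.bic(data):.1f}")
```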

For more please visit:
https://www.quantamagazine.org/how-artificial-intelligence-is-changing-science-20190311/

17
Cloud computing is quickly becoming the standard way for technology companies to access IT infrastructure, software and hardware resources. The technology enables companies to use applications and other resources managed by third-party companies and stored on high-end servers and networks. Cloud computing systems are mainly set up for business or research purposes. In this article, we explore the different types of cloud computing solutions.

Cloud computing helps businesses to be more efficient and to save on the software and hardware needed for different operations. The definition of cloud computing varies depending on your source, but what is generally agreed is that it involves accessing software or hardware that is in the “cloud”, i.e. using software or hardware remotely. If your company uses specialized applications and you did not have to set up a server or buy hardware or software to run them, then you are probably using a cloud application.

Companies can use cloud computing to increase their IT functionality or capacity without having to add software or personnel, invest in additional training, or set up new infrastructure. Below are the major types of cloud computing:

1. Infrastructure as a Service (IaaS)
IaaS is the lowest level of cloud solution and refers to cloud-based computing infrastructure delivered as a fully outsourced service. An IaaS provider delivers pre-installed and configured hardware or software through a virtualized interface. What customers do with the service is up to them. Examples of IaaS offerings are managed hosting and development environments.
Your web hosting company is an IaaS provider. Some of the major players offering infrastructure as a service solution include Google, IBM, Rackspace Cloud Servers, Amazon EC2 and Verizon.
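To make the IaaS idea concrete, here is a minimal sketch using the AWS SDK for Python (boto3) against Amazon EC2, which is named above. It assumes AWS credentials are already configured, and the AMI ID is a placeholder; the point is only that infrastructure is rented and released through a few API calls, the pay-for-what-you-use model described in the benefits below.

```python
import boto3  # AWS SDK for Python; assumes credentials are already configured

ec2 = boto3.client("ec2", region_name="us-east-1")

# Rent a small virtual server on demand (the AMI ID below is a placeholder).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("launched", instance_id)

# Release it when no longer needed: billing stops, and there is no hardware to dispose of.
ec2.terminate_instances(InstanceIds=[instance_id])
```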

Benefits of IaaS Solutions
Reduces total cost of ownership and capital expenditures
Users pay for the service that they want, on the go
Access to enterprise-grade IT resources and infrastructure
Users can scale up and down based on their requirements at any time

To read more:
https://www.chargebee.com/blog/understanding-types-cloud-computing/

18
Management Information Systems – MIS vs. Information Technology – IT: An Overview
Management information system (MIS) refers to a large infrastructure used by a business or corporation, whereas information technology (IT) is one component of that infrastructure that is used for collecting and transmitting data.


A management information system helps a business make decisions and coordinate and analyze information. Information Technology supports and facilitates the employment of that system.


For example, IT could be a particular interface that helps users input data into a corporate MIS operation. However, that isn't to say that the scope of IT is narrow. In some ways, IT is a broader field than MIS. The goals of a particular IT application can fit neatly into a larger MIS framework; however, the reverse is not necessarily true.

Management Information System
In terms of business decision-making, an information system (IS) is a set of data, computing devices and management methods that support routine company operations. A management information system (MIS) is a specific subset of IS.

A management information system, as used by a company or institution, might be a computerized system consisting of hardware and software that serves as the backbone of information for the company. Specifically, the computerized database may house all the company's financial information and organize it in such a way that it can be accessed to generate reports on operations at different levels of the company.

For more:
https://www.investopedia.com/ask/answers/040315/what-difference-between-mis-management-information-system-and-information-technology.asp

19
Robotics and Embedded Systems / The purpose of embedded systems
« on: April 19, 2019, 12:42:48 AM »
"An embedded system is the use of a computer system built into a machine of some sort, usually to provide a means of control" (BCS Glossary of Computing and ICT). Embedded systems are everywhere in our lives, from the TV remote control and the microwave to the central heating controller and the digital alarm clock next to our bed. They are in cars, washing machines, cameras, drones and toys.

An embedded system has a microprocessor in it, which is essentially a complete computer system with limited, specific functionality. Users can usually interact with it only through a limited interface, which typically allows them to input settings, make selections and receive output as text, video or audio signals, for example.
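To make this concrete, here is a minimal sketch of the kind of control loop such a device runs, using the central-heating example above. It is written in Python for readability (real firmware would typically be C on a microcontroller), and the sensor and heater functions are placeholders.

```python
import time

SETPOINT_C = 21.0     # target room temperature
HYSTERESIS = 0.5      # avoid rapid on/off switching around the setpoint

def read_temperature():
    """Placeholder for a real sensor driver (e.g., an ADC or I2C read)."""
    return 20.3

def set_heater(on):
    """Placeholder for driving an output pin or relay."""
    print("heater", "ON" if on else "OFF")

# The classic embedded "superloop": poll inputs, decide, drive outputs, repeat.
while True:
    temperature = read_temperature()
    if temperature < SETPOINT_C - HYSTERESIS:
        set_heater(True)
    elif temperature > SETPOINT_C + HYSTERESIS:
        set_heater(False)
    time.sleep(1.0)   # a real controller would sleep or wait on a timer interrupt
```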

Characteristics of embedded systems
There are a number of common characteristics we can identify in embedded systems.

They are usually small, sometimes tiny, and very light so they can be fitted into many products.
The computer system in an embedded system is usually a single microprocessor.
The microprocessor has been designed to do a limited number of very specific tasks in a product very quickly and efficiently.
The microprocessor can be mass-produced very cheaply.
They require a very tiny amount of power compared to a traditional computer.
They are very reliable because there are no moving parts.
Because the computer system is usually printed onto a single board, it is very easy to maintain: if it does break down, you just swap the board.

For details visit here:
http://theteacher.info/index.php/systems-architecture/notes/4502-the-purpose-of-embedded-systems

20
Blockchain / How blockchains could change the world
« on: April 18, 2019, 11:59:31 PM »
What impact could the technology behind Bitcoin have? According to Tapscott Group CEO Don Tapscott, blockchains, the technology underpinning the cryptocurrency, could revolutionize the world economy. In this interview with McKinsey’s Rik Kirkland, Tapscott explains how blockchains—an open-source distributed database using state-of-the-art cryptography—may facilitate collaboration and tracking of all kinds of transactions and interactions. Tapscott, coauthor of the new book Blockchain Revolution: How the Technology Behind Bitcoin is Changing Money, Business, and the World, also believes the technology could offer genuine privacy protection and “a platform for truth and trust.” An edited and extended transcript of Tapscott’s comments follows.
In the early 1990s, we said the old media is centralized. It’s one way, it’s one to many; it’s controlled by powerful forces, and everyone is a passive recipient. The new web, the new media, we said, is one to one, it’s many to many; it’s highly distributed, and it’s not centralized. Everyone’s a participant, not an inert recipient. This has an awesome neutrality. It will be what we want it to be, and we can craft a much more egalitarian, prosperous society where everyone gets to share in the wealth that they create. Lots of great things have happened, but overall the benefits of the digital age have been asymmetrical. For example, we have this great asset of data that’s been created by us, and yet we don’t get to keep it. It’s owned by a tiny handful of powerful companies or governments. They monetize that data or, in the case of governments, use it to spy on us, and our privacy is undermined.

For details please visit:
https://www.mckinsey.com/industries/high-tech/our-insights/how-blockchains-could-change-the-world

21
Internet of Things / 4 Layers Of The Internet Of Things
« on: April 18, 2019, 11:43:23 PM »
In today’s age of fast track technology growth, it’s becoming very difficult to keep track of the rise of different technologies. However, there is a common theme underlying most of the modern technology trends. This constant theme is of ‘convergence of technologies’ and the internet of things is the perfect example of this phenomenon.

Its very nature lends itself to the notion of a convergence of different technologies working together in unison to solve a real business problem or enable new products and services. But the problem is that the various players in the IOT ecosystem view the IOT technology stack from their own specific perspectives, which ends up confusing the audience.

So, “What is the link between IOT, cloud, Analytics, Data Science?” This is still a common question!

This article tries to allay this confusion by describing the 4 layers of an IOT technology stack.

The first layer of Internet of Things consists of Sensor-connected IOT devices:

The second layer consists of IOT gateway devices:

The Third layer of IOT is the Cloud:

And the final layer is IOT Analytics (see the sketch below):
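Here is a minimal, purely illustrative sketch of data flowing through these four layers; the device, gateway and cloud pieces are stand-ins for real sensor drivers, gateway hardware, a message broker and an analytics service.

```python
import json
import random
import statistics

# Layer 1 - sensor-connected device: produce a raw reading.
def read_sensor():
    return {"device_id": "pump-7", "temp_c": round(random.uniform(60, 90), 1)}

# Layer 2 - gateway: batch readings and forward them (here, serialize to JSON).
def gateway(readings):
    return json.dumps({"gateway_id": "gw-1", "payload": readings})

# Layer 3 - cloud: ingest and store messages (stand-in for a broker plus a database).
cloud_store = []
def cloud_ingest(message):
    cloud_store.append(json.loads(message))

# Layer 4 - analytics: run a simple rule over everything stored in the cloud.
def analytics():
    temps = [r["temp_c"] for batch in cloud_store for r in batch["payload"]]
    return {"mean_temp_c": round(statistics.mean(temps), 1),
            "overheat_alert": max(temps) > 85}

cloud_ingest(gateway([read_sensor() for _ in range(10)]))
print(analytics())
```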

For details visit here:
https://analyticstraining.com/4-layers-of-the-internet-of-things/

22
Data Mining and Big Data / DATA MINING FOR BIG DATA
« on: April 18, 2019, 11:35:51 PM »
Data mining involves exploring and analyzing large amounts of data to find patterns in big data. The techniques came out of the fields of statistics and artificial intelligence (AI), with a bit of database management thrown into the mix.

Generally, the goal of data mining is either classification or prediction. In classification, the idea is to sort data into groups; for example, a marketer might be interested in the characteristics of those who responded to a promotion versus those who didn't. These are two classes. In prediction, the idea is to predict the value of a continuous variable; for example, a marketer might be interested in predicting those who will respond to a promotion.

Typical algorithms used in data mining include the following:

Classification trees: A popular data-mining technique that is used to classify a dependent categorical variable based on measurements of one or more predictor variables. The result is a tree with nodes and links between the nodes that can be read to form if-then rules.

Logistic regression: A statistical technique that is a variant of standard regression but extends the concept to deal with classification. It produces a formula that predicts the probability of the occurrence as a function of the independent variables.

Neural networks: A software algorithm that is modeled after the parallel architecture of animal brains. The network consists of input nodes, hidden layers, and output nodes. Each unit is assigned a weight. Data is given to the input nodes, and by a system of trial and error, the algorithm adjusts the weights until it meets a certain stopping criterion. Some people have likened this to a black-box approach.

Clustering techniques like K-nearest neighbors: A technique that identifies groups of similar records. The K-nearest neighbor technique calculates the distances between the record and points in the historical (training) data. It then assigns this record to the class of its nearest neighbor in a data set.
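As a small illustration of two of the algorithms above, here is a sketch on synthetic data using scikit-learn (the "responded / did not respond" labels are generated, not taken from any real campaign):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic "responded / did not respond" data: 1,000 customers, 10 attributes.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("k-nearest neighbors", KNeighborsClassifier(n_neighbors=5))]:
    model.fit(X_train, y_train)       # learn from the training split
    preds = model.predict(X_test)     # classify unseen customers
    print(name, "accuracy:", round(accuracy_score(y_test, preds), 3))
```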

For more details:
https://www.dummies.com/programming/big-data/engineering/data-mining-for-big-data/

23
A child who has never seen a pink elephant can still describe one — unlike a computer. “The computer learns from data,” says Jiajun Wu, a PhD student at MIT. “The ability to generalize and recognize something you’ve never seen before — a pink elephant — is very hard for machines.”

Deep learning systems interpret the world by picking out statistical patterns in data. This form of machine learning is now everywhere, automatically tagging friends on Facebook, narrating Alexa’s latest weather forecast, and delivering fun facts via Google search. But statistical learning has its limits. It requires tons of data, has trouble explaining its decisions, and is terrible at applying past knowledge to new situations; it can’t comprehend an elephant that’s pink instead of gray.

To give computers the ability to reason more like us, artificial intelligence (AI) researchers are returning to abstract, or symbolic, programming. Popular in the 1950s and 1960s, symbolic AI wires in the rules and logic that allow machines to make comparisons and interpret how objects and entities relate. Symbolic AI uses less data, records the chain of steps it takes to reach a decision, and when combined with the brute processing power of statistical neural networks, it can even beat humans in a complicated image comprehension test.
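Here is a toy sketch of that hybrid idea (not the MIT system described in the article): the symbolic side composes known concepts into a description of something never seen, and the statistical side, stubbed out below, only has to report attributes.

```python
# Symbolic side: explicit concepts and a rule for composing them.
CONCEPTS = {
    "elephant": {"shape": "elephant"},
    "pink": {"color": "pink"},
}

def compose(*words):
    """Combine known concepts into a description of something never seen before."""
    description = {}
    for word in words:
        description.update(CONCEPTS[word])
    return description

# Statistical side: an attribute detector a neural network would normally provide.
def detect_attributes(image):
    return {"shape": "elephant", "color": "pink"}   # stub output for illustration

query = compose("pink", "elephant")          # a concept with zero training examples
observed = detect_attributes("photo.jpg")
print(all(observed.get(k) == v for k, v in query.items()))   # True: recognized
```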

For more : http://news.mit.edu/2019/teaching-machines-to-reason-about-what-they-see-0402?utm_campaign=Artificial%2BIntelligence%2BWeekly&utm_medium=web&utm_source=Artificial_Intelligence_Weekly_103

24
Before you can analyze and visualize data, you have to get that data into R. There are various ways to do this, depending on how your data is formatted and where it’s located.

Usually, the function you use to import data depends on the data’s file format. In base R, for example, you can import a CSV file with read.csv(). Hadley Wickham created a package called readxl that, as you might expect, has a function to read in Excel files. There’s another package, googlesheets, for pulling in data from Google spreadsheets.

But if you don’t want to remember all that, there’s rio.

The magic of rio
“The aim of rio is to make data file I/O [input/output] in R as easy as possible by implementing three simple functions in Swiss-army knife style,” according to the project’s GitHub page. Those functions are import(), export(), and convert().

Source: infoworld

25
After revelations that political consulting firm Cambridge Analytica allegedly appropriated Facebook user data to advise Donald Trump’s 2016 U.S. presidential campaign, many are calling for greater regulation of social media networks, saying a “massive data breach” has occurred.

The idea that governments can regulate their way into protecting citizen privacy is appealing, but I believe it misses the mark.

What happened with Cambridge Analytica wasn’t a breach or a leak. It was a wild violation of academic research ethics. The story is still developing, but a college researcher has now acknowledged that he harvested Facebook users’ data and gave it to another company.

A scholar and his company failed to protect sensitive research data. A university did not do enough to stop him. Regulating Facebook won’t solve these problems.

What Kogan did wrong
I am a professor of media and information policy at the Quello Center at Michigan State University, and I was one of the first academics to study the internet. The quality and integrity of digital research is of great concern to me.

I think the Cambridge Analytica-Facebook incident is a total disaster. I just don’t think it’s a government regulatory failure.

Here’s the story, at least what the media has confirmed so far.

Aleksandr Kogan is a Cambridge University data scientist and psychology department lecturer. Outside of the university, Kogan also collected and analyzed Facebook user data – presumably with the knowledge of Facebook – for his company Global Science Research.

Through online surveys, he was reportedly able to gather sensitive personal information on tens of millions of American Facebook users, including demographic data, private messages, information about their friends and possibly even information about the friends of their friends.

Kogan then provided this data to a political consulting firm, Cambridge Analytica. According to the New York Times, the company analyzed that information, aiming to help shape the 2016 Trump campaign’s messages and identify potential Trump voters.

That was never his intent, Kogan said in a March 21 BBC radio interview. He reports being “stunned” that his “perfectly legal” research on the happiness and well-being of Facebook users was deployed as a political tool.

What Facebook did wrong
So did Facebook do something wrong, then? In my opinion, not really.

Facebook already has strict guidelines outlining what can and can’t be done with user data, which the researcher appears to have violated by passing the personal data he collected to Cambridge Analytica.

When Facebook launched in 2004, it quickly became a goldmine for social researchers. Suddenly, studies that previously relied only on survey data to gather information about individuals could directly observe how people connected to one another, what they liked, and what bound groups together.

In the early years, the company took an open and experimental attitude toward this kind of data mining, even teaming up with researchers to study how tweaking certain features of individuals’ Facebook pages affected voter turnout, say, or impacted their moods.

Those studies, conducted without the informed consent of its participants – Facebook users – were widely criticized by social science researchers. In 2014, Facebook strengthened its existing guidelines on how user data can be gathered, analyzed and used.

Today, the company requires an extensive internal review of every request to extract personal data from users for research purposes.

In other words, Facebook self-regulated.

It may have been lax in enforcing its guidelines, though. The company says that once it learned that Cambridge Analytica had used Kogan’s data set for unauthorized purposes, it insisted that the data be deleted.

According to current press reports, Cambridge Analytica did not comply. For a while, it seems, Facebook did nothing to punish the company.

I believe the fallout from this scandal – including a Federal Trade Commission investigation – will push Facebook to take enforcement much more seriously.

After all, as CEO Mark Zuckerberg said in a March 21 Facebook post, the company “made mistakes” and it “has a responsibility to protect” its users.

Cambridge Analytica’s Facebook account has now been suspended. And under both U.S. and U.K. law, individuals or firms accused of unauthorized disclosure of personal information can face prosecution.


Cambridge Analytica CEO Alexander Nix has been suspended over the Facebook scandal. Henry Nicholls/Reuters
What academia does wrong
For me, what the Cambridge Analytica fiasco exposes is that university ethical review processes are not yet equipped for the digital age.

University researchers are bound by strict ethical guidelines. Across the world – particularly in the U.K., with its strong social research traditions – academics who want to study the attitudes or behavior of private individuals must first pass a stringent review process. They must also obtain explicit, informed consent from those who participate in their research.

It is impossible for me to imagine that an ethics board at the University of Cambridge would have ever approved of Kogan sharing his data with Cambridge Analytica.

Universities around the globe actually encourage faculty to develop entrepreneurial companies, as Kogan did. That helps their research reach beyond campus to foster innovation in business, industry and government.

But the norms and rules that protect participants in academic research – such as not sharing identifiable personal data – do not stop at the door of the university.

Kogan’s exploits show that professors’ outside jobs may raise conflicts of interest and may have escaped the reach of institutional review. This is an area of academic work-for-hire that universities need to review with an eye toward updating how they enforce research ethics.

I’ve briefed institutional review boards at a number of universities, and I can attest that members often don’t understand how the internet and social media networks have transformed the way data is created, gathered, analyzed and shared.

Frequently, the authorities who grant professors and students permission to conduct their studies are anchored in the standards of medical research, not modern social science.

Many schools also generally fail to understand how cutting-edge some academic fields have become. Big data and computational analytics are among the most innovative scientific fields today.

Legitimate, company-approved access to social media user data allows researchers to study some of the most urgent issues of the 21st century, including fake news, political echo chambers and technological trends. So it is not surprising that political campaigns would want to appropriate these research practices.

Until they come up with new rules, I fear universities’ lack of digital savvy will remain a threat to online privacy.


Source: theconversation.com

26
Software Engineering / What’s coming in TensorFlow 2.0
« on: January 17, 2019, 08:01:30 PM »
TensorFlow has grown to become one of the most loved and widely adopted ML platforms in the world. This community includes:

Researchers (e.g., to forecast earthquake aftershocks and detect breast cancer).
Developers (e.g., to build apps to identify diseased plants and to help people lose weight).
Companies (e.g., by eBay, Dropbox and AirBnB to improve their customer experience).
And many others.
In November, TensorFlow celebrated its 3rd birthday with a look back at the features added throughout the years. We’re excited about another major milestone, TensorFlow 2.0.

TensorFlow 2.0 will focus on simplicity and ease of use, featuring updates like:

Easy model building with Keras and eager execution.
Robust model deployment in production on any platform.
Powerful experimentation for research.
Simplifying the API by cleaning up deprecated APIs and reducing duplication.
Over the last few years, we’ve added a number of components to TensorFlow. With TensorFlow 2.0, these will be packaged together into a comprehensive platform that supports machine learning workflows from training through deployment. Let’s take a look at the new architecture of TensorFlow 2.0, summarized in a simplified conceptual diagram (shown in the linked post).


Note: Although the training part of this diagram focuses on the Python API, TensorFlow.js also supports training models. Other language bindings also exist with various degrees of support, including: Swift, R, and Julia.
Easy model building
In a recent blog post we announced that Keras, a user-friendly API standard for machine learning, will be the central high-level API used to build and train models. The Keras API makes it easy to get started with TensorFlow. Importantly, Keras provides several model-building APIs (Sequential, Functional, and Subclassing), so you can choose the right level of abstraction for your project. TensorFlow’s implementation contains enhancements including eager execution, for immediate iteration and intuitive debugging, and tf.data, for building scalable input pipelines.

Here’s an example workflow (in the coming months, we will be working to update the guides linked below):

Load your data using tf.data. Training data is read using input pipelines, which are created using tf.data. Feature characteristics, for example bucketing and feature crosses, are described using tf.feature_column. Convenient input from in-memory data (for example, NumPy) is also supported.
Build, train and validate your model with tf.keras, or use Premade Estimators. Keras integrates tightly with the rest of TensorFlow so you can access TensorFlow’s features whenever you want. A set of standard packaged models (for example, linear or logistic regression, gradient boosted trees, random forests) are also available to use directly (implemented using the tf.estimator API). If you’re not looking to train a model from scratch, you’ll soon be able to use transfer learning to train a Keras or Estimator model using modules from TensorFlow Hub.
Run and debug with eager execution, then use tf.function for the benefits of graphs. TensorFlow 2.0 runs with eager execution by default for ease of use and smooth debugging. Additionally, the tf.function annotation transparently translates your Python programs into TensorFlow graphs. This process retains all the advantages of 1.x TensorFlow graph-based execution: Performance optimizations, remote execution and the ability to serialize, export and deploy easily, while adding the flexibility and ease of use of expressing programs in simple Python.
Use Distribution Strategies for distributed training. For large ML training tasks, the Distribution Strategy API makes it easy to distribute and train models on different hardware configurations without changing the model definition. Since TensorFlow provides support for a range of hardware accelerators like CPUs, GPUs, and TPUs, you can enable training workloads to be distributed to single-node/multi-accelerator as well as multi-node/multi-accelerator configurations, including TPU Pods. This API supports a variety of cluster configurations, and templates are provided to deploy training on Kubernetes clusters in on-prem or cloud environments.
Export to SavedModel. TensorFlow will standardize on SavedModel as an interchange format for TensorFlow Serving, TensorFlow Lite, TensorFlow.js, TensorFlow Hub, and more.
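Putting a few of the steps above together, here is a minimal sketch assuming TensorFlow 2.x; dummy in-memory data stands in for a real input pipeline, and the export path is arbitrary.

```python
import numpy as np
import tensorflow as tf

# Dummy in-memory data standing in for a real tf.data input pipeline.
x = np.random.rand(256, 10).astype("float32")
y = np.random.rand(256, 1).astype("float32")
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)

# Build and train a model with tf.keras (Sequential API).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(dataset, epochs=3)

# Wrap a computation in tf.function to get graph execution.
@tf.function
def predict(batch):
    return model(batch)

print(predict(x[:5]))

# Export in the SavedModel interchange format.
tf.saved_model.save(model, "/tmp/my_saved_model")
```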
Robust model deployment in production

for more visit : https://medium.com/tensorflow/whats-coming-in-tensorflow-2-0-d3663832e9b8?linkId=62351082&fbclid=IwAR17At-YLut-nz0FAnZL317N1gSrR0hHXqQzHmmzHxx6Em4zoe3JxZWGHH0

27
Machine translation is the problem of translating sentences from some source language to a target language. Neural machine translation (NMT) directly models the mapping of a source language to a target language without any need for training or tuning any component of the system separately. This has led to rapid progress in NMT and its successful adoption in many large-scale settings.
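For flavor, here is a minimal sketch of a generic encoder-decoder NMT model in Keras. It is not the contextual parameter generation approach the linked post describes, and the vocabulary sizes and dimensions are arbitrary.

```python
import tensorflow as tf
from tensorflow.keras import layers

src_vocab, tgt_vocab, emb_dim, hidden = 8000, 8000, 128, 256   # arbitrary sizes

# Encoder: embed source tokens and summarize the sentence into a state vector.
enc_in = layers.Input(shape=(None,), dtype="int32")
enc_emb = layers.Embedding(src_vocab, emb_dim)(enc_in)
_, state_h, state_c = layers.LSTM(hidden, return_state=True)(enc_emb)

# Decoder: generate target tokens conditioned on the encoder's final state.
dec_in = layers.Input(shape=(None,), dtype="int32")
dec_emb = layers.Embedding(tgt_vocab, emb_dim)(dec_in)
dec_seq = layers.LSTM(hidden, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c])
probs = layers.Dense(tgt_vocab, activation="softmax")(dec_seq)

# One end-to-end model: trained jointly, no separately tuned components.
model = tf.keras.Model([enc_in, dec_in], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```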

for more visit : https://blog.ml.cmu.edu/2019/01/14/contextual-parameter-generation-for-universal-neural-machine-translation/

28
By Richard Socher:
"Three road blocks to solve for artifical general intelligence (#AGI)
1) Massive Multitask learning with a single joint model
2) Ability of algorithms to update their objective functions in continuous learning
3) Learnable combination of fuzzy/fluid and symbolic reasoning"

29
Researchers from MIT and Massachusetts General Hospital (MGH) have developed a predictive model that could guide clinicians in deciding when to give potentially life-saving drugs to patients being treated for sepsis in the emergency room.

Sepsis is one of the most frequent causes of admission, and one of the most common causes of death, in the intensive care unit. But the vast majority of these patients first come in through the ER. Treatment usually begins with antibiotics and intravenous fluids, a couple liters at a time. If patients don’t respond well, they may go into septic shock, where their blood pressure drops dangerously low and organs fail. Then it’s often off to the ICU, where clinicians may reduce or stop the fluids and begin vasopressor medications such as norepinephrine and dopamine, to raise and maintain the patient’s blood pressure.

That’s where things can get tricky. Administering fluids for too long may not be useful and could even cause organ damage, so early vasopressor intervention may be beneficial. In fact, early vasopressor administration has been linked to improved mortality in septic shock. On the other hand, administering vasopressors too early, or when not needed, carries its own negative health consequences, such as heart arrhythmias and cell damage. But there’s no clear-cut answer on when to make this transition; clinicians typically must closely monitor the patient’s blood pressure and other symptoms, and then make a judgment call.

In a paper being presented this week at the American Medical Informatics Association’s Annual Symposium, the MIT and MGH researchers describe a model that “learns” from health data on emergency-care sepsis patients and predicts whether a patient will need vasopressors within the next few hours. For the study, the researchers compiled the first-ever dataset of its kind for ER sepsis patients. In testing, the model could predict a need for a vasopressor more than 80 percent of the time.
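For intuition only, here is what this kind of supervised prediction task looks like in code: a minimal sketch on entirely synthetic data with made-up features, not the MIT/MGH model or dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Entirely synthetic stand-in data: rows = ER sepsis patients, columns = made-up
# vitals/labs; label = needed vasopressors within the next few hours (1) or not (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))
y = (X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.8, size=5000) < -0.7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)

# Evaluate how well the classifier separates the two groups on held-out patients.
print("AUC:", round(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]), 3))
```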

Early prediction could, among other things, prevent an unnecessary ICU stay for a patient that doesn’t need vasopressors, or start early preparation for the ICU for a patient that does, the researchers say.

“It’s important to have good discriminating ability between who needs vasopressors and who doesn’t [in the ER],” says first author Varesh Prasad, a PhD student in the Harvard-MIT Program in Health Sciences and Technology. “We can predict within a couple of hours if a patient needs vasopressors. If, in that time, patients got three liters of IV fluid, that might be excessive. If we knew in advance those liters weren’t going to help anyway, they could have started on vasopressors earlier.”

In a clinical setting, the model could be implemented in a bedside monitor, for example, that tracks patients and sends alerts to clinicians in the often-hectic ER about when to start vasopressors and reduce fluids. “This model would be a vigilance or surveillance system working in the background,” says co-author Thomas Heldt, the W. M. Keck Career Development Professor in the MIT Institute of Medical Engineering and Science. “There are many cases of sepsis that [clinicians] clearly understand, or don’t need any support with. The patients might be so sick at initial presentation that the physicians know exactly what to do. But there’s also a ‘gray zone,’ where these kinds of tools become very important.”

Co-authors on the paper are James C. Lynch, an MIT graduate student; and Trent D. Gillingham, Saurav Nepal, Michael R. Filbin, and Andrew T. Reisner, all of MGH. Heldt is also an assistant professor of electrical and biomedical engineering in MIT’s Department of Electrical Engineering and Computer Science and a principal investigator in the Research Laboratory of Electronics.

Other models have been built to predict which patients are at risk for sepsis, or when to administer vasopressors, in ICUs. But this is the first model trained on the task for the ER, Heldt says. “[The ICU] is a later stage for most sepsis patients. The ER is the first point of patient contact, where you can make important decisions that can make a difference in outcome,” Heldt says.

The primary challenge has been a lack of an ER database. The researchers worked with MGH clinicians over several years to compile medical records of nearly 186,000 patients who were treated in the MGH emergency room from 2014 to 2016. Some patients in the dataset had received vasopressors within the first 48 hours of their hospital visit, while others hadn’t. Two researchers manually reviewed all records of patients with likely septic shock to include the exact time of vasopressor administration, and other annotations. (The average time from presentation of sepsis symptoms to vasopressor initiation was around six hours.)

For more visit: http://news.mit.edu/2018/machine-learning-sepsis-care-1107

30
WHAT THE RESEARCH IS:
Zero-shot learning (ZSL) is a process by which a machine learns to recognize objects it has never seen before. Researchers at Facebook have developed a new, more accurate ZSL model that uses neural net architectures called generative adversarial networks (GANs) to read and analyze text articles, and then visually identify the objects they describe. This novel approach to ZSL allows machines to classify objects based on category, and then use that information to identify other similar objects, as opposed to learning each object individually, as other models do.

HOW IT WORKS:
Researchers trained this model, called generative adversarial zero-shot learning (GAZSL), to identify more than 600 classes of birds across two databases containing more than 60,000 images. It was then given web articles and asked to use the information there to identify birds it had not seen before. The model extracted seven key visual features from the text, created synthetic visualizations of these features, and used those features to identify the correct class of bird.
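To show the shape of the idea, here is a toy sketch (not Facebook's GAZSL implementation; the generator is left untrained and the dimensions are arbitrary): a class's text embedding is mapped to a synthetic visual feature, and an image feature is classified by its nearest synthetic prototype.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

text_dim, feat_dim, n_unseen = 300, 2048, 5     # arbitrary toy dimensions

# Stand-in "generator": maps a class's text embedding to a synthetic visual feature.
# (In GAZSL this network is trained adversarially; here it is left untrained.)
generator = tf.keras.Sequential([
    layers.Dense(1024, activation="relu", input_shape=(text_dim,)),
    layers.Dense(feat_dim),
])

# Pretend these are text embeddings of articles describing 5 unseen bird classes.
unseen_text = np.random.randn(n_unseen, text_dim).astype("float32")
prototypes = generator.predict(unseen_text)     # synthetic visual features per class

# Classify a new image feature by its nearest synthetic prototype (cosine similarity).
def classify(image_feature):
    sims = prototypes @ image_feature / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(image_feature) + 1e-8)
    return int(np.argmax(sims))

print("predicted class:", classify(np.random.randn(feat_dim).astype("float32")))
```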

Researchers then tested the GAZSL model against seven other ZSL algorithms and found it was consistently more accurate across four different benchmarks. Overall, the GAZSL model outperformed other models by between 4 percent and 7 percent, and in some cases by much more.

WHY IT MATTERS:
To become more useful, computer vision systems will need to recognize objects they have not specifically been trained on. For example, it is estimated that there are more than 10,000 living bird species, yet most computer vision data sets of birds have only a couple hundred categories. This new ZSL model, which has been open-sourced, has been shown to produce better results and offers a promising path for future research into machine learning. Much of the research into AI remains foundational, but work that improves how systems are able to understand text and correctly identify objects continues to lay the groundwork for better, more reliable AI systems.



Source: https://code.fb.com/ai-research/zero-shot-learning/?utm_campaign=Artificial%2BIntelligence%2BWeekly&utm_medium=web&utm_source=Artificial_Intelligence_Weekly_90
