Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Nazia Nishat

Pages: [1] 2 3 ... 9
Stories are interesting to read, and children especially love to listen to them. A story grabs the reader's attention and gives great pleasure, excitement, thrill, or suspense. Readers follow along with a story and may form expectations, predicting its flow from the situations the author has already narrated. Predicting the flow of a story requires the capacity to reason about it. Human beings can easily reason about a story through their cognitive processes, whereas reasoning about stories by a system is not as easy and requires considerable intelligence. This paper concentrates on providing an environment for analyzing stories on the basis of their characters, events, and situations. It aims to reason about stories sentence by sentence against a real-world description using an ontology. The ontology helps investigate a story by extracting the characters and events from it and providing the semantic relations among them. An ontology is a formal, explicit, shared conceptualization. It provides the domain knowledge that can be used to reason about stories semantically. Reasoning about stories based on their characters can act as a lead for constructing new varieties of stories by changing the characters, their natures, and the events.

Artificial Intelligence / Ontology-based Text Document Clustering
« on: March 29, 2019, 01:16:10 AM »
Text clustering typically involves clustering in a high-dimensional space, which is difficult in virtually all practical settings. In addition, given a particular clustering result, it is typically very hard to come up with a good explanation of why the text clusters have been constructed the way they are. In this paper, we propose a new approach for applying background knowledge during preprocessing in order to improve clustering results and allow for selection between results. We preprocess our input data by applying ontology-based heuristics for feature selection and feature aggregation, thereby constructing a number of alternative text representations. Based on these representations, we compute multiple clustering results using K-Means. The results may be distinguished and explained by the corresponding selection of concepts in the ontology. Our results compare favourably with a sophisticated baseline preprocessing strategy.
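The ontology-based feature aggregation described in the abstract can be sketched in a few lines. This is only a toy illustration, not the paper's implementation: the mini-ontology, its term-to-concept mapping, and the example documents are all invented for the sketch.

```python
# Sketch of ontology-based feature aggregation for text clustering.
# Terms in a document are mapped up to parent concepts in a toy ontology,
# so documents about "dog" and "horse" share the aggregated feature "animal".

from collections import Counter

# Hypothetical mini-ontology: term -> parent concept.
ONTOLOGY = {
    "dog": "animal", "cat": "animal", "horse": "animal",
    "car": "vehicle", "truck": "vehicle", "bike": "vehicle",
}

def concept_features(text):
    """Count concept-level features instead of raw terms."""
    counts = Counter()
    for token in text.lower().split():
        # Aggregate a term to its ontology concept when one exists;
        # otherwise keep the raw term as its own feature.
        counts[ONTOLOGY.get(token, token)] += 1
    return counts

doc_a = concept_features("the dog chased the cat")
doc_b = concept_features("a horse watched the truck")
# Both documents now share the "animal" feature even though they have
# no animal word in common.
```

In the paper's setting, representations like these would then be fed to K-Means; varying which ontology concepts are used for aggregation yields the alternative representations by which the resulting clusters can be explained.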


Natural Language Processing / Relation Extraction
« on: July 13, 2018, 01:01:49 PM »
Many NLP applications require understanding relations between word senses: synonymy, antonymy, hyponymy, and meronymy. WordNet is a machine-readable database of relations between word senses, and an indispensable resource in many NLP tasks.
But WordNet is manually constructed, and has many gaps!
(Slide examples contrasting word senses in WordNet 3.1 with senses not in WordNet 3.1.)

Relation extraction: 5 easy methods
1. Hand-built patterns
2. Bootstrapping methods
3. Supervised methods
4. Distant supervision
5. Unsupervised methods
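Method 1 above, hand-built patterns, is the classic Hearst-pattern approach to finding hyponyms. Here is a minimal sketch with a single pattern; the pattern and the test sentence are illustrative only:

```python
import re

# One Hearst-style pattern: "X such as Y (and/or Z)" suggests Y is a kind of X.
PATTERN = re.compile(
    r"(\w+)\s+such\s+as\s+(\w+)(?:\s*(?:,|and|or)\s*(\w+))*",
    re.IGNORECASE,
)

def extract_hyponyms(sentence):
    """Return (hyponym, hypernym) pairs matched by the single pattern."""
    pairs = []
    m = PATTERN.search(sentence)
    if m:
        hypernym = m.group(1)
        for hyponym in m.groups()[1:]:
            if hyponym:
                pairs.append((hyponym, hypernym))
    return pairs

print(extract_hyponyms("He plays instruments such as guitar and banjo"))
# → [('guitar', 'instruments'), ('banjo', 'instruments')]
```

Real systems use many such patterns, and the later methods in the list exist precisely because hand-built patterns are precise but low-recall.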


What does DeepDive do?
DeepDive is a trained system that uses machine learning to cope with various forms of noise and imprecision. DeepDive is designed to make it easy for users to train the system through low-level feedback via the Mindtagger interface and through rich, structured domain knowledge via rules. DeepDive aims to enable domain experts who do not have machine learning expertise. One of DeepDive's key technical innovations is the ability to solve statistical inference problems at massive scale.

DeepDive asks the developer to think about features, not algorithms. In contrast, other machine learning systems require the developer to think about which clustering algorithm, which classification algorithm, and so on to use. In DeepDive's joint-inference-based approach, the user only specifies the necessary signals or features.

DeepDive systems can achieve high quality: PaleoDeepDive attains higher quality than human volunteers in extracting complex knowledge in scientific domains, and DeepDive has achieved winning performance in entity relation extraction competitions.

Further Reading:

Relation extraction:
Relation extraction plays an important role in extracting structured information from unstructured sources such as raw text. One may want to find interactions between drugs to build a medical database, understand the scenes in images, or extract relationships among people to build an easily searchable knowledge base.

For example, let's assume we are interested in marriage relationships. We want to automatically figure out that "Michelle Obama" is the wife of "Barack Obama" from a corpus of raw text snippets such as "Barack Obama married Michelle Obama in...". A naive approach would be to search news articles for indicative phrases, like "married" or "XXX's spouse". This would yield some results, but human language is inherently ambiguous, and one cannot possibly come up with all phrases that indicate a marriage relationship. A natural next step would be to use machine learning techniques to extract the relations. If we have some labeled training data, such as examples of pairs of people that are in a marriage relationship, we could train a machine learning classifier to automatically learn the patterns for us. This sounds like a great idea, but there are several challenges:

- How do we disambiguate between words that refer to the same entity? For example, a sentence may refer to "Barack Obama" as "Barack" or "the president".
- How do we get training data for our machine learning model?
- How do we deal with conflicting or uncertain data?
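One standard answer to the training-data question is distant supervision (method 4 in the list earlier): take known pairs from an existing knowledge base and treat any sentence mentioning both members of a pair as a noisy positive example. A toy sketch; the knowledge-base pairs and sentences are invented for illustration:

```python
# Distant supervision: label sentences using a small "knowledge base"
# of entity pairs known to be in the target relation (here: marriage).

KNOWN_SPOUSES = {("Barack Obama", "Michelle Obama")}

sentences = [
    "Barack Obama married Michelle Obama in 1992.",
    "Barack Obama met with the press on Tuesday.",
]

def distant_labels(sentences, kb_pairs):
    labeled = []
    for s in sentences:
        # Any sentence containing both entities of a known pair becomes
        # a (noisy) positive training example for the relation classifier.
        positive = any(a in s and b in s for a, b in kb_pairs)
        labeled.append((s, positive))
    return labeled

for sentence, label in distant_labels(sentences, KNOWN_SPOUSES):
    print(label, sentence)
```

The labels are noisy by construction, since a sentence can mention both spouses without expressing the marriage relation; that noise is exactly what systems like DeepDive are built to cope with.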

Entity linking:

Before starting to extract relations, it is a good idea to determine which words refer to the same "object" in the real world. These objects are called entities. For example, "Barack", "Obama" or "the president" may refer to the entity "Barack Obama". Let's say we extract relations about one of the words above. It would be helpful to combine them as being information about the same person. Figuring out which words, or mentions, refer to the same entity is a process called entity linking. There are various techniques to perform entity linking, ranging from simple string matching to more sophisticated machine learning approaches. In some domains we have a database of all known entities to link against, such as a dictionary of all countries. In other domains, we need to be open to discovering new entities.
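The simple end of that spectrum, string matching against an alias dictionary, can be sketched as follows; the alias table is made up for the example:

```python
# Minimal entity linking by alias lookup: map surface mentions to a
# canonical entity, falling back to None for unknown mentions.

ALIASES = {
    "barack": "Barack Obama",
    "obama": "Barack Obama",
    "the president": "Barack Obama",
    "michelle": "Michelle Obama",
}

def link(mention):
    """Return the canonical entity for a mention, or None if unknown."""
    return ALIASES.get(mention.lower())

print(link("Barack"))         # Barack Obama
print(link("The president"))  # Barack Obama
print(link("Angela"))         # None
```

Real linkers add context: "the president" should map to different entities depending on the document's date and topic, which is where the machine learning approaches come in.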

Dealing with uncertainty:

Given enough training data, we can use machine learning algorithms to extract entities and relations we care about. There is one problem left: human language is inherently noisy. Words and phrases can be ambiguous, sentences are often ungrammatical, and spelling mistakes are frequent. Our training data may have errors in it as well, and we may have made mistakes in the entity linking step. This is where many machine learning approaches break down: they treat training or input data as "correct" and make predictions using this assumption.

DeepDive makes good use of uncertainty to improve predictions during the probabilistic inference step. For example, DeepDive may figure out that a certain mention of "Barack" is only 60% likely to actually refer to "Barack Obama", and use this fact to discount the impact of that mention on the final result for the entity "Barack Obama". DeepDive can also make use of domain knowledge and allow users to encode rules such as "If Barack is married to Michelle, then Michelle is married to Barack" to improve the predictions.
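The 60% example can be made concrete with a probability-weighted count: instead of each mention voting with weight 1, it votes with its linking probability. This is only a toy sketch of the idea, not DeepDive's actual inference; the mentions and probabilities are invented:

```python
# Weight each mention's evidence by the probability that it really refers
# to the target entity, instead of treating entity linking as certain.

mentions = [
    ("Barack", 0.6),         # ambiguous first-name mention
    ("Barack Obama", 0.95),  # full-name mention, near-certain link
]

def weighted_support(mentions):
    # Expected number of true mentions of the entity: an uncertain
    # mention contributes less than a confident one.
    return sum(p for _, p in mentions)

print(weighted_support(mentions))  # ≈ 1.55, versus 2.0 if both counted fully
```

DeepDive generalises this far beyond counting, propagating such probabilities through joint inference together with user-supplied rules like the marriage-symmetry example above.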


Information Extraction: When we are doing information retrieval, we care about each document individually, and our intention is to see what is in it. The goal is to assimilate documents quickly, so we remove stop words, apply stemming, and expand synonyms, leaving the important features of each document. If we are performing text classification, we then train a model on these features after careful feature selection; the output is a class label for the document. From this we can get useful information, such as whether the document is spam, related to a product, related to a service experience, a complaint, written by a brand advocate, from a customer likely to purchase, etc.
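That preprocessing pipeline (stop-word removal plus stemming) can be sketched in a few lines. The stop-word list and the crude suffix stemmer below are simplified stand-ins for real resources such as a full stop list and the Porter stemmer:

```python
# Toy preprocessing: drop stop words, then strip common suffixes.

STOP_WORDS = {"the", "a", "an", "is", "was", "it", "and", "of", "to"}

def crude_stem(word):
    # Naive suffix stripping; a real system would use e.g. Porter stemming.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text):
    tokens = text.lower().split()
    return [crude_stem(t) for t in tokens if t not in STOP_WORDS]

print(preprocess("The customer was complaining about the services"))
# → ['customer', 'complain', 'about', 'servic']
```

Note that stems like "servic" are not words; that is fine for feature extraction, since the point is only that "service" and "services" map to the same feature.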

Relation Extraction: This is about finding relations between two documents or two entities. The simplest form is finding the similarity between two documents; clustering can also be viewed as relation extraction, but similarity and clustering don't actually extract relations on which we can take action. So we need techniques that tell us the specific relation between entities. Relation extraction is mostly done for entities such as people, cities, zip codes, movies, restaurants, dealerships, etc.
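The document-similarity case mentioned here is commonly computed as cosine similarity between bag-of-words vectors. A minimal sketch:

```python
# Cosine similarity between two documents represented as word-count vectors.

import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    # Dot product over the shared vocabulary, normalised by vector lengths.
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("the cat sat", "the cat ran"))  # ≈ 0.667
```

As the post says, a high score only tells us the documents are related, not *how* they are related; that is the gap relation extraction proper fills.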
See details in:

The speaker is Christine Robson, a product manager for Google’s internal machine learning efforts. Here are the seven applications and products that Christine described as the coolest uses of machine learning at Google:

1. Google Translate
2. Google Voice Search
3. Gmail Inbox’s Smart Reply
4. RankBrain: If RankBrain sees a word or phrase it isn’t familiar with, the machine can guess what words or phrases might have a similar meaning and filter the results accordingly, making it more effective at handling never-before-seen search queries.
5. Google Photos: More recently, the app also automatically collects photos taken during a specific period into an album showing the “best” photos from the trip. To identify the “best” photos, the app uses machine learning, where a computer has been trained to recognize images.
6. Google Cloud Vision API: A more technical, enterprise-oriented product, the Google Cloud Vision API enables developers to understand the content of an image by encapsulating powerful machine learning models in an easy-to-use REST API. It quickly classifies images into thousands of categories, detects individual objects and faces within images, and finds and reads printed words contained within images. As a developer, you can build metadata on your image catalog, moderate offensive content, or enable new marketing scenarios through image sentiment analysis.
7. DeepDream: DeepDream is a computer vision program created by Google which uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dreamlike, hallucinogenic appearance in the deliberately over-processed images.

Food Habit / Re: Common mistakes in Ramadan
« on: May 17, 2018, 09:39:27 PM »
Thank you for your information...

Alumni / Re: Life in Death
« on: March 31, 2018, 10:11:25 PM »
A great approach by the parents: they became parents to other children as well.

We should put a small food shop (chips, chocolates, cakes, water bottles, pens, paper, all necessary things) in the basement of every building, because at the moment we need to walk to the canteen every time we need anything.

How do you define yourself? | Lizzie Velasquez | TEDxAustinWomen

A link-

This Teaching Tip outlines one procedure for having students build their own quiz. This procedure was designed for a large undergraduate classroom. The steps are as follows:

Step 1: Approximately two class periods before the quiz or exam, instructors should provide a brief in-class review of the material to be covered on the quiz or exam. Then, give each student an index card, preferably a card that is at least four inches by six inches. Instruct students to create one potential quiz question each, and to write that question on the notecard. The question may be of any format (e.g., multiple choice, true/false, essay). Students must also write the answer. Students may work alone or in pairs, but must write their name on the card. When finished, students turn in the index cards to the instructor. The cards can be used to note attendance and/or award participation points.

Step 2: During the next class period, the instructor can use the students’ suggested questions to help students prepare for the quiz or exam. This can be done by displaying the best questions on a PowerPoint and discussing the answers as a class. The instructor should take care to praise the students’ questions and to note any patterns the instructor observed when reviewing the students’ questions. For example, the instructor might note that many of the questions revolved around a particular topic or theme, or that none addressed another important one.

Step 3: Develop and administer the quiz. In developing the quiz, the instructor will want to use as many of the students’ suggested questions on the quiz as possible. Of course, the instructor may edit, adapt, and/or combine the students’ suggestions as needed.

Step 4: During the class period immediately following the quiz, ask students about their experience developing and, then, taking the quiz. Some students will appreciate the learning challenge and will feel a sense of accomplishment. Some students will appreciate the shift in dynamic from teacher-driven assessment to student-driven assessment. Other students will be uncomfortable with this process and the ambiguity inherent in such a shift in roles. Take care to encourage both positive and negative responses, and to validate all students’ experiences.

The epic story of Prophet Musa (AS) narrated in the Quran, and life-changing lessons derived from it.

Software Engineering / It's all about life!
« on: May 05, 2017, 12:40:51 PM »

Software Engineering / Life Quotes
« on: May 05, 2017, 12:28:55 PM »

