
Plagiarism Checker X Originality Report

Similarity Found: 12%

Date: Friday, July 05, 2019


Statistics: 2354 words Plagiarized / 31206 Total words
Remarks: Low Plagiarism Detected - Your Document needs Optional Improvement.
-------------------------------------------------------------------------------------------

ABSTRACT

Over the course of the last two decades, the world has seen rapid growth in various forms of data on the World Wide Web. The online flow of information is rising steadily. Web content is growing at lightning speed, with social media as a huge part of it. E-commerce sites are an integral part of social media and allow the online shopping community to write Telugu reviews.

The sentiments analyzed from these product Telugu reviews, with the help of domain knowledge, need to be reflected in better product recommendations to users; this is the way to improve business in the online market. Extracting such high quality, crucial and meaningful information from the online Telugu reviews and presenting it in a more useful way achieves the aforementioned task. However, gathering such information is difficult.

This is because Telugu reviews often contain all possible word and phrase types available in the language. Ambiguous and context sensitive words in the reviews also make their analysis complex. Many research contributions have been observed in the direction of developing methods for machines to take decisions based on their ability to learn, understand and classify the sentiments of the features extracted from unstructured Telugu reviews.

Keeping in mind the existing problem of recommending products using online Telugu review information, it is important to have a model that carries out this task and uses the extracted words in the analysis of a machine interpretable ontology for meaningful learning of feature sentiments. Natural Language Processing based Telugu review analysis is such a model: it captures word combination patterns such as product features and opinions from the Telugu reviews.

The product features and opinions are also extracted from the Telugu review collection using the concept of topic modelling with a “part-of” and “is-a” product feature taxonomy. The motivation of this research work is to mine the ontology for intelligent discovery of feature specific sentiment rules towards improving the product recommendations to the client. In the present research work, an endeavour is made to extract the maximum number of product features from the Telugu reviews.

Further, the corresponding opinions are also identified and extracted. Finally, an ontology is developed so that the machine can automatically identify the features and opinions from the Telugu reviews, classify the product features by mining conclusions from the ontology, and use these machine level sentiments for improving the product recommendations. This supports the end user in making wise purchase decisions in a fast and accurate manner, without compromising on the product feature set.

The performance of the enhanced model is compared with the plain sentiment analysis model in order to highlight the significance of ontology in recommendation systems.

TABLE OF CONTENTS

Abstract
List of tables
List of figures
1 Introduction
  1.1 Focus on Telugu reviews in E-Commerce
  1.2 Analysis of Telugu reviews in the pretext of opinions and sentiments
  1.3 Importance of polarity in the context of Telugu reviews
  1.4 Need of product recommendations using machine learned sentiments
  1.5 Challenges in opinion mining
    1.5.1 Contrasting nature of facts and opinions
    1.5.2 Factors that make opinion mining difficult
      1.5.2.1 Human versus algorithm based opinion word identification
      1.5.2.2 Identification of context sensitive and domain dependent opinionated sentences
  1.6 Motivations and Aims
  1.7 Objectives
  1.8 Organization of the thesis
2 Machine Learning in computational treatment of opinions
  2.1 Introduction
  2.2 Review based Opinion Mining
    2.2.1 Tasks involved in Opinion Mining
      2.2.1.1 Feature Extraction
      2.2.1.2 Opinion Extraction
      2.2.1.3 Subjectivity Classification
      2.2.1.4 Sentiment Classification
      2.2.1.5 Opinion Holder Identification
    2.2.2 Implications of online Telugu reviews in the E-Commerce field
  2.3 Various explored techniques on opinion mining
    2.3.1 Pre-processing of online Telugu reviews
    2.3.2 Feature Extraction approaches
      2.3.2.1 Developing natural language rules
      2.3.2.2 Sequence Models
      2.3.2.3 Topic Models
    2.3.3 Opinion word extraction approaches
    2.3.4 Opinion word orientation approaches
      2.3.4.1 Dictionary based models
      2.3.4.2 Corpus based models
    2.3.5 Ontology support to feature level opinion mining
    2.3.6 Ontology based sentiment classification – A Semantic Data Mining approach
    2.3.7 Recommender Systems in E-Commerce
      2.3.7.1 Sentiments based product recommendations
      2.3.7.2 Ontologies utilization for better product recommendations
  2.4 Evaluation techniques in opinion mining
    2.4.1 Measures in Opinion Mining
  2.5 Problem Definition
  2.6 Proposed Methodology
  2.7 Conclusions
3 Influence of NLP on review based opinion mining
  3.1 Introduction
  3.2 Drawbacks of the existing methods in opinion mining
  3.3 Proposed Model
  3.4 Review Datasets for the implementation of the model
  3.5 Empirical evaluation of the data with small dataset
  3.6 Conclusion
4 Feature level opinion mining of online Telugu reviews
  4.1 Introduction
  4.2 Opinion mining of product Telugu reviews using language rules
    4.2.1 Input Telugu reviews pre-processing module
    4.2.2 NLP based product features extraction and opinions analysis module
      4.2.2.1 Step-by-step product features extraction
      4.2.2.2 Opinion extraction and orientation
    4.2.3 Influence of natural language rules on feature and opinion extraction
    4.2.4 Results and Observations
  4.3 Opinion mining of product Telugu reviews using traditional LDA
    4.3.1 LDA based product features extraction and opinions analysis module
      4.3.1.1 Product features extraction using LDA and FOT
      4.3.1.2 Opinion extraction and orientation on LDA and FOT product features
    4.3.2 Influence of LDA priors on learning product features
    4.3.3 Results and Observations
  4.4 Conclusion
5 Sentiment based semantic rule learning for improved product recommendations
  5.1 Introduction
  5.2 Role of semantic rules
  5.3 Influence of sentiments on the semantic rules
  5.4 Semantic rules learning based on ontology mining
    5.4.1 Development of Product Review Opinion Ontology (PROO)
      5.4.1.1 The domain and scope of PROO Ontology
      5.4.1.2 Classes in the PROO Ontology
      5.4.1.3 Arrangement of classes in a taxonomic hierarchy
      5.4.1.4 Slots and describing allowed values for these slots
      5.4.1.5 The creation of instance values for classes
    5.4.2 Semantic Data Mining the PROO Ontology for learning semantic rules
    5.4.3 Inductive Logic Programming for verification of the mined semantic rules
  5.5 Improving product recommendations using the learned semantic sentiments
    5.5.1 Improving the product recommendations using rule based sentiments from ontology
  5.6 Design decisions in the implementation of the ontology
  5.7 Evaluation of results
  5.8 Conclusion
6 Comparative Models
  6.1 Introduction
  6.2 Comparison based on similar product Telugu reviews by different vendors
  6.3 Comparison between semi-automatic feature extraction and Ontology based feature extraction in NLP
  6.4 Comparison between semi-automatic feature extraction and Ontology based feature extraction in LDA
  6.5 Comparison between ontology less product sentiments and ontology supported product sentiments for product recommendations
  6.6 Comparison of proposed model with existing methods in Ontology based sentiment classification
  6.7 Conclusion
7 Conclusions and Future Directions
  7.1 Conclusions
  7.2 Contributions of this work
  7.3 Future Directions
References

LIST OF TABLES

2.1 Comparative results among different feature extraction techniques
2.2 Sentiment evaluation metrics applied on various classifiers
3.1 Polarity terms and synonyms in assigning polarity score
4.1 Impact of stemming on the product features
4.2 PoS tagger performance details on nouns
4.3 PoS tagger performance details on adjectives
4.4 Number of feature, opinion pairs and orientations from verbs
4.5 Number of feature, opinion pairs and orientations from adverbs
4.6 Final statistics on nouns and adjectives available in the datasets
4.7 Number of implicit Telugu reviews and implicit feature indicators in the datasets
4.8 Dataset details
4.9 Telugu reviews duration and review count information
4.10 Information retrieval measures at each step in NLP model
4.11 Percentage of product nouns extracted as frequent nouns at different sizes
4.12 Percentage overlap between various kinds of product features
4.13 Opinion lexicon details
4.14 Accuracy of extracted opinions
4.15 Dataset details
4.16 Information retrieval measures on extracted product features using LDA and FOT
4.17 Percentage of non extracted and uncommon product features using LDA and FOT approach
5.1 Decision tree model learning
5.2 C4.5 rule learning
5.3 Telugu reviews dataset details
5.4 List of k-features
5.5 Cosine similarity values for small k
5.6 Improved and cosine similarity measures for analyzing similarities between products
5.7 Product recommendations
6.1 Precision, recall and F-score comparison of frequent nouns
6.2 Precision, recall and F-score comparison of relevant nouns
6.3 Precision, recall and F-score comparison of implicit nouns
6.4 Average accuracy comparison of semi-automatic features extraction in NLP
6.5 Average accuracy comparison of ontology based features extraction in NLP
6.6 Average accuracy comparison of semi-automatic features extraction in LDA
6.7 Average accuracy comparison of ontology based features extraction in LDA
6.8 MAE comparison of k-features sentiments based product recommendations
6.9 Comparison with state-of-the-art ontology based sentiment classification techniques

LIST OF FIGURES

2.1 Standard taxonomy for feature level opinion mining
2.2 Considered taxonomy for feature level opinion mining
2.3 Rules for product features and opinion words extraction
3.1 Proposed model
4.1 Empirical evaluation for frequent nouns threshold
4.2 Accuracy of various extracted product features
4.3 Accuracy of extracted opinions
4.4 Feature and opinion topic matrix
4.5 Probabilistic document generation using words from topic matrix
4.6 Excerpt of the Feature Ontology Tree (FOT)
4.7 Line graph of log likelihood on held-out Telugu reviews collection
5.1 Semantic data mining
5.2 PROO ontology class taxonomy
5.3 Opinion slots with allowed values
5.4 Feature instance and its slot values
5.5 Visualization of PROO ontology
5.6 Class hierarchies and related classes in PROO
5.7 Independent t-test and binary logistic regression to know the influence of new property on sentiment class
5.8 Model for improving the sentiments of the product features
5.9 Area under ROC plot with classifier accuracy
5.10 Scatter plot for the percentage of products with different k values
5.11 Sentiments of k-features of similar products in the absence of ontology
5.12 Products comparison with the searched product in the absence of ontology
5.13 Sentiments of k-features of similar products in the presence of ontology
5.14 Products comparison with the searched product in the presence of ontology
6.1 Precision, recall and F-score comparison of frequent nouns
6.2 Precision, recall and F-score comparison of relevant nouns
6.3 Precision, recall and F-score comparison of implicit nouns
6.4 Count based sentiments of iPhone X plus product features
6.5 Count based sentiments of Oppo F7 plus product features
6.6 Count based sentiments of Samsung Galaxy S9 product features
6.7 Sentiments of k-features in the absence of ontology
6.8 Sentiments of k-features in the presence of ontology
6.9 Products comparison in the absence of ontology
6.10 Products comparison in the presence of ontology
6.11 MAE comparison on proposed recommendations
Chapter 1 Introduction

Over the course of the last two decades, the amount of web data has exploded rapidly in all formats of information. The online flow of data is on a consistent rise.

Web content is growing at an extremely fast pace. The need for sharing data among web users has also increased. The changes in web usage patterns have led to tremendous communication and social interaction among the web community. The term Web 2.0, coined by Tim O'Reilly [120], has this social web as a part of it. Popular Web 2.0 online shopping sites such as Amazon, Flipkart and so on,

which use the E-commerce Business-to-Consumer (B2C) business model for conducting online transactions, are adding to the content growth over the web. This is by providing data on an enormous number of products and services. They allow their customers to write Telugu reviews on the products already purchased from their sites.

Customers who need to purchase a product visit the online shopping website, search for the relevant information and evaluate the product using the online Telugu reviews. The statistics reveal customer behaviour online. The statistics also reflect the number of buyers, which is expected to grow in the coming years.

Online Telugu reviews have a profound impact on online buyers. More than 30% of internet users have posted online Telugu reviews on a product or service, and 87% of online shoppers report that Telugu reviews have a significant influence on their purchases [78]. The online Telugu reviews contain opinions and sentiments of users towards the products purchased and the services experienced.

The online buyers look at the summary of all these Telugu reviews to take the purchase decision.

1.1 Focus on Telugu reviews in E-Commerce

Textual content is broadly grouped into two major categories: facts and opinions. Facts are objective statements about entities, events and their properties.

Opinions are generally expressed as subjective statements that reveal the sentiments of individuals towards entities, events and their properties. Before the existence of the web, customers used to make a decision on a particular product by consulting their friends and family members and obtaining suitable recommendations.

When business organizations wanted to know the opinion of people on their products and services, they used to adopt mechanisms such as clipping services, surveys using field agents and gathering verbal views from focus groups. The number of participants in these groups was small, so the business organizations were not able to improve their products in the market with the obtained statistical information. As the web has taken root in various diversified fields, it has created a place that invites user curiosity and content involvement.

The E-commerce field has engaged many users in writing online Telugu reviews to share their experience of a specific product. Online product Telugu reviews are the comments on the products that are purchased by the consumers. These are written in free form text. These Telugu reviews contain crucial information such as product features and corresponding opinions.

Telugu reviews are based on user interests, purchase habits and user contexts. Some of the reviews that are written may not be grammatical, but the essence of the review still needs to be extracted. The analysis of Telugu reviews is important for business organizations to understand the interests and perspectives of the users.

The features of the product and the opinions of the buyers who have purchased the product are also helpful in highlighting the product in terms of recommendations.

1.2 Analysis of Telugu reviews in the pretext of opinions and sentiments

The term opinion mining first appeared in the literature [19] as the process of “processing a set of search results for a given item, generating a list of product features and aggregating opinions about each of them”.

The phrase sentiment analysis appeared in the literature [18] as the essence of “classifying the review (document) using the identified opinion words as positive, negative or neutral”. Opinion mining concentrates on the polarity classification of the feature at the word level, while sentiment analysis classifies the polarity of the feature by aggregating its opinions at the word level [65]. The phrases opinion mining and sentiment analysis are used synonymously in the Telugu review analysis literature.

In the actual sense, there is a narrowly defined task that separates the two. Pang and Lee [78], in their survey on opinion mining and sentiment analysis, provided the available definitions of opinion and sentiment.

1.3 Importance of polarity in the context of Telugu reviews

The analysis of consumer views from product Telugu reviews is the key to determining the overall sentiment of the product. In order to determine the polarities of the product features from online Telugu reviews, certain steps have to be carried out.

The task of polarity detection starts with pre-processing the online Telugu reviews. The steps involved in pre-processing are tokenization of review sentences, stopword removal and part-of-speech tagging. Stemming is performed only for very specific operations such as creating summaries [76] or making the machine learn the topics present in the underlying document collection [34]. The analysis of online Telugu reviews for feature extraction is carried out at the pre-processing stage [114] itself.
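The pre-processing steps just described can be sketched as a small pipeline. This is a minimal illustration only: the stopword list and part-of-speech dictionary below are tiny English placeholders standing in for the Telugu-specific resources a real system would need.

```python
# Sketch of the review pre-processing pipeline: sentence tokenization,
# stopword removal and part-of-speech tagging. The stopword list and the
# tag dictionary are illustrative stand-ins, not real linguistic resources.

STOPWORDS = {"the", "is", "and", "a"}          # placeholder stopword list
POS_LEXICON = {"battery": "NN", "camera": "NN",
               "good": "JJ", "poor": "JJ"}     # placeholder tag dictionary

def preprocess(review: str):
    # naive sentence tokenization on full stops
    sentences = [s.strip() for s in review.split(".") if s.strip()]
    tagged = []
    for sentence in sentences:
        tokens = [t.lower() for t in sentence.split()]
        tokens = [t for t in tokens if t not in STOPWORDS]   # stopword removal
        # tag each remaining token; unknown words default to NN (noun)
        tagged.append([(t, POS_LEXICON.get(t, "NN")) for t in tokens])
    return tagged

print(preprocess("The battery is good. The camera is poor."))
# → [[('battery', 'NN'), ('good', 'JJ')], [('camera', 'NN'), ('poor', 'JJ')]]
```

A production pipeline would replace the placeholders with a trained tokenizer, a Telugu stopword list and a statistical PoS tagger, but the stage ordering stays the same.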

Product features are also extracted after pre-processing the Telugu reviews. Popular data mining algorithms [40, 49, 70] are applied to perform the previously mentioned task. The opinion words are also extracted, as a product feature is of no use without its opinion word. Dictionary based approaches [54, 74] are used to support this job.

The extracted features are linked with the corresponding opinion words to form feature-opinion pairs. Finally, the polarity of the feature is determined either by supervised machine learning algorithms [5, 97, 105] or by advanced classification models such as ontology [43, 44].

1.4 Need of product recommendations using machine learned sentiments

The counts of both positive polarities and negative polarities of the product features are aggregated to determine the actual sentiment of the features [88]. The overall sentiment of the product is determined by calculating the sum of all the sentiments of the product features.
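The aggregation described above can be sketched in a few lines. The feature counts below are illustrative, not taken from the thesis datasets: feature sentiment is computed as the difference between positive and negative opinion counts [88], and product sentiment as the sum over all features.

```python
# Minimal sketch of count-based sentiment aggregation: per-feature
# sentiment is (positive count - negative count), and the product
# sentiment is the sum of all feature sentiments.

def feature_sentiment(pos_count: int, neg_count: int) -> int:
    return pos_count - neg_count

def product_sentiment(feature_counts: dict) -> int:
    return sum(feature_sentiment(p, n) for p, n in feature_counts.values())

# illustrative counts of (positive, negative) opinions per feature
counts = {"battery": (12, 4), "camera": (6, 9), "display": (8, 2)}
for feature, (p, n) in counts.items():
    print(feature, feature_sentiment(p, n))   # battery 8, camera -3, display 6
print("product:", product_sentiment(counts))  # 8 + (-3) + 6 = 11
```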

This product sentiment effectively provides the opinions of many consumers, in the form of recommendations, in a concise form. The current recommender systems are mainly based on customers' personal information and rating behaviour. These systems lack efficiency and accuracy in providing recommendations to the customers [113], as the product sentiment information is not taken into consideration.

When these systems are supplied with sentiment information, they significantly improve the accuracy and reliability of recommendations, as the opinions of numerous customers are analyzed for the sentiments. It is observed that the additional support provided to sentiment based recommendations in the form of an ontology as background knowledge improves the recommendations further [106]. The knowledge mined from the constructs of the ontology provides important information for utilizing the relationships among the product features.
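One way such feature relationships can be exploited is sketched below: sentiments of sub-features are rolled up to their parent along a "part-of" taxonomy, as mentioned in the abstract. The tree and the scores here are illustrative assumptions, not the thesis's actual PROO ontology.

```python
# Hedged sketch: roll sub-feature sentiments up a hypothetical "part-of"
# feature taxonomy so that a parent feature's sentiment reflects its parts.

PART_OF = {"zoom": "camera", "flash": "camera",
           "camera": "phone", "battery": "phone"}   # child -> parent

def rolled_up_sentiment(scores: dict, feature: str) -> int:
    """Own sentiment plus the sentiments of all descendant features."""
    total = scores.get(feature, 0)
    for child, parent in PART_OF.items():
        if parent == feature:
            total += rolled_up_sentiment(scores, child)
    return total

scores = {"zoom": 2, "flash": -1, "camera": 3, "battery": 4}
print(rolled_up_sentiment(scores, "camera"))  # 3 + 2 - 1 = 4
print(rolled_up_sentiment(scores, "phone"))   # 0 + 4 + 4 = 8
```

The design point is that "camera" now inherits evidence from "zoom" and "flash" reviews that never mention the word "camera" at all.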

This mined knowledge, together with the properties of the ontology tree, helps in improving the product sentiments and thereby the product recommendations. These improved product recommendations help the consumer to make accurate purchase decisions in a satisfactory manner.

1.5 Challenges in Opinion Mining

The possible applications developed to visualize and summarize the important features and associated opinions of a particular product have made people gain interest in the research field of opinion mining.

Product feature and opinion statistics in the form of non textual summaries are reasonably appropriate for visualizing the mining results clearly and understanding them. Gamon et al. [29] used colors to represent the degree of polarity of the feature. Morinaga et al. [72] clustered the associated product features and opinion words using Principal Components Analysis (PCA).

Feature and opinion based summarization of product Telugu reviews is different from traditional text summarization. Features and opinion words are extracted from the huge collection of Telugu reviews, aggregated, and represented as an at-a-glance presentation of the most important features and associated sentiments.

In [68], the process of Telugu review summarization is carried out by first identifying the features of the product and the opinion words from the Telugu reviews, then counting the number of positive and negative Telugu reviews for the associated features, and finally linking these with the individual Telugu reviews. At the same time, there are new, interesting and intellectual challenges presented in this research field.
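The three-step summarization described in [68] can be sketched as follows; the toy (feature, polarity) pairs are illustrative stand-ins for the output of real feature and opinion extraction.

```python
# Sketch of feature-based review summarization: count positive/negative
# mentions per feature while keeping links back to the individual reviews.
from collections import defaultdict

# illustrative extraction output: (review id, [(feature, polarity), ...])
reviews = [
    (1, [("battery", "positive")]),
    (2, [("battery", "negative"), ("camera", "positive")]),
    (3, [("battery", "positive"), ("camera", "positive")]),
]

summary = defaultdict(lambda: {"positive": [], "negative": []})
for review_id, pairs in reviews:
    for feature, polarity in pairs:
        summary[feature][polarity].append(review_id)   # link feature to review

for feature, links in summary.items():
    print(feature, len(links["positive"]), len(links["negative"]), dict(links))
# battery: 2 positive, 1 negative; camera: 2 positive, 0 negative
```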

The general challenges in opinion mining are the contrasting nature of facts and opinions, human versus algorithm based opinion word identification, and the identification of context sensitive and domain dependent opinionated sentences.

1.5.1 Contrasting nature of facts and opinions

Fact based information extraction deals with universally accepted objective information.

There is no need to use a generalized template to apply to the objective data.


Reasoning is carried out in a simple and effective manner by the machine to draw conclusions from the factual data. Opinions, as opposed to mere facts, are subjective in nature and vary from person to person. Opinions are considered subjective as they reflect one's own beliefs, perspectives and attitudes towards a particular object. There is a need for the machine to learn generalized templates [11] in order to help retrieve data relevant to an opinion query. Reasoning on this kind of opinionated data has to be carried out by imposing meaningful, relation based restrictions on the data so that the appropriate conclusions are drawn.
1.5.2 Factors that make opinion mining difficult

1.5.2.1 Human versus algorithm based opinion word identification

A study on movie Telugu reviews showed that coming up with the right set of keywords for review level opinion classification is not a trivial task [8]. To show this, two human subjects were asked to pick keywords from the movie corpus which they thought were good markers of positive and negative opinion words. The difference found between the two word lists was 6%, and the number of words common to both lists was 39.

In contrast to human based opinion word list generation, a simple algorithm for counting the distinct opinion words from the movie corpus was employed, and it was observed that the generated word list increased by 5%.

1.5.2.2 Identification of context sensitive and domain dependent opinionated sentences

The web Telugu reviews written by users are challenging for a machine to analyze.

A reviewer writing a review comment like 'Go read the book' in the movie domain expresses a negative opinion, implying that it is better to go and read the book instead of watching the movie. The same review comment is understood as a positive opinion in the books domain. The order in which different opinions are presented in a review can also result in a completely different interpretation of the review.
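A minimal sketch of this domain dependence: the same phrase maps to opposite polarities under different domains. The lexicon here is a toy stand-in for a learned, domain-conditioned classifier.

```python
# Illustrative domain-conditioned polarity lookup: the phrase
# 'Go read the book' flips orientation between the movie and book domains.

DOMAIN_LEXICON = {
    ("go read the book", "movies"): "negative",  # i.e. skip the film
    ("go read the book", "books"):  "positive",  # i.e. the book is worth reading
}

def polarity(phrase: str, domain: str) -> str:
    # unknown (phrase, domain) pairs default to neutral
    return DOMAIN_LEXICON.get((phrase.lower(), domain), "neutral")

print(polarity("Go read the book", "movies"))  # negative
print(polarity("Go read the book", "books"))   # positive
```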

Modeling sequential information is therefore crucial in opinion mining.

1.6 Motivations and Aims

The size of the Telugu review database keeps growing from time to time because E-commerce sites provide a facility for consumers to write opinions. It is seen that there are diverse web sources for writing opinions.

These Telugu reviews are continuously fed into the system and are not helpful for a certain cross section of people in finding the significant sources of review data; it appears to be a considerable task. This phenomenon paved the way for opinion mining. The feature extraction approaches available in the literature [94, 98, 119] extracted features based either on explicit product features or on implicit features.

Only a few works [9, 39] focused on extracting both kinds of features. The current opinion mining approaches are held back by serious issues such as the absence of connecting concepts of semantic relationships in the feature query process, drawbacks in trained machine learning models in learning a wide range of opinion words, and the lack of advanced mathematical techniques in learning the opinions.

Researchers [44] have explained the need for an ontology in the automatic extraction of product features, as it is engineered using the product review domain under consideration. However, the knowledge gained is often inadequate because the feature and opinion data are only identified using the ontology.

Further, machine learning algorithms have never exploited the constructs of the ontology to offer better product predictions as recommendations to the user. The aim of this thesis is to mine the domain ontology by using the extracted product feature and opinion data towards machine level sentiments for improved product recommendations to the user.

1.7 Objectives

The objective of this work is to improve the accuracy of customer purchase decisions by understanding the sentiment patterns in the product Telugu reviews domain and recommending products using these sentiments. The use of various algorithms helps to improve the number of product features extracted and the sentiments of the features.

Accordingly, various algorithms are proposed in this thesis. The primary objectives of this work are outlined as follows.

• To study the different opinion mining algorithms and techniques on product Telugu reviews available in the current literature, with their merits, demerits and limitations, and also to study the different performance measures.

• To develop a domain free step-by-step product features extraction model, to identify the opinions and calculate their orientations, and to study the performance of the proposed model in terms of information retrieval measures.

• To develop a domain independent features extraction model based on the product feature hierarchy applied to the generated topic clusters, to identify the opinions and calculate their orientations, and to study the performance of the proposed model in terms of information retrieval measures.

• To develop algorithms based on the semantic data model annotated with product features and opinion data for inducing a set of rules towards the target sentiments of the product features, and to improve the product recommendations by using the target sentiments.

• To study the performance of the semantic data mining algorithm in terms of Receiver Operating Characteristic (ROC) curve parameters.

• To study the performance of the product recommendation algorithm in terms of popular similarity measures for sentiments.

• To compare the performance of the proposed models with the baseline models for evaluating the extracted product features, and also to compare the performance of ontology based sentiments in product recommendations with the performance of ontology less sentiments in product recommendations.
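The similarity computation referred to in the objectives can be illustrated with cosine similarity over k-feature sentiment vectors; the vectors below are made-up examples, not thesis results.

```python
# Sketch of one popular similarity measure for sentiment vectors:
# cosine similarity between the k-feature sentiment vectors of the
# searched product and a candidate product.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# illustrative sentiment scores for the same k features
# (e.g. battery, camera, display)
searched  = [5, 2, 4]
candidate = [4, 1, 5]
print(round(cosine_similarity(searched, candidate), 3))  # ≈ 0.966
```

Candidates with similarity close to 1 have sentiment profiles most like the searched product and would be ranked highest for recommendation.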

1.8 Organization of the thesis

This thesis is divided into seven chapters. This chapter provides the background on online Telugu reviews, their focus in E-commerce, the difference between opinion and sentiment, the need for product recommendations using machine learned sentiments, the analysis of the Telugu reviews, and the challenges. The aims and objectives of this work are also presented in this chapter.

The remaining chapters are organized as follows. The study of existing techniques is
important to develop new product feature extraction and sentiment analysis algorithms.
A classification of different levels of opinion mining approaches is presented. Various
feature level opinion mining algorithms in addition to the utilization of aggregated
opinions in product recommendations and corresponding algorithms are reviewed
under this classification. The utilization of ontology for better recommendations is also
reviewed.

The advantages, disadvantages, and the limitations of each algorithm are discussed and
presented in chapter 2. The influence of NLP on review based opinion mining is
discussed with modeling the data useful for natural language analysis and Latent
Dirichlet Allocation (LDA) based analysis.

It explains the model from the perspective of annotating the extracted product features
and opinions to the constructs of the Ontology for machine interpreted sentiments for
better product recommendations. The dataset used for carrying out the experiments in
the research work is presented. It highlights the existing popular dataset used in
Information Retrieval (IR) and the reasons to choose it.

The properties of the chosen dataset are discussed in chapter 3. The first two
experiments that are carried out in this research are elaborated. The experiments to
evaluate various types of extracted features using NLP based approach and LDA based
approach are discussed. The observations from the results are presented in chapter 4.
The final experiment carried out in this research is presented.

The role of semantic rules in sentiment learning and the influence of sentiments on
semantic rules are discussed. It also focuses on the algorithms to improve product
recommendations by using the rule based sentiments that are learned on the product
features. Several observations are tabulated and conclusions derived from the
experiments are delineated in chapter 5.

The comparison of the extracted product features based on similar product Telugu reviews by different vendors is discussed. Further, the comparison of feature extraction measures with NLP and with LDA, in both semi-automatic feature extraction and Ontology supported feature extraction, against the baseline models is discussed.

Comparison of the performance of ontology based sentiments in product recommendations with the performance of ontology-free sentiments in product recommendations is discussed. Finally, the comparison of sentiment classification with and without semantic data mining of the ontology is also discussed in chapter 6. The conclusions of the present research work and the scope of the work to be carried out in future are presented in chapter 7.

Chapter 2 Machine learning in computational treatment of opinions

2.1 Introduction


The rising field of opinion mining has been explored by the Natural Language Processing (NLP) community for about two decades. The development in this research field has revealed various sub-areas, wherein each sub-area deals with a different research question.

This survey is oriented towards feature level opinion mining of online Telugu reviews, in which the fundamental purpose is to identify and extract the product features and opinions mentioned in them and to determine the opinion of each feature. The significance of ontology in automatic identification of product features and opinions is also examined along with the current state of the art.

Special attention is paid to product recommendations using ontology, as future research is centered on using the constructs of the feature concepts and the corresponding relations from the ontology.

2.2 Review based Opinion Mining

In the context of online Telugu reviews, opinion mining involves several vital facets. These are the online Telugu reviews that are written on the components and component features of the product, the opinions written on the components, and the Telugu reviews which are directly written on the product itself.

Opinion mining on online product Telugu reviews is understood as improved information retrieval, as it involves the machine learning the features and sentiments from the review sentences following the rules of natural language [64].

2.2.1 Tasks involved in Opinion Mining

The area of opinion mining is a broad field of study. There are various tasks in mining the opinions.

They are, namely, feature extraction, opinion extraction, subjectivity classification, sentiment classification and opinion holder identification. The description of the individual tasks is presented in the following sub-sections.

2.2.1.1 Feature Extraction

The task of feature extraction is to identify and extract various types of product features. These product features include parts of the product, properties of the product, related concepts of the product, functions provided by the product, and properties and parts of the related concepts of the product [82].

2.2.1.2 Opinion Extraction

The task of opinion extraction is to identify and extract the feature specific opinions from the Telugu reviews.

It is known that the opinion and the opinion target (feature) coexist and are entwined with each other. The absence of either generally diminishes the significance of the other [65]. Opinion mining works better when the significance of both features and opinions is realized.

2.2.1.3 Subjectivity Classification

The task of subjectivity classification is to find out whether the Telugu reviews contain any subjective pieces of information or not.

2.2.1.4 Sentiment Classification

The task of sentiment classification is carried out at all levels of opinion mining [64]. The aim is to classify the online Telugu reviews based on a priori knowledge of the opinions present in them. Sentiment classification is closely connected to the subjectivity classification of the Telugu reviews.

2.2.1.5 Opinion Holder Identification

The task of opinion holder identification is to discover whether the holder of the opinion present in the review is a human or a machine.

2.2.2 Implications of online Telugu reviews in the E-commerce field

Online Telugu reviews are fundamental for organizations, since more customers rely upon the opinions of others when making their purchase decisions. These reviews help to build the legitimacy of the product brand and the conviction of the customers who intend to buy the product. Telugu reviews also help business organizations to clearly understand the points of view and opinions of their customers, so that they can assess the 'customer mind' and be successful in serving them. E-commerce websites for goods marketing provide services for the online purchase and selling of products. These marketing websites ask the customers to give Telugu reviews on their purchased products. These reviews contain opinions which vary from person to person.

This variation in the opinions elevates the need for understanding the overall opinion of the features of the product, which leads to the concept of opinion mining. Web applications based on opinion oriented information access present two noteworthy concerns that are to be managed. The first is the privacy of the content gathered in order to know the preferences of the public. These preferences must be kept in mind by web application developers, as the algorithms running in the background of a web application personalize the content without the consent of the content owner. This is called the Filter Bubble [23]. Manipulation of online Telugu reviews is another pressing issue which needs serious attention, since mining for opinions from such reviews often leads to biased results. Nan Hu et al. [73] in their study revealed that just above 10% of the books on Amazon have manipulated reviews written for them.

2.3 Various explored techniques on opinion mining

Various algorithms for opinion mining of product Telugu reviews have been developed previously, yet it remains a complex and challenging task. A given opinion mining technique may perform well on one problem domain yet poorly on a different one.

This is because the context in which the opinion words are used changes the analysis procedure to a noteworthy degree. Hence, it is difficult to achieve a generic opinion mining method that is universally applicable to a broad range of problem domains. A large number of books and journal publications are available on opinion mining. Selected books and book chapters [65, 100], surveys and review articles [78, 119] are available in the literature. The standard taxonomy for feature level opinion mining [55] is depicted in Fig. 2.1 below.

Fig. 2.1. Standard Taxonomy for feature level opinion mining

The considered taxonomy for this survey is slightly updated according to the requirements of this thesis.

The updated taxonomy for feature level opinion mining is illustrated in Fig. 2.2 below.

Fig. 2.2. Considered Taxonomy for feature level opinion mining

This survey initially aims at investigating the effect of the statistical techniques used in natural language processing approaches, in which supervised and unsupervised learning methodologies extract the product features and opinion words in order to classify the opinions.

The purpose of the survey also includes outlining the use of ontology and mining it for automated and meaningful extraction of product features. The survey is further extended to gathering opinions and investigating individual feature sentiments. Finally, it concludes with examining the impact of sentiments in recommending products to the user, and with exploiting the constructs of ontology for providing better recommendations to the user.

2.3.1 Preprocessing of online Telugu reviews

Opinion mining of online Telugu reviews begins with the fundamental and significant task of preprocessing.

From the perspective of online Telugu reviews, preprocessing involves the intelligent restructuring of the unstructured reviews. By and large, users look for the product features in the reviews. The significant pieces of information available in online Telugu reviews are the product features and opinions.

In order to identify these processing tokens [31], the review is initially tokenized into individual words. The tokenized words fall into three classes, namely valid words, inter-words and special words [31]. The potential product features may exist in all the word classes. The parts of the product and the parts of the features are written in the form of inter-words (e.g., smart phone, camera, camera zoom) in the Telugu reviews.

Some of the capabilities of the product are written in the form of special words (e.g., wi-fi) in the Telugu reviews. Once the processing tokens are determined, a stop word list compiled from the reviews is applied to remove the semantically less significant words and the frequently occurring articles that have no value in the feature search. The words remaining after applying the stop list are given to a stemming algorithm to obtain the standard semantic representation of each word. The decision to perform stemming depends on the accuracy required in the product feature search. Once the words are obtained after the optional stemming operation, Parts-of-Speech (PoS) tagging is done.

This is performed to uniquely identify each word at the time of extracting the product features and opinions. The majority of the opinion mining research community uses the Stanford Log-Linear Maximum Entropy Parts-of-Speech tagger framework [56] for this task. The PoS tagged words are given as input to the processes of feature extraction and opinion extraction.
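The preprocessing steps above can be sketched as a minimal pipeline; the stop list, the token pattern and the crude suffix-stripping rules below are illustrative stand-ins, not the actual resources used in this thesis:

```python
import re

# Illustrative stop list (hypothetical, not the thesis's actual list)
STOP_WORDS = {"the", "is", "a", "of", "and", "very"}

def tokenize(review):
    """Split a review into lowercase word tokens, keeping inter-words like 'camera-zoom'."""
    return re.findall(r"[a-z][a-z\-]*", review.lower())

def remove_stop_words(tokens):
    return [t for t in tokens if t not in STOP_WORDS]

def stem(token):
    """Crude suffix-stripping stand-in for a real stemmer such as Porter's."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

review = "The camera-zoom of the phone is very amazing"
tokens = [stem(t) for t in remove_stop_words(tokenize(review))]
print(tokens)
```

A real pipeline would replace the suffix rules with a proper stemmer and feed the surviving tokens to a PoS tagger.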

Research conducted by Liu [63] in 2007 indicated that 60% to 70% of the product features are explicit nouns available in the reviews. Pang et al. [77] in 2002 achieved an accuracy of 82.8% in the movie review domain using only the adjectives in them. A strong correlation between adjectives and subjectivity was found. From these findings it is clear that explicit nouns and adjectives are valuable in feature extraction and opinion extraction.

2.3.2 Feature Extraction Approaches

Feature (aspect) level opinion mining aims at obtaining the features from the unstructured Telugu reviews and finding the opinion orientation of each feature. This analysis reveals the impressions of the users about the product, whether they are pleased with the product or otherwise. On the other hand, review level opinion mining works on the assumption that a review is written on only one product by one person. Moreover, sentence level opinion mining works on the assumption that each sentence expresses an opinion on one product. These two forms of analysis never provide useful insight into the personalized experience of the product features.

Based on the available literature [19], feature extraction approaches are primarily divided into three categories. These are, namely, developing natural language rules, using sequence models and using topic models.

2.3.2.1 Developing natural language rules

Natural language rule based methods have an extensive history of practice in information extraction.

The rules depend on background patterns which constrain various properties of one or more terms and their relations in the sentences. In product Telugu reviews, the grammatical relations among features, opinion words and other terms are used to learn the extraction rules. The natural language rules are further classified into two classes. These are, specifically, frequency based methods and relation based methods.

Frequency based methods: A feature can be expressed by a noun, adjective, verb or adverb. In Telugu reviews, people generally talk about features, which suggests that features are frequent nouns; however, not all frequent nouns are features. By applying constraints on high frequency noun phrases, the frequency based methods help in identifying the frequent nouns that are understood as product features. Hu and Liu used [40] the Apriori algorithm to extract product features using nouns and noun phrases.
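A simplified sketch of the frequency-based idea, using a plain support count in place of the full Apriori machinery; the tagged sentences and the support threshold below are hypothetical:

```python
from collections import Counter

# Toy PoS-tagged review sentences: (token, tag) pairs with Penn-style tags.
tagged_reviews = [
    [("the", "DT"), ("battery", "NN"), ("drains", "VBZ"), ("fast", "RB")],
    [("battery", "NN"), ("life", "NN"), ("is", "VBZ"), ("great", "JJ")],
    [("great", "JJ"), ("screen", "NN"), ("and", "CC"), ("battery", "NN")],
    [("the", "DT"), ("screen", "NN"), ("cracked", "VBD")],
]

def frequent_noun_features(reviews, min_support=2):
    """Count nouns across reviews and keep those meeting minimum support,
    a simplified stand-in for the Apriori frequent-itemset step."""
    counts = Counter(tok for sent in reviews for tok, tag in sent if tag.startswith("NN"))
    return {tok for tok, c in counts.items() if c >= min_support}

print(frequent_noun_features(tagged_reviews))
```

With `min_support=2`, infrequent nouns such as "life" are dropped, which illustrates both the strength and the weakness of the approach.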

Some of the infrequent features were also identified. Blair-Goldensohn et al. improved [7] the Hu and Liu [40] approach by considering only those noun phrases that occur in sentiment bearing sentences. WordNet is used [12] to build the sentiment dictionary, and orientation is determined by a Maximum Entropy (ME) classifier [1].

There is no generalized way of learning (logical) dependencies among features and sentiment words in this work. Popescu et al. created [82] the OPINE tool to extract common patterns to identify potential features from online Telugu reviews. The potential features are the explicit features identified by OPINE. OPINE uses an information system named KnowItAll to discover frequent noun phrases that contain the product features: product properties, parts, features of parts, related concepts, and features of related concepts. Scaffidi et al. created [91] the Red Opal system to find online products based on the extracted features.

They used a language model which extracts product features based on their occurrence frequency using the binomial distribution. Raju et al. extracted [90] product features using the concept of clustering. They first used the Dice similarity coefficient to measure the similarity between two noun phrases. Then the Group Average Agglomerative Clustering (GAAC) algorithm was used to cluster the similar noun phrases. The noun with the highest score is declared as the feature. Mining frequent nouns and noun phrases as product features is simple and reasonably effective.

The limitations of frequency based techniques are that they produce a large number of non-features and miss the low frequency features which are the genuine features. Many input parameters have to be manually supplied in the extraction procedure.

Relation based methods: The relationship patterns among the product features and the corresponding sentiments are learned from the training data.

The learned templates are applied to the test data to extract the product features from the Telugu reviews. The extraction is possible as every sentiment expresses an opinion on the target features; therefore, this relationship is exploited. Dependency relations are the grammatical relations in a sentence. These relations are used to relate the features and the opinions in a review sentence.

Zhuang et al. worked [59] on the idea of dependency relations to extract the features from online movie Telugu reviews. The typed dependencies used for feature extraction in this work are those specified by Marie-Catherine de Marneffe and Christopher Manning [21]. Only the unigram words in the review sentences are identified. Wu et al. improved [112] the work of Zhuang et al. [59], wherein the phrases in the review sentences were related instead of restricting the relations to the word level. They extended the existing Support Vector Machine (SVM) [10] classifier in order to train the system on the phrase dependency tree to classify the features and sentiments. They used a language model to relate the product feature with the product review. This method never considered the relationship between the features and opinion words. Wang and Wang proposed [108] an algorithm to identify and extract the product features and opinion words in a simultaneous manner. They used seed opinion words to bootstrap the feature extraction process from the collection of Telugu reviews.

Modifying the orientation of the product features using opinion words was not addressed in that work. Qiu et al. proposed [84] an algorithm which improves on the Wang and Wang [108] approach. The proposed algorithm is named Double Propagation. It works on the piece of information that opinion words are used to modify the orientation of the product feature. This algorithm takes the seed opinion words as input and identifies the product features and opinion words. The algorithm is executed by exploring the direct and indirect dependency relations between the product features and opinions. The rules based on these relations are presented in Fig. 2.3 below.

Fig. 2.3. Rules for product features and opinion words extraction [84]

The algorithm works well for medium-sized corpora but not on small and large ones. This is because the patterns learned on direct dependencies extract many features which are not related to the product. The ability to discover low frequency product features that are the genuine features is the potential advantage of relation based methods.
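The flavour of these direct dependency rules can be sketched as a toy matcher over parser-style tuples; the tuple format, the relation names and the single rule shown are illustrative assumptions, not the full rule set of [84]:

```python
# Toy parsed sentence as (index, word, PoS, head_index, relation) tuples — a
# hypothetical stand-in for real dependency parser output.
sentence = [
    (0, "the", "DT", 1, "det"),
    (1, "camera", "NN", 2, "nsubj"),
    (2, "is", "VBZ", -1, "root"),
    (3, "excellent", "JJ", 2, "acomp"),
    (4, "excellent", "JJ", 5, "amod"),
    (5, "zoom", "NN", 2, "dobj"),
]

def extract_pairs(parsed, seed_opinions):
    """One Double-Propagation-style rule: a seed adjective linked to a noun,
    either directly (amod) or through a shared verb head, yields a
    (feature, opinion) pair."""
    pairs = set()
    for i, word, pos, head, rel in parsed:
        if pos != "JJ" or word not in seed_opinions:
            continue
        for j, w2, p2, h2, r2 in parsed:
            if p2 == "NN" and (h2 == i or head == j or h2 == head):
                pairs.add((w2, word))
    return pairs

print(extract_pairs(sentence, {"excellent"}))
```

The sketch also shows the method's weakness: any noun sharing a head with a seed adjective is extracted, which is how non-features slip through.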

The significant disadvantage of these techniques is that they produce numerous non-features which match the learned patterns.

2.3.2.2 Sequence models

Sequence modeling is a supervised machine learning method, as the statistics of both the transitions and the possible emissions in the review sequence come from the PoS templates available in the training corpus.

Each word in the sentence may be emitted by many hidden states, but the word must be related to a single state only. This is known as the problem of uncertainty. The problem is solved using the concept of probability distribution over a simple graphical representation of the model. All the supervised and unsupervised machine learning methods of this family follow this concept to maximize the association of the word with a specific hidden state.
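A minimal Viterbi decoder illustrates how such a model resolves the uncertainty; the two states and all probabilities below are hypothetical toy values, not trained parameters:

```python
# Two hidden states mark whether a token is part of a product feature.
states = ["FEAT", "O"]
start = {"FEAT": 0.3, "O": 0.7}
trans = {"FEAT": {"FEAT": 0.4, "O": 0.6}, "O": {"FEAT": 0.3, "O": 0.7}}
emit = {
    "FEAT": {"battery": 0.5, "life": 0.4, "good": 0.1},
    "O": {"battery": 0.1, "life": 0.1, "good": 0.8},
}

def viterbi(words):
    """Return the most probable hidden-state sequence for the observed words."""
    best = [{s: (start[s] * emit[s][words[0]], [s]) for s in states}]
    for w in words[1:]:
        layer = {}
        for s in states:
            p, path = max(
                (best[-1][ps][0] * trans[ps][s] * emit[s][w], best[-1][ps][1])
                for ps in states
            )
            layer[s] = (p, path + [s])
        best.append(layer)
    return max(best[-1].values())[1]

print(viterbi(["battery", "life", "good"]))
```

Each word is thus assigned the single state that maximizes the joint probability of the whole sequence.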

There are two sequence models, namely the Hidden Markov Model (HMM) [85] and Conditional Random Fields (CRF) [57]. Jin et al. extended [109] the basic HMM. They added Part-of-Speech (PoS) information to the observations to extract the hidden word as a component, a function or an entity, which is understood as the product feature. Low accuracy was achieved due to the need for more ground truth data for training.

Jakob and Gurevych used [75] the idea of linear-chain CRF and extracted features from sentences. They considered the following for identifying the hidden state: the word or token, the PoS tag, the short dependency path and the word distance, with the Inside-Outside-Beginning (IOB) labelling scheme. This approach did not work well for long-distance dependencies having conjunctions in the review sentences. Fangtao et al. proposed [26] the Skip-Tree CRF approach to overcome the problem faced in Jakob and Gurevych's [75] work. This method skips the conjunctions, using the conjunction structure and the syntax tree structure information from the training sequence data to extract the product features.

The strength of the supervised learning techniques is that they overcome the frequency based limitations by learning the model parameters from the training data. The main limitation of these approaches is that they require huge amounts of manually labelled (ground truth) data for training.

2.3.2.3 Topic models

Supervised learning requires a manually annotated corpus of huge size for the algorithm to learn the model.

In unsupervised learning, no prior learning happens on the corpus. The preprocessed corpus is given directly as input to the algorithm, and it creates the clusters. Topic modeling is an unsupervised learning technique that produces topic clusters containing mixtures of words from the documents.
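The generative idea behind such models can be illustrated with a toy collapsed Gibbs sampler for LDA; the mini-corpus, the number of topics and the hyperparameters below are all hypothetical:

```python
import random
from collections import defaultdict

# Toy review corpus; K = 2 latent topics.
docs = [
    ["battery", "charge", "battery", "power"],
    ["screen", "pixel", "screen", "display"],
    ["battery", "power", "charge"],
    ["display", "pixel", "screen"],
]
K, alpha, beta = 2, 0.1, 0.01
vocab = sorted({w for d in docs for w in d})
random.seed(0)

# z[d][i] = topic currently assigned to the i-th word of document d
z = [[random.randrange(K) for _ in d] for d in docs]
ndk = [[0] * K for _ in docs]               # document-topic counts
nkw = [defaultdict(int) for _ in range(K)]  # topic-word counts
nk = [0] * K
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        t = z[d][i]
        ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1

for _ in range(200):  # Gibbs sweeps: resample each word's topic in turn
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
            weights = [
                (ndk[d][k] + alpha) * (nkw[k][w] + beta) / (nk[k] + beta * len(vocab))
                for k in range(K)
            ]
            t = random.choices(range(K), weights)[0]
            z[d][i] = t
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1

# Estimated document-topic mixture for document 0
theta0 = [(ndk[0][k] + alpha) / (len(docs[0]) + K * alpha) for k in range(K)]
print(theta0)
```

The resulting topic-word counts play the role of the 'k' clusters into which the 'n'-dimensional document space is reduced.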

Both PLSA and LDA models have been used for product feature extraction, as these two approaches reduce the document (review) space of 'n' dimensions to 'k' clusters by extracting the hidden topical structure from the document collection. PLSA and LDA both use the bag-of-words representation of documents. LDA produces better topics than PLSA, because LDA describes generating topic distributions for unseen documents. Lu et al. proposed [118] a technique for feature detection and grouping in short comments using the PLSA algorithm. In this work, the researchers implemented both unstructured PLSA and structured PLSA on the phrases in the Telugu reviews. They found that a good number of distinct features were extracted with structured PLSA. Blei et al. proposed [20] a model to generate latent topics from the underlying document collection (LDA). The problem with LDA is that it cannot generate local topics. Titov et al. modified [45] the previously proposed model of Blei [20] and extracted the features by examining the words which are specific to global and local topics.

The researchers named the modified model Multi-grain Latent Dirichlet Allocation (MG-LDA). This model lacks the correspondence between features and topics. Titov et al. extended [104] their MG-LDA model by building a more coherent model, named MultiAspect Sentiment (MAS), for feature extraction. Owing to low feature rating values, MAS could not extract some local features. Lin et al. identified [62] features by considering feature distributions for every sentiment word in their Joint Sentiment/Topic (JST) model. This model did not work for feature specific sentiment words. Yohan et al. extended [116] JST and identified the product features of the related sentiment word. To perform this task, topic models need an enormous volume of data, as human interpreted topics and machine learned topics differ from one another in various respects. A small proportion of online Telugu reviews express the product feature in an indirect way.

Such features are called implicit features. The feature indicators which are present in these Telugu reviews help to identify the implied product feature. Less research has been carried out on identifying implicit features compared with explicit features. A short review on identifying and extracting implicit features is discussed below.

Wang and Wang identified [9] implicit features from online Telugu reviews by using the property of context association between product features and opinions. They used a statistical measure named Revised Mutual Information (RMI) and inferred the implicit features by learning this property. Qi et al. used [83] the COP-KMeans clustering algorithm to cluster and link the matching product feature and opinion words. The implicit features were identified from the clusters by finding the unlinked feature words in the product feature cluster. Yu and Zhu proposed [117] a novel co-occurrence association based method to extract implicit features from the customer Telugu reviews. This is done by computing the conditional probability of the candidate feature words given the associated opinion words. A candidate feature word is considered an implicit feature when the conditional probability of that candidate feature word among the others is high. Ivan et al. extracted [16] implicit product features by first identifying the implicit feature indicators using a CRF classifier, and then extracting the implicit feature using the SenticNet [94] knowledge base service.

2.3.3 Opinion word extraction approaches

Extracting features from the online Telugu reviews is the first part of the opinion mining task. The second part is to extract the opinion words associated with the extracted features. The feature and opinion pair is valuable for summarization of the Telugu reviews and for visual comparison of opinions.

The task of opinion extraction was performed by Yi et al. [101]. They gathered sentiment words from several sentiment dictionaries, namely the General Inquirer (GI) [97], the Dictionary of Affect in Language (DAL) [17], and WordNet [12]. They refined the sentiment patterns obtained from the training dataset by learning dependencies among the words in a sentence. They applied the expanded sentiment lexicon to the extracted subjective sentences to extract the actual opinion words. The work was refined by Kim and Hovy [93]. The researchers expanded the chosen sentiment words by gathering synonyms from WordNet [12]. However, it is incorrect to assume that all the synonyms of positive words are positive, as the majority of words may have synonymy associations with both positive and negative word categories. Their method computes the closeness of a given word to each category and determines the most probable category.

They then applied the expanded sentiment lexicon to the extracted subjective sentences to extract the actual opinion words.

2.3.4 Opinion word orientation approaches

The third part is to analyze the orientation of the extracted opinion word. The addition of this orientation completes the summarization task on the Telugu reviews. Opinion orientation learning, or sentiment classification at the feature (word) level, has been carried out in two ways, namely with dictionary based models and with corpus based models.

2.3.4.1 Dictionary based models

Minqing Hu and Bing Liu exploited [68] the WordNet bipolar adjective structure to find the orientation of the adjectives from the online Telugu reviews. Esuli and Sebastiani used [3] the gloss definitions to learn the orientation of opinion bearing words.

Esuli and Sebastiani created [24] a lexical resource for opinion mining called SentiWordNet 1.0. Kennedy and Inkpen used [52] the General Inquirer (GI) lexicon to identify the sentiment class of a word. Baccianella et al. improved [5] the opinion mining lexical resource and called it SentiWordNet 3.0. The two versions differ from one another in the versions of WordNet used for annotating the opinion orientation of the sentiment words, and in the algorithm used for automatically annotating the WordNet synsets. The limitation of dictionary based models is that these dictionaries are not able to discover opinion words with domain and context specific orientations.

2.3.4.2 Corpus based models

Hatzivassiloglou and Wiebe analyzed [107] the conjunctions between adjectives using a log-linear classifier and created adjective clusters of similar orientation for learning opinion word orientations. Turney classified [81] the sentiment of the Telugu reviews using the Semantic Orientation (SO) of the phrase, computed with Pointwise Mutual Information (PMI) [14, 86, 110]. The author used the two reference words "excellent" and "poor" in the SO computation procedure, as these are the two scale limits for rating a product.
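Turney's SO-PMI computation can be sketched over a toy corpus, with smoothed line hit counts standing in for the search-engine hit counts used in the original work:

```python
import math

# Toy corpus of review phrases; "excellent" and "poor" are the reference words.
corpus = [
    "excellent camera excellent zoom",
    "poor battery poor charger",
    "excellent camera poor battery",
    "excellent zoom",
]

def hits(*terms):
    """Number of corpus lines containing all given terms, smoothed by 0.01."""
    return 0.01 + sum(all(t in line.split() for t in terms) for line in corpus)

def so_pmi(phrase):
    """SO(phrase) = PMI(phrase, 'excellent') - PMI(phrase, 'poor'); the shared
    hits(phrase) term cancels, leaving log2 of a hit-count ratio."""
    return math.log2(
        hits(phrase, "excellent") * hits("poor") / (hits(phrase, "poor") * hits("excellent"))
    )

print(so_pmi("camera"), so_pmi("battery"))
```

A positive SO indicates that the phrase co-occurs more strongly with "excellent" than with "poor", and vice versa.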

Turney and Littman classified [105] word level sentiments by computing the SO of the word using PMI. The researchers used two seed sets (considered as positive and negative seed sets) with seven terms each. These terms are not context sensitive. The limitation of corpus based models is that it is difficult to set up a corpus huge enough to cover every single English word.

2.3.5 Ontology support to feature level opinion mining

The current opinion mining techniques are held back by serious problems, for example the nonexistence of semantic associations among the various concepts (feature, opinion, polarity, and so on) in the feature search process, and the absence of advanced machine learning methods while learning the opinions. Isidro Peñalver-Martínez et al. determined [43] the need for ontology for automatic product feature extraction, as it is constructed by taking the product review domain into consideration. The automatic machine classification of online Telugu reviews is also possible using the ontology by engineering it on the extracted product features and opinions. Ian Horrocks [41] defines Ontology as a formal, explicit specification of a shared conceptualization.

Ontology describes knowledge as a set of concepts within a domain, together with the relationships among them, and supports reasoning about the concepts. Some of the motivations to develop an Ontology are to share a common understanding of the structure of information among people or software agents, to enable the reuse of domain knowledge, to make domain assumptions explicit, to separate domain knowledge from operational knowledge, and to analyze domain knowledge. For a product and its attributes domain, building the Ontology can clearly help in searching for the product features in the corpus. Ontology supported feature level opinion mining has received a good amount of research attention in recent years.
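A minimal sketch of this idea, assuming a hand-built mini-ontology of parts and attributes; the phone-domain entries below are invented for illustration:

```python
# A hypothetical phone-domain mini-ontology: concept -> (parts, attributes).
ontology = {
    "phone":   {"parts": ["camera", "battery", "screen"], "attributes": ["price"]},
    "camera":  {"parts": ["lens"], "attributes": ["zoom", "resolution"]},
    "battery": {"parts": [], "attributes": ["life", "capacity"]},
}

def ontology_features(root):
    """Collect all parts and attributes reachable from the root concept,
    usable as the feature vocabulary when searching reviews."""
    seen, stack, features = set(), [root], set()
    while stack:
        concept = stack.pop()
        if concept in seen or concept not in ontology:
            continue
        seen.add(concept)
        entry = ontology[concept]
        features.update(entry["parts"] + entry["attributes"])
        stack.extend(entry["parts"])
    return features

review_tokens = ["the", "camera", "zoom", "and", "battery", "life", "are", "great"]
found = ontology_features("phone") & set(review_tokens)
print(sorted(found))
```

Matching review tokens against the reachable concept set turns feature search into a simple graph traversal plus set intersection.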
Lili Zhao and Chunping Li developed [61] an Ontology based on the selection of relevant sentences which contain a conjunction word and possess at least one concept word. The ontology helps to extract the unknown features from the sentences, as it analyzes the conjunction between the noun phrases.

They used the SentiWordNet 3.0 corpus to classify the opinion orientation of the feature word. Jantima Polpinij and Aditya K. Ghose built [47] a lexical variation ontology from a dictionary, irregular verbs, and raw texts. The product features are identified with this Ontology, and opinion orientation is carried out by building an SVM classifier using the Ontology and testing it on the feature specific Telugu reviews. Khin Phyu Phyu Shein constructed [53] an Ontology using Formal Concept Analysis (FCA) [30].

The ontology is helpful in identifying product features from the product Telugu reviews. Opinion words were classified by using a linear SVM classifier. Larissa A. de Freitas and Renata Vieira constructed [58] an ontology using FCA to identify the movie features from the Telugu reviews. Polarity words are identified with the support of the SentiWordNet 3.0 lexical resource. Isidro Peñalver-Martínez et al. constructed [43] an ontology to identify the features present in the review, and also calculated the score of each feature to represent its importance. They also identified the polarity of the review using SentiWordNet 3.0 and calculated the actual orientation of the review by using the vector model developed by them.

2.3.6 Ontology supported feature level sentiment classification – A semantic data mining approach

Sentiment classification is a text categorization problem. Text categorization, as given by Fabrizio Sebastiani [25], is the task of assigning a Boolean value to each pair ⟨dj, ci⟩ ∈ D × C, where D is a domain of documents and C is a set of predefined categories. A value of 'T' assigned to ⟨dj, ci⟩ indicates a decision to file dj under ci, and 'F' if not.

The review documents are to be classified for learning the orientation of the opinion. The main advantages of Telugu review classification are that it becomes easy to find or search the reviews based on the user's feature set, to know the number of positive and negative reviews of the product, and to make the decision making as fast and accurate as possible. Ontology is used to model a domain and to support reasoning about concepts. Ontology based feature level opinion mining enables the machine to automatically identify the product features and sentiment words. However, the purpose of ontology in opinion mining goes beyond this level of modeling, expecting it to classify the Telugu reviews based on the sentiments of the extracted features. This is directed towards building better recommender systems. Machine learning algorithms are to be improved in this direction.

An Ontology contains plenty of knowledge about a particular domain, along with instance data. The knowledge mined by the improved machine learning algorithms yields a set of rules that are valuable for learning the target concepts. This is investigated in order to classify the product Telugu reviews based on the target sentiment rules learned from the Ontology.

A focused survey on ontology based sentiment classification of online telugu reviews is conducted below. Colace F. et al. [28] gave an approach to automatically separate positive and negative sentiments with ontological filtering. The word synsets were not considered in the ontology development. Li et al.

proposed [92] an ontology based sentiment analysis strategy for knowledge oriented public opinions. There is scope to improve the expressiveness of the ontology for more effective sentiment class learning. Matic Perovšek [67] used the idea of Inductive Logic Programming (ILP) on relational databases to learn the association rules from a textual corpus, applying the Wordification technique to locate the most relevant feature of a given word.

There is scope to improve the work towards ontology for target sentiment class learning. Recently, Alberto Salguero and Macarena Espinilla [2] applied an ontology based Description Logic (DL) class expression learner on the review documents to classify their sentiments. That work did not focus on feature level sentiment classification. 2.3.7

Recommender Systems in E-Commerce. Recommender systems (RS) are information filtering systems that manage the large amount of information dynamically generated from users' preferences, interests and observed behaviors. These conventional recommender systems fall into three categories: collaborative filtering based RS, content based RS and knowledge based RS.
Collaborative recommender systems aggregate ratings from a set of users on an item and recommend it accordingly. They also identify the users who are similar to the user for whom recommendations are to be given. Resnick et al. [79] developed a system called GroupLens to help people find the articles they are most interested in.

Anna Stavrianou and Caroline Brun [95] developed an application to recommend products based on the opinions and suggestions written in the online product telugu reviews. Content based recommender systems learn the user profile based on the product features on which the user has focused. Lang [51] developed a system called NewsWeeder which uses the words of the text as the features.

Jia Zhou and Tiejian Luo [48] developed a content based recommender system that examines the user's shopping history to recommend similar products based on the similarity between the product features. Knowledge based recommender systems give content suggestions based on inferences from users' needs and preferences.

These systems have the knowledge about how a particular product meets the user's requirements based on factual information. The user profile is also required to give good product recommendations to the user. Case based reasoning (CBR) is a kind of knowledge based recommender system. Kolodner [46] used CBR to recommend restaurants based on the user's choice of features. Stefan et al.

worked [96] on user log data to mine the product preferences based on the like or dislike information available in the log. 2.3.7.1 Sentiments based product recommendations. Sentiment based product recommendations have gained research significance in recent times. The knowledge discovered in terms of product features and opinions from online product telugu reviews among a category of products is useful to the user in personalized recommendations. These feature level sentiments are aggregated to form the product sentiment.

Li Chen and Feng Wang [33] proposed a novel explanation interface that fuses the feature sentiment information into the recommendation content. They also provided support for comparing multiple products with respect to similarity using the common feature sentiments. Gurini et al. [35] proposed a friends recommendation system for Twitter using a novel weighting function called Sentiment-Volume-Objectivity (SVO).

proposed [60] a recommender system that recognizes the sentiment expressions from the telugu reviews, quantifies them with sentiment strength and appropriately recommends products according to user needs. Recently, Dong et al. [89] developed a product recommendation strategy that combines both similarity and sentiments to suggest products. 2.3.7.2 Ontologies utilization for better product recommendations. The utilization of ontologies for better product recommendations is a developing research area.

Uzun and Christian [106] made a semantic extension to the FOKUS recommender system. This extension is capable of organizing interactive and semantic data in the recommendation. Hadi and Mohammadali [36] exhibited a semantic recommendation method using ontology on online products based on the usage patterns of the customers. 2.4 Evaluation techniques in opinion mining. Evaluating a review at feature level by a machine is a difficult task due to the varied nature of the telugu reviews written on review sites. From the papers surveyed in the previous sections, it was found that the accuracy of the opinion words identified by humans and by machines differs at a considerable level.

Another important point while observing the online telugu reviews evaluation is the prevalent use of different metrics. In this section, the metrics used to assess the performance of opinion mining systems and classifiers are discussed. 2.4.1 Measures in Opinion Mining Three popular information retrieval measures are used extensively in the opinion mining literature. They are Precision, Recall and F-measure.

Precision
Precision is defined as the fraction of the number of retrieved items which are relevant to the query over the total number of retrieved items.

                 Number_Retrieved_Relevant
Precision = --------------------------------------
                 Number_Total_Retrieved

The precision measure is used in various research works for evaluating the percentage of retrieved features from the online telugu reviews. Precision is also used to learn the accuracy of the opinion classification against the human annotated features.

Recall
Recall is defined as the fraction of the number of retrieved items which are relevant to the query over the number of relevant items.

              Number_Retrieved_Relevant
Recall = --------------------------------------
              Number_Possible_Relevant

The recall measure is calculated in combination with precision to understand the significance of exact product features from the telugu reviews. Recall is also used to learn the accuracy of the system predicted opinions against the human annotated opinions.

F1-score
The F1-score is defined as the harmonic mean of Precision and Recall.

                2 * Precision * Recall
F1-score = ------------------------------
                Precision + Recall

Table 2.1 and Table 2.2 tabulate the results of feature extraction techniques and sentiment classification models.
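These three measures can be computed directly from the retrieved and relevant item sets; below is a minimal sketch with illustrative feature sets (the counts are not taken from the tables).

```python
# Precision, recall and F1 over sets of retrieved vs. human-annotated
# (relevant) items, following the three formulas above. The feature sets
# are illustrative examples only.

def precision(retrieved: set, relevant: set) -> float:
    return len(retrieved & relevant) / len(retrieved)

def recall(retrieved: set, relevant: set) -> float:
    return len(retrieved & relevant) / len(relevant)

def f1_score(retrieved: set, relevant: set) -> float:
    p, r = precision(retrieved, relevant), recall(retrieved, relevant)
    return 2 * p * r / (p + r)

retrieved = {"battery", "camera", "screen", "price"}   # features a system extracted
relevant = {"battery", "camera", "screen", "weight"}   # human-annotated features
# here precision = recall = 0.75, so F1 = 0.75
```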
Table 2.1. Comparative results among different feature extraction techniques. Bold face indicates the best performer on the data.

Feature Extraction Technique | Results reported by | No. of telugu reviews | Precision (in %) | Recall (in %)
Association Rule Mining | Hu and Liu [40] | 4254 | 72 | 80
Static and Dynamic Aspects learning | Blair-Goldensohn et al. [7] | 3492 | 85 | 66
Dependency Grammar Graph | Zhuang et al. [59] | 1000 | 48.3 | 58.5
Phrase Dependency Tree | Wu et al. [112] | 4254 | 66.5 | 65.75
Double Propagation | Qiu et al. [47] | 4334 | 88 | 83
CRF | Jacob and Gurevych [75] | 2578 | 64 | 37.75
MG-LDA | Ivan and Titov [45] | 10000 | 90 | 61.66
Works on Implicit features extraction:
COP-KMeans | Qi et al. [83] | 18867 | 78.9 | 90.73
Rules based on Dependency Grammar & SenticNet | Cambria et al. [94] | 4254 | 93.25 | 94.15

Table 2.2. Sentiment evaluation metrics applied on various classifiers. Bold face indicates the best performer on the data.

Classifier/System | Type | Results reported by | Evaluation metric and value in %
Log linear classifier | Regression | Vasileios et al. [107] | Accuracy (Yes label) – 82.05
SO-PMI | Cluster Accuracy | Turney and Littman [105] | Accuracy of sentiment classes – 71
SVMLIGHT | NB, SVM | Esuli and Sebastiani [3] | Accuracy – 88.05

The published experimental results that are recorded in Table 2.1 and Table 2.2

permit attempting some considerations on the performance of the various feature extraction techniques and the sentiment classifiers discussed. Careful interpretation of these approaches with respect to the reported results can help in drawing interesting conclusions. By comparing the explicit feature extraction techniques on the basis of the experiments carried out on the same corpus (sub section 2.4.1), the best performer on the data is found to be the Double Propagation approach [84] with respect to recall.

Ivan and Titov [45] reported that the LDA variant is the best performer on the data with respect to precision. By comparing the techniques used in implicit feature extraction, the best performer on the data is found to be the rule based approach of Cambria et al. [94]. Ivan and Titov [45] and Esuli and Sebastiani [3] used the SVMLIGHT framework to learn an SVM classifier to classify the sentiments of the telugu reviews.

Esuli and Sebastiani [3] considered Kamps' [50] WordNet lexical relation dependent data to estimate the precision of the SVM model. The Esuli and Sebastiani [3] sentiment classifier works better than all the others. 2.5 Problem Definition The present work is aimed at extracting explicit and implicit product features from online telugu reviews, extracting product features by improving the conventional topic model, learning semantic rules by mining the ontology, and catering the sentiments to the products recommendation process.

The process of extraction depends on the difficulties reported by the researchers at different levels. It is observed that there is no well ordered feature extraction approach available using NLP which covers both explicit and implicit features. There is a need to use the "part-of" and "is-a" relations among the different kinds of product features, applied on the modeled topics, for extracting the genuine product features.

There is a requirement for intelligent discovery of the sentiments of the product features by mining the constructs of the ontology towards improving product recommendations. This helps the users to make faster and better purchase decisions. The thesis emphasizes the requirements in processing online telugu reviews for structured information access.

It also analyses the challenges in terms of paradigm shift faced by the machine learning
research community regarding knowledge mining and suggests methods to meet these
requirements and challenges. 2.6 Proposed methodology The proposed methodology
starts from product telugu reviews datasets creation. Once this is carried out, the first
step is to represent the telugu reviews in a form that are readable by the machine.

The second step is to identify and extract both explicit and implicit product features from the telugu reviews. The third step is to identify and extract opinion words and determine the polarity of the opinion words. The fourth step is to develop an ontology and annotate the obtained product features and opinion words.

The next step is to mine the constructs of the ontology to learn the semantic rules of the
target sentiment class to provide better product recommendations by the machine. The
final step is to compare the enhanced model with the original one to evaluate the
improvements in providing the product recommendations to the customer. 2.7

Conclusions The speed at which the telugu reviews are growing in the review databases has necessitated the need to analyze and summarize them for effective opinion representation and retrieval. Although research in opinion mining began around twenty years ago, this field is still seen to be in a developing stage.

This survey emphasized various feature extraction approaches using both statistical and machine learning techniques. Additionally, opinion word extraction and the orientation of the extracted opinion words are reviewed. Ontology support for opinion mining has been considered. The recent works in learning the polarity of the opinion by mining the ontology are also emphasized.

The influence of sentiments in recommending the products using recommender systems is also examined. The contributions that opinion mining research has given to the field of E-commerce are highlighted. For evaluating opinion mining, commonly used measures like precision, recall and F1-score are employed throughout the research literature.

The problem definition is derived by highlighting the gaps identified in the literature. Finally, the proposed methodology is given. Chapter 3 Influence of Natural Language Processing on review based opinion mining 3.1 Introduction Opinions are vital to all human actions. An opinion forms thought, has a structure and carries semantics when writing online telugu reviews.

The consumers present their views on the product in the review by using the constructs of the language. The examination of such opinionated content is concerned with the development of computational models. This has prompted the use of Natural Language Processing (NLP) techniques.

The reasons for choosing NLP are to automate the language processing and to
understand the meaning present in the review in a better way. Natural Language
Processing is one of the most experimental components of current Artificial Intelligence
(AI) technology. Mining opinions and sentiments from online telugu reviews are
challenging because they require profound understanding of the various syntactic and
semantic language rules. These are needed to learn the explicit, implicit, regular and
irregular words from the review sentences.

The specific task of classifying the opinions is a restricted problem in NLP because the
machine only needs to learn the positive or negative sentiments of each review and the
target features. Therefore the area of opinion mining is an opportunity to NLP
researchers to make substantial progress in all its borders and create a huge practical
impact. 3.2

Drawbacks of the existing methods in opinion mining Opinion mining is carried out at
three levels of analysis. These are namely document level, sentence level and feature or
word level. Both the document level and sentence level opinion mining methods [65]
identify the subjective sentences present in the review.

The classifier [105] will then perform the task of opinion classification once the subjective sentences are found. These methods and classifiers do not concentrate on feature or word level analysis, even though the analysis at feature level reveals the most important information in opinion extraction: the likes and dislikes of the people regarding each product feature.

Feature level opinion mining is a fine grained analysis as the analysis is carried out
directly on the opinion words itself. It is based on the idea that an opinion consists of
the polarity and the target entity. Therefore, the tasks in feature level opinion mining are
extraction of product features, extraction of opinions, and determining the polarity of
the opinion.

There are various methods and classifiers available in the literature [40, 50, 75, 84, 116]
which are used to carry out the previously mentioned tasks. In the context of online
telugu reviews, a classifier can be a target function which contains set of patterns and
labels to extract product features, recognize implicit feature indicators and to categorize
the telugu reviews under predefined polarity labels.

In the context of online telugu reviews, the method is a feature extraction procedure
and an opinion analysis procedure. Various explicit and implicit feature extraction
methods from online telugu reviews are discussed in the literature. The major explicit
features written in telugu reviews are frequent nouns and relevant nouns.

The disadvantages of the current techniques in explicit feature extraction are given below. The frequency based methods [40][82] extract some frequent nouns which are not actual product features. The relation based methods [59][84][112] extract many non-features that match the noun-adjective patterns learned from the data corpus.

Traditional LDA based methods [20][45] extract features which are topically related to one another. A very small percentage of online telugu reviews are written without the explicit mention of product features. These are called implicit telugu reviews, and the features that are implied from these reviews are called implicit features.

The disadvantage of the current strategies [9][16][117] in implicit feature extraction is given below.
• The implicit feature extraction strategies are not able to identify some of the implicit features.
The opinions are also extracted from the telugu reviews, since the extracted product features alone are of limited use. Opinions emphasize the like or dislike information on the product features.

Researchers have taken the support of sentiment lexicons for identifying the opinions from the telugu reviews. The notion of a seed set is used to bootstrap the process of opinion extraction. Opinions are extracted in most of the relation based methods [84][112] that exploit feature-opinion relationships.

The limitation with these methods is that the size of the initial seed set is to be
considered high for better identification of the opinions. The works on opinion
classification available from the literature [5][68][105] categorized the opinions into
either of the binary classes namely positive or negative. There are extreme limitations
[122] while classifying the opinions available in the telugu reviews. These are specified
below.

• The classification of the sentiment was carried out based on the chosen word, not on the context of the word. • The misinterpretation of the orientation of the opinion when multiple emotions are averaged. • The misinterpretation of the orientation of the opinion by providing equal weights to all opinions falling within a category.

The work on semantic data mining based sentiment analysis [2] classified the telugu
reviews based on the target sentiments learned on the telugu reviews using ontology as
background knowledge. This work never concentrated on sentiments at feature level.
The works on sentiments based recommender systems available from the literature
[33][89] considered the sentiments calculated from the product features from the telugu
reviews and recommends the products using the similarities of the products based on
the calculated sentiments.

These works never concentrated on the importance of the features, which is useful in improving the prediction accuracy of the recommendation algorithm. The works on ontology based recommender systems [36][106] focused neither on using the depth information of the domain feature nodes from the ontology tree nor on the height of the ontology tree.

These properties act as supervised weights in improving the sentiment of the feature and thereby help in improving the recommendations. Motivated by the limitations mentioned above, a model is proposed which utilizes the methods that are the best performers on the data for the various tasks of opinion mining. Under explicit product features extraction using NLP techniques, the frequent features extraction method proposed by the researchers [39] extracts frequent nouns by analyzing the data corpus and determining an empirical threshold. This helps in accurate identification of authentic features.

The relevant features extraction method clearly specifies the relation among product
features and opinions. The relation specified is as follows: Firstly the adjectives adjacent
to frequent nouns are to be identified from the data corpus. Secondly the nouns
adjacent to these adjectives are to be collected. This helps to overcome the extraction of
non features to a major extent.
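The frequent-noun and adjacent-adjective steps described above can be sketched as follows; the tagged sentences, the PoS tags and the frequency threshold are illustrative assumptions, not the thesis corpus.

```python
from collections import Counter

# A sketch of the two-step relevant-feature extraction over PoS-tagged
# review sentences. Sentences, tags and threshold are illustrative.

tagged_reviews = [
    [("great", "JJ"), ("battery", "NN")],
    [("great", "JJ"), ("screen", "NN")],
    [("battery", "NN"), ("life", "NN")],
]

# Step 0: frequent nouns above an empirical threshold become candidate features.
noun_counts = Counter(w for sent in tagged_reviews for w, t in sent if t == "NN")
frequent = {w for w, c in noun_counts.items() if c >= 2}

# Step 1: adjectives adjacent to a frequent noun are likely opinion words.
opinion_adjs = set()
for sent in tagged_reviews:
    for i, (w, t) in enumerate(sent):
        if w in frequent:
            for j in (i - 1, i + 1):
                if 0 <= j < len(sent) and sent[j][1] == "JJ":
                    opinion_adjs.add(sent[j][0])

# Step 2: nouns adjacent to those adjectives are additional (infrequent) features.
relevant = set(frequent)
for sent in tagged_reviews:
    for i, (w, t) in enumerate(sent):
        if w in opinion_adjs:
            for j in (i - 1, i + 1):
                if 0 <= j < len(sent) and sent[j][1] == "NN":
                    relevant.add(sent[j][0])
```

With these toy sentences, "battery" is the only frequent noun, "great" is picked up as its adjacent adjective, and "screen" is then recovered as a relevant though infrequent feature.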

The explicit product features extraction is also carried out using unsupervised machine
learning algorithms. LDA topic modeling [20] is used to learn the hidden topics present
in the product telugu reviews collection. The learned topics contain product features
and opinions. In order to extract the actual product features, the ‘is-a’ and ‘part-of’
semantic relations among the product features are to be explored.
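One way to apply such relations to the learned topics is to keep only topic words that occur as nodes of a lightweight feature tree; the FOT fragment below is a hypothetical illustration, not the thesis ontology.

```python
# Pruning an LDA topic cluster with a lightweight Feature Ontology Tree
# (FOT). The tree is a hypothetical fragment: children are linked to a
# parent feature through 'part-of' relations.

FOT = {
    "phone":   {"part-of": ["battery", "camera", "screen"]},
    "battery": {"part-of": ["charge", "life"]},
    "camera":  {"part-of": ["lens", "flash"]},
    "screen":  {"part-of": []},
}

def ontology_features(tree: dict) -> set:
    """Flatten the FOT into the set of genuine feature terms."""
    feats = set(tree)
    for node in tree.values():
        feats.update(node["part-of"])
    return feats

def filter_topic(topic_words: list, tree: dict) -> list:
    """Keep only topic words that are nodes of the FOT (actual product features)."""
    feats = ontology_features(tree)
    return [w for w in topic_words if w in feats]

# A hypothetical LDA topic cluster mixing features and opinion words:
topic = ["battery", "great", "life", "awesome", "charge"]
```

Filtering this topic keeps only "battery", "life" and "charge", discarding the opinion words that LDA mixed into the cluster.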

The semantic model which is used to extract the product features is to be developed as
a domain independent, light weight Feature Ontology Tree (FOT). This work is aimed as
an improvement over previous works in which all the features of the product are just
specified as attributes of the product but there is no internal relation among the related
attributes of the product. The implicit feature extraction approach proposed in [16][94] performed better on the corpus.

The implicit features are identified in two step process. Firstly the indicator words
present in the implicit telugu reviews are identified. Secondly the implicit features
corresponding to the identified indicator words are extracted. The implicit features are
extracted using the feature knowledge base which is developed using WordNet [12] and
SenticNet [16].
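The two-step procedure can be sketched as below; the indicator map is a tiny stand-in for the WordNet/SenticNet feature knowledge base, and its entries are purely illustrative.

```python
# Two-step implicit feature extraction: spot indicator words in an
# implicit review, then map each indicator to the feature it implies.
# The indicator map stands in for the feature knowledge base.

INDICATORS = {
    "heavy":     "weight",
    "expensive": "price",
    "cheap":     "price",
    "blurry":    "camera",
}

def implicit_features(review_tokens: list) -> set:
    # Step 1: identify indicator words present in the review.
    hits = [w for w in review_tokens if w in INDICATORS]
    # Step 2: extract the implicit features corresponding to the indicators.
    return {INDICATORS[w] for w in hits}
```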

The tasks of opinion word extraction and its orientation are to be carried out from the
telugu reviews in the following way. First the adjectives adjacent to the extracted
features are to be analyzed for opinions using the standard opinion lexicon. Next, the
identified opinions are to be searched for the polarity score under adjectives category in
SentiWordNet 3.0 [5] to determine its orientation.
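The orientation step can be sketched as follows; the score table is a tiny stand-in for SentiWordNet 3.0 lookups (a positive and a negative score per adjective sense), and the numbers are illustrative, not actual SentiWordNet values.

```python
# Determining the orientation of an extracted opinion adjective from a
# (positive score, negative score) pair, as SentiWordNet 3.0 provides per
# synset. The table below is an illustrative stand-in for the real lexicon.

SWN_ADJ = {
    "good": (0.75, 0.00),
    "poor": (0.00, 0.62),
    "fine": (0.50, 0.12),
}

def orientation(adjective: str) -> str:
    pos, neg = SWN_ADJ.get(adjective, (0.0, 0.0))
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"
```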

In order to support a wise purchase decision on the product, a Description Logic (DL) based ontology is engineered with the annotation of the above extracted product features and opinions. The rules mined from this ontology are useful to clearly classify the sentiments of the annotated product features. 3.3 Proposed Model The principal objective of improving
the accuracy of customer purchase decision by understanding the sentiment patterns in
the product telugu reviews domain and recommending products using these sentiments
is by exploiting the relationships between product features and improving the
sentiments of important features.

In order to achieve this goal, a framework is presented in Figure 3.1. Fig 3.1.
Proposed Model The framework displayed above is composed of three major modules.
These are: telugu reviews pre-processing module, product features and opinions
extraction module and semantic sentiments based products recommendation module.

A general overview of the proposed model is as follows: i) The input to the model is the
product name specified as a query in E-Commerce website by the user. ii) The model
collects the product telugu reviews based on the searched product. These collected
telugu reviews are provided to the first module. The preprocessing module takes these
telugu reviews and tokenizes them first. Next, a stop-word list compiled from these telugu reviews is used to remove the stop words.

Then the updated telugu reviews are given to the PoS tagger to assign the
part-of-speech tag to each word in the review sentence. The PoS tagged words are
given to second module to extract product features and corresponding opinions. The
second module uses the PoS tagged nouns and adjectives for identifying and extracting
the product features and opinions.
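The pipeline through these first two modules can be sketched in miniature; the stop-word list and the tag lookup below are toy stand-ins for the corpus-compiled stop words and the PoS tagger used in the model.

```python
# The preprocessing module in miniature: tokenize, drop stop words, tag
# parts of speech. STOP_WORDS and TOY_TAGS are illustrative stand-ins.

STOP_WORDS = {"the", "is", "a", "and"}
TOY_TAGS = {"battery": "NN", "camera": "NN", "great": "JJ", "bad": "JJ"}

def preprocess(review: str) -> list:
    tokens = review.lower().split()                       # tokenization
    tokens = [t for t in tokens if t not in STOP_WORDS]   # stop-word removal
    return [(t, TOY_TAGS.get(t, "UNK")) for t in tokens]  # PoS tagging
```

The tagged nouns ("NN") and adjectives ("JJ") emitted here are exactly what the second module consumes for feature and opinion extraction.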

The models employed to carry out the aforementioned tasks are the natural language analysis based model and the topic modeling based model. Under the natural language analysis based model, the lexical resources WordNet and SenticNet are used to extract the explicit and implicit product features. The tasks of opinion extraction and opinion orientation are carried out using WordNet and a semantic similarity statistical measure.

Under topic modeling based model, in order to extract the explicit product features
from the generated topic clusters, a light weight Feature Ontology Tree (FOT) is applied
against these clusters. The task of opinion extraction is done by analyzing the LDA topic matrix to map each identified product feature to its review. Then, by using the opinion lexicon, the adjectives present in the telugu reviews are examined. The task of opinion orientation is done with the help of SentiWordNet 3.0, which is a lexical resource for computing the sentiment of a review, a statement or a single word.

The extracted product features and opinions are annotated with the concepts of the Product Review Opinion Ontology (PROO) in the third module. The similar products belonging to the same product category also undergo these three stages of processing. This PROO ontology serves as the background knowledge to learn rule based sentiments expressed on product features. This kind of rule learning is termed Semantic Data Mining (SDM). These sentiment based semantic rules are learned with respect to both taxonomical and non-taxonomical relations available in PROO. In order to verify the mined rules, Inductive Logic Programming (ILP) is applied on PROO. The learned ILP rules are found to be among the mined rules.

The rule based sentiments that are learned by using the constructs of the ontology provide the significant information of utilizing the relationship among the common features of the similar products with respect to the queried product. This interprets the hierarchical features "as-a-unit" to improve the sentiments of the parent features.

These parent features are present at the higher level, near the root of the ontology. Likewise, the sentiments of the related features of the similar products with respect to the queried product are improved. These improved sentiments of the parent features and the related features eventually improve the aggregated sentiment of the product.
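One simple way to realize this "as-a-unit" improvement is to fold child feature sentiments into the parent score, weighted by node depth relative to the ontology tree height. The weighting scheme and all numbers below are an illustrative assumption, not the thesis formula.

```python
# Improving a parent feature's sentiment from its child features, with
# child weights derived from node depth and tree height. Scheme and
# scores are illustrative assumptions only.

TREE_HEIGHT = 3  # assumed height of the feature ontology tree

def improved_sentiment(parent_score: float, children: list) -> float:
    """children: list of (sentiment_score, depth) pairs for child features."""
    if not children:
        return parent_score
    # Deeper (more specific) nodes contribute with a smaller weight.
    weighted = [s * (1 - d / TREE_HEIGHT) for s, d in children]
    return (parent_score + sum(weighted) / len(weighted)) / 2

# e.g. 'battery' at depth 1 with children 'charge' and 'life' at depth 2:
score = improved_sentiment(0.4, [(0.9, 2), (0.6, 2)])
```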

This leads either to a change in the position of the product in the list of similar products recommended, or to its appearance in the recommended list. The output obtained after the third module of the model is the list of improved product recommendations based on improved sentiments with the help of the semantic rules mined from the ontology. This enables the user to make the right purchase decisions. 3.4

Review Datasets for the implementation of the model The product telugu reviews obtained from Amazon were used for the implementation of the model. Telugu reviews of three products were considered. iPhone X Plus, Oppo F7 Plus and Samsung Galaxy S9 were the products whose telugu reviews were considered for analysis.

The Amazon online product telugu reviews were considered because the majority of E-commerce customers trust Amazon online reviews [123] for their purchase decisions. Every review is manually tagged with symbols. The symbol set is {xxxx, +n, -n, ##}. Every review is annotated with ‘xxxx’, which marks the product feature commented on in the review. ‘+n’ denotes a positive opinion and ‘-n’ denotes a negative opinion.

‘n’ is the opinion strength. The symbol ‘##’ represents the start of each sentence. This
clearly specifies that when a long review is present, the program will easily understand
that the sentence is a part of the review. The opinion strength of the feature is
determined by using [71] seven level polarity system. The seven level polarity values are
{-3, -2, -1, 0, +1, +2, +3}.

The seven level polarity system influences the purchase decisions of the customer in a
more relevant manner [71]. The strength score is applicable to the unique synonyms of
the identified opinion words in the work. These synonyms were considered from Google
search process. The synonyms for the seven level polarity terms are arranged in Table 3.1 below. Table 3.1.

Polarity Terms and Synonyms in assigning polarity score

Term        | Synonyms
Excellent   | fantabulous, superb, outstanding, magnificent, exceptional, marvellous, wonderful, sublime, perfect, eminent, pre-eminent, matchless, peerless, supreme, first-rate, first-class, superior, superlative, splendid, admirable, worthy, sterling, fine
Distinguish | distinctive
Accept      | bear, assume, manage, believe, trust, credit, suffer, support, follow, adopt, receive
Neutral     | ordinary, objective, unbiased, fair
Reject      | faulty, refuse, negative, discard, exclude, abandon, avoid, ignore, cut
Poor        | inferior, defective, bad, substandard, sad, crap
Worst       | tough, sorry, regretful, risky, unfit

The symbol set is confined to four notations and these notations make the analysis of the telugu reviews easy.

The task of opinion extraction is also carried out by the opinion lexicon on English
language dataset. 3.5 Empirical evaluation of the data with small dataset An opinion of a
single person is often not sufficient for taking a decision. It is required to analyze the
opinions of many people to understand the holistic view on a particular product.

The analysis of opinions for learning their orientations is carried out on both large
datasets [98] involving 18867 review documents and also on the small datasets [115]
involving only 249 review documents. The analysis for sentiments on small dataset
outperformed the bag-of-words model in predicting the sentiment of the review with a
nearest neighbor classifier [115].

The number of collected telugu reviews for carrying out the experiments is limited to
300 telugu reviews. The number of telugu reviews considered for NLP based analysis is
limited to the telugu reviews of three products. This is to understand the tasks of
extracting maximum number of explicit and implicit features and opinions clearly.

The number of telugu reviews considered for LDA based analysis is limited to the telugu
reviews of three products from the benchmark dataset. This is to understand the task of
extracting the explicit product features from LDA topics using lightweight ontology and
extracting the opinions clearly. The influence of sentiments on semantic rules for better
product recommendations requires similar product telugu reviews data.
The similar product telugu reviews are obtained from Amazon to perform the task of
product recommendations. This helps to understand the improved sentiment expressed
on the product in a clearer manner. 3.6 Conclusion The influence of NLP on review
based opinion mining is discussed by modeling the data useful for natural language
analysis and Latent Dirichlet Allocation (LDA). The telugu reviews dataset used for
carrying out the experiments further is presented.

Finally, the reasons to use the small dataset in the evaluation of conducted experiments
are provided. Chapter 4 Feature level opinion mining of online telugu reviews 4.1
Introduction E-commerce websites furnish customers with the required product information by providing different kinds of services to search through them.

One such service is to enable the customers to read the online telugu reviews posted by the end users. Online telugu reviews contain features and opinions which are useful for the analysis in opinion mining. The majority of the systems work with a summary of the reviews by taking the common features and their opinions, which leads to structured review information.

Many of the times the analysis of the telugu reviews is carried out for a specific problem.
To make this analysis generic, a natural language processing based feature extraction
process is carried out first by exploring the language rules from the telugu reviews and
discovering the themes from the underlying telugu reviews collection.

Further the tasks of opinion extraction and orientation are carried out by first identifying
the opinion words from the telugu reviews. The extracted opinion words are then
analyzed for their orientation: whether the opinions are positive, negative or neutral.

4.2 Opinion mining of product telugu reviews using language rules

Natural Language Processing allows machines to understand how humans speak.

To understand human language is to understand the words that are either written or spoken, and to know how the concepts among the words are linked together to create meaning. The task of opinion mining on product telugu reviews begins with
preprocessing the telugu reviews. Then, the product features are extracted. Further,
opinions on these features are extracted. Lastly, the orientation of the extracted
opinions is determined.

The feature extraction systems in the literature are specific to the telugu reviews analysis problem. Some of the works focus on the extraction of frequent features, some on the relevant features, and some on identifying and extracting implicit features. A general step-by-step technique for extracting these product features from the telugu reviews is an innovative dimension that is investigated in the proposed methodology. The proposed method comprises the first two modules of the proposed model as discussed in Chapter 3.

They are the input telugu reviews pre-processing module and, product features
extraction and opinions analysis module. In these modules, WordNet is used to
disambiguate and to understand the potential features and opinion words.
SentiWordNet is used in the second module to assign scores to the identified opinions
for calculating their orientations.

4.2.1 Input telugu reviews pre-processing module

Initially the incoming product telugu reviews are pre-processed. The steps in the pre-processing module are described below. This module is used to standardize the incoming telugu reviews by concentrating on the logical restructuring of each review into a normalized format.

Text standardization involves fundamental tasks, namely tokenization, stopword removal, an optional stemming operation and PoS tagging. First, during tokenization, all punctuation marks are removed from the review. In the review "Handsfree feature is good, but not too exciting.", the punctuation mark comma (,) is removed from the review.

The review is then tokenized into a sequence of words. The tokens are recorded in a set as {Handsfree, feature, is, good, but, not, too, exciting}. The stop words are then applied to the list of tokens to filter out the words which convey no significance in further analysis.

The stop words are dataset specific and are compiled from the dataset itself. The compilation is done by manually filtering the frequent terms whose semantic content is less relevant to the domain. The resulting set of words is {Handsfree, feature, good, but, not, exciting}.
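The tokenization and stop-word filtering steps above can be sketched as follows; the stop-word list is an illustrative assumption, standing in for the hand-compiled, dataset-specific list described in the text.

```python
import re

# Hypothetical stop-word list compiled by hand from the dataset itself,
# as described above (contents are illustrative only).
DATASET_STOP_WORDS = {"is", "too", "the", "a", "an", "it"}

def preprocess(review: str) -> list:
    """Standardize a review: strip punctuation, tokenize, drop stop words."""
    no_punct = re.sub(r"[^\w\s]", " ", review)   # remove punctuation marks
    tokens = no_punct.split()                    # tokenize on whitespace
    return [t for t in tokens if t.lower() not in DATASET_STOP_WORDS]

print(preprocess("Handsfree feature is good, but not too exciting."))
# -> ['Handsfree', 'feature', 'good', 'but', 'not', 'exciting']
```

The output matches the resulting word set derived in the running example.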

The third operation is stemming, which reduces words to their root form. This operation is optional and is specific to the problem under analysis. Stemming often reduces precision in information retrieval. To understand how stemming reduces the precision in the number of extracted product features in the domain of product telugu reviews analysis, an empirical evaluation is performed by applying the Snowball stemming algorithm [66] to the manually annotated product features.

The impact of the stemming algorithm on the datasets is tabulated in Table 4.1 below.

Table 4.1. Impact of stemming on the product features

  Stemming applied/Not applied                        | Iphone X plus | Oppo f7 plus | Samsung Galaxy S9prime
  Number of product features retained after stemming  | 11            | 20           | 25
  Total number of manual product features             | 24            | 36           | 41
  % of product features retained their original form  | 45.8%         | 55.5%        | 60.9%

The observation from the results tabulated in the above table is that there is a considerable loss in the original form of the product features when the stemming operation is carried out. This eventually reduces the precision in the number of extracted product features in the later stages of telugu reviews analysis.
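The retention-percentage computation behind Table 4.1 can be sketched as follows; the toy suffix-stripping stemmer and the feature list are illustrative stand-ins for the Snowball algorithm [66] and the manually annotated features.

```python
# Toy suffix-stripping stemmer standing in for the Snowball algorithm [66];
# the suffix list and the feature names below are illustrative assumptions.
def toy_stem(word: str) -> str:
    for suffix in ("ies", "ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def retention_rate(features):
    """Percentage of features whose surface form survives stemming."""
    kept = sum(1 for f in features if toy_stem(f) == f)
    return 100.0 * kept / len(features)

features = ["battery", "screen", "settings", "charging", "camera", "speakers"]
print(round(retention_rate(features), 1))  # 3 of 6 unchanged -> 50.0
```

A low retention rate means many features lose their original surface form, which is exactly the precision loss the table reports.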

Lexical analysis is carried out next after stemming. Lexical or word level analysis consists
of a major task called Part of Speech (PoS) tagging. PoS tagging is carried out to relate
the word with the corresponding word class tag. Stanford Part-of-Speech Tagger [56]
framework is used to tag the words. The word ‘exciting’ is tagged as gerund verb, ‘good’
as Adjective, the set of words {Handsfree, feature} as Nouns and the word ‘not’ as
Adverb.
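The tagging step can be sketched with a minimal dictionary-based tagger; the real system uses the Stanford Part-of-Speech Tagger [56], and the lexicon below is an illustrative assumption covering only the running example.

```python
# Minimal dictionary-based tagger sketch; the actual system uses the
# Stanford Part-of-Speech Tagger [56]. The lexicon is illustrative.
LEXICON = {
    "handsfree": "NN", "feature": "NN",   # nouns -> candidate features
    "good": "JJ",                         # adjective -> candidate opinion
    "not": "RB",                          # adverb (negation)
    "exciting": "VBG",                    # gerund verb
    "but": "CC",
}

def pos_tag(tokens):
    # default unknown words to NN, a common tagger fallback
    return [(t, LEXICON.get(t.lower(), "NN")) for t in tokens]

print(pos_tag(["Handsfree", "feature", "good", "but", "not", "exciting"]))
```

The tags reproduce the assignments stated above: nouns for {Handsfree, feature}, adjective for 'good', adverb for 'not', gerund verb for 'exciting'.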

There are word category errors that are observed when tagged with Stanford
Part-Of-Speech Tagger. The error analysis on the nouns tagged by the PoS tagger on
the three datasets is tabulated in Table 4.2 below.

Table 4.2. PoS Tagger performance details on Nouns

  PoS Tag/Entity Category                    | Iphone X plus | Oppo f7 plus | Samsung Galaxy S9prime
  Number of Nouns (NN and NNS)               | 811           | 1402         | 2360
  No. of nouns wrongly tagged as adjectives  | 9             | 40           | 61
  % of nouns wrongly tagged as adjectives    | 1.09%         | 2.77%        | 2.52%

The error analysis on the adjectives tagged by the PoS tagger on the three datasets is tabulated in Table 4.3 below.

Table 4.3. PoS Tagger performance details on Adjectives

  PoS Tag                                    | Iphone X plus | Oppo f7 plus | Samsung Galaxy S9prime
  Number of Adjectives (JJ and JJS)          | 323           | 575          | 884
  No. of adjectives wrongly tagged as nouns  | 0             | 1            | 9
  % of adjectives wrongly tagged as nouns    | 0%            | 0.18%        | 1.08%

This error analysis, performed to find the number of wrongly tagged nouns and adjectives in the datasets, highlights the drawbacks of the PoS tagger [13].

These are as follows. First, the PoS tagger wrongly tags words because of a lexicon gap, where no PoS tag exists for a contextual word. Second, difficult linguistic constructions challenge the tagger, as broad contextual knowledge is beyond its awareness.

The impact of verbs on opinion orientation of the product features is analyzed on the datasets to find out
on opinion orientation of the product features is carried out on the datasets to find out
whether verbs have a significant influence on the opinion analysis. The number of
feature, opinion pairs identified using verbs and the positive or negative implications of
these pairs are tabulated in Table 4.4 below.
Table 4.4. Number of feature, opinion pairs and their orientations from verbs

  Dataset / No. of (feature, opinion) pairs identified using verbs | Positive/Negative impact | Total number of verbs | % of implied positive/negative opinions
  Iphone X plus / 22           | 16+ / 6-  | 306 | 5% / 2%
  Oppo f7 plus / 19            | 13+ / 6-  | 807 | 2% / 1%
  Samsung Galaxy S9prime / 20  | 15+ / 5-  | 764 | 2% / 1%

The percentage of implied positive opinions and negative opinions identified from verbs, tabulated above, shows that very few opinions on the product features are identified from verbs.

Therefore, the verbs are not included in the further analysis. The impact of adverbs on
opinion orientation of the product features is carried out on the datasets to find out
whether adverbs have a significant influence on the opinion analysis. The number of
feature, opinion pairs identified using adverbs and the positive or negative implications
of these pairs are tabulated in Table 4.5 below.

Table 4.5. Number of feature, opinion pairs and their semantic orientations from adverbs

  Dataset / No. of (feature, opinion) pairs identified using adverbs | Positive/Negative impact | Total number of adverbs | % of implied positive/negative opinions
  Iphone X plus / 30           | 28+ / 2-   | 102 | 27% / 2%
  Oppo f7 plus / 28            | 18+ / 10-  | 365 | 5% / 3%
  Samsung Galaxy S9prime / 32  | 21+ / 11-  | 371 | 6% / 3%

The percentage of implied positive opinions and negative opinions identified from adverbs, tabulated above, shows that very few opinions on the product features are identified from adverbs. Therefore, the adverbs are not included in the further analysis.

The percentage of remaining number of nouns and adjectives available after removing
the wrongly tagged nouns and adjectives in the datasets is tabulated in Table 4.6 below.
Table 4.6. Final statistics on nouns and adjectives available in the datasets

  PoS Tags    | Iphone X plus | Oppo f7 plus | Samsung Galaxy S9prime
  Nouns       | 98.9%         | 97.2%        | 97.4%
  Adjectives  | 100%          | 99.8%        | 98.9%

The statistics in the above table specify that the potential product features are extracted from nouns and the potential opinions are extracted from adjectives.

These finalized nouns and adjectives are provided to the NLP based product features
extraction and opinions analysis module.

4.2.2 NLP based product features extraction and opinions analysis module

Feature level opinion mining is a complex task as it requires carrying out two tasks, namely identifying and extracting the words that specify the actual product feature, and the corresponding opinions with their orientations.

The step-by-step approach for extracting product features is described below.

4.2.2.1 Step-by-step product features extraction

This section presents a step-by-step feature extraction approach which starts with frequent features identification, then discovers relevant features, then extracts the implicit features, and finally identifies infrequent features.

The features extracted after each step are added to the feature set from the initial step itself. This methodology outperforms any single technique for feature extraction, since it combines both the frequency based approach and the relation based approach. The model is generic and is applicable to the telugu reviews dataset of any domain.

Frequent nouns discovery

A review sentence comprises a noun phrase and an adjective phrase. The adjective phrase comes before or after the noun phrase. The application calculates the frequency count of every noun from the input telugu reviews, which were earlier tagged by the PoS tagger. A noun is considered frequent if its occurrence in the telugu reviews is greater than or equal to three percent of the set of nouns that are found. The obtained frequent nouns are stored in a file and are used for further processing.
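The three-percent frequency threshold described above can be sketched as follows; the noun counts are illustrative stand-ins for nouns tagged in real reviews.

```python
from collections import Counter

def frequent_nouns(nouns, threshold_pct=3.0):
    """Keep nouns whose share of all noun occurrences is >= threshold_pct."""
    counts = Counter(nouns)
    total = sum(counts.values())
    return {n for n, c in counts.items()
            if 100.0 * c / total >= threshold_pct}

# Illustrative tagged nouns drawn from hypothetical reviews.
nouns = ["battery"] * 40 + ["screen"] * 30 + ["camera"] * 25 + ["backlight"]
print(sorted(frequent_nouns(nouns)))  # -> ['battery', 'camera', 'screen']
```

Here 'backlight' falls under the 3% threshold and is left for the later infrequent-nouns step.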

Relevant nouns discovery

The frequent nouns obtained thus are still found to have limited relevance. It is observed from the telugu reviews dataset that a few nouns specify associated information on some of the frequent nouns of the product. These nouns are termed 'relevant nouns'.

The analysis of the dataset further reveals that adjectives which are present near to the
frequent nouns are good indicators of relevant nouns. The collection of these adjectives
is carried out by searching whether the adjective is present before or after the frequent
feature. When such adjectives are found, then the corresponding nouns are identified.
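The two-pass search described above, collecting adjectives adjacent to frequent nouns and then the nouns those adjectives also modify, can be sketched as follows; the tagged sentences and tag set are illustrative assumptions.

```python
# Sketch of relevant-noun discovery: adjectives adjacent to frequent nouns
# act as indicators, and other nouns appearing next to those adjectives are
# collected as relevant nouns. Tokens and tags below are illustrative.
def relevant_nouns(tagged_reviews, frequent):
    indicators = set()
    for sent in tagged_reviews:            # pass 1: adjectives near frequent nouns
        for i, (word, tag) in enumerate(sent):
            if tag == "JJ":
                for j in (i - 1, i + 1):
                    if 0 <= j < len(sent) and sent[j][0] in frequent:
                        indicators.add(word)
    found = set()
    for sent in tagged_reviews:            # pass 2: nouns near those adjectives
        for i, (word, tag) in enumerate(sent):
            if tag == "NN" and word not in frequent:
                for j in (i - 1, i + 1):
                    if 0 <= j < len(sent) and sent[j][0] in indicators:
                        found.add(word)
    return found

reviews = [
    [("great", "JJ"), ("battery", "NN")],
    [("great", "JJ"), ("reception", "NN")],
]
print(relevant_nouns(reviews, {"battery"}))  # -> {'reception'}
```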

The obtained relevant nouns are added to the existing set of frequent nouns to facilitate
the count. These are used for further analysis.

Implicit nouns discovery

A small percentage of online telugu reviews are written without explicitly mentioning the product features. These features are called implicit features. The extraction of such nouns is a complex task.

The implicit feature indicators present in the telugu reviews are identified using the Conditional Random Field (CRF) log-linear classifier. The SenticNet knowledgebase service is used to extract the implicit nouns. The number of implicit review sentences in the datasets is tabulated in Table 4.7 below.

Table 4.7. Number of implicit review sentences and implicit feature indicators in the datasets

  Dataset                 | Number of implicit review sentences | Number of implicit feature indicators in implicit review sentences | Percentage of implicit feature indicators
  Iphone X plus           | 13 | 3  | 23%
  Oppo f7 plus            | 27 | 5  | 18%
  Samsung Galaxy S9prime  | 38 | 10 | 26%

Once these implicit nouns are identified, they are updated to the feature set. The obtained list is then analyzed for extracting the infrequent nouns.

Infrequent nouns discovery

A few of the nouns written in online telugu reviews are relevant nouns and infrequent nouns. The relevant nouns give associated information on the actual features of the product. An example of a relevant noun is 'signal reception' of the phone. The infrequent nouns are those that are written on the features of the product. A certain section of customers considers these nouns interesting.

An example of an infrequent noun is 'backlight' of the phone. A noun is considered infrequent if its occurrence in the telugu reviews is under three percent of the set of nouns that are found. Once these infrequent nouns are identified, they are updated to the feature set.

The opinions corresponding to the above extracted product features are analyzed in the next task.

4.2.2.2 Opinion extraction and orientation

The second task in feature level opinion mining is to identify, extract and calculate the orientation of the opinion words for the extracted features. The steps to achieve these tasks are specified below. The seed opinion lexicon provided by Hu and Liu [68] is used in this module.

It contains 2004 positive words and 4782 negative words. These are considered as seed
sets. The words ‘good’ and ‘bad’ are deemed as seed terms for the positive and negative
seed sets. All the adjectives identified from the telugu reviews are collected and created
as a set. The set is termed as Candidate Opinion Set (CandOpSet).

Two subsets are created from CandOpSet. These are namely Positive Adjectives (PA) and
Negative Adjectives (NA). The synonyms obtained from WordNet of the two seed sets
and the two adjective sets are then mapped. This completes the extraction of opinion
words. Weight values (scores) are assigned to the extracted opinion words and seed
terms based on the senses of the opinion words and the seed terms.

The sense is learnt from the gloss of the opinion word. The scores are considered from
the Adjectives category in SentiWordNet 3.0. The semantic orientation (SO) of a term "t" is determined by choosing an adjective either from PA or from NA and dividing by the relative distance measure calculated from the two seed terms "good" and "bad":

    SO(t) = (dist(t, bad) - dist(t, good)) / dist(good, bad)    ... (3.1)

where "dist" is the distance between two terms "t1" and "t2" and is given by

    dist(t1, t2) = score(t1) - score(t2)    ... (3.2)

The given term is considered to be positive if the orientation measure is greater than zero, and negative otherwise.

For the adjective “beautiful” the calculated SO value is provided by the SentiWordNet
3.0 based on its glossary and the sense number. The scores of the seed terms “good”
and “bad” are substituted in the formula 3.1 based on the formula 3.2 and the value ‘+1’
is obtained. This value is greater than zero and the orientation of the term is positive.
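The substitution described above can be sketched as follows. The scores are illustrative assumptions, not actual SentiWordNet values, and dist is taken here as the absolute score difference so that the sign of SO depends on the term being tested.

```python
# Minimal sketch of equations (3.1)-(3.2). Scores are illustrative
# assumptions (not real SentiWordNet values); dist is taken as the
# absolute score difference so the sign of SO depends on the term.
SCORES = {"good": 0.75, "bad": -0.75, "beautiful": 0.85, "awful": -0.80}

def dist(t1, t2):
    return abs(SCORES[t1] - SCORES[t2])       # equation (3.2), absolute form

def semantic_orientation(t):
    # equation (3.1): positive terms lie closer to "good" than to "bad"
    return (dist(t, "bad") - dist(t, "good")) / dist("good", "bad")

print(round(semantic_orientation("beautiful"), 2))  # +1.0, as in the text
print(round(semantic_orientation("awful"), 2))      # -1.0 -> negative
```

With these assumed scores, "beautiful" yields the value +1 described above, and a clearly negative adjective yields a value below zero.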
4.2.3 Influence of natural language rules on feature and opinion extraction

Natural Language Processing offers a good working platform to carry out the aforementioned
tasks in opinion mining. It considers the important aspects of the natural language
namely negation handling, word sense disambiguation, syntax analysis and semantic
analysis in the review sentences. It is also important to realize that the opinion mining is
a restricted NLP problem as the system needs to understand some portions of the
review i.e.,

the product features, entities, opinions and the orientation of the opinions. The
precision in retrieving the product features and opinions mainly depends on the various
facets of these pieces of data available from the telugu reviews. These are term
presence, term frequency, term position, n-gram combinations and PoS.

A large number of consumers of the product write telugu reviews which reflect the writing skills of the consumers, with facet representation as a crucial point. This motivates the need for understanding the relationships among various product features and opinions. The relationship patterns among the product features
and opinions are based on the aforementioned facets that are extracted from the data
set to perform the tasks of opinion mining in an effective manner.

4.2.4 Results and Observations

The product telugu reviews obtained from Amazon are used for this experiment.

Three product telugu reviews were considered for conducting this experiment. Iphone X
plus, Oppo f7 plus and Samsung Galaxy S9prime smart phones were the products for
which the telugu reviews were considered for analysis. The labels specified to these
three datasets are D1, D2 and D3 respectively. Table 4.8 presents the details of the dataset used for this experiment. Table 4.9 presents the first available online date of the product and the date the telugu reviews were obtained for the analysis. The table also contains the count of the telugu reviews as of the current date.

Table 4.8. Dataset details

  Document attributes           | Values
  Number of review documents    | 300
  Minimum sentences per review  | 5
  Maximum sentences per review  | 19

Table 4.9.

Telugu reviews duration and telugu reviews count information

  Product                 | First available online to telugu reviews obtained date (duration) | Telugu reviews count as on 17th December 2016
  Apple Iphone X Plus     | 16-Oct-15 to 28-Nov-16 (13 months) | 149
  Oppo f7 plus            | 14-Apr-16 to 28-Nov-16 (8 months)  | 117
  Samsung Galaxy S9Prime  | 12-Sept-16 to 28-Nov-16 (3 months) | 162

The pre-processing of the data is completed by removing stop words and non-English words. PoS tagging is performed on the obtained set of words. The nouns and adjectives are examined for further feature extraction and opinion extraction. The step-by-step feature extraction strategy is carried out on the nouns. The evaluation of the extracted features is completed at each step of the feature extraction process.

To examine the performance of the proposed method, the standard measures in the information retrieval literature, namely precision, recall and F1-score, are used. The evaluation measures calculated are displayed in Table 4.10 below.

Table 4.10. Information Retrieval Measures at each step using NLP model
(FN = frequent nouns, RN = relevant nouns, IN = implicit nouns, Inf. N = infrequent nouns)

  Datasets | Precision (%)          | Recall (%)             | F1-Score (%)
           | FN   RN   IN   Inf. N  | FN   RN   IN   Inf. N  | FN   RN   IN   Inf. N
  D1       | 86.3 93.4 76.4 87.4    | 79.1 77.1 84.3 91.6    | 82.5 84.4 80.1 89.4
  D2       | 87.8 92   85.7 90      | 80.5 79.2 82.1 85      | 83.9 85.1 83.8 87.4
  D3       | 85   92.8 80   86.9    | 82.9 72.6 85.8 89.5    | 83.9 81.4 82.7 88.1

Table 4.10 shows the performance statistics of the proposed feature extraction approach.

The analysis of opinions from the online telugu reviews dataset is completed based on the contribution of each step specified in the feature extraction process. The evaluation procedure starts by performing the task of extracting the most frequent nouns at greater than or equal to 3% from the telugu reviews of every product. The empirical evaluation for extracting frequent nouns is carried out on the three datasets at four different percentages.

The precision (in %) in the number of frequent nouns extracted is tabulated for the datasets in Table 4.11 below.

Table 4.11. Percentage of product nouns extracted as frequent nouns at different sizes

  Size                        | D1 Precision (%) | D2 Precision (%) | D3 Precision (%)
  Greater than or equal to 1  | 61.32% | 57.32% | 59.1%
  Greater than or equal to 2  | 75.8%  | 64.48% | 68.4%
  Greater than or equal to 3  | 86.3%  | 87.8%  | 85%
  Greater than or equal to 4  | 67.28% | 59.8%  | 50.7%

The results of the empirical evaluation for extracting frequent nouns carried out on the three datasets at four different percentages are shown in Fig. 4.1 below.
Fig. 4.1. Empirical evaluation for frequent nouns threshold

The observation from Figure 4.1 was that at the greater than or equal to 4% size, the precision in the number of extracted frequent nouns started to decrease. The frequent nouns extraction task mined product features with acceptable levels of precision and recall on the three datasets.

As continuation, the relevant nouns were extracted from the online telugu reviews by
using the adjectives identified on the frequent nouns. It was observed that some of the
relevant nouns which were extracted were the parts of the features that are already
obtained in the previous step. It was also observed that there was a considerable
increase in the precision after implementing this step.

In the subsequent stage of finding implicit nouns from the telugu reviews, it was also observed that there was a good increase in the precision, as the average number of identified implicit nouns from the collected telugu reviews dataset was 52%. Finally, the task of extracting infrequent features was executed by applying the reverse condition of frequent features.

This task considerably increased the precision on the three product datasets. This last step of extracting infrequent features ensures maximum retrieval of the product features. The results are shown in Fig. 4.2 below.

Fig. 4.2. Accuracy of various extracted product features

The percentage overlap of the product features between
frequent nouns and relevant nouns and the percentage overlap of the product features
between relevant nouns and implicit nouns in the three datasets specify that these
nouns are deemed to be the important product features.

The percentage overlap between various kinds of product features is tabulated in Table 4.12 below.

Table 4.12. Percentage overlap between various kinds of product features

  Dataset                 | % overlap between frequent nouns and relevant nouns | % overlap between relevant nouns and implicit nouns
  Iphone X plus           | 4% | 3%
  Oppo f7 plus            | 3% | 3%
  Samsung Galaxy S9prime  | 4% | 2%

By accomplishing an average precision of 87% across all the steps of feature extraction on all three datasets, it is concluded that the comprehensive feature extraction approach performs better than any single technique for extracting the product features in the semantic environment.

The Hu and Liu opinion lexicon dataset was used in this experiment to extract the opinions from the telugu reviews. Table 4.13 presents the details of the dataset used for this experiment.

Table 4.13. Opinion lexicon details

  Opinion word attributes   | Values
  Number of positive words  | 2004
  Number of negative words  | 4782

The product telugu reviews obtained from Amazon are used for this experiment. Three product telugu reviews were considered for conducting this experiment.

Iphone X plus, Oppo f7 plus and Samsung Galaxy S9prime smart phones were the
products for which the telugu reviews were considered for opinions analysis. Ten telugu
reviews from the three datasets were taken to extract the opinions and determine their
orientations. The telugu reviews are given below. Apple Iphone X plus Siri is awesome
do most of the work smoothly. Very happy to get an Iphone X plus. Really good phone
and its my first iPhone.

Great screen size and good battery life. Oppo f7 plus Oppo f7 plus is superior than
Iphone X plus. Very good phone. Battery, camera, screen all are awesome. Price is bit
high. Should be around 24000. Samsung Galaxy S9prime Fingerprint sensor is a bit
erratic. Best phone. good camera. It's been three days I received this product. Awful.
Have asked for the replacement. Good phone.

After PoS tagging, the noun, adjective pairs found from the three datasets are as
follows. All the adjectives were compared with the opinion lexicon. All were identified.
The identified adjectives were deemed as opinion words. In order to determine the
orientations of the opinion words, the senses of the opinion words were disambiguated
by learning the context using typed dependencies [21] and WordNet gloss. The
obtained sense is used in searching for the SentiWordNet score under adjectives
category. The scores were substituted in SO formula.

When the obtained value after SO calculation is greater than zero, then the opinion
word is termed as positive, otherwise negative. The opinion orientations of the opinion
words on the product features and the number of mentions for the product telugu
reviews are given below. The evaluation on the orientation of the opinions using the
proposed approach as compared with the baseline approaches is presented in Table
4.14. Table 4.14.

Accuracy (in %) of extracted opinions

  Opinion Orientation Method                         | Pos./Neg. Adjectives | Accuracy (%)
  Log-linear regression [107]                        | 657/679              | 87.38
  Orientation based on Pointwise mutual info. [105]  | 1915/2291            | 83.09
  Lexical relations and geodesic distance [50]       | 663 of [107]         | 88.05
  Proposed Work                                      | 2004/4782            | 88.6

The results obtained in terms of accuracy against the published techniques are shown in Table 4.14. The accuracy of the proposed method has increased in a significant manner when compared to the method of [50]. The comparative results are shown in Fig. 4.3 below.

Fig. 4.3. Accuracy of extracted opinions

4.3 Opinion mining of product telugu reviews using traditional LDA

Product features and corresponding opinions form a major part in analyzing the online product telugu reviews. It is uncertain whether the users were writing their opinions on a specific feature or not.

In order to organize, search and understand this digitized collection, the probabilistic topic modeling algorithms of machine learning are useful. Latent Dirichlet Allocation (LDA) is one such topic modeling approach that clusters the review words into topics using the priors of the Dirichlet process. The words so clustered are the features and opinions in the product telugu reviews domain.

These clusters also contain some words which are not the actual features of the product. To identify appropriate product features from these clusters, a hierarchical, domain independent Feature Ontology Tree (FOT) that exploits the "part-of" and "is-a" hierarchy among the product features is applied.

The telugu reviews associated with the extracted features are indexed earlier in the topic
matrix with the topic indicator. The telugu reviews of these features are processed for
extracting the opinions. The extracted opinions are analyzed for orientations using
SentiWordNet lexical resource.

4.3.1 LDA based product features extraction and opinions analysis module

The product telugu reviews are provided to the pre-processing module.

After pre-processing, the obtained telugu reviews are given to product features
extraction and opinion analysis module. LDA model is used to carry out the task of
product features extraction, and the opinion lexicon is used with SentiWordNet for analyzing the opinions.

4.3.1.1 Product features extraction using LDA and FOT

The extraction of product features from telugu reviews using LDA starts with generating the topic matrix from the underlying telugu reviews vocabulary. Further, the topic clusters are formed using the words probabilistically chosen from the previously generated topic matrix.
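The TF-IDF weight used as the word probability in the topic matrix can be sketched as follows; the toy documents are illustrative.

```python
import math

# Minimal TF-IDF sketch of the word weight stored in the topic matrix;
# the documents below are illustrative token lists.
def tf_idf(term, doc, docs):
    tf = doc.count(term) / len(doc)            # term frequency in this doc
    df = sum(1 for d in docs if term in d)     # document frequency
    idf = math.log(len(docs) / df)             # inverse document frequency
    return tf * idf

docs = [["battery", "good", "battery"],
        ["screen", "good"],
        ["camera", "bad"]]
print(round(tf_idf("battery", docs[0], docs), 3))  # -> 0.732
```

A word concentrated in few reviews, like 'battery' here, gets a high weight, which is what lets LDA favor near co-occurring words per topic.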

Now, a light weight Feature Ontology Tree (FOT) is applied on the learned topics to
extract the actual product features.

Generation of Topic Matrix

The acquired telugu reviews are given as an input to the LDA topic model. LDA learns the user specified topics by randomly allotting the vocabulary of words of the telugu reviews to the topics using the Dirichlet distribution. It then calculates the Term Frequency-Inverse Document Frequency (TF-IDF) value as the word probability to assign the topics to the words, so that LDA maximizes the generation of the topic record with nearly appropriate co-occurring words. These words, with the corresponding word position in a record and the TF-IDF value, are stored in a matrix called the Topic Matrix. The pseudo code is explained in Fig. 4.4, which is presented below.

1: LDA model learns 'k' input topics.
2: It randomly assigns topics to every word in the document.

   This assignment is based on the Dirichlet distribution.
3: To refine the topic assignment, each word in every document is checked against the current topic assignment and is updated based on the following two steps.
   3.1: Whenever the probability of the topic of a word exists in the given document (Term Frequency), then P(topic | document).
   3.2: Whenever the probability of a word exists in the document collection given the current topic assignment (Inverse Document Frequency), then P(word | topic).
4: The weighted conclusion of the assignment of a topic to a word is the probability of the word. A topic matrix is generated with word and probability values.

Fig. 4.4. Feature and Opinion Topic Matrix

Probabilistic Document Generation – Topic Clusters

LDA learns the number of words in a generated document using the Poisson distribution. When the size of the vocabulary is huge, the Poisson distribution changes to Multinomial, and at times this changes to the Dirichlet distribution. Then, LDA generates prior topic proportions in each review document using the Dirichlet distribution.

For each word in the telugu reviews collection, a single topic is picked at random using the multinomial distribution applied on the generated topic proportions. Then, from the picked topic, a word is picked using the multinomial distribution to generate the document. This is explained in Fig. 4.5 as a procedure to ensure the document generation using the words from the topic matrix.

1: Choose N ~ Poisson(ξ), where N represents the sequence of words in a document.
2: Choose θd ~ Dir(α).
3: For each of the N words wd,n:
   3.1: Choose a topic zd,n ~ Multi(θd). P(zd,n | θd)
   3.2: Choose a word wd,n ~ Multi(βzd,n), a multinomial distribution conditioned on the topic. P(wd,n | βzd,n)

Fig. 4.5. Probabilistic Document Generation using words from Topic Matrix

Product features extraction using domain independent ontology on topics

Definition: FOT is an abbreviation for Feature Ontology Tree, a tree-like ontology structure T(v, T'). v is the root node of T, which represents a feature of a given product.

T’ is a set of subtrees. Each element of T’ is also a FOT T’(v1, T’’) which represents a sub
feature of its parent feature node. FOT is a domain independent OWL-lite Ontology that
specifies product features and their sub features in a hierarchy. The excerpt of FOT is
illustrated in Fig. 4.6. Part-of Part-of Is-a Is-a Is-a Is-a Is-a Is-a _ _ Fig. 4.6.

Excerpt of the Feature Ontology Tree (FOT)

The steps in extracting the precise product features begin with mapping the OWL classes to the corresponding topics: the OWL classes Feature and ObjectPartFeature are considered to delineate the LDA generated topic clusters, and then the ontology instances are compared with the topic words in the clusters. The words are removed from the cluster when a match is found, and they are stored as the obtained product features.

4.3.1.2 Opinion extraction and orientation on LDA and FOT product features

The identified product features from the LDA topics are used in searching the associated telugu reviews. The opinions of the features are identified from the telugu reviews, and the orientation of these words is also determined. These tasks are carried out by the following steps.

Relevant review documents search in Topic Matrix

The search for the appropriate review of the identified product features is an important task, as these telugu reviews contain the extracted features. The telugu reviews are obtained from the topic matrix (β). This topic matrix was generated earlier for learning the pre-specified topics.

Extraction of adjective adjacent to feature

The valence words that are written along with the product features are found to be the adjectives in the telugu reviews.

The adjectives surrounding the feature are extracted from the telugu reviews by the Stanford Log-Linear Part-of-Speech Tagger [56]. The extracted adjectives are paired with the corresponding features for further analysis.

Identification of adjective as opinion
The identification of an adjective as an opinion is carried out using the opinion lexicon specified by Hu and Liu [68] in their work. The adjectives extracted from the telugu reviews dataset form the CandidateOpinionLexicon (CandOpLex).
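The matching of CandOpLex adjectives against the seed sets and their synonyms can be sketched as below. The seed words and the synonym map stand in for the actual WordNet lookup; all words shown are illustrative assumptions.

```python
# Sketch of the CandOpLex matching step described above.
positive_seeds = {"good", "great"}
negative_seeds = {"bad", "poor"}

# Hypothetical WordNet-style synonyms for the seed words
synonyms = {
    "good": {"nice", "fine", "excellent"},
    "great": {"outstanding", "awesome"},
    "bad": {"terrible", "awful"},
    "poor": {"weak", "inferior"},
}

def identify_opinions(cand_op_lex):
    """Return the adjectives from CandOpLex that match a seed word or a synonym."""
    known = set()
    for seed in positive_seeds | negative_seeds:
        known.add(seed)
        known.update(synonyms.get(seed, set()))
    return sorted(w for w in cand_op_lex if w in known)

# Adjectives extracted from the telugu reviews (hypothetical)
cand_op_lex = {"excellent", "blurry", "awful", "large"}
print(identify_opinions(cand_op_lex))   # ['awful', 'excellent']
```

Only the matched adjectives are treated as opinions; the rest are discarded from further analysis.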

The synonyms obtained from WordNet [12] on the words of two seed sets and the
adjectives of CandOpLex are mapped. The matched adjectives are considered as
opinions and are stored separately with features and the review indicator.

Opinion orientation of the opinion
The identified opinions are analyzed to determine the orientation of the opinion.

This task is performed with the help of a lexical resource called SentiWordNet 3.0 [5]. The learned opinion orientations are counted to find the number of positive and negative views on the feature.

4.3.2 Influence of LDA priors on learning product features
Careful selection of values for the LDA priors 'a' and 'ß' affects the learning of the topics in a significant way. The prior 'a' governs the topic distribution per review and the prior 'ß' governs the feature distribution per topic.

In order to have fewer topics distributed in the telugu reviews, the value of ‘a’ must be
low. When the ‘a’ value is low, the number of topics to be learned from the telugu
reviews collection has less impact on their distribution in the review. In order to have
more co-occurring features in the topic, the value of 'ß' must be high.

Care must be taken at the time of pre-processing the telugu reviews collection for
removing the stop words. The corpus specific stop words must be used in order to learn
the coherent topics in ‘ß’. The number of topics to be learned on the same telugu
reviews collection will differ from person to person as the knowledge levels of the
persons vary in a significant manner.

To learn more effective topics, the LDA priors 'a' and 'ß' are analyzed for sufficient topic proportions in the telugu reviews and for nearly co-occurring word distributions in the topics. A symmetric Dirichlet distribution for 'a' assigns equal probabilities, while an asymmetric Dirichlet distribution for 'a' assigns unequal probabilities to the topic proportions in a review document.

A symmetric Dirichlet distribution for 'ß' specifies the number of words assigned to the topics from the telugu reviews. An asymmetric Dirichlet distribution for 'ß' indicates whether the words and topics are weakly or strongly related to each other. Wallach et al. [38] showed that with a low 'a' value under an asymmetric topic distribution and a high 'ß' value under a symmetric feature distribution per topic, the topics learned through traditional LDA remain stable even when many topics are to be learned.
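The effect of symmetric versus asymmetric Dirichlet priors on the per-review topic proportions can be illustrated with direct draws from the Dirichlet distribution. The concentration values below are illustrative assumptions, not the thesis settings.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 10  # number of topics

# Symmetric prior: every topic gets the same concentration parameter.
symmetric_alpha = np.full(k, 0.1)
# Asymmetric prior: a few topics get more prior mass than the rest.
asymmetric_alpha = np.array([1.0, 0.5] + [0.05] * (k - 2))

theta_sym = rng.dirichlet(symmetric_alpha)    # topic proportions for one review
theta_asym = rng.dirichlet(asymmetric_alpha)  # skewed toward the favoured topics

# Both draws are valid probability distributions over the k topics.
print(theta_sym.sum(), theta_asym.sum())
```

With a low concentration value, most of the probability mass lands on a few topics per review, which is exactly the sparse topic distribution the discussion above calls for.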

The number of iterations must be high in order to generate the most probable review document. The generated document is useful for making important inferences about the list of topics that the LDA used to generate the document and for classifying the telugu reviews using the learned topics.

4.3.3 Results and Observations
The product telugu reviews obtained from Amazon are used for this experiment.

Telugu reviews of three products were considered for this experiment: the Iphone X plus, Oppo f7 plus and Samsung Galaxy S9prime smart phones. The labels assigned to these three datasets are D1, D2 and D3 respectively. Table 4.15 presents the details of the dataset used for this experiment.

Table 4.15. Dataset details

Document attributes                          Values
Number of review documents                   300
Minimum and maximum sentences per review     5 and 19
Held-out telugu reviews collection           30

The evaluation of the extracted product features from the telugu reviews is carried out using the traditional LDA topic model. The dataset-specific stop words are applied to the telugu reviews at the time of pre-processing, before learning the topics. In order to extract the maximum number of product features, the number of topics to be learned should be large. The number of topics 'k' to be learned by the LDA model on the pre-processed telugu reviews is fixed at 50.

This value of 'k' is empirically determined by analyzing the log likelihood per token on the held-out telugu reviews collection. The log likelihood of the held-out documents for various numbers of topics is shown in Fig. 4.7.

Fig. 4.7. Line graph of log likelihood on the held-out telugu reviews collection

It is seen from Fig. 4.7

that the log likelihood per token increased (on the negative axis) when the number of topics was increased from 25 to 50. Thereafter, the log likelihood per token decreased (on the negative axis) sharply when the number of topics was increased in increments of 25 from the current 50 topics up to 100.

This analysis determined that the number of topics to learn from the actual telugu reviews collection is to be fixed at 50. The parameter 'a' is set as 50/k. The numerator value of 'a' is the magnitude of the Dirichlet prior over the topic distribution of a review. The default value is 5. Whenever the number of topics to be learned exceeds 10, the numerator value is increased.

This is done to learn stable topics after running on the telugu reviews collection. The parameter 'ß' is set as 200/Wn, where Wn is the total number of words in the telugu reviews collection. The numerator value of 'ß' is the extent of topic assignment to the words in the telugu reviews vocabulary. This helps in generating coherent topics.
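The prior computation described above is a one-line calculation for each parameter. The word count Wn below is an assumed figure for illustration.

```python
# Sketch of the prior values described above: a = 50/k and ß = 200/Wn.
k = 50           # number of topics, fixed from the held-out analysis
Wn = 40000       # assumed total word count of the telugu reviews collection

a = 50 / k       # Dirichlet prior over the topic distribution per review
beta = 200 / Wn  # Dirichlet prior over the word distribution per topic

print(a, beta)   # 1.0 0.005
```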

These topics present a greater number of product features from the telugu reviews collection. The numerator value depends on the number of iterations for which the LDA model is run. The value changes with increases in the size of the dataset. These values of a and ß are useful for better topic cluster generation [102].

It is notable that the extracted product features depend on the probabilities determined by the LDA model on the features of the telugu reviews collection. It is observed that the features discovered by LDA are more general, as they include some non-features of the product. A hierarchical FOT tree for the smart phone domain is created using the Apache Jena framework. This FOT is a lightweight, domain-independent ontology tree generated for a class of smart phones from different manufacturers. The concepts in the ontology are annotated with the actual product features mentioned by the manufacturer of the smart phone. The annotated product features are treated as the instances of the concepts in the FOT. The instances of the FOT are compared with the words in the topic clusters.

The matched product features are stored in a document for further analysis of opinions. The information retrieval measures on the extraction of appropriate product features are tabulated in Table 4.16 below.

Table 4.16. Information Retrieval measures on extracted product features using LDA and FOT

Product                        Precision(%)   Recall(%)   F1-score(%)
Iphone X plus (D1)             87.5           80.3        83.7
Oppo f7 plus (D2)              89.2           83.4        86.2
Samsung Galaxy S9prime (D3)    88.3           81.7        84.8

The above table shows the performance statistics of the proposed feature extraction
approach. It was observed that the precision of Oppo f7 plus smart phone is better than
Iphone X plus and Samsung Galaxy S9prime smart phone as the number of matching
features from instances of FOT are more for Oppo f7 plus when compared with Iphone
X plus and Samsung Galaxy S9prime smart phones.

The telugu reviews written for the three smart phones contain the explicit mention of
the product feature in them. It was also observed that the value of recall has decreased
for the three smart phones as LDA retrieves the general product features and never
extracts the associated product features.
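The measures reported in Table 4.16 follow directly from the overlap between extracted and annotated features. The following is a minimal sketch with hypothetical feature sets, not the thesis data.

```python
# Precision, recall and F1 from extracted versus gold (annotated) feature sets.
def ir_measures(extracted, gold):
    tp = len(extracted & gold)                 # correctly extracted features
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

extracted = {"camera", "battery", "screen", "speaker"}   # LDA+FOT output (assumed)
gold = {"camera", "battery", "screen", "zoom", "flash"}  # annotated features (assumed)

p, r, f1 = ir_measures(extracted, gold)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.75 0.6 0.67
```

The lower recall in the sketch mirrors the observation above: general LDA features miss some of the associated sub-features.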

The percentage of non-extracted and uncommon product features in NLP, as retrieved using the LDA and FOT based technique on the three datasets, is tabulated in Table 4.17 below.

Table 4.17. Percentage of non extracted and uncommon product features using LDA+FOT approach

Dataset                    Percentage of non extracted and uncommon product features in NLP as extracted by LDA+FOT
Iphone X plus              30.7%
Oppo f7 plus               31%
Samsung Galaxy S9prime     32%

The percentages in the above table indicate that some product features extracted by the LDA and FOT based feature extraction approach could not be retrieved by the NLP based approach.

The combination of product features from NLP and LDA based analysis maximizes the
number of product features. These product features are further analyzed for sentiments.
These sentiments, together with the corresponding semantics, improve customer confidence in the product recommendation process. With a precision of about 88% across the three smart phones, it is concluded that the hierarchical FOT enables the extraction of the maximum number of exact product features from the LDA topic clusters when compared with LDA clusters alone in the semantic space.

4.4

Conclusion
The results and observations of the conducted experiments have been discussed. The product features and opinions obtained from the above two experiments are annotated in a description logic based ontology for intelligent product recommendations by the machine, using the sentiment mined from the ontology.

Chapter 5
Sentiment based semantic rule learning for improved product recommendations

5.1

Introduction
Significant information such as product features and opinions is obtained from customer online telugu reviews using NLP and LDA analysis. These analyses were carried out in the earlier chapter, covering the first two modules of the proposed model. These pieces of information are annotated with the concepts of the Product Review Opinion Ontology (PROO) in the third and final module of the proposed model.

The ontology with instance information serves as background knowledge to learn rule-based sentiments that are expressed on product features. These semantic rules are learned on both the taxonomical and non-taxonomical relations available in the PROO Ontology. In order to verify the mined rules, Inductive Logic Programming (ILP) is applied on PROO.

These rule-based sentiments provide significant information for using the relationships among the product features 'as-a-unit' to improve the sentiments of the parent features. These parent features are present at a higher level, near the root of the ontology. The sentiments of the related product features are also improved. This approach improves the sentiments of the parent features and the related features, which in turn improve the aggregated sentiment of the product.

This results either in a change in the position of the product in the list of similar products recommended, or in the product newly appearing in the recommended list. This helps the customer make the right purchase decisions.

5.2 Role of semantic rules
Conventional machine learning algorithms go through the data and learn the hypothesis.

Tree and rule based algorithms learn the hypothesis using the attribute-value pairs from the input data. The machine cannot go beyond the task of identifying features and opinions from the telugu reviews, as it has no prior knowledge to understand the relationships among the attributes and the context-specific constraints that hold among the product features and opinions. Semantic web ontology overcomes this problem.

Ontology encodes the relationships among the concepts of features and opinions with inequality constraints, semantic characteristics and cardinality restrictions. This ontology is used as background knowledge on the product telugu reviews. The knowledge mined from the ontology is expressed as semantic rules. These semantic rules emphasize the target sentiment expressed on the product feature.

Machines can then classify the product telugu reviews automatically with accurate sentiments learned on the product features.

5.3 Influence of sentiments on the semantic rules
Sentiment analysis plays a fundamental role in understanding the opinions in online telugu reviews. It captures the views of the public on the product, helps users take quick purchase decisions on the product, and improves the availability of the product in the market. Online telugu reviews influence the opinions of the readers.

Measuring the influence of sentiment on the semantic rules, as knowledge propagation, is performed to understand whether positive telugu reviews of a product spread faster than negative telugu reviews. The kind of opinions that are most representative of the product across various e-commerce sites is also identified.

Besides, the kind of sentiment expressed in telugu reviews, based on temporal changes in the features of the product, is determined in a principled way.

5.4 Semantic rules learning based on ontology mining
The chief goal of learning the target sentiment from the ontology information by the machine is to filter the telugu reviews into positive and negative classes based on the incoming feature request.

This is accomplished by sorting the product telugu reviews written on the feature hierarchy for intelligent decision making when a new customer wants to purchase a product. In order to realize this goal, three main tasks are presented. The first is the development of the Product Review Opinion Ontology (PROO) for product features and opinions.

The second task is to learn semantic rules using Semantic Data Mining (SDM) based algorithms, and the last is Inductive Logic Programming for learning the feature and opinion relations and verifying them against the semantic rules space. A detailed description of these associated tasks is given as a structured framework in Fig. 5.1 below.

Fig.

5.1. Semantic Data Mining

5.4.1 Development of the Product Review Opinion Ontology (PROO)
In computer science and information science, an ontology [41] formally represents knowledge as a set of concepts and the relationships among them within a domain. It is used to model a domain and support reasoning about concepts.

Formally, an ontology is the statement of a logical theory [102]. Some of the reasons to design an ontology are: i) to share a common understanding of the structure of information among people or software agents, ii) to enable reuse of domain knowledge and to make domain assumptions explicit, iii) to separate domain knowledge from operational knowledge, and iv) to analyze the domain knowledge. The Artificial Intelligence literature contains many definitions of ontology, and a number of these contradict each other.

An ontology is a formal explicit description of concepts in a domain of discourse (classes are sometimes called concepts), properties of each concept describing various features and attributes of the concept (slots are sometimes called roles or properties), and individual instances of classes. In practical terms, developing the ontology involves steps such as: i) determining the domain and scope of the ontology, ii) defining the classes in the ontology, iii) arranging the classes in a taxonomic (subclass–superclass) hierarchy, iv) defining slots and describing the allowed values for these slots, and v) filling in the slot values for instances. The PROO Ontology is built with the above specified steps. 5.4.1.1

The domain and scope of PROO Ontology
The domain that the PROO Ontology covers is all the product telugu reviews. The PROO Ontology is used to relate parts (for example, the camera of the smart phone) and part-features (for example, the zoom level of the smart phone camera) of the product (for example, the smart phone) as product features, and to identify the sentiment associated with a particular feature.

The PROO Ontology answers queries on the product features, the object present in the review, the feature on which the sentiment is expressed, and the opinion orientation of the review with respect to the review feature.

5.4.1.2 Classes in the PROO Ontology
The classes in the PROO Ontology are defined in a top-down approach, starting with the creation of the general classes, namely Fact, Opinion, Object, Review, Feature, Polarity, Sentiment, and so on.

Then the specific classes, namely ObjectPart, ObjectPartFeature and TotalOpinion, are created.

5.4.1.3 Arrangement of the classes in a taxonomic (subclass–superclass) hierarchy
The list of classes defined above forms the classes in the ontology and becomes the anchors of the class hierarchy. The organization of the classes is hierarchical.

OWL never imposes an order on the arrangement of simple classes, but it imposes an ordering constraint on the taxonomical classes. The disjoint OWL classes in the class arrangement are Fact and Opinion respectively. Misclassification of the taxonomical classes never occurs, as the disjoint classes Fact and Opinion have no common subclass.
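The disjointness of Fact and Opinion can be checked mechanically over the taxonomy. The miniature subclass map below is an illustrative stand-in for the OWL class hierarchy; the exact parent assignments are assumptions, not the published taxonomy.

```python
# Minimal stand-in for a PROO-like class taxonomy: each class maps to its superclass.
taxonomy = {
    "Opinion": "Thing", "Fact": "Thing",
    "Object": "Fact", "Review": "Fact", "Feature": "Fact",
    "ObjectPart": "Object", "ObjectPartFeature": "ObjectPart",
    "TotalOpinion": "Opinion",
}

def ancestors(cls):
    """Walk up the subclass-superclass chain."""
    chain = []
    while cls in taxonomy:
        cls = taxonomy[cls]
        chain.append(cls)
    return chain

def share_subclass(a, b):
    """True if some class is (transitively) a subclass of both a and b."""
    return any(a in ancestors(c) + [c] and b in ancestors(c) + [c] for c in taxonomy)

# Fact and Opinion are disjoint: no class descends from both.
print(share_subclass("Fact", "Opinion"))  # False
```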

The PROO Ontology class taxonomy is shown in Fig. 5.2.

Fig. 5.2. PROO Ontology class taxonomy

5.4.1.4 Slots and describing allowed values for these slots
The classes defined for the PROO Ontology alone never provide enough information to answer the competency questions. Once these classes are defined, the internal structures of these classes are described. These are called the properties of the class. They are also known as slots.

The object properties are namely expressFeature, hasObjectPart, hasObjectPartFeature, contains, isExpressedOn, isPartOf, mineFrom, hasPolarity, portrayObject and so on. The slots with allowed values of the Opinion class are shown in Fig. 5.3.

Fig. 5.3. Opinion slots with allowed values

5.4.1.5 The creation of instance values for classes
The last step is creating individual instances of the classes in the hierarchy.

Defining an individual instance of a class requires choosing a class, creating an individual instance of that class, and filling in the slot values. The class Feature in Fig. 5.4 is specified with an individual instance item and the value selected from the range class of that relationship.

Fig. 5.4. Feature class instance

The visualization of the PROO Ontology, with the classes, the slots with their allowed values and the slot relationships among the classes forming a knowledge base, is presented below in Fig. 5.5.

Fig. 5.5. Visualization of PROO Ontology

5.4.2

Semantic Data Mining the PROO Ontology for learning semantic rules
Semantic Data Mining (SDM) [27] is a data mining approach in which domain ontologies are used as background knowledge for data mining. Mining the engineered ontology in toto with the existing classification algorithms is a hard task. In order to simplify the process, the ontology with its corresponding instance data is mapped to relational database tables.

These tables are further analyzed for dependencies among them. This analysis leads to
the creation of a new table with relational dependencies. This table is used for learning
patterns/models. The analogies between ontologies and relational databases are taken into account in the semantic data mining process: instances correspond to records, classes correspond to tables, attributes correspond to record fields, and relations correspond to relations between the tables. The domain and range classes of the slots assert facts about individuals.

The slot name of a domain OWL class is considered to be the attribute and the range
OWL class is filled with its corresponding instance to that slot in the mapped RDBMS
table. The Opinion class in Ontology is an Opinion table in RDB with expressFeature,
portrayObject, mineFrom and hasPolarity slots as table attributes and the values of
these attributes are the instances of Feature, Object, Review and Polarity PROO
Ontology classes.
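The Opinion-class-to-Opinion-table mapping described above can be sketched with an in-memory relational table. The column layout follows the slots named in the text; the instance values inserted are illustrative assumptions.

```python
import sqlite3

# Ontology-to-RDB mapping sketch: the Opinion class becomes an Opinion table whose
# columns are its slots, and each row holds ontology instances as slot values.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Opinion (
    expressFeature TEXT,   -- instance of Feature
    portrayObject  TEXT,   -- instance of Object
    mineFrom       TEXT,   -- instance of Review
    hasPolarity    TEXT    -- instance of Polarity
)""")
conn.execute("INSERT INTO Opinion VALUES ('camera', 'smart phone', 'review_12', 'positive')")
conn.execute("INSERT INTO Opinion VALUES ('battery', 'smart phone', 'review_31', 'negative')")

rows = conn.execute("SELECT expressFeature, hasPolarity FROM Opinion").fetchall()
print(rows)  # [('camera', 'positive'), ('battery', 'negative')]
```

Once in this form, the instance data can be handed to conventional attribute-value learners, which is the point of the mapping.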

Conventional data mining learns a model from the patterns inherently hidden in the data and from the relations among the attributes in the dataset. Semantic Data Mining, in contrast to conventional data mining, mines the abundance of patterns available in the ontology and learns the semantic associations among the classes of the ontology.

The mining process is carried out by searching for those predicates which are available
in PROO Ontology that satisfy taxonomical and non-taxonomical constraints on the
ontology at the time of hypothesis rules space learning. In order to carry out this search
process, content constraints are defined over the classes (tables) and relationships
(relations) that are available in the ontology.

A content constraint is a rule that is able to capture additional knowledge from the
ontology in addition to the available abundant patterns. A content constraint is either a
taxonomical constraint or a non-taxonomical constraint. A taxonomical constraint is a
content constraint expressed on the predicate written between the classes that are
organized in the hierarchical manner in the ontology.

A non-taxonomical constraint is a content constraint expressed on the predicate written between the related classes. This form of learning semantic association rules, in the form of relationships among the data instances, was specified by Cláudia Antunes [15] for efficient association rule mining. The detailed procedures, expressed in algorithmic form, for learning taxonomical constraints and non-taxonomical constraints are presented below.

Input: Ontology {O}
Output: semantic rule {A → B}

LEARN_TAXONOMICAL_CONSTRAINT (Ontology O)
{
  for each node in the taxonomy from Ontology O
  {
    content_constraint = false;
    if (parentof(parentnode, childnode))
      content_constraint = true;
      write (parentof(parentnode, childnode) → targetclass(childnode));
    else if (parentof(parentnode1, childnode1) ^ parentof(parentnode2, childnode2)
             ^ childnode1 = parentnode2)
      content_constraint = true;
      write (parentof(parentnode1, childnode2) ^ datatypeproperty(parentnode1, rel(int))
             → targetclass(childnode2));
  }
}

Algorithm for learning taxonomical constraints from Ontology

The algorithm for learning taxonomical constraints works as follows: given the ontology, all the superclass and subclass hierarchies are identified. A superclass node is called the parent and a subclass node is called the child.

The rules are then generated from the predicate on the hierarchy as the relation between parent and child leading to the target class. The content constraint is initialized to the value false at the beginning and is changed to true once taxonomical constraints are obtained. The algorithm also checks the descendant child nodes in the hierarchy. A descendant node is a node that is derived from an ancestor node.

An ancestor node is the parent node in the given hierarchy. A parent node is followed by child nodes. All the child nodes of a given parent are said to be its descendant nodes. The intermediate parent node is made a new child node in the descendant hierarchy to satisfy the descendant property.
The rules are then generated from the predicate on the hierarchy as the relation between the parent, the newly formed child and the datatype property leading to the target class.

Input: Ontology {O}
Output: semantic rule {A → B}

LEARN_NONTAXONOMICAL_CONSTRAINT (Ontology O)
{
  for each node in Ontology O
  {
    content_constraint = false;
    if (objectproperty(nodei, nodej))
      content_constraint = true;
      write (objectproperty(nodei, nodej) → targetclass(nodei));
    else if (objectproperty(nodei, nodej) ^ [datatypeproperty(nodei, rel(int))
             v datatypeproperty(nodej, rel(int))])
      content_constraint = true;
      write (objectproperty(nodei, nodej) ^ datatypeproperty(nodej, rel(int)) → targetclass(nodej));
      write (objectproperty(nodei, nodej) ^ datatypeproperty(nodei, rel(int)) → targetclass(nodei));
  }
}

Algorithm for learning non-taxonomical constraints from Ontology

The algorithm for learning non-taxonomical constraints works as follows: given the ontology, all the related class nodes that are bound by object properties are identified.

The content constraint is initialized to the value false at the beginning and is changed to true once related class nodes are obtained. The rules are then generated with the object property as the relation between the related classes leading to the target class. The algorithm also checks for the relation between related classes and datatype properties.
The related class node and the datatype property are linked using the conditions imposed on the ontology. This is identified by the algorithm, and the content constraint value is changed to true. The rules are then generated from the relation between the object property and the datatype property leading to the target class.
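The core rule-emission step of the two procedures above can be rendered compactly in Python over a toy ontology. The node and property names below are illustrative assumptions; only the first branch of each pseudocode procedure is sketched.

```python
# Toy ontology: subclass edges and object properties between classes.
parent_of = [("Feature", "ObjectPartFeature"), ("Object", "ObjectPart")]
object_props = [("Opinion", "Feature", "expressFeature"),
                ("Opinion", "Review", "mineFrom")]

def learn_taxonomical(parent_of):
    """Emit one rule per subclass edge: parentof(p, c) -> targetclass(c)."""
    return [f"parentof({p},{c}) -> targetclass({c})" for p, c in parent_of]

def learn_nontaxonomical(object_props):
    """Emit one rule per object property: prop(a, b) -> targetclass(a)."""
    return [f"{prop}({a},{b}) -> targetclass({a})" for a, b, prop in object_props]

for rule in learn_taxonomical(parent_of) + learn_nontaxonomical(object_props):
    print(rule)
```

Each emitted string corresponds to one write() in the pseudocode; the datatype-property branches would add a conjunct to the rule antecedent in the same way.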

The Opinion, Feature, ObjectPart and ObjectPartFeature tables are used in generating the semantic rules on the Sentiment table, which state that the corresponding object part and object part-feature are features carrying a positively oriented sentiment when the opinion strength attribute on these features has a value equal to or greater than 2.5. The corresponding class hierarchies and the related classes of the PROO Ontology are presented in Fig. 5.6.

Fig. 5.6. Class Hierarchies and related classes in PROO

People trust a review by performing a basic statistical analysis of finding the percentage of people expressing a positive opinion on a particular feature. For the machine to perform the same statistical analysis, confidence intervals are to be computed.

The confidence that the opinion is expressed on the feature is established by computing the confidence intervals of the possible opinions expressed by people. The obtained range specifies with confidence that X% of people have expressed a positive opinion on the feature, with the opinion strength of the feature taken into account, and Y% of people have expressed an opinion against the feature, with the opinion strength of the feature taken into account.

The formula to calculate the confidence interval is:

p ± z * √(p(1-p)/n) … (5.1)

The value p is the proportion of people having a positive opinion on a feature, with the opinion strength of the feature taken into account, n is the sample size, and C.I. is the Confidence Interval [69] with the constant 'z' equal to 1.96 for the 95% confidence level and 2.58 for the 99% confidence level.
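Eq. (5.1) is the standard normal-approximation interval for a proportion. The following is a minimal sketch; the sample values p = 0.8 and n = 100 are assumptions for illustration.

```python
import math

# Confidence interval of Eq. (5.1) for the proportion of positive opinions.
def confidence_interval(p, n, z=1.96):
    """Return (lower, upper) bounds for proportion p over a sample of size n."""
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# Assumed example: 80% of 100 reviewers expressed a positive opinion on the feature
low, high = confidence_interval(0.8, 100)
print(round(low, 4), round(high, 4))  # 0.7216 0.8784
```

With z = 2.58 the same call yields the wider 99% interval, matching the constants quoted above.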

To test whether the newly added property influences the opinion strength attribute, an independent t-test is performed. Further, to test whether the newly added property affects the Sentiment class, a Binary Logistic Regression analysis is carried out. The statistics are presented in Fig. 5.7 below.

Fig. 5.7.

Independent t-test and Binary Logistic Regression to know the influence of the new property on the sentiment class

The statistical analysis revealed that the opinion strength attribute affects the Sentiment class in classifying the telugu reviews. This is because, at the 95% confidence interval of the mean difference between the positive sentiment and negative sentiment class values, the lower confidence limit value and the upper confidence limit value show a higher deviation.

Additionally, the newly added property "hasStatisticalTrust" is not influenced by the opinion strength attribute and has not changed the data model. This is because, at the 0.05 level of significance for performing the test, the t-test resulted in a 0.1 level of significance value for the "hasStatisticalTrust" property.

The "hasStatisticalTrust" property is useful for making the machine learn the numerical trust information, and it provides information at the time of review classification. The model learned as a Decision Tree from the instances of the ontology is presented in Table 5.1.

Table 5.1. Decision Tree Model learning

Machine Learning inputs                                           Values considered/generated
Classifier learned                                                ID3 Decision Tree
Splitting Attribute                                               Strength
Number of examples classified under Good Sentiment class label    14
True Positives Percentage                                         93%
Application of Rule Selection                                     Yes

5.4.3

Inductive Logic Programming for verification of the mined semantic rules
Given some positive feature instances and negative feature instances, Inductive Logic Programming learns rules by using the PROO Ontology as background knowledge, such that the rule covers all the positive instances and none of the negative instances. A rule satisfying the former condition is said to be complete, and a rule satisfying the latter condition is said to be consistent.

The model learned as C4.5 rules from the instances is tabulated in Table 5.2.

Table 5.2. C4.5 Rule learning

Machine Learning inputs                                           Values considered/generated
Classifier learned                                                C4.5 Rules
Splitting Attribute                                               Strength
Number of examples classified under Good Sentiment class label    14
True Positives Percentage                                         93%
Application of Rule Selection                                     Yes

These ILP rules learned are considered to follow Occam’s razor
principle [111] of finding the hypothesis that best fits (the disjunctive expression in the
rule antecedent provides the simplest possible explanation for learning the rule
consequent) the instance data.

It is observed that these rules are consistent with the rules mined from the PROO Ontology. Both of these rule forms are found to learn the positive sentiment class label for the positive instances of the product feature. The ID3 decision tree algorithm and the C4.5 algorithm are used for learning the sentiment based semantic rules. The ID3 algorithm builds the classifier quickly from the data.
It is observed from the results that this classifier is more accurate. C4.5 algorithm creates
understandable prediction rules. On the other hand, the popular classifiers from
machine learning research community namely Naïve Bayes [42] classifier and Support
Vector Machines (SVM) [10] classifier are not used for learning semantic rules.
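The ID3 split on the Strength attribute reported in Tables 5.1 and 5.2 amounts to choosing the partition with the highest information gain. The following is a hand-computed sketch; the example pairs and the 2.5 threshold placement are illustrative assumptions.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(examples, split):
    """Entropy reduction achieved by partitioning the examples with `split`."""
    base = entropy([lab for _, lab in examples])
    groups = {}
    for value, lab in examples:
        groups.setdefault(split(value), []).append(lab)
    remainder = sum(len(g) / len(examples) * entropy(g) for g in groups.values())
    return base - remainder

# (opinion strength, sentiment label) pairs -- illustrative, not the thesis data
examples = [(3.1, "good"), (2.8, "good"), (2.6, "good"),
            (1.9, "bad"), (1.2, "bad"), (2.9, "good")]

# Splitting on Strength >= 2.5 separates the two classes perfectly here,
# so the gain equals the base entropy (~0.918).
gain = information_gain(examples, lambda s: s >= 2.5)
print(round(gain, 3))
```

A perfect split recovers all of the base entropy, which is why Strength emerges as the splitting attribute in the learned tree.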

It is observed that with the Naïve Bayes classifier, a larger number of bad sentiment examples are wrongly predicted as good sentiment examples, which indicates a high false positive rate. It is also observed that the accuracy of the SVM classifier is lower on this data, as the size of the dataset after filtering out the taxonomical and non-taxonomical feature based examples is small. 5.5

Improving product recommendations using the learned semantic sentiments
The product features that are present near the ontology root are the parent features obtained from the taxonomical constraints, along with the other features present at the same level as the parent features. The other features are obtained from the non-taxonomical constraints. In order to achieve this goal, a framework is presented in Fig. 5.8. The framework is composed of main components.

The improvement of the sentiments of the product features, using the knowledge mined from the PROO Ontology for improved product recommendations, is shown in the diagram below.

Fig. 5.8. Model for improving the sentiments of the product features

5.5.1 Improving the product recommendations using rule based sentiments from ontology
The rule based sentiments mined from the PROO Ontology specify the relations between the parent feature and the child feature. They also reveal the relations among the related product features.

The opinion strength of the feature, for which the sentiment is to be determined by the machine, also carries its significance in the rule. Additionally, the sentiments computed for each of the product features after extraction from the telugu reviews are stored separately for further mapping. The detailed technique for improving product recommendations is expressed in algorithmic form below.

In the algorithm the symbols are as follows: O is the PROO Ontology. Pi is the product, where i = 1,2,3,…. The sentiment of the product feature Fj of the product Pi is represented as Sentiment(Fj,Pi), where j = 1,2,3,…. The Pos(Fj,Pi), Neg(Fj,Pi) and Neu(Fj,Pi) are the positive, negative and neutral product feature mentions. The count() is the number of occurrences of a polarity kind. parentof(Fjkparent_node, Fjkchild_node) is the feature hierarchy in the ontology. Objectproperty(nodea,nodeb) is the fact about related product features. Strength(node,rel(int)) is the sentiment strength of the feature that is present in the review. The depth of a node in the ontology and the height of the ontology are the ontology tree measures.

Algorithm for product predictions using ontological sentiments

The algorithm for recommending products works as follows: given the product to be searched by the end user in the E-Commerce site, all the similar products are recommended. Initially, the algorithm retrieves all the similar products' information from the ontology with respect to the user-searched product.

The common product features of the retrieved products and the searched product are called 'k-features'. Next, for each of the k-features, the corresponding sentiment is calculated by using the number of positive mentions and the number of negative mentions of the feature. Whenever a neutral mention is identified, it is also counted and used in the sentiment calculation.
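The per-feature sentiment calculation from mention counts can be sketched as below. The exact formula is not spelled out in the text; the normalized difference used here is an assumption that happens to reproduce the battery (1) and RAM (0.09) scores reported later in this section:

```python
def feature_sentiment(pos, neg, neu=0):
    """Normalized mention-count sentiment in [-1, 1].

    ASSUMPTION: the thesis does not state its formula; this normalized
    difference of positive and negative mentions over all mentions is a
    plausible sketch that matches the reported battery and RAM scores.
    """
    total = pos + neg + neu
    if total == 0:
        return 0.0
    return (pos - neg) / total

print(feature_sentiment(6, 0))            # battery: 6 pos, 0 neg -> 1.0
print(round(feature_sentiment(6, 5), 2))  # RAM: 6 pos, 5 neg -> 0.09
```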

Then the taxonomical and non-taxonomical sentiment rules on the product features are retrieved from the ontology. The target sentiment instances Positive and Negative are mapped to the minimum and maximum sentiment scores of the product features to create a sentiment range. The product 'Samsung Galaxy S9prime' has battery and battery life as one of its pairs of taxonomical features.

The number of positive mentions and negative mentions of the battery are 6 and 0; there are no neutral mentions. The number of positive and negative mentions of the battery life is 1 and 0; again there are no neutral mentions. The sentiment scores obtained after calculation for battery and battery life are 1 and 1 respectively.

The sentiment strengths for battery and battery life obtained from the review dataset are 3 and 3. By applying these features as instances in the taxonomical sentiment rule, the semantic sentiment learned is positive. The sentiment scores of battery and battery life are now mapped to the Positive sentiment label.

The product 'Samsung Galaxy S9prime' has RAM and performance as one of its pairs of non-taxonomical features. The number of positive and negative mentions of the RAM is 6 and 5; there are no neutral mentions. The number of positive and negative mentions of the performance is 6 and 2; again there are no neutral mentions.

The sentiment scores obtained after calculation for RAM and performance are 0.09 and 0.1 respectively. The sentiment strengths for RAM and performance obtained from the review dataset are 2.5 and 2.5. By applying these features and sentiment strength values as instances in the non-taxonomical sentiment rule, the semantic sentiment learned is positive.

The sentiment scores of RAM and performance are now mapped to the Positive sentiment label. The similar products are retrieved from the ontology by querying the 'similarTo' object property for the corresponding instance values of the user-searched product.

Now, for every k-feature among all the retrieved products in the ontology, whenever there exists any taxonomical constraint and the sentiment of the parent feature node in the ontology is less than the sentiment of the child feature node, the sentiment of the parent feature node is updated by adding the weighted sentiment of the child feature node. The weight is the depth of the child feature node in the ontology.

This kind of analysis is possible according to [6], which says that the importance of a feature is determined by the depth of the feature in the ontology. This analysis treats the taxonomical features 'as-a-unit'. Whenever the sentiment of the parent feature node is equal to the sentiment of the child feature node, no update is done on these nodes.
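The taxonomical update step described above can be sketched as follows. This is a hypothetical paraphrase of the rule, not the thesis implementation; the function name and arguments are illustrative:

```python
def update_taxonomical(parent_s, child_s, child_depth):
    """Taxonomical rule sketch: when the parent feature's sentiment is
    below the child's, add the child's sentiment weighted by the child's
    depth in the ontology tree; equal sentiments are left untouched.

    ASSUMPTION: the exact weighting is paraphrased from the text
    ('weighted by the depth of the child feature node'), so this is a
    sketch, not a verified implementation.
    """
    if parent_s < child_s:
        return parent_s + child_depth * child_s
    return parent_s

# battery (parent) vs battery life (child): both score 1, so no update.
print(update_taxonomical(1.0, 1.0, child_depth=2))  # 1.0
```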

Once all the taxonomical constraints are analyzed, the non-taxonomical constraints are analyzed as well. The non-taxonomical constraints are analyzed to learn the related features and their contribution to the sentiment values. When the sentiments of the related nodes are less than or equal to zero, the sentiments of the related nodes are updated by adding a ratio.

The ratio is 1/100th of the height of the ontology, which keeps the score inside the sentiment range. The height of the ontology is added to the existing sentiment score because the related nodes can be present at any level of the ontology other than the root. The product 'Samsung Galaxy S9prime' has sentiment scores of 1 and 1 respectively for battery and battery life after calculation.

There is no update in the sentiment value for either of these features. This is because the sentiment values for the parent feature (battery) and the child feature (battery life), which fall under the taxonomical constraints, are equal. The product 'Samsung Galaxy S9prime' has sentiment scores of 1 and 0 respectively for screen and display after calculation.

There is an update in the sentiment value for the feature 'display'. This is because the sentiment value of display is equal to zero. The updated sentiment value for the feature 'display' is 0.03. The product features screen and display fall under the non-taxonomical constraints. Finally, the products are sorted in the descending order of their improved sentiments. The sorted list is given as the product recommendations to the user.

5.6 Design decisions in the implementation of ontology

Description Logic (DL) is used in reasoning over the instances of the ontology; DL is the mathematics behind the constructs of the ontology. The constructed PROO ontology has the DL expressivity level ALCIN(D), that is, Attribute Logic with Complement, Role Inverse, Unqualified Number Restriction and Datatype.

This ontology is dynamically scalable, and the rules learned from it are computationally feasible in polynomial running time, i.e., PTIME. The target sentiment, which is learned as the rule consequent on the object properties of the PROO ontology, is decidable as the rules are deducible in PTIME. Additionally, the learned rules are DL-safe, as these rules are restricted to the known instances of the ontology.

Several issues were encountered at the time of the PROO ontology development. The development depended on design decisions taken at two stages: the decisions made before the ontology development, and the decisions made at the time of ontology development. The first design decision before the development of the ontology concerned the scope of the ontology, so that it represents the appropriate knowledge for conceptualization.

In the product telugu reviews domain, the PROO ontology was proposed to help new customers in retrieving the object information from an enormous number of telugu reviews by reasoning along the item property ontology path. The second design decision was on adhering to the development of a formal ontology, so that the ontology could be reasoned over for drawing meaningful conclusions.

The PROO ontology was created using the formal Web Ontology Language (OWL) constructs. The third design decision was whether or not to annotate the product features and opinions extracted from the telugu reviews as instances of the concepts of the ontology. The design decision taken during the development of the ontology was to choose the required superclass-subclass taxonomies in the ontology.

The taxonomies created in the development of the PROO ontology were the hierarchy of the product features and the PoS word class tags. For certain queries on the PROO ontology it was observed that the retrieved information is inaccurate. The same instance being used in analyzing the different product telugu reviews led to the aforementioned issue.

5.7 Evaluation of Results

The datasets used in the feature-specific sentiment classification and the knowledge based product recommendations were the collection of electronic device telugu reviews from Amazon. The electronic devices were the Iphone X plus, Oppo f7 plus and Samsung Galaxy S9prime smart phones. These products were labeled as P1, P2 and P3 respectively.

The selection of telugu reviews was done in such a way that each review contains a mention of the product features. Table 5.3 presents the details of the datasets used for this experiment.

Table 5.3. Telugu reviews Dataset details

Document attributes            Values
Number of review documents     300
Minimum sentences per review   9
Maximum sentences per review   15

The pre-processing of the telugu reviews was done by removing stop words and non-English words.

Negation words were present near the adjectives in some review sentences, and those were handled carefully. For those review sentences, the sentiment orientation of the word was captured by simply flipping the actual sentiment. The product features and opinions extracted from the considered smart phone telugu reviews using the NLP based language model and the LDA based language model are collected. The PROO ontology is constructed and annotated with the collected product features and opinions.
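The negation-flipping pre-processing step can be sketched as follows. This is a simplified single-token window; the negation list and function names are illustrative assumptions, and the thesis applies the step to Telugu review text:

```python
NEGATIONS = {"not", "no", "never", "hardly"}  # illustrative list

def adjust_orientation(tokens, adjective_index, orientation):
    """Flip the sentiment orientation (+1/-1) of an adjective when a
    negation word appears immediately before it; otherwise keep it."""
    if adjective_index > 0 and tokens[adjective_index - 1] in NEGATIONS:
        return -orientation
    return orientation

tokens = "the battery is not good".split()
print(adjust_orientation(tokens, tokens.index("good"), +1))  # -1
```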

Only a single product type is used for the rule based sentiment analysis, as the PROO ontology is created for a class of smart phones of various makers. ILP rules are learned from the PROO ontology. The rule antecedent is learned by forming a conjunction of the PROO ontology classes and the relevant domains which relate to these classes.

The class instances and the slot values are considered for learning the target sentiment class instance, which is the rule consequent. The obtained rules cover the positive examples of the product feature. The evaluation of the learned rules is visualized with the help of the Area Under the Receiver Operating Characteristic Curve (AUC) presented in Fig. 5.9.

Fig. 5.9. Area under ROC Plot with classifier accuracy

The AUC is a measure to exhibit the telugu reviews covered by both of the two sentiment groups (good/bad) available from the dataset. The parameters of the Receiver Operating Characteristic (ROC) curve are the target class label and the ranking attribute. The target instance considered is 'good' for the sentiment class, and the ranking attribute is the sentiment strength. An ROC area coverage (accuracy) of 91.67% is obtained. The k-features identified after the user searches for the Iphone X plus are listed below in Table 5.4. The value of k found is 17. The similar products are the Oppo f7 plus and the Samsung Galaxy S9prime.

Table 5.4. List of k-features

Phone, Rom, Battery, Performance, Os, Brand, network connectivity, Camera, Price, build quality, Touch, Screen, battery life, camera quality, Appearance, Display, Ram

The algorithm computes the sentiments of all three phone products on these seventeen features. Next, the algorithm retrieves all the taxonomical and non-taxonomical constraints for learning the feature sentiments from the ontology as rules.

In this work, the height of the PROO Ontology is 3, and the depth of the child feature node in the ontology tree for the taxonomical sentiments is 2. In order to evaluate the sentiments of the k-features for recommending products, the similarity metrics Cosine Similarity [4] and Better [88] are considered. A small number of k-features limits the ability to compare the products during retrieval. This leads to an issue called the 'sparsity problem'.

This problem is common in collaborative filtering systems. An empirical analysis is done to understand the impact of small k values on the product recommendations. The scatter plot of the percentage of products for various k values with respect to the searched product is presented in Fig. 5.10 below.

Fig. 5.10. Scatter plot for the percentage of products with different k values

It is seen from the above figure that at a k value of 1, product recommendations are not possible, as all of the products have the same similarity value.

It is also seen that no single product is recommended with the available k-features, as the products compete with respect to the sentiments on these k-features. From k=2 through 17, the product recommendations start. In order to understand the product recommendations for small k values, the cosine similarity values for k values 1, 2 and 3, without and with the ontology, are tabulated in Table 5.5 below.

Table 5.5. Cosine similarity values for small k

      Without Ontology               With Ontology
k     Cosine(P1,P2)  Cosine(P1,P3)   Cosine(P1,P2)  Cosine(P1,P3)
1     1              1               1              1
2     0.86           0.89            0.86           0.89
3     0.69           0.94            0.69           0.94

An important observation is made from the above table: the cosine similarity values without and with the ontology are the same for small values of k.
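The cosine similarity over k-feature sentiment vectors can be sketched as below. Only the k = 1 case mirrors Table 5.5; the k = 2 vectors are invented for illustration:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two k-feature sentiment vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# k = 1: every product carries one feature sentiment, so every pairwise
# similarity is 1 and no ranking is possible (first row of Table 5.5).
print(cosine([1.0], [1.0]))  # 1.0

# Illustrative k = 2 sentiment vectors (not the thesis data):
print(round(cosine([1.0, 0.2], [0.9, 0.6]), 2))  # 0.92
```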
The influence of the taxonomical and non-taxonomical constraints on the product recommendations is reflected from a k value of 4. Different values of 'k' provide useful understanding of the products' comparison for eventual recommendations. The variations in the number of k-features on the similar products, using sentiments without the ontology and with the ontology, are tabulated in Table 5.6 below.

Table 5.6. Better and Cosine Similarity Measures statistics for analyzing similarities between products

      Without Ontology                                             With Ontology
k     Better(P1,P2)  Cosine(P1,P2)  Better(P1,P3)  Cosine(P1,P3)   Better(P1,P2)  Cosine(P1,P2)  Better(P1,P3)  Cosine(P1,P3)
4     -0.0275        0.87           -0.0075        0.79            -0.0275        0.75           -0.0075        0.95
8     -0.00063       0.61           -0.08938       0.45            -0.00063       0.33           -0.08938       0.52
12    0.037083       0.54           0.044583       0.51            0.037083       0.54           0.025874       0.51
17    0.099705       0.29           0.058235       0.48            0.099705       0.29           0.035866       0.49

The higher Better values in the relative comparison with the searched product indicate that the product is at the top of the recommendation list.

The lower cosine values in the relative comparison with the searched product indicate that the product is at the top of the recommendation list. The sentiments of the k-features on the three products without the ontology are shown in Fig. 5.11 below.

Fig. 5.11. Sentiments of k-features of similar products in the absence of ontology

The product similarity with the sentiment information on the similar products without the support of the ontology is shown in Fig. 5.12 below.

Fig. 5.12. Products Comparison with the searched product in the absence of ontology

The sentiments of the k-features on the three products in the presence of the ontology are displayed in Fig. 5.13 below.

Fig. 5.13. Sentiments of k-features of similar products in the presence of ontology

The product similarity with the sentiment information on the similar products with the support of the ontology is displayed in Fig. 5.14 below.

Fig. 5.14. Products comparison with the searched product in the presence of ontology

The product recommendations based on the Cosine similarity measures, with and without ontology support, for various 'k' values are given in Table 5.7 below.

Table 5.7. Product recommendations

k     Without Ontology                    With Ontology
4     Oppo f7 plus, Samsung Galaxy j7     Samsung Galaxy j7, Oppo f7 plus
8     Oppo f7 plus, Samsung Galaxy S9     Samsung Galaxy j7, Oppo f7 plus
12    Oppo f7 plus, Samsung Galaxy S9     Oppo f7 plus, Samsung Galaxy S9
17    Samsung Galaxy j7, Oppo f7 plus     Samsung Galaxy j7, Oppo f7 plus

From the results in the above table it is seen that, without the support of the ontology, for various values of 'k' (4, 8, 12) the cosine similarity returned the similar products as recommendations in the same order (product P2 comes first in the list and then product P3) by using the sentiments on the k-features. The product with the higher cosine value between two similar products is shown first in the recommendations list.

For a k value of 17, the order of the product recommendations changed. This is because product P3 has a higher cosine value and P2 has a lower cosine value when compared with the searched product. When the ontological knowledge is utilized in the product recommendations analysis, the sentiments of the taxonomical features ({battery, battery life} and {camera, camera quality}) are not changed, as the sentiments of the parent features are greater than the sentiments of the child features in the taxonomy.

The sentiments of the non-taxonomical features (in this work the related features are {RAM, Smart Phone performance}, {brand, price} and {screen, display}) are improved in the similar products' k-features by using the recommendation algorithm. It is seen that the order of the product recommendations after improving the sentiments of the related features changed for two k values (for the values 4 and 8). This is because the related sentiments of product P3 have improved, so they show a higher cosine value than product P2.

This demonstrates the improvement in the product recommendations.

5.8 Conclusion

The last experiment carried out in this research is presented. The role of semantic rules in sentiment learning is discussed, and the influence of sentiments on semantic rules is also examined. The algorithms for learning the taxonomical and non-taxonomical constraints are explained and the results are tabulated. Additionally, the algorithm for improving the product sentiments using the learned taxonomical and non-taxonomical constraints for product recommendations is explained and the results are tabulated. The design decisions in the implementation of the PROO ontology are discussed, along with several observations from the experiment.

Chapter 6 Comparative Models

6.1 Introduction

The experiments carried out in this thesis were designed to evaluate the product review analysis model by comparing the extracted product features, and also to analyze the sentiments produced by the proposed model against the baseline models. The experiments used Amazon E-commerce product telugu reviews.

The performance achieved by the product telugu reviews analysis model was evaluated based on the accuracy of the extracted product features, thus resulting in better recommendations of the products based on the semantic sentiments learned on the product features. The experimental evaluation of the proposed model is compared with the baseline models evaluated on similar product telugu reviews from different vendors.

Further, a comparative analysis is performed on semi-automatic feature extraction against ontology supported feature extraction in NLP, and the same analysis is performed in LDA. Furthermore, a comparative analysis is performed on ontology-less product sentiments against ontology supported product sentiments for product recommendations.

Finally, the proposed model is compared with the existing methods in ontology based sentiment classification.

6.2 Comparison based on similar product telugu reviews by different vendors

The comparative results based on similar product telugu reviews by different vendors are discussed below. The product telugu reviews obtained from Amazon were used for this comparison. Three products' telugu reviews were considered for conducting this experiment. The Iphone X plus, Oppo f7 plus and Samsung Galaxy S9prime smart phones were the products whose telugu reviews were considered for analysis. The labels given to these three datasets were D1, D2 and D3 respectively.

Hereafter, in this discussion, the proposed model is referred to as the VT model, for the Vishnu and Tarani approach, and the baseline model is referred to as the HL model, for the Hu and Liu [40] approach to extracting frequent nouns. The precision, recall and F1-score details in the comparison of frequent-noun extraction are tabulated in Table 6.1.

Table 6.1. Precision, recall and F1-score comparison of frequent nouns

          Precision (%)        Recall (%)           F1-Score (%)
Datasets  VT Model  HL Model   VT Model  HL Model   VT Model  HL Model
D1        86.3      55.2       79.1      67.1       82.5      60.5
D2        87.8      57.3       80.5      65.2       83.9      61
D3        85        56.3       82.9      73.1       83.9      63.6
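The F1-scores in Table 6.1 are the harmonic mean of the tabulated precision and recall, which can be checked with a short sketch:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (values in percent)."""
    return 2 * precision * recall / (precision + recall)

# Reproduces the VT-model D1 row of Table 6.1: P = 86.3, R = 79.1 -> F1 = 82.5
print(round(f1_score(86.3, 79.1), 1))  # 82.5
```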

The frequent nouns extracted using the VT model on the D1 dataset achieved a precision of 86.3%, a significant increase of 31.1% compared with the HL model for frequent-noun extraction. The precision obtained on the D2 dataset using the VT model was 87.8%, a significant increase of 30.5% compared with the HL model. Finally, the precision obtained on the D3 dataset using the VT model was 85%, a significant increase of 28.7% compared with the HL model. This is because the VT model extracted frequent nouns using the empirical threshold obtained by analyzing the three datasets. The comparative results are plotted in Fig. 6.1 below. It is observed from Fig. 6.1 that the VT model outperforms the HL model in terms of precision.

Fig. 6.1. Precision, recall and F1-score comparison of frequent nouns

The precision, recall and F1-score details in the comparison of relevant-noun extraction are tabulated in Table 6.2.

Table 6.2. Precision, recall and F1-score comparison of relevant nouns

          Precision (%)        Recall (%)           F1-Score (%)
Datasets  VT Model  PE Model   VT Model  PE Model   VT Model  PE Model
D1        93.4      75         77.1      75.3       84.4      75.1
D2        92        72         79.2      76         85.1      73.9
D3        92.8      57.8       72.6      69.6       81.4      63.2

The proposed model is referred to as the VT model and the baseline model is referred to as the PE model, for the Popescu and Etzioni approach [82] to extracting relevant nouns. The relevant nouns extracted using the VT model on the D1 dataset achieved a precision of 93.4%, a significant increase of 18.4% compared with the PE model for relevant-noun extraction. The precision obtained on the D2 dataset using the VT model was 92%, a significant increase of 20% compared with the PE model. Finally, the precision obtained on the D3 dataset using the VT model was 92.8%, a significant increase of 35% compared with the PE model, as the VT model extracted the features of the parts of the product as relevant nouns. The comparative results are plotted in Fig. 6.2 below. It is observed from Fig. 6.2 that the VT model outperforms the PE model in terms of precision.

Fig. 6.2. Precision, recall and F1-score comparison of relevant nouns

The precision, recall and F1-score details in the comparison of implicit-noun extraction are tabulated in Table 6.3.

Table 6.3. Precision, recall and F1-score comparison of implicit nouns

          Precision (%)        Recall (%)           F1-Score (%)
Datasets  VT Model  CS Model   VT Model  CS Model   VT Model  CS Model
D1        76.4      71         84.3      79         80.1      74.8
D2        85.7      72         82.1      76         83.8      73.9
D3        80        74         85.8      80         82.7      76.9

The proposed model is referred to as the VT model and the baseline model is referred to as the CS model, for the Cambria and Soujanya approach [94] to extracting implicit nouns. The implicit nouns extracted using the VT model on the D1 dataset achieved a precision of 76.4%, a considerable increase of 5.4% compared with the CS model for implicit-noun extraction. The precision obtained on the D2 dataset using the VT model was 85.7%, a significant increase of 13.7% compared with the CS model. Finally, the precision obtained on the D3 dataset using the VT model was 80%, a considerable increase of 6% compared with the CS model. The comparative results are plotted in Fig. 6.3 below. It is observed from Fig. 6.3 that the VT model outperforms the CS model in terms of precision.

Fig. 6.3. Precision, recall and F1-score comparison of implicit nouns

6.3 Comparison between semi-automatic feature extraction and ontology based feature extraction in NLP

The semi-automatic features are those that were extracted using the patterns learned from the dataset; the patterns are learned from the dataset with the help of human intervention. The comparative results based on semi-automatic feature extraction and ontology based feature extraction in NLP are discussed below.

The product telugu reviews obtained from Amazon were used for this comparison. Three products' telugu reviews were considered for conducting this experiment. The Iphone X plus, Oppo f7 plus and Samsung Galaxy S9prime smart phones were the products whose telugu reviews were considered for analysis. The labels given to these three datasets were D1, D2 and D3 respectively.

Hereafter, in this discussion, the proposed model is referred to as the VT model, and the baseline models are referred to as the CS model, for the Cambria and Soujanya approach [94] to extracting both explicit and implicit product features in an automatic manner, and the PH model, for the Pei and Hongwei approach [80] to extracting both explicit and implicit product features using an ontology. The average accuracy details in the comparison of semi-automatic feature extraction are tabulated in Table 6.4.

Table 6.4. Average accuracy comparison of semi-automatic features extraction in NLP

Product                 Automatic Feature Extraction Average Accuracy (in %)
VT model on D1,D2,D3    80
CS model on D1,D2,D3    72

The average accuracy on the three review datasets using the VT model is 80%, a significant increase of 8% compared with the CS model. This is because the VT model extracts the product features using a step-by-step approach.

The model ensures the retrieval of the maximum number of product features. The average accuracy details in the comparison of ontology based feature extraction are tabulated in Table 6.5.

Table 6.5. Average accuracy comparison of ontology based features extraction in NLP

Product                 Ontology based Feature Extraction Average Accuracy (in %)
VT model on D1,D2,D3    93
PH model on D1,D2,D3    87.3

The average accuracy on the three review datasets using the VT model is 93%, a considerable increase of 5.7% compared with the PH model. The PROO ontology identifies the product features from the online telugu reviews written in the English language, whereas the PH model identifies product features written in Chinese product telugu reviews, as that ontology was developed in the Chinese language. The researchers of the PH model applied their ontology after converting the three datasets into the Chinese language. This signifies that, with the support of the PROO ontology, a better percentage of product features is extracted on the three datasets in NLP.

6.4 Comparison between semi-automatic feature extraction and ontology based feature extraction in LDA

The comparative results based on semi-automatic feature extraction and ontology based feature extraction in LDA are discussed below. The product telugu reviews obtained from Amazon were used for this comparison. Three products' telugu reviews were considered for conducting this experiment.

The Iphone X plus, Oppo f7 plus and Samsung Galaxy S9prime smart phones were the products whose telugu reviews were considered for analysis. The labels given to these three datasets were D1, D2 and D3 respectively. Hereafter, in this discussion, the proposed model is referred to as the VT model, and the baseline models are referred to as the AO model, for the Alice Oh approach [116] to extracting the product features in an automatic manner, and the SM model, for the Mukherjee approach [99] to extracting the product features using an ontology.

The average accuracy details in the comparison of semi-automatic feature extraction are tabulated in Table 6.6.

Table 6.6. Average accuracy comparison of semi-automatic features extraction in LDA

Product                 Automatic Feature Extraction Average Accuracy (in %)
VT model on D1,D2,D3    85
AO model on D1,D2,D3    78

The average accuracy on the three review datasets using the VT model is 85%, a considerable increase of 7% compared with the AO model. This is because the VT model extracts the product features using the traditional LDA with a careful selection of the LDA priors α and β. This has a significant impact on the learned product features.

The average accuracy details in the comparison of ontology based feature extraction are tabulated in Table 6.7.

Table 6.7. Average accuracy comparison of ontology based features extraction in LDA

Product                 Ontology based Feature Extraction Average Accuracy (in %)
VT model on D1,D2,D3    88
SM model on D1,D2,D3    73

The average accuracy on the three review datasets using the VT model is 88%, a significant increase of 15% compared with the SM model. The PROO ontology not only identifies the weighted product features as in LDA but also extracts other product features from the online telugu reviews, which helps in improving the analysis of the opinions. The SM model, on the other hand, identifies the product features that frequently occur in the product telugu reviews and depends on maximum coverage of the review vocabulary by LDA. This signifies that, with the support of the PROO ontology, a better percentage of product features is extracted on the three datasets in LDA.

6.5 Comparison between ontology-less product sentiments and ontology supported product sentiments for product recommendations

The comparative results based on ontology-less product sentiments and ontology based product sentiments for product recommendations are discussed below.

The product telugu reviews obtained from Amazon were used for this comparison. Three products' telugu reviews were considered for conducting this experiment. The Iphone X plus, Oppo f7 plus and Samsung Galaxy S9prime smart phones were the products whose telugu reviews were considered for analysis. The labels given to these three datasets were D1, D2 and D3 respectively. The count based sentiments of the product features on the three datasets are calculated.

These feature sentiments are visualized in Fig. 6.4, Fig. 6.5 and Fig. 6.6 below.

Fig. 6.4. Count based sentiments of the product features of the Iphone X plus smartphone

Fig. 6.5. Count based sentiments of the product features of the Oppo f7 plus smartphone

Fig. 6.6. Count based sentiments of the product features of the Samsung Galaxy S9prime smartphone

These count based sentiments of the product features are analyzed for the taxonomical and non-taxonomical relationships among the product features with the support of the sentiment rules learned from the PROO ontology. In order to recommend products to the customer, the similar products are to be compared with each other.

This comparison internally takes place on the k-features of the products. The comparisons of the product sentiments on the 'k-features', with and without using the ontology, are illustrated in Fig. 6.7 and Fig. 6.8 below.

Fig. 6.7. Comparison of product sentiments on k-features without using ontology

Fig. 6.8.

Comparison of product sentiments on k-features using ontology The above figures


specify that with the support of sentiment rules learned from ontology there is an
improvement in the count based sentiments. In order to compare the similar products
the Better and Cosine similarity measures are used. Different values for ‘k’ provide the
useful understanding about the products comparison for eventual recommendations.
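Ranking similar products by cosine similarity on their k-feature sentiment vectors can be sketched as follows. The sentiment vectors are illustrative values, not the experimental data; only the ranking step is shown.

```python
import math

def cosine(u, v):
    """Cosine similarity between two k-feature sentiment vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Sentiments of the searched product and two candidate products on k = 4
# features (hypothetical values for illustration).
searched = [5, 3, 4, 2]
candidates = {"P2": [4, 3, 5, 1], "P3": [1, 4, 2, 5]}

# Products are recommended in decreasing order of cosine similarity:
# the product with the higher cosine value appears first in the list.
ranked = sorted(candidates, key=lambda p: cosine(searched, candidates[p]),
                reverse=True)
print(ranked)  # -> ['P2', 'P3']
```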

The comparison of the Better and Cosine similarity measures for different k-features with and without using ontology is visualized in Fig. 6.9 and Fig. 6.10 below.

Fig. 6.9. Comparison of Better and Cosine similarity measures for different k-features without using ontology
Fig. 6.10. Comparison of Better and Cosine similarity measures for different k-features using ontology

From the outcomes it is observed that, without the support of the ontology, for different values of 'k' (4, 8, 12) the cosine similarity returned the similar products as recommendations in the same order (product P2 comes first in the list, followed by product P3) by using the sentiments on the k-features.

The product with the higher cosine value between two similar products appears first in the recommendation list. For a k value of 17, the order of the product recommendations changed. This is because product P3 has a higher cosine value, and P2 a lower cosine value, when compared with the searched product.

When the ontological knowledge is used in the product recommendation analysis, the sentiments of the taxonomical features ({battery, battery life} and {camera, camera quality}) are not changed, as the sentiment values of the parent features are greater than the sentiments of the child features in the taxonomy. The sentiments of the non-taxonomical features (in this work the related features are {RAM, smartphone performance}, {brand, price} and {screen, display}) are improved in the k-features of the similar products by using the recommendation algorithm.

It is observed that the order of the product recommendations after improving the sentiments of the related features changed for two k values (4 and 8). This is because the related sentiment values of product P3 improved, so that it attains a higher cosine value than product P2. This demonstrates the improvement in the product recommendations.
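The effect of the two kinds of sentiment rules described above can be sketched as follows. The rule set and the update step are simplified assumptions for illustration, not the thesis's actual recommendation algorithm: a taxonomical pair leaves the parent's sentiment unchanged when it already dominates the child, while a non-taxonomical (related) pair raises each feature's sentiment to the maximum of the pair.

```python
def apply_ontology_rules(sent, taxonomical, related):
    """Adjust count based feature sentiments using ontology rules (sketch)."""
    sent = dict(sent)
    # Taxonomical rule: the parent keeps its sentiment when it already
    # dominates the child feature in the taxonomy.
    for parent, child in taxonomical:
        if sent[parent] < sent[child]:
            sent[parent] = sent[child]
    # Non-taxonomical rule: related features reinforce each other.
    for a, b in related:
        sent[a] = sent[b] = max(sent[a], sent[b])
    return sent

# Hypothetical count based sentiments of four product features.
sentiments = {"battery": 7, "battery life": 4, "RAM": 2, "performance": 6}
adjusted = apply_ontology_rules(sentiments,
                                taxonomical=[("battery", "battery life")],
                                related=[("RAM", "performance")])
print(adjusted)
# -> {'battery': 7, 'battery life': 4, 'RAM': 6, 'performance': 6}
```

Here the taxonomical pair leaves battery unchanged (7 > 4), while the related-feature rule lifts RAM to the sentiment of performance, mirroring the improvement in the related features described above.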

The comparison of semantic rules based product recommendations and statistical sentiments based product recommendations is carried out by determining the Mean Absolute Error (MAE) between the similar products, both in the absence and in the presence of the ontology. The proposed ontology based product recommendation model (hereafter called the VT model) improves the sentiment of the product based on the k similar product features.
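The MAE between the sentiment scores of two models over the same set of product features reduces to the mean of the absolute score differences. A minimal sketch, using placeholder scores rather than the experimental values:

```python
def mean_absolute_error(predicted, reference):
    """MAE between two equal-length lists of sentiment scores."""
    assert len(predicted) == len(reference)
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

# Hypothetical sentiment scores of k similar product features under two models.
model_scores = [0.82, 0.75, 0.64, 0.91]
true_scores  = [0.80, 0.70, 0.60, 0.90]
print(round(mean_absolute_error(model_scores, true_scores), 4))  # -> 0.03
```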

The MAE is much lower in the VT model when compared with the MAE results of [22] (hereafter the DZ model), in which the researchers worked on tweets for sentiments based recommendations. The MAE details for the comparison of sentiments based recommendations, using a product feature set size of seventeen, are tabulated in Table 6.8.

Table 6.8. MAE comparison of k-features sentiments based product recommendations

Dataset                                        VT Model   DZ Model
Oppo F7 Plus Telugu reviews and tweets         0.0035     3.23
Samsung Galaxy S9 Telugu reviews and tweets    0.0052     3.23

The MAE comparison of k-features sentiments based product recommendations is illustrated in Fig. 6.11 below.

Fig. 6.11. MAE comparison of k-features sentiments based product recommendations

The observation from the above result is that the MAE values determined by using ontology based sentiments are very low when compared with the statistical sentiment model.

Therefore, it is clear that semantic rules for product recommendations achieve better performance than statistical sentiments based product recommendations.

6.6 Comparison of the proposed model with existing methods in ontology based sentiment classification

So far, various works have been carried out on ontology based sentiment classification using movie reviews. Sentiment classification using SVM, Vector Analysis and the Geometric Polarity Pyramid applied on an ontology is found in the literature, as discussed in Chapter 2. The details are listed in Table 6.9.

Table 6.9. Comparison with state-of-the-art ontology based sentiment classification techniques

Research Work          Corpus                   Review Documents   Method                              Accuracy
Khin Phyu Phyu Shein   Movie reviews            300                Ontology and SVM [53]               80%
Isidro et al.          Movie reviews            2000               Ontology and Vector Analysis [43]   84.8%
Present Work           Product Telugu reviews   300                Ontology and SDM                    91.67%

The present work on online product Telugu reviews results in an accuracy of 91.67%, which is better than the state-of-the-art techniques for sentiment classification.

To the best of the author's knowledge, this work is the first of its kind on product Telugu reviews to use Semantic Data Mining for better product recommendations based on the semantic sentiments learned on the identified product features.

6.7 Conclusion

The experimental evaluation of the proposed model is compared with the baseline models evaluated on different product Telugu reviews.

The comparative analysis is also carried out on Telugu reviews of similar products from different vendors, and on semi-automatic feature extraction against ontology supported feature extraction in LDA. Furthermore, ontology less product sentiments are compared against ontology supported product sentiments for product recommendations using the MAE statistical measure. Finally, the proposed model is compared with the existing methods in ontology based sentiment classification.

Chapter 7 Conclusions and Future Directions

In this research work an attempt is made to extract the maximum number of product features from the Telugu reviews. Further, the corresponding opinions are also identified and extracted. Finally, an ontology is developed so that the machine can automatically identify the features and opinions from the ever-growing Telugu reviews and recommend products in a better manner by utilizing the sentiments mined from the ontology.

This supports the end user in making wise decisions quickly and accurately, without compromising on the product feature set. The performance of the enhanced model is compared with the plain sentiment classification model in order to highlight the significance of ontology in product recommendations.

7.1

Conclusions

Motivated by the problem of extracting product features in a particular way, the performance of the step-by-step feature extraction method is assessed in the classification of the sentiments of the product features. Furthermore, applying the hierarchical FOT to the LDA topic clusters enables the extraction of more, and more exact, product features than LDA clusters alone in the semantic space.

The calculation of the orientation of the opinions using the statistical measure 'SO', with the help of SentiWordNet, ensures the extraction of the actual opinions. Feature level sentiment analysis of the extracted features and opinions using the mined semantic rules achieves better product recommendations than statistical sentiments based product recommendations.

7.2

Contributions of this work

The following are the three significant contributions of this research work:

1. The extraction of all kinds of product features and the corresponding opinions from online product Telugu reviews is an important task. As the first contribution of this research work, a step-by-step feature extraction is performed to learn the maximum number of exact features that are useful in the further tasks of opinion mining. The words located adjacent to the features are extracted as opinions, and the orientation of these opinions is determined. Furthermore, the influence of NLP on the extracted product features and opinions is explored. The results are presented using the standard information retrieval measures precision, recall and F-score.
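The opinion-orientation step can be sketched as below. The lexicon is a toy stand-in for SentiWordNet, and the score is computed here simply as the positive score minus the negative score of an opinion word; the thesis's actual 'SO' statistical measure may differ in detail.

```python
# Toy stand-in for SentiWordNet: word -> (positive score, negative score).
lexicon = {
    "good":  (0.75, 0.0),
    "poor":  (0.0, 0.625),
    "sharp": (0.5, 0.125),
}

def semantic_orientation(word, lexicon):
    """Orientation of an opinion word: positive minus negative score (sketch)."""
    pos, neg = lexicon.get(word, (0.0, 0.0))
    return pos - neg

# Label each opinion word by the sign of its orientation score.
for w in lexicon:
    label = "positive" if semantic_orientation(w, lexicon) > 0 else "negative"
    print(w, label)
```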
2. The analysis of the product Telugu reviews is required in order to extract the maximum number of accurate features. New customers use this service to compare the features of products of a similar class when purchasing a product from an online e-commerce website. In this direction, the second contribution of this research work is to extract the product features using the LDA topic model and a hierarchical FOT tree. Using the LDA topic matrix, the opinions of the extracted features are also identified, and their orientation is determined.
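The combination of LDA topic clusters and a hierarchical FOT can be sketched as follows. The topic word lists, the FOT and the child-matching heuristic are all illustrative assumptions; the point is only that walking the feature-ontology tree lets multi-word child features (such as 'battery life') be recovered from the topic clusters, which a flat lookup over single topic words would miss.

```python
# Top words of two hypothetical LDA topic clusters.
topics = [["battery", "life", "charge", "price"],
          ["camera", "quality", "screen", "movie"]]

# Hypothetical hierarchical feature-ontology tree (FOT): parent -> children.
fot = {"battery": ["battery life"], "camera": ["camera quality"],
       "screen": [], "price": []}

def extract_features(topics, fot):
    """Keep topic words that are FOT nodes; expand to a child feature
    when the child's extra word co-occurs in the same topic cluster."""
    features = set()
    for words in topics:
        for parent, children in fot.items():
            if parent in words:
                features.add(parent)
                for child in children:
                    # e.g. "battery life" matches when "life" co-occurs.
                    extra = child.replace(parent, "").strip()
                    if extra in words:
                        features.add(child)
    return features

print(sorted(extract_features(topics, fot)))
# -> ['battery', 'battery life', 'camera', 'camera quality', 'price', 'screen']
```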

Further, the influence of the LDA priors in learning the product features from the generated topics is discussed. The performance of this model is evaluated using the standard information retrieval measures precision, recall and F-score. It is shown that the hierarchical FOT enables the extraction of more, and more exact, product features from the LDA topic clusters in the semantic space.

3.

Conventional machine learning algorithms process the data and learn a generalization using attribute-value pairs. Machines cannot go beyond the task of identifying features and opinions from the Telugu reviews, as they have no prior knowledge with which to understand the relationships among the attributes and the context-specific constraints that hold among the product features and opinions.

Semantic web ontology overcomes this problem. Based on this idea, the third contribution is to investigate the influence of sentiments using the semantic rules mined from the ontology. It is empirically shown that mining the ontology together with the data not only improves the learning of a machine for the automatic categorization of online Telugu reviews but also improves the product recommendations that are based on the extracted product features and opinions.

7.3

Future Directions

The present work focuses on analyzing the product Telugu reviews using ontology. The future scope of this work lies in learning the intentions of the reviewers using advanced machine learning algorithms and larger datasets. The impact of these intentions on new customers and on the product manufacturers, by measuring the effect of intention on information diffusion in social media, is to be investigated.

The classification performance of the machine learning model on the intentions is to be analyzed in order to find the true intention of the reviewer.

0% - https://www.academia.edu/29315327/Featur
0% - https://www.sciencedirect.com/science/ar
0% - https://sgst.wr.usgs.gov/gfsad30/worksho
0% - https://spark.apache.org/docs/2.2.1/api/
0% - https://www.theliyakebedefoundation.org/
0% - https://issuu.com/paveepoint/docs/making
0% - https://www.epa.gov/sites/production/fil
0% - https://quizlet.com/163985588/ultrasound
0% - https://dl.acm.org/citation.cfm?id=31860
0% - https://www.abowael.com/203/26/15/213/26
0% - https://www.scribd.com/document/24251611
0% - https://www.sciencedirect.com/science/ar
0% - http://www.jbc.org/content/136/2/365.ful
0% - http://www.syedislam.com/publications/07
0% - https://2wpower.com/en/games/betsoft/bet
0% - https://mafiadoc.com/hamlet-notes_5a2429
0% - https://phys.org/news/2018-12-life.html
0% - https://www.digikey.com/en/product-highl
0% - https://issuu.com/instituteofnetworkcult
0% - https://www.peerbits.com/blog/retail-ana
0% - http://web.iiit.ac.in/~yaswanth/Thesis.p
0% - https://experfy.com/blog/disruption-in-r
0% - https://fruittyblog.blogspot.com/2017/01
0% - https://www.yourgenome.org/facts/what-is
0% - https://grants.nih.gov/grants/guide/rfa-
0% - http://ijritcc.org/download/browse/Volum
0% - https://www.sciencedirect.com/science/ar
0% - https://www.mdpi.com/2078-2489/9/12/307/
0% - https://www.simplilearn.com/implementing
0% - https://mvnrepository.com/artifact/com.g
0% - https://statoperator.com/research/russia
0% - https://core.ac.uk/download/pdf/52926233
0% - https://publichealthreviews.biomedcentra
0% - https://www.hindawi.com/journals/cmmm/20
0% - https://www.researchgate.net/publication
0% - https://academic.oup.com/ywes/article/93
0% - https://www.researchgate.net/publication
0% - https://www.pandadoc.com/business-propos
0% - https://www.researchgate.net/publication
0% - https://issuu.com/learnshipdesign/docs/f
0% - https://abiomedicalinformatics.blogspot.
0% - https://www.academia.edu/17507876/Synthe
0% - https://www.psy.miami.edu/research/facul
0% - https://machinelearningmastery.com/appli
0% - https://opensourceforu.com/2017/03/robot
0% - https://searchbusinessanalytics.techtarg
0% - https://www.cicling.org/2016/accepted-ab
0% - https://study.com/academy/lesson/compari
0% - http://oaji.net/articles/2015/786-142514
0% - https://www.youtube.com/watch?v=NobgnJQp
0% - https://www.sciencedirect.com/science/ar
0% - http://www.scielo.org.co/scielo.php?scri
0% - https://pdfs.semanticscholar.org/2504/9f
0% - http://www.cs.columbia.edu/~noura/ICDM%2
0% - https://pdfs.semanticscholar.org/d0a6/0e
0% - https://www.academia.edu/36308551/Survey
0% - https://ieeexplore.ieee.org/document/744
0% - https://www.irjet.net/archives/V3/i2/IRJ
0% - https://www.researchgate.net/profile/Els
0% - http://turing.cs.washington.edu/papers/e
0% - https://www.ncbi.nlm.nih.gov/pmc/article
0% - https://www.researchgate.net/publication
0% - https://www.sciencedirect.com/science/ar
0% - http://clopinet.com/fextract-book/IntroF
0% - https://www.ijert.org/research/mining-fe
0% - https://books.google.com/books/about/Lan
0% - https://www.sciencedirect.com/science/ar
0% - http://www.lrec-conf.org/proceedings/lre
0% - https://www.sciencedirect.com/science/ar
0% - https://www.sciencedirect.com/science/ar
0% - https://problogger.com/how-to-write-a-mu
0% - https://in.bookmyshow.com/movies/it/ET00
0% - https://www.ncbi.nlm.nih.gov/pmc/article
0% - https://dl.acm.org/citation.cfm?id=13904
0% - https://ijcat.com/archives/volume6/volum
0% - https://houseofbots.com/news-detail/2491
0% - http://www.lsi.upc.edu/~jpoveda/publicat
0% - https://link.springer.com/article/10.100
0% - https://www.int-arch-photogramm-remote-s
0% - https://www.sciencedirect.com/science/ar
0% - https://www.tftcentral.co.uk/reviews/phi
0% - https://dl.acm.org/citation.cfm?id=30806
0% - https://www.ncbi.nlm.nih.gov/pmc/article
0% - https://mygolfspy.com/most-wanted-golf-b
0% - https://www.bing.com/aclk?ld=e3jifwk6BDm
0% - http://teachnet.com/manage/classroom-man
1% - https://www.intechopen.com/books/machine
0% - https://www.researchgate.net/publication
0% - https://www.researchgate.net/publication
0% - http://data.allenai.org/esr/Papers%20200
0% - https://www.ncbi.nlm.nih.gov/pmc/article
0% - https://www-07.ibm.com/sg/manufacturing/
0% - https://www.mindtools.com/pages/article/
0% - https://www.researchgate.net/publication
0% - https://machinelearningmastery.com/how-t
0% - https://www.sciencedirect.com/science/ar
0% - http://file.scirp.org/Html/9227.html
0% - https://www.researchgate.net/publication
0% - https://www.researchgate.net/publication
0% - https://www.sciencedirect.com/science/ar
0% - https://www.sciencedirect.com/science/ar
0% - https://www.researchgate.net/publication
0% - http://www.jiem.org/index.php/jiem/artic
0% - https://www.researchgate.net/publication
0% - https://www.researchgate.net/publication
0% - https://www.sciencedirect.com/science/ar
0% - https://www.researchgate.net/publication
0% - https://www.hindawi.com/journals/tswj/20
0% - http://www.acsij.org/documents/v2i4/ACSI
0% - https://www.researchgate.net/publication
0% - https://dl.acm.org/citation.cfm?id=18609
0% - https://www.researchgate.net/publication
0% - https://www.questionpro.com/article/surv
0% - https://www.researchgate.net/publication
0% - http://publications.lib.chalmers.se/reco
0% - https://www.sciencedirect.com/science/ar
0% - https://www.academia.edu/24922581/SENTIM
0% - https://www.hindawi.com/journals/scn/201
0% - https://decisionanalyticsjournal.springe
0% - https://arxiv.org/pdf/1406.3714
0% - https://www.ncbi.nlm.nih.gov/pmc/article
0% - https://www.researchgate.net/publication
0% - https://www.researchgate.net/publication
0% - https://mafiadoc.com/a-review-of-opinion
0% - https://www.researchgate.net/publication
0% - http://www.hssc.gov.in/writereaddata/Pub
0% - https://mafiadoc.com/proceedings-of-the-
0% - https://www.thesaurus.com/browse/review
0% - https://www.mdpi.com/journal/futureinter
0% - https://mafiadoc.com/towards-new-challen
0% - https://www.nngroup.com/articles/compari
0% - https://www.researchgate.net/publication
0% - https://www.semanticscholar.org/paper/Ma
0% - https://mafiadoc.com/urban-design-and-ec
0% - https://dl.acm.org/citation.cfm?id=24884
0% - http://www.inderscience.com/info/ingener
0% - https://www.entrepreneurshipinabox.com/1
0% - http://content.indiainfoline.com/Newslet
0% - https://www.tuck.com/best-nightlight-rev
0% - http://police-check.us.com/
0% - https://www.academia.edu/31837026/An_ana
0% - http://www.wakeupkiwi.com/news-articles-
0% - https://akinternationalnews.blogspot.com
0% - http://www.w3.org/2001/sw/Europe/reports
0% - https://www.sciencedirect.com/science/ar
0% - https://www.bing.com/aclk?ld=e3LxKY6lfG1
0% - https://www.bing.com/aclk?ld=e3NZtaIKcGK
0% - https://www.machinemfg.com/sheet-metal-m
0% - https://explorable.com/what-is-a-literat
0% - https://quizlet.com/30450429/ais-134-ch-
0% - https://www.sciencedirect.com/science/ar
0% - https://arxiv.org/pdf/1409.3942
0% - http://www.cs.ox.ac.uk/people/yarin.gal/
0% - http://www.thwink.org/sustain/articles/0
0% - https://www.researchgate.net/publication
0% - https://apps.apple.com/us/app-bundle/ult
0% - https://link.springer.com/article/10.100
0% - https://ag.ap.nic.in/pagca/2002-03/ENGLI
0% - https://timesofindia.indiatimes.com/ente
0% - https://www.researchgate.net/publication
0% - https://educationalresearchtechniques.co
0% - https://www.ijert.org/a-novel-approach-f
0% - https://www.bigcommerce.com/blog/how-per
0% - https://pdfs.semanticscholar.org/07c0/9b
0% - https://www.researchgate.net/publication
0% - http://aircconline.com/ijnlc/V6N1/6117ij
0% - https://www.sciencedirect.com/science/ar
0% - https://scholarspace.manoa.hawaii.edu/bi
0% - http://w3mtechnoz.com/drupal-solutions
0% - https://www.apnaahangout.com/trusted-web
0% - https://dl.acm.org/citation.cfm?id=30572
0% - https://www.researchgate.net/publication
0% - https://en.wikipedia.org/wiki/Damp_%28st
0% - https://www.sciencedirect.com/science/ar
0% - https://www.filmibeat.com/bollywood/revi
0% - http://lrec-conf.org/proceedings/lrec201
0% - https://en.wikipedia.org/wiki/Sentiment_
0% - https://en.wikipedia.org/wiki/Sentiment_
0% - https://quizlet.com/37655862/child-devel
0% - https://www.cicling.org/2016/accepted-ab
0% - http://downloads.hindawi.com/journals/ci
0% - https://pdfs.semanticscholar.org/979e/9b
0% - https://e-gloing.blogspot.com/2015_05_03
0% - https://www.catchmyblogs.com/2019/03/how
0% - http://studentsrepo.um.edu.my/2166/5/CHA
0% - http://www.aensiweb.net/AENSIWEB/anas/an
0% - https://www.researchgate.net/publication
0% - https://biz.heskeyo.com/2017/02/02/calcu
0% - https://www.scribd.com/document/23506197
0% - https://www.researchgate.net/publication
0% - https://mafiadoc.com/here-we-stand-sourc
0% - https://www.thesaurus.com/browse/success
0% - https://cryptonews.com/exclusives/digita
0% - http://www.manythings.org/vocabulary/lis
0% - https://nlp.stanford.edu/software/tmt/tm
0% - https://pdfs.semanticscholar.org/d73a/89
0% - https://qspace.library.queensu.ca/bitstr
0% - https://www.themathpage.com/
0% - https://stackoverflow.com/questions/1036
0% - https://www.sciencedirect.com/science/ar
0% - https://www.sciencedirect.com/science/ar
0% - https://acadpubl.eu/hub/2018-119-12/arti
0% - http://www.lisaspasties.com/eh.html
0% - https://www.wto.org/english/news_e/news0
0% - https://www.nrcs.usda.gov/wps/portal/nrc
0% - https://www.york.ac.uk/crd/SysRev/!SSL!/
0% - https://www.ijser.org/researchpaper/Auto
0% - https://www.researchgate.net/publication
0% - https://en.wikipedia.org/wiki/Wikipedia:
0% - https://phitchuria.wordpress.com/2018/09
0% - https://www.academia.edu/11875305/A_HMM_
0% - https://www.ncbi.nlm.nih.gov/pmc/article
0% - https://www.researchgate.net/profile/Naz
0% - https://www.academia.edu/2297417/Morphol
0% - https://www.sciencedirect.com/topics/com
0% - https://unctad.org/en/Docs/iteiia5_en.pd
0% - https://www.sciencedirect.com/science/ar
0% - https://www.academia.edu/7034388/Mining_
0% - https://www.sciencedirect.com/science/ar
0% - http://darhiv.ffzg.unizg.hr/id/eprint/10
0% - https://www.academia.edu/34702691/Predic
0% - https://www.sciencedirect.com/science/ar
0% - https://www.academia.edu/1211261/Extract
0% - https://www.sciencedirect.com/science/ar
0% - https://www.essay.uk.com/essays/marketin
0% - https://www.academia.edu/1505474/An_anal
0% - https://patents.google.com/patent/US8473
0% - https://www.sciencedirect.com/science/ar
0% - https://www.sciencedirect.com/science/ar
0% - https://www.sciencedirect.com/science/ar
0% - http://www.engr.psu.edu/datalab/Docs/Lim
0% - https://archive.sap.com/kmuuid2/40611dd6
0% - https://www.researchgate.net/publication
0% - https://www.thesaurus.com/browse/advance
0% - https://bmcmedresmethodol.biomedcentral.
0% - https://www.hindawi.com/journals/wcmc/20
0% - https://azdoc.pl/word-formation-in-engli
0% - https://www.scribd.com/document/39359320
0% - https://quizlet.com/216918829/ws-study-g
0% - https://www.google.com/docs/about/
0% - http://www.answerway.com/expertans.php?c
0% - https://mafiadoc.com/proceedings-of-acl-
0% - https://www.oxforddictionaries.com/words
0% - https://www.slideshare.net/rahulmonikash
0% - https://www.researchgate.net/publication
0% - https://www.tanghin.edu.hk/~tf@tanghin.e
0% - https://www.academia.edu/12294354/OBTAIN
0% - https://www.sciencedirect.com/science/ar
0% - https://www.researchgate.net/publication
0% - https://www.researchgate.net/publication
0% - https://doctiktak.com/computer-vision-an
0% - https://benthamopen.com/contents/pdf/TOC
0% - http://file.scirp.org/Html/1-7900197_276
0% - https://www.windowscentral.com/power-pla
0% - https://www.thesaurus.com/browse/signifi
0% - https://www.academia.edu/9483884/Mining_
0% - https://www.gingersoftware.com/content/g
0% - http://www.rochester.edu/college/transla
0% - https://objectlistview-python-edition.re
0% - http://aclweb.org/anthology/C/C12/C12-20
0% - http://www.aaai.org/Papers/AAAI/2004/AAA
0% - http://www.iaeng.org/publication/WCE2015
0% - https://positivewordsresearch.com/sentim
0% - https://www.cs.uic.edu/~liub/FBS/IJCAI-0
0% - https://en.wikipedia.org/wiki/Wikipedia
0% - https://www.ncbi.nlm.nih.gov/pmc/article
0% - https://www.academia.edu/12294354/OBTAIN
0% - https://www.sciencedirect.com/science/ar
0% - https://core.ac.uk/download/pdf/82739442
0% - https://www.essay.uk.com/essays/computer
0% - https://www.ruled.me/ketogenic-diet-food
0% - https://stackoverflow.com/questions/5060
0% - https://betterexplained.com/articles/flu
0% - https://www.researchgate.net/publication
0% - https://cran.r-project.org/doc/manuals/r
0% - https://stackoverflow.com/questions/2375
0% - https://en.wikipedia.org/wiki/Natural_la
0% - http://text-analytics101.rxnlp.com/2011/
0% - https://independent.academia.edu/Journal
0% - http://www.ijirset.com/upload/2016/may/1
0% - https://www.researchgate.net/publication
0% - https://www.researchgate.net/publication
0% - http://www.cs.helsinki.fi/u/doucet/paper
0% - https://dl.acm.org/citation.cfm?id=25316
0% - https://wikispaces.psu.edu/display/PSYCH
0% - https://www.researchgate.net/publication
0% - https://www.amrita.edu/faculty/dr-deepa-
0% - https://www.moresteam.com/toolbox/design
0% - https://www.bing.com/aclk?ld=e3erRxR9cQD
0% - https://www.researchgate.net/publication
0% - https://en.m.wikipedia.org/wiki/Mahesh_B
0% - https://byjus.com/chemistry/modern-perio
1% - https://www.intechopen.com/books/machine
0% - https://sipcalculator.in/
0% - http://ufdc.ufl.edu/AA00016616%5C00016
0% - https://www.cs.uic.edu/~liub/FBS/sentime
0% - https://en.wikipedia.org/wiki/Oil_drilli
0% - https://www.bing.com/aclk?ld=e3FaxHOvTXO
0% - https://ori.hhs.gov/education/products/n
0% - http://www.wrha.mb.ca/extranet/eipt/file
0% - https://datascience.stackexchange.com/qu
0% - https://www.sciencedirect.com/science/ar
0% - https://dl.acm.org/citation.cfm?id=28405
0% - http://the-re-up.com/videos/watch/497/Wa
0% - https://www.dictionary.com/e/s/word-of-t
0% - https://open.library.ubc.ca/cIRcle/colle
0% - https://www.bing.com/aclk?ld=e3y7e2bzq-M
0% - https://chandoo.org/wp/between-formula-e
0% - https://www.khanacademy.org/math/early-m
0% - https://content.iospress.com/articles/in
0% - http://ceur-ws.org/Vol-400/paper1.pdf
0% - https://bmcgenomics.biomedcentral.com/ar
0% - https://www.sciencedirect.com/science/ar
0% - https://www.sciencedirect.com/science/ar
0% - https://www.researchgate.net/publication
0% - https://www.wisdomjobs.com/e-university/
0% - https://akinternationalnews.blogspot.com
0% - https://www.sciencedirect.com/science/ar
0% - https://editorialsamarth.blogspot.com/20
0% - http://csjournals.com/IJCSC/PDF9-1/27.%2
0% - https://prd-idrc.azureedge.net/sites/def
0% - https://www.researchgate.net/publication
0% - https://www.sciencedirect.com/science/ar
0% - https://stackoverflow.com/a/52170570
0% - https://www.academia.edu/10471768/Labeli
0% - https://pdfs.semanticscholar.org/d9f4/a4
0% - https://www.academia.edu/14669502/Apect-
0% - https://docs.microsoft.com/en-us/azure/m
0% - https://www.researchgate.net/publication
0% - https://statoperator.com/research/samsun
0% - http://www.opticianstore.ro/product/zev-
0% - https://pricebaba.com/mobile/oppo-f7
0% - https://www.15minutenews.com/technology/
0% - https://www.amazon.in/gp/product-reviews
0% - https://www.academia.edu/21496512/OPINIO
0% - https://www.thoughtco.com/beautiful-soun
0% - http://www.inderscience.com/info/ingener
0% - https://enacademic.com/dic.nsf/enwiki/20
0% - https://excelxor.com/2015/02/22/return-e
0% - https://journalofbigdata.springeropen.co
0% - https://www.sciencedirect.com/science/ar
0% - https://wenku.baidu.com/view/3f0d0d3887c
0% - https://www.sciencedirect.com/science/ar
0% - https://www.sciencedirect.com/science/ar
0% - http://www.rpi.edu/~des/4NodeQuad.pdf
0% - https://mafiadoc.com/a-review-of-opinion
0% - https://www.isitwp.com/wordpress-plugins
0% - https://www.analyticsvidhya.com/blog/201
0% - https://www.science.gov/topicpages/s/sup
0% - https://www.action-storage.com/blog/reta
0% - https://www.thesaurus.com/
0% - http://www.socialsciencecollective.org/s
0% - https://www.bing.com/aclk?ld=e37WKM46srF
0% - https://www.sciencedirect.com/science/ar
0% - https://www.ijert.org/a-tour-towards-sen
0% - https://www.sciencedirect.com/science/ar
0% - https://www.researchgate.net/publication
0% - https://www.sciencedirect.com/science/ar
0% - https://www.researchgate.net/publication
0% - https://www.researchgate.net/publication
0% - https://dl.acm.org/citation.cfm?id=12426
0% - https://phdessay.com/design-management-1
0% - https://www.phpclasses.org/browse/file/8
0% - https://www.vocabulary.com/dictionary/bi
0% - http://www.fao.org/fsnforum/cfs-hlpe/sit
0% - https://mafiadoc.com/proceedings-of-the-
0% - https://www.sciencedirect.com/topics/mat
0% - https://www.sciencedirect.com/topics/com
0% - http://college.cengage.com/mathematics/b
0% - https://reference.wolfram.com/applicatio
0% - https://www.sciencedirect.com/science/ar
0% - https://github.com/fly51fly/aicoco/issue
0% - http://priede.bf.lu.lv/ftp/pub/RakstuDar
0% - http://www.cs.duke.edu/courses/compsci20
0% - https://archive.org/stream/in.ernet.dli.
0% - https://dl.acm.org/citation.cfm?id=27833
0% - https://www.researchgate.net/publication
0% - https://www.researchgate.net/publication
0% - https://link.springer.com/article/10.100
0% - https://www.researchgate.net/publication
0% - https://www.researchgate.net/publication
0% - https://www.allinterview.com/showanswers
0% - https://patents.google.com/patent/US9444
0% - http://oaji.net/journal-archive-stats.ht
0% - http://free-online-dating-sites.us.com/
0% - https://www.researchgate.net/publication
0% - https://www.sciencedirect.com/science/ar
0% - https://blog.proofhub.com/5-phases-of-pr
0% - https://www.businessballs.com/dtiresourc
0% - https://www.academia.edu/3474097/This_is
0% - https://mafiadoc.com/mining-opinion-feat
0% - https://gate.ac.uk/releases/gate-5.0-bui
0% - http://www.rroij.com/open-access/a-new-a
0% - https://arxiv.org/pdf/1501.01386
0% - https://www.researchgate.net/publication
0% - https://www.academia.edu/12294354/OBTAIN
0% - https://www.researchgate.net/publication
0% - https://www.cs.uic.edu/~liub/FBS/opinion
0% - https://www.researchgate.net/publication
0% - https://www.researchgate.net/publication
0% - https://dl.acm.org/citation.cfm?id=29994
0% - https://www.ncbi.nlm.nih.gov/pmc/article
0% - https://www.investopedia.com/terms/m/med
0% - https://www.academia.edu/4018283/CHAPTER
0% - https://docs.oracle.com/goldengate/c1221
0% - https://www.quora.com/What-are-the-best-
0% - https://buhrmann.github.io/tfidf-analysi
0% - https://sciemce.com/
0% - https://www.science.gov/topicpages/s/sho
0% - https://www.bing.com/aclk?ld=e3EqAk7JOSs
0% - https://towardsdatascience.com/dirichlet
0% - https://pubsonline.informs.org/doi/suppl
0% - http://data.allenai.org/esr/Papers%20200
0% - https://onlinelibrary.wiley.com/doi/full
0% - https://www.garykessler.net/library/cryp
0% - http://uc-r.github.io/discriminant_analy
0% - https://registrar.uchicago.edu/files/201
0% - https://www.hackerearth.com/practice/mac
0% - http://piloarts.com/judeswedding/like-it
0% - https://www.mid.ms.gov/companies/pdf/exa
0% - https://statoperator.com/research/samsun
0% - https://doctiktak.com/computer-vision-an
0% - https://www.labware.com/limshelp/LabWare
0% - https://www.ictmusic.org/sites/default/f
0% - https://quick-geek.github.io/articles/43
0% - https://www.wisegeek.com/what-is-product
0% - https://mafiadoc.com/proceedings-of-firs
0% - https://web.stanford.edu/class/cs379c/ar
0% - https://www.sciencedirect.com/topics/com
0% - https://medium.com/@shrikantpandeymnnit2
0% - https://www.coursehero.com/
0% - https://conversationstartersworld.com/to
0% - https://www.quora.com/
0% - https://www.hairextensionsuk.me.uk/2019/
0% - http://saypeople.com/2012/04/30/problems
0% - https://in.yahoo.com/
0% - https://isindexing.com/isi/searchedpaper
0% - https://www.researchgate.net/publication
0% - https://www.science.gov/topicpages/e/ear
0% - http://archive.unu.edu/unupress/food/8F1
0% - https://waset.org/Publications?p=17
0% - https://www.sec.gov/Archives/edgar/data/
0% - https://www.academia.edu/18901487/Procee
0% - http://www.thisisess.com/colours-of-lamu
0% - https://quizlet.com/12411477/sociology-1
0% - https://quizlet.com/8706047/microsoft-ex
0% - https://www.academia.edu/8928730/Final_R
0% - https://www.sciencedirect.com/science/ar
0% - https://www.15minutenews.com/technology/
0% - https://www.theverge.com/2018/4/6/171987
0% - https://en.m.wikipedia.org/wiki/Android_
0% - https://chi2018.acm.org/attending/procee
0% - https://www.bing.com/aclk?ld=e3G3K-U7LLI
1% - https://www.intechopen.com/books/machine
0% - https://www.transtutors.com/questions/ta
0% - https://9to5mac.com/guides/iphone-x-plus
0% - https://www.researchgate.net/publication
0% - https://www.researchgate.net/publication
0% - http://file.scirp.org/pdf/ICA_2013020517
0% - https://dl.acm.org/ft_gateway.cfm?id=237
0% - https://stackoverflow.com/questions/4173
0% - http://www2.uiah.fi/projects/metodi/162.
0% - https://www.researchgate.net/topic/Ontol
0% - https://www.researchgate.net/publication
0% - https://www.ntsb.gov/investigations/proc
0% - https://doi.acm.org/10.1145/1772690.1772
0% - https://dl.acm.org/citation.cfm?id=30410
0% - https://www.researchgate.net/publication
0% - https://www.sciencedirect.com/science/ar
0% - http://edu.gov.on.ca/eng/general/elemsec
0% - https://www.academia.edu/2785557/A_metho
0% - https://www.transparencymarketresearch.c
0% - https://gmatclub.com/forum/the-mbamissio
0% - https://quizlet.com/103701289/chapter-5-
0% - https://play.google.com/store/books/deta
0% - https://quizlet.com/19471082/thmbtg-ques
0% - https://sol.du.ac.in/mod/book/tool/print
0% - http://www.inderscience.com/info/ingener
0% - http://data.allenai.org/esr/Papers%200-4
0% - https://dl.acm.org/citation.cfm?id=23455
0% - https://quizlet.com/307219916/ba-370-ung
0% - https://www.archive.org/stream/NEW_1/NEW
0% - https://www.readkong.com/page/a-survey-o
0% - https://www.slideshare.net/hirraAftab1/v
0% - https://www.guru99.com/impact-analysis-s
0% - https://www.science.gov/topicpages/g/glo
0% - http://dates2.us.com/
0% - https://www.currentaffairsspecial.com/20
0% - http://www.inderscience.com/info/ingener
0% - https://www.eurofound.europa.eu/publicat
0% - https://courses.lumenlearning.com/boundl
0% - https://www.researchgate.net/publication
0% - https://hub.packtpub.com/data-mining/
0% - https://www.researchgate.net/publication
0% - https://mises-media.s3.amazonaws.com/Cla
0% - http://ceur-ws.org/Vol-660/paper6.pdf
0% - http://copypasteprogrammers.com/moving-i
0% - https://quizlet.com/29119204/theatre-stu
0% - https://en.wikipedia.org/wiki/Science
0% - https://dl.acm.org/citation.cfm?id=30913
0% - https://www.businesscommunications.xyz/
0% - https://www.bing.com/aclk?ld=e3PTXCFt1rL
0% - https://www.researchgate.net/publication
0% - https://www.sciencedirect.com/science/ar
0% - https://www.idi.ntnu.no/education/master
0% - https://en.wikipedia.org/wiki/Existence_
0% - https://www.andyroid.net/
0% - https://www.csus.edu/indiv/k/kelleyca/do
0% - https://onscenes.weebly.com/philosophy/a
0% - https://www.academia.edu/1950225/Social_
0% - https://www.researchgate.net/profile/Van
0% - https://www.academia.edu/5807448/A_model
0% - https://www.articlesdunia.com/education/
0% - https://reasonandmeaning.com/category/pe
0% - http://game-research.com/index.php/artic
0% - https://www.opslens.com/2018/04/hardenin
0% - https://math.gatech.edu/seminar-and-coll
0% - http://www.unipamplona.edu.co/unipamplon
0% - https://protege.stanford.edu/publication
0% - https://www.w3resource.com/java-tutorial
0% - https://www.sciencedirect.com/science/ar
0% - https://www.javaworld.com/article/297973
0% - https://quizlet.com/112687243/psych-315-
0% - http://wikieducator.org/Lesson_5:_Growth
0% - https://open.library.ubc.ca/cIRcle/colle
0% - http://web.mit.edu/smadnick/www/SMA-2/Wh
0% - https://www.bing.com/aclk?ld=e3IJNfknEa8
0% - https://link.springer.com/article/10.118
0% - https://patents.google.com/patent/US2006
0% - https://quizlet.com/74518173/chapter-9da
0% - https://www.sciencedirect.com/science/ar
0% - https://www.differencebetween.com/differ
0% - https://www.bing.com/aclk?ld=e3i6XAycpm0
0% - http://docs.oasis-open.org/regrep/v3.0/p
0% - http://www.sql-tutorial.com/rdbms-and-da
1% - https://www.intechopen.com/books/machine
0% - https://www.digitalvidya.com/blog/what-i
0% - https://www.sciencedirect.com/science/ar
0% - http://bioportal.bioontology.org/ontolog
0% - https://www.researchgate.net/publication
0% - https://ir.canterbury.ac.nz/bitstream/ha
0% - https://docs.oracle.com/cd/B13789_01/ser
0% - https://www.sciencedirect.com/science/ar
1% - https://www.intechopen.com/books/machine
0% - https://www.researchgate.net/publication
0% - https://www.researchgate.net/publication
0% - https://www.researchgate.net/publication
1% - https://www.intechopen.com/books/machine
0% - https://www.sciencedirect.com/science/ar
0% - https://www.academia.edu/7130493/SEMANTI
0% - https://techzone.vmware.com/resource/wor
0% - https://ineffableinfinite.blogspot.com/2
0% - https://github.com/zjacreman/correcthors
0% - https://github.com/zjacreman/correcthors
0% - https://meaww.com/mother-who-forced-obje
0% - http://www.archive.org/stream/historyofe
1% - https://www.intechopen.com/books/machine
0% - https://www.igi-global.com/rss/journals/
0% - http://cci.drexel.edu/faculty/yan/public
0% - https://meacms.mea.gov.in/Portal/XML/Art
0% - http://www.peso.gov.in/Work_Mannual/wmch
0% - http://infocenter.sybase.com/help/topic/
0% - https://patents.google.com/patent/US2007
0% - http://www.ey.com/Publication/vwLUAssets
0% - https://en.wikipedia.org/wiki/Class_%28c
0% - https://issuu.com/7days/docs/seven_days_
0% - https://www.scribd.com/document/11777014
0% - https://www.meritnation.com/ask-answer/q
0% - http://pop.acrwebsite.org/volumes/acr-pr
0% - https://home.ubalt.edu/ntsbarsh/business
0% - http://docshare.tips/handbook-of-systemi
0% - https://indiankanoon.org/doc/608874/
0% - https://bitsofco.de/theres-no-reason-to-
0% - https://booksite.elsevier.com/samplechap
0% - https://quizlet.com/3640397/care-final-p
0% - http://www.pharmpress.com/files/docs/Pha
0% - https://quizlet.com/140256906/psych-prac
0% - https://machinelearningmastery.com/tacti
0% - https://cdn.rohde-schwarz.com/pws/dl_dow
0% - https://www.researchgate.net/publication
0% - https://www.casemine.com/judgement/uk/5a
0% - https://www.researchgate.net/publication
0% - http://general-insurance.us.com/
0% - https://law.utexas.edu/wp-content/upload
0% - https://no1assignmenthelp.com/answers/ma
0% - https://thesiliconreview.com/magazines/c
0% - https://content.iospress.com/articles/jo
0% - https://www.python-course.eu/Decision_Tr
0% - https://www.sciencedirect.com/science/ar
0% - https://www.researchgate.net/publication
0% - https://www.researchgate.net/publication
0% - https://nellaishanmugam.wordpress.com/20
0% - https://www.researchgate.net/publication
0% - https://en.wikipedia.org/wiki/Associatio
0% - https://www.slideshare.net/salahecom/08-
0% - https://www.slideshare.net/butest/compar
0% - http://www.ntu.edu.sg/home/XLLI/publicat
1% - https://www.intechopen.com/books/machine
0% - https://www.tutorialspoint.com/machine_l
0% - https://dataaspirant.com/2017/01/30/how-
0% - https://www.academia.edu/34100170/Compar
0% - https://www.researchgate.net/publication
0% - https://www.groundai.com/project/joint-l
0% - https://www.academia.edu/37365180/Predic
0% - https://en.m.wikipedia.org/wiki/False_po
0% - https://bmcgenet.biomedcentral.com/artic
0% - https://www.analyticsvidhya.com/blog/201
1% - https://www.intechopen.com/books/machine
1% - https://www.intechopen.com/books/machine
0% - http://apps.who.int/gb/ebwha/pdf_files/W
1% - https://www.intechopen.com/books/machine
0% - https://www.sec.gov/Archives/edgar/data/
0% - https://www.researchgate.net/profile/Tej
0% - https://www.ncbi.nlm.nih.gov/pmc/article
