
GALLOP

1. INTRODUCTION

1.1 What is Data Mining?

Structure of Data Mining

Generally, data mining (sometimes called data or knowledge discovery) is the process of analyzing
data from different perspectives and summarizing it into useful information - information that can
be used to increase revenue, cut costs, or both. Data mining software is one of a number of
analytical tools for analyzing data. It allows users to analyze data from many different dimensions
or angles, categorize it, and summarize the relationships identified. Technically, data mining is the
process of finding correlations or patterns among dozens of fields in large relational databases.

1.2 How Does Data Mining Work?

While large-scale information technology has been evolving separate transaction and analytical
systems, data mining provides the link between the two. Data mining software analyzes
relationships and patterns in stored transaction data based on open-ended user queries. Several
types of analytical software are available: statistical, machine learning, and neural networks.

TKREC, CSE DEPARTMENT 1



Generally, any of four types of relationships are sought:

 Classes: Stored data is used to locate data in predetermined groups. For example, a
restaurant chain could mine customer purchase data to determine when customers visit and
what they typically order. This information could be used to increase traffic by having daily
specials.

 Clusters: Data items are grouped according to logical relationships or consumer preferences. For example, data can be mined to identify market segments or consumer affinities.

 Associations: Data can be mined to identify associations. The beer-diaper example is a classic example of association mining.

 Sequential patterns: Data is mined to anticipate behavior patterns and trends. For
example, an outdoor equipment retailer could predict the likelihood of a backpack being
purchased based on a consumer's purchase of sleeping bags and hiking shoes.

Data mining consists of five major elements:

1) Extract, transform, and load transaction data onto the data warehouse system.
2) Store and manage the data in a multidimensional database system.
3) Provide data access to business analysts and information technology professionals.
4) Analyze the data by application software.
5) Present the data in a useful format, such as a graph or table.
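The five elements above can be sketched end-to-end as a minimal pipeline. The transaction data, item names, and totals below are hypothetical, and a simple dictionary stands in for the warehouse:

```python
# Minimal sketch of the five data mining elements, using hypothetical
# in-memory transaction data instead of a real warehouse system.

# 1) Extract, transform, and load transaction data.
raw = [("2023-01-05", "burger", 4.50), ("2023-01-05", "fries", 2.00),
       ("2023-01-06", "burger", 4.50), ("2023-01-06", "salad", 5.25)]
transactions = [{"date": d, "item": i, "price": p} for d, i, p in raw]

# 2) Store and manage the data (here: a dict keyed by item).
warehouse = {}
for t in transactions:
    warehouse.setdefault(t["item"], []).append(t)

# 3) Provide data access through a query helper.
def revenue(item):
    return sum(t["price"] for t in warehouse.get(item, []))

# 4) Analyze the data: total revenue per item.
summary = {item: revenue(item) for item in warehouse}

# 5) Present the data in a useful format.
for item, total in sorted(summary.items()):
    print(f"{item:8s} {total:6.2f}")
```

In a real deployment, steps 1-2 would be an ETL job feeding a multidimensional database, and step 5 a reporting front end; the structure of the flow is the same.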

Different levels of analysis are available:

 Artificial neural networks: Non-linear predictive models that learn through training and
resemble biological neural networks in structure.


 Genetic algorithms: Optimization techniques that use processes such as genetic combination, mutation, and natural selection in a design based on the concepts of natural evolution.

 Decision trees: Tree-shaped structures that represent sets of decisions. These decisions
generate rules for the classification of a dataset. Specific decision tree methods include
Classification and Regression Trees (CART) and Chi Square Automatic Interaction
Detection (CHAID). CART and CHAID are decision tree techniques used for classification
of a dataset. They provide a set of rules that you can apply to a new (unclassified) dataset
to predict which records will have a given outcome. CART segments a dataset by creating
2-way splits while CHAID segments using chi square tests to create multi-way splits.
CART typically requires less data preparation than CHAID.

 Nearest neighbor method: A technique that classifies each record in a dataset based on a combination of the classes of the k record(s) most similar to it in a historical dataset (where k ≥ 1). Sometimes called the k-nearest neighbor technique.

 Rule induction: The extraction of useful if-then rules from data based on statistical
significance.

 Data visualization: The visual interpretation of complex relationships in multidimensional data. Graphics tools are used to illustrate data relationships.
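As a concrete illustration of the 2-way splits that CART-style decision trees use, the sketch below searches a single feature for the binary split that minimizes Gini impurity and emits the resulting if-then rule. The records and the age field are invented for illustration:

```python
# Toy CART-style split finder: chooses the best 2-way split of a dataset
# by Gini impurity, then emits an if-then classification rule.

def gini(labels):
    """Gini impurity of a list of class labels (0.0 = pure)."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(rows, labels, feature):
    """Best threshold on `feature` minimizing weighted Gini of the 2-way split."""
    best = (None, float("inf"))
    for t in sorted({r[feature] for r in rows}):
        left = [l for r, l in zip(rows, labels) if r[feature] <= t]
        right = [l for r, l in zip(rows, labels) if r[feature] > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best

# Hypothetical customer records with a known outcome.
rows = [{"age": 22}, {"age": 25}, {"age": 31}, {"age": 47}, {"age": 52}]
labels = ["yes", "yes", "yes", "no", "no"]

threshold, score = best_split(rows, labels, "age")
print(f"rule: IF age <= {threshold} THEN yes ELSE no (gini={score:.3f})")
```

A full CART implementation would apply this split search recursively to each resulting partition; CHAID would instead use chi-square tests to create multi-way splits.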
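The nearest neighbor method described above can likewise be sketched in a few lines. The historical records, the 2-D features, and the choice of k=3 are hypothetical:

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training records.
    `train` is a list of ((x, y), label) pairs; distance is Euclidean."""
    by_distance = sorted(train, key=lambda rec: math.dist(rec[0], query))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Hypothetical historical dataset: 2-D feature vectors with known classes.
history = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
           ((5.0, 5.0), "B"), ((5.2, 4.9), "B"), ((4.8, 5.1), "B")]

print(knn_classify(history, (1.1, 1.0)))   # query near the "A" cluster
print(knn_classify(history, (5.1, 5.0)))   # query near the "B" cluster
```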

1.3 Characteristics of Data Mining:

 Large quantities of data: The volume of data is so great that it must be analyzed by automated techniques, e.g. satellite information, credit card transactions, etc.
 Noisy, incomplete data: Imprecise data is a characteristic of all data collection.
 Complex data structure: Conventional statistical analysis is not possible.
 Heterogeneous data stored in legacy systems.


Benefits of Data Mining:

1) It is one of the most effective services available today. With the help of data mining, one can discover precious information about customers and their behavior for a specific set of products, and evaluate, analyze, store, mine and load data related to them.
2) An analytical CRM model and strategic business-related decisions can be made with the help of data mining, as it helps in providing a complete synopsis of customers.
3) Countless organizations have implemented data mining projects, and it has helped them make unprecedented improvements in their marketing strategies (campaigns).
4) Data mining is generally used by organizations with a solid customer focus. Owing to its flexible applicability, it is widely used in applications to foresee crucial data, including industry analysis and consumer buying behavior.
5) Fast-paced and prompt access to data, along with economical processing techniques, has made data mining one of the most suitable services that a company seeks.

Advantages of Data Mining:

1. Marketing / Retail:

Data mining helps marketing companies build models based on historical data to predict who will respond to new marketing campaigns such as direct mail, online marketing campaigns, etc. Through the results, marketers can take an appropriate approach to selling profitable products to targeted customers.

Data mining brings a lot of benefits to retail companies in the same way as marketing. Through market basket analysis, a store can arrange its products so that customers can conveniently buy frequently purchased products together. In addition, it also helps retail companies offer certain discounts on particular products that will attract more customers.
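Market basket analysis of the kind described above can be sketched as pair-counting over transaction baskets. The baskets and items below (including the classic beer-diaper pair) are hypothetical:

```python
from itertools import combinations
from collections import Counter

# Hypothetical transaction baskets for a small store.
baskets = [{"beer", "diapers", "chips"},
           {"beer", "diapers"},
           {"bread", "milk"},
           {"beer", "diapers", "milk"},
           {"bread", "chips"}]

# Count item-pair co-occurrences (pair support counts).
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# support(a, b) = fraction of baskets containing both items;
# confidence(a -> b) = support(a, b) / support(a).
def support(pair):
    return pair_counts[tuple(sorted(pair))] / len(baskets)

def confidence(a, b):
    n_a = sum(1 for basket in baskets if a in basket)
    return pair_counts[tuple(sorted((a, b)))] / n_a

print(f"support(beer, diapers)    = {support(('beer', 'diapers')):.2f}")
print(f"confidence(beer->diapers) = {confidence('beer', 'diapers'):.2f}")
```

High-support, high-confidence pairs are candidates for shelf placement or bundled discounts; real systems (e.g. Apriori) extend this counting to larger itemsets.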


2. Finance / Banking

Data mining gives financial institutions information about loans and credit reporting. By building a model from historical customer data, banks and financial institutions can distinguish good loans from bad ones. In addition, data mining helps banks detect fraudulent credit card transactions to protect credit card owners.

3. Manufacturing

By applying data mining to operational engineering data, manufacturers can detect faulty equipment and determine optimal control parameters. For example, semiconductor manufacturers face the challenge that, even when the conditions of manufacturing environments at different wafer production plants are similar, the quality of the wafers is not the same, and some, for unknown reasons, even have defects. Data mining has been applied to determine the ranges of control parameters that lead to the production of golden wafers. Those optimal control parameters are then used to manufacture wafers of the desired quality.

4. Governments

Data mining helps government agencies by digging into and analyzing records of financial transactions to build patterns that can detect money laundering or criminal activities.

5. Law enforcement:

Data mining can aid law enforcers in identifying criminal suspects as well as apprehending these criminals by examining trends in location, crime type, habit, and other patterns of behavior.

6. Researchers:

Data mining can assist researchers by speeding up their data analysis process, thus allowing them more time to work on other projects.


2. LITERATURE SURVEY

1) Personalized point-of-interest recommendation by mining users’ preference transition

AUTHORS: X. Liu, Y. Liu, K. Aberer, and C. Miao

Location-based social networks (LBSNs) offer researchers rich data to study people's online
activities and mobility patterns. One important application of such studies is to provide
personalized point-of-interest (POI) recommendations to enhance user experience in LBSNs.
Previous solutions directly predict users' preference on locations but fail to provide insights about
users' preference transitions among locations. In this work, we propose a novel category-aware
POI recommendation model, which exploits the transition patterns of users' preference over
location categories to improve location recommendation accuracy. Our approach consists of two
stages: (1) preference transition (over location categories) prediction, and (2) category-aware POI
recommendation. Matrix factorization is employed to predict a user's preference transitions over
categories and then her preference on locations in the corresponding categories. Real data based
experiments demonstrate that our approach outperforms the state-of-the-art POI recommendation
models by at least 39.75% in terms of recall.
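Matrix factorization, as used in the abstract above, can be illustrated generically. The sketch below is a plain SGD factorization of a small hypothetical preference matrix; it is not the authors' actual category-transition model, only the underlying technique:

```python
import random

# Generic matrix factorization by stochastic gradient descent.
# R holds hypothetical user-category preferences; 0 marks an
# unobserved entry whose value we want to predict.
random.seed(0)

R = [[5, 3, 0],
     [4, 0, 1],
     [1, 1, 5]]
n_users, n_items, k = 3, 3, 2   # k latent factors

P = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
Q = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]

def predict(u, i):
    """Dot product of user and item latent factor vectors."""
    return sum(P[u][f] * Q[i][f] for f in range(k))

lr, reg = 0.02, 0.01
for _ in range(3000):                     # SGD over observed entries only
    for u in range(n_users):
        for i in range(n_items):
            if R[u][i] == 0:
                continue
            err = R[u][i] - predict(u, i)
            for f in range(k):
                P[u][f] += lr * (err * Q[i][f] - reg * P[u][f])
                Q[i][f] += lr * (err * P[u][f] - reg * Q[i][f])

print(round(predict(0, 0), 1))   # reconstruction of an observed entry
print(round(predict(0, 2), 1))   # filled-in value for the unobserved entry
```

After training, the learned factors reconstruct the observed preferences and fill in the missing ones, which is the mechanism the paper uses (at category level) before ranking locations.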

2) SPORE: A sequential personalized spatial item recommender system


AUTHORS: W. Wang, H. Yin, S. Sadiq, L. Chen, M. Xie, and X. Zhou

With the rapid development of location-based social networks (LBSNs), spatial item
recommendation has become an important way of helping users discover interesting locations to
increase their engagement with location-based services. Although human movement exhibits
sequential patterns in LBSNs, most current studies on spatial item recommendations do not
consider the sequential influence of locations. Leveraging sequential patterns in spatial item
recommendation is, however, very challenging, considering 1) users' check-in data in LBSNs has


a low sampling rate in both space and time, which renders existing prediction techniques on GPS
trajectories ineffective; 2) the prediction space is extremely large, with millions of distinct
locations as the next prediction target, which impedes the application of classical Markov chain
models; and 3) there is no existing framework that unifies users' personal interests and the
sequential influence in a principled manner. In light of the above challenges, we propose a
sequential personalized spatial item recommendation framework (SPORE) which introduces a
novel latent variable topic-region to model and fuse sequential influence with personal interests in
the latent and exponential space. The advantages of modeling the sequential effect at the topic-
region level include a significantly reduced prediction space, an effective alleviation of data
sparsity and a direct expression of the semantic meaning of users' spatial activities. Furthermore,
we design an asymmetric Locality Sensitive Hashing (ALSH) technique to speed up the online
top-k recommendation process by extending the traditional LSH. We evaluate the performance of
SPORE on two real datasets and one large-scale synthetic dataset. The results demonstrate a
significant improvement in SPORE's ability to recommend spatial items, in terms of both
effectiveness and efficiency, compared with the state-of-the-art methods.

3) CEPR: A collaborative exploration and periodically returning model for location prediction

AUTHORS: D. Lian, X. Xie, V. W. Zheng, N. J. Yuan, F. Zhang, and E. Chen

With the growing popularity of location-based social networks, numerous location visiting records
(e.g., check-ins) continue to accumulate over time. The more these records are collected, the better
we can understand users’ mobility patterns and the more accurately we can predict their future
locations. However, due to the personality trait of neophilia, people also show propensities for novelty seeking in human mobility, that is, exploring unvisited but personally suited locations to visit. As such, the existing prediction algorithms, mainly relying on regular mobility patterns, face
severe challenges because such behavior is beyond the reach of regularity. As a matter of fact, the
prediction of this behavior not only relies on the forecast of novelty-seeking tendency but also
depends on how to determine unvisited candidate locations. To this end, we put forward a
Collaborative Exploration and Periodically Returning model (CEPR), based on a novel problem,


Exploration Prediction (EP), which forecasts whether people will seek unvisited locations to visit next. When people are predicted to explore, a state-of-the-art recommendation
algorithm, armed with collaborative social knowledge and assisted by geographical influence, will
be applied for seeking the suitable candidates; otherwise, a traditional prediction algorithm,
incorporating both regularity and the Markov model, will be put into use for figuring out the most
possible locations to visit. We then perform case studies on check-ins and evaluate them on two
large-scale check-in datasets with 6M and 36M records, respectively. The evaluation results show
that EP achieves a roughly 20% classification error rate on both datasets, greatly outperforming
the baselines, and that CEPR improves performances by as much as 30% compared to the
traditional location prediction algorithms.

4) Mining user mobility features for next place prediction in location-based services

AUTHORS: A. Noulas, S. Scellato, N. Lathia, and C. Mascolo

Mobile location-based services are thriving, providing an unprecedented opportunity to collect fine-grained spatio-temporal data about the places users visit. This multi-dimensional source of
data offers new possibilities to tackle established research problems on human mobility, but it also
opens avenues for the development of novel mobile applications and services. In this work we
study the problem of predicting the next venue a mobile user will visit, by exploring the predictive
power offered by different facets of user behavior. We first analyze about 35 million check-ins
made by about 1 million Foursquare users in over 5 million venues across the globe, spanning a
period of five months. We then propose a set of features that aim to capture the factors that may
drive users' movements. Our features exploit information on transitions between types of places,
mobility flows between venues, and spatio-temporal characteristics of user check-in patterns. We
further extend our study combining all individual features in two supervised learning models,
based on linear regression and M5 model trees, resulting in a higher overall prediction accuracy.
We find that the supervised methodology based on the combination of multiple features offers the
highest levels of prediction accuracy: M5 model trees are able to rank in the top fifty venues one
in two user check-ins, amongst thousands of candidate items in the prediction list.


5) Friendship and mobility: user movement in location-based social networks


AUTHORS: E. Cho, S. A. Myers, and J. Leskovec

Even though human movement and mobility patterns have a high degree of freedom and variation,
they also exhibit structural patterns due to geographic and social constraints. Using cell phone
location data, as well as data from two online location-based social networks, we aim to understand
what basic laws govern human motion and dynamics. We find that humans experience a
combination of periodic movement that is geographically limited and seemingly random jumps
correlated with their social networks. Short-ranged travel is periodic both spatially and temporally
and not affected by the social network structure, while long-distance travel is more influenced by
social network ties. We show that social relationships can explain about 10% to 30% of all human movement, while periodic behavior explains 50% to 70%. Based on our findings, we develop a
model of human mobility that combines periodic short range movements with travel due to the
social network structure. We show that our model reliably predicts the locations and dynamics of
future human movement and gives an order of magnitude better performance than present models
of human mobility.


3. SYSTEM ANALYSIS

3.1 EXISTING SYSTEM:

 Approaches based on geographical closeness perform the prediction task by taking advantage of geographical feature mining. Jeung et al. combined the motion function and the extracted objects' movement patterns to estimate a moving object's future locations.
 Monreale et al. utilized trajectory patterns equipped with temporal information as predictive rules.
 J. Lee et al. proposed a unified framework to include various temporal tightness patterns. In these methods, the authors usually defined a spatial trajectory as an ordered sequence of locations, and extracted association rules from the historical trajectory dataset using two different methods.

DISADVANTAGES OF EXISTING SYSTEM:

 It is very challenging due to the check-in data’s inherent characteristics.
 The recent emerging trend of more and more check-in applications makes the prediction problem complex.
 This leads to a third, usually ignored but increasingly difficult challenge, which we will focus on in this article, i.e., variety. The check-in activities in different applications do not follow the same patterns.

3.2 PROPOSED SYSTEM:

 In this paper, we propose a new feature fusion approach, i.e., GlobAL feature fusion for LOcation Prediction (GALLOP), to cope with the variety problem in location prediction and to improve the applicability of the location prediction approach.
 We utilize several kinds of features and discuss their different characteristics in the variety of check-in scenarios. Three classes of features are used in GALLOP: context feature


(geographical aspects), collaboration feature (users’ latent interest space) and content
feature (places’ description attributes).
 We introduce intuitive ways to model these check-in features and then formalize a
combination framework to deliver the predicted target places to end users.
 First, we investigate the differences among several representative check-in scenarios from the spatial and temporal aspects. We argue that these varying check-in scenarios call for more general location prediction methods.
 Second, we demonstrate the combination power of location features from a novel angle. We not only utilize the different classes of context, collaboration and content information, but also factorize them in a new way to improve the prediction robustness and generality.
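One simple way to picture such a feature fusion (not the paper's exact formulation) is a weighted combination of per-feature scores, where the weights would be learned from training data. All places, scores, and weights here are hypothetical:

```python
# Illustrative fusion of three per-place scores (context, collaboration,
# content) into one ranking. In GALLOP the weights would be learned from
# training data; here they are fixed for illustration.

candidates = {
    "cafe":    {"context": 0.9, "collab": 0.2, "content": 0.4},
    "gym":     {"context": 0.3, "collab": 0.8, "content": 0.5},
    "library": {"context": 0.5, "collab": 0.6, "content": 0.9},
}
weights = {"context": 0.5, "collab": 0.3, "content": 0.2}

def fused_score(scores):
    """Weighted sum of the three feature scores for one candidate place."""
    return sum(weights[f] * scores[f] for f in weights)

# Rank candidate places by fused score, highest first.
ranking = sorted(candidates, key=lambda p: fused_score(candidates[p]),
                 reverse=True)
print(ranking)
```

The point of the fusion is that a place weak on one feature (e.g. geographical closeness) can still rank first if the other features compensate, which is what makes the combined predictor robust across check-in scenarios.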

ADVANTAGES OF PROPOSED SYSTEM:

 The proposed GALLOP prediction approach is not only general over different check-in scenarios but also comprehensive across different features. For the context feature, we design a multiple-granularity model to profile geographical closeness.
 We select the predicted candidates based on the combination of local district, local city and state scales. The weights of each scale are learned from training data.
 Finally, an extensive study over several real datasets reveals the improvement and advantage of our approach. We provide an empirical study with several competing methods.
 Detailed experiments show the different behaviors of the prediction methods, and prove that the general location prediction approach is a better choice for tackling the location prediction challenges.


3.3 SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

• System : Pentium IV 2.4 GHz
• Hard Disk : 40 GB
• Floppy Drive : 1.44 MB
• Monitor : 15" VGA Colour
• Mouse : Logitech
• RAM : 1 GB

SOFTWARE REQUIREMENTS:

• Operating System : Windows 7
• Coding Language : C#.NET
• IDE : Microsoft Visual Studio 2008
• Database : MS SQL Server 2005


4. SYSTEM DESIGN

4.1 SYSTEM ARCHITECTURE:

Fig. 4.1 System architecture


4.2 SYSTEM STUDY

FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are

 ECONOMIC FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY

ECONOMIC FEASIBILITY

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.


TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.

SOCIAL FEASIBILITY

This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to offer constructive criticism, which is welcomed, as he is the final user of the system.


4.3 UML DIAGRAMS

UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling language in the field of object-oriented software engineering. The standard is managed, and was created by, the Object Management Group.
The goal is for UML to become a common language for creating models of object-oriented computer software. In its current form, UML comprises two major components: a meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing, constructing and documenting the artifacts of software systems, as well as for business modeling and other non-software systems.
The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects.

GOALS:
The primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling language so that they can develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations, frameworks, patterns and components.
7. Integrate best practices.


DATA FLOW DIAGRAM:


1. The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on this data, and the output data generated by the system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the system components. These components are the system processes, the data used by the processes, the external entities that interact with the system, and the information flows in the system.
3. A DFD shows how information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.

[Data flow diagram: starting from Start, the User flow covers Register, Login, Update Status, Find Friends, Send Request, View Friend Requests & Friends, and User's Check-In; the Admin flow covers Login, View User Details, User's Mobility Logs, and Predict User's Next Location.]

Fig. 4.2 Data flow diagram


USE CASE DIAGRAM:


A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram defined by and created from a use-case analysis. Its purpose is to present a graphical overview of the functionality provided by a system in terms of actors, their goals (represented as use cases), and any dependencies between those use cases. The main purpose of a use case diagram is to show which system functions are performed for which actor. The roles of the actors in the system can be depicted.

[Use case diagram: the User participates in Register, Login, Update Status, Find Friends, Send Request, View Friend Requests & Friends, and User's Check-In; the Admin participates in Login, View User Details, User's Mobility Logs, and Predict User's Next Location.]

Fig. 4.3 Use case diagram


CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among the classes. It shows which class contains which information.

[Class diagram: the USER class holds Register and Login, with operations Newsfeeds(), Status Update(), Check In(), Find Friends(), Send Request() and Response Requests(); the ADMIN class holds Login, with operations View User Details(), User's Mobility Logs() and Predict User's Next Location().]

Fig. 4.4 Class diagram


SEQUENCE DIAGRAM:
A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram that
shows how processes operate with one another and in what order. It is a construct of a Message
Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, and
timing diagrams.

[Sequence diagram: interactions among the User, the Cloud Server and the Database, covering registration, login, viewing all users and user details, the user's mobility logs, predicting the user's next location, viewing the profile, check-ins, finding friends, sending and responding to friend requests, and newsfeed and status updates.]


ACTIVITY DIAGRAM:
Activity diagrams are graphical representations of workflows of stepwise activities and actions
with support for choice, iteration and concurrency. In the Unified Modeling Language, activity
diagrams can be used to describe the business and operational step-by-step workflows of
components in a system. An activity diagram shows the overall flow of control.

[Activity diagram: starting from Start, the User branch covers Register, Login, Update Status, Find Friends, Send Request, View Friend Requests & Friends, and User's Check-In; the Admin branch covers Login, View User Details, User's Mobility Logs, and Predict User's Next Location.]


5. INPUT DESIGN AND OUTPUT DESIGN

INPUT DESIGN

The input design is the link between the information system and the user. It comprises developing specifications and procedures for data preparation, and the steps necessary to put transaction data into a usable form for processing. This can be achieved by having the computer read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considered the following things:

 What data should be given as input?
 How should the data be arranged or coded?
 The dialog to guide the operating personnel in providing input.
 Methods for preparing input validations and steps to follow when errors occur.

OBJECTIVES

1. Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the correct direction to the management for getting correct information from the computerized system.

2. It is achieved by creating user-friendly screens for data entry to handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry


screen is designed in such a way that all data manipulations can be performed. It also provides record viewing facilities.

3. When the data is entered, it is checked for validity. Data can be entered with the help of screens. Appropriate messages are provided as and when needed, so that the user is never left confused. Thus the objective of input design is to create an input layout that is easy to follow.

OUTPUT DESIGN

A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design, it is determined how the information is to be displayed for immediate need, as well as the hard copy output. It is the most important and direct source of information for the user. Efficient and intelligent output design improves the system's role in supporting user decision-making.

1. Designing computer output should proceed in an organized, well-thought-out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy to use and effective. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.

2. Select methods for presenting information.

3. Create document, report, or other formats that contain information produced by the system.

The output form of an information system should accomplish one or more of the following
objectives.

 Convey information about past activities, current status or projections of the future.
 Signal important events, opportunities, problems, or warnings.
 Trigger an action.
 Confirm an action.


Software Environment

FEATURES OF .NET:

Microsoft .NET is a set of Microsoft software technologies for rapidly building and
integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The
.NET Framework is a language-neutral platform for writing programs that can easily and securely
interoperate. There’s no language barrier with .NET: there are numerous languages available to
the developer including Managed C++, C#, Visual Basic and Java Script. The .NET framework
provides the foundation for components to interact seamlessly, whether locally or remotely on
different platforms. It standardizes common data types and communications protocols so that
components created in different languages can easily interoperate.

“.NET” is also the collective name given to various software components built
upon the .NET platform. These will be both products (Visual Studio.NET and Windows.NET Server,
for instance) and services (like Passport, .NET My Services, and so on).

THE .NET FRAMEWORK

The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).

2. A hierarchical set of class libraries.


The CLR is described as the “execution engine” of .NET. It provides the environment within which
programs run. Its most important features are:

 Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
 Memory management, notably including garbage collection.
 Checking and enforcing security restrictions on the running code.
 Loading and executing programs, with version control and other such features.

The following features of the .NET Framework are also worth describing:

Managed Code

Managed code is code that targets .NET and contains certain extra information - “metadata” - to
describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed
code contains the information that allows the CLR to guarantee, for instance, safe execution
and interoperability.

Managed Data

With managed code comes managed data. The CLR provides memory allocation and deallocation
facilities, and garbage collection. Some .NET languages use managed data by default, such as
C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not. Targeting the CLR
can, depending on the language you’re using, impose certain constraints on the features
available. As with managed and unmanaged code, one can have both managed and unmanaged data
in .NET applications - data that doesn’t get garbage collected but instead is looked after by
unmanaged code.

Common Type System

The CLR uses the Common Type System (CTS) to strictly enforce type safety. This ensures
that all classes are compatible with each other by describing types in a common way. The CTS
defines how types work within the runtime, which enables types in one language to interoperate
with types in another language, including cross-language exception handling. As well as
ensuring that types are only used in appropriate ways, the runtime also ensures that code
doesn’t attempt to access memory that hasn’t been allocated to it.

Common Language Specification

The CLR provides built-in support for language interoperability. To ensure that you can
develop managed code that can be fully used by developers using any programming language, a
set of language features and rules for using them called the Common Language Specification
(CLS) has been defined. Components that follow these rules and expose only CLS features are
considered CLS-compliant.

THE CLASS LIBRARY

.NET provides a single-rooted hierarchy of classes, containing over 7,000 types. The
root of the namespace is called System; this contains basic types like Byte, Double, Boolean, and
String, as well as Object. All objects derive from System.Object. As well as objects, there are
value types. Value types can be allocated on the stack, which can provide useful flexibility. There
are also efficient means of converting value types to object types if and when necessary.

The set of classes is pretty comprehensive, providing collections, file, screen, and
network I/O, threading, and so on, as well as XML and database connectivity.

The class library is subdivided into a number of sets (or namespaces), each
providing distinct areas of functionality, with dependencies between the namespaces kept to a
minimum.

LANGUAGES SUPPORTED BY .NET

The multi-language capability of the .NET Framework and Visual Studio .NET
enables developers to use their existing programming skills to build all types of applications and
XML Web services. The .NET framework supports new versions of Microsoft’s old favorites Visual
Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to
the family.

Visual Basic .NET has been updated to include many new and improved language
features that make it a powerful object-oriented programming language. These features include
inheritance, interfaces, and overloading, among others. Visual Basic .NET now also supports
structured exception handling, custom attributes, and multithreading.

Visual Basic .NET is also CLS compliant, which means that any CLS-compliant
language can use the classes, objects, and components you create in Visual Basic .NET.


Managed Extensions for C++ and attributed programming are just some of the
enhancements made to the C++ language. Managed Extensions simplify the task of migrating
existing C++ applications to the new .NET Framework.

C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for
Rapid Application Development”. Unlike other languages, its specification is just the grammar of
the language. It has no standard library of its own, and instead has been designed with the
intention of using the .NET libraries as its own.

Microsoft Visual J# .NET provides the easiest transition for Java-language
developers into the world of XML Web services and dramatically improves the interoperability
of Java-language programs with existing software written in a variety of other programming
languages.

Active State has created Visual Perl and Visual Python, which enable .NET-aware
applications to be built in either Perl or Python. Both products can be integrated into the Visual
Studio .NET environment. Visual Perl includes support for Active State’s Perl Dev Kit.

Other languages for which .NET compilers are available include

 FORTRAN
 COBOL
 Eiffel


The .NET Framework stack, from top to bottom: ASP.NET and Windows Forms; XML Web services;
the base class libraries; the Common Language Runtime; and the operating system.

C#.NET is also compliant with the CLS (Common Language Specification) and supports
structured exception handling. The CLS is a set of rules and constructs that are supported by
the CLR (Common Language Runtime). The CLR is the runtime environment provided by the .NET
Framework; it manages the execution of code and also makes the development process easier by
providing services.

C#.NET is a CLS-compliant language. Any objects, classes, or components created in
C#.NET can be used in any other CLS-compliant language. In addition, we can use objects,
classes, and components created in other CLS-compliant languages in C#.NET. The use of the
CLS ensures complete interoperability among applications, regardless of the languages used
to create them.


CONSTRUCTORS AND DESTRUCTORS:

Constructors are used to initialize objects, whereas destructors are used to destroy them.
In other words, destructors are used to release the resources allocated to the object. In
C#.NET, the Finalize method is available. The Finalize method is used to complete the tasks
that must be performed when an object is destroyed, and it is called automatically when an
object is destroyed. In addition, the Finalize method can be called only from the class it
belongs to or from derived classes.
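The constructor/finalizer pairing can be sketched outside C# as well. The following Python sketch (the class name and resource are invented for illustration) acquires a resource in the constructor and releases it in the finalizer:

```python
import os

class FileLogger:
    """Illustrative resource-owning class (not from this project): the
    constructor acquires a file handle and the finalizer releases it,
    mirroring the constructor/finalizer pairing described above."""

    def __init__(self, path):
        self.handle = open(path, "w")  # constructor: acquire the resource
        self.closed = False

    def close(self):
        # explicit, deterministic release
        if not self.closed:
            self.handle.close()
            self.closed = True

    def __del__(self):
        # finalizer analogue of C#'s Finalize: runs when the object is destroyed
        self.close()
```

In practice an explicit `close()` (like `Dispose` in .NET) is preferred, with the finalizer as a safety net.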

GARBAGE COLLECTION

Garbage Collection is another new feature in C#.NET. The .NET Framework monitors
allocated resources, such as objects and variables. In addition, the .NET Framework
automatically releases memory for reuse by destroying objects that are no longer in use.

In C#.NET, the garbage collector checks for the objects that are not currently in use by
applications. When the garbage collector comes across an object that is marked for garbage
collection, it releases the memory occupied by the object.

OVERLOADING

Overloading is another feature of C#. Overloading enables us to define multiple methods
with the same name, where each method has a different set of arguments. Besides using
overloading for methods, we can use it for constructors and properties in a class.
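C#-style overloading has no direct Python equivalent, but the idea of one name dispatching to different implementations by argument type can be approximated with the standard library's `functools.singledispatch`. The shapes below are purely illustrative:

```python
from functools import singledispatch

@singledispatch
def area(shape):
    # fallback when no "overload" matches the argument type
    raise TypeError("unsupported shape")

@area.register
def _(radius: float):
    # "overload" for circles given by radius
    return 3.14159 * radius * radius

@area.register
def _(sides: tuple):
    # "overload" for rectangles given as (width, height)
    w, h = sides
    return w * h
```

Each registered implementation is selected by the type of the first argument, much as the C# compiler selects an overload by signature.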

MULTITHREADING:

C#.NET also supports multithreading. An application that supports multithreading can handle
multiple tasks simultaneously; we can use multithreading to decrease the time taken by an
application to respond to user interaction.
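The idea of handling multiple tasks simultaneously can be sketched in a few lines of Python threading (the worker function and documents are invented for illustration):

```python
import threading

def word_count(text, results, idx):
    # each thread handles one document, so the tasks run concurrently
    results[idx] = len(text.split())

docs = ["location prediction from check-ins", "feature fusion"]
results = [0] * len(docs)
threads = [threading.Thread(target=word_count, args=(d, results, i))
           for i, d in enumerate(docs)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for all workers before using the results
```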

STRUCTURED EXCEPTION HANDLING


C#.NET supports structured exception handling, which enables us to detect and handle
errors at runtime. In C#.NET, we use try…catch…finally statements to create exception
handlers. Using try…catch…finally statements, we can create robust and effective exception
handlers to improve the robustness of our application.
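The try…catch…finally structure has a direct analogue in Python's try/except/finally. A minimal sketch (the division example is illustrative, not from the project):

```python
log = []

def safe_divide(a, b):
    # mirrors C#'s try...catch...finally: the handler catches the runtime
    # error, and the finally block always runs
    try:
        return a / b
    except ZeroDivisionError:
        return None          # the "catch" path: recover from the error
    finally:
        log.append("cleanup")  # runs on both the normal and error paths
```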

THE .NET FRAMEWORK

The .NET Framework is a new computing platform that simplifies application development in
the highly distributed environment of the Internet.

OBJECTIVES OF. NET FRAMEWORK

1. To provide a consistent object-oriented programming environment, whether object code is
stored and executed locally, executed locally but Internet-distributed, or executed remotely.

2. To provide a code-execution environment that minimizes software deployment conflicts and
guarantees safe execution of code.

3. To eliminate performance problems.

There are different types of application, such as Windows-based applications and Web-based
applications.


SQL-SERVER

The OLAP Services feature available in SQL Server version 7.0 is now
called SQL Server 2000 Analysis Services. The term OLAP Services has been replaced
with the term Analysis Services. Analysis Services also includes a new data mining
component. The Repository component available in SQL Server version 7.0 is now
called Microsoft SQL Server 2000 Meta Data Services. References to the
component now use the term Meta Data Services. The term repository is used only
in reference to the repository engine within Meta Data Services.

A SQL Server database consists of the following types of objects:

1. TABLE

2. QUERY

3. FORM

4. REPORT

5. MACRO


TABLE:

A table is a collection of data about a specific topic.

VIEWS OF TABLE:

We can work with a table in two types,

1. Design View

2. Datasheet View

Design View

To build or modify the structure of a table, we work in the table design view, where we can
specify what kind of data each field will hold.

Datasheet View

To add, edit, or analyze the data itself, we work in the table’s datasheet view mode.


QUERY:

A query is a question that is asked of the data. Access gathers the data that answers the
question from one or more tables. The data that makes up the answer is either a dynaset (if
you can edit it) or a snapshot (if it cannot be edited). Each time we run the query, we get
the latest information in the dynaset. Access either displays the dynaset or snapshot for us
to view, or performs an action on it, such as deleting or updating.
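The idea of a query as a question asked of the data, re-run to get the latest answer, can be sketched with an in-memory SQLite database (the table and data are illustrative, not this project's schema):

```python
import sqlite3

# a small in-memory table to ask questions of
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (user TEXT, place TEXT)")
conn.executemany("INSERT INTO visits VALUES (?, ?)",
                 [("alice", "cafe"), ("alice", "park"), ("bob", "cafe")])

# the "question": how many visits does each place have?
# re-running it always reflects the latest data, like a dynaset
rows = conn.execute(
    "SELECT place, COUNT(*) FROM visits GROUP BY place ORDER BY place"
).fetchall()
```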


6.IMPLEMENTATION

MODULES:

 Context Feature
 Collaboration Feature
 Content Feature

MODULE DESCRIPTIONS:

Context Feature:

It refers to the spatial dimension. Users’ check-in activities are distributed in a spatial
scope. Nearby places can contribute to the representation of users’ check-in records,
especially when the users visit a focused set of places. Density closeness of users’
check-in logs from the spatial perspective has received a lot of attention. It has shown
advantage over traditional spatial modeling, both in flexibility and accuracy. We
follow similar motivation and provide our own design to extract users’ preference
from the spatial aspects.
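A minimal sketch of this spatial idea, assuming a Gaussian kernel over raw coordinates (the bandwidth value and plain Euclidean distance are simplifications, not the paper's exact design):

```python
import math

def spatial_density(checkins, point, bandwidth=0.01):
    """Gaussian kernel density of past check-ins evaluated at `point`.
    `checkins` and `point` are (lat, lon) pairs; a sketch of the
    spatial-closeness idea only."""
    lat0, lon0 = point
    total = 0.0
    for lat, lon in checkins:
        d2 = (lat - lat0) ** 2 + (lon - lon0) ** 2
        # nearby check-ins contribute close to 1, distant ones close to 0
        total += math.exp(-d2 / (2 * bandwidth ** 2))
    return total / len(checkins)
```

Candidate places near a user's historical check-ins receive a higher density score than places far from them.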

Collaboration Feature:

It refers to the collaborative dimension. It is contributed from a large group of users’


records. Similar to the collaborative filtering formalization in recommendation
systems, here the users’ check-in activities can be reduced into a latent dimension,
representing their collaboration preferences.

Although the commonly used collaborative feature extraction method usually delivers
reasonable performance in many scenarios, it does not perform well in the check-in setting,
as revealed in our initial study. The causes of the unsatisfying performance are twofold.
First, the check-in matrix R is very sparse, i.e., it has many zero entries. A zero check-in
record means that the user never visits the corresponding place. This is probably because
he/she is not a fan of that place or because his/her location scope is rather focused. Second,
for some rated places, the frequency information is not enough. In the counterpart
representation of movie or product recommendation, users not only reveal their favorite items
with high ratings but also their least favorite items with low ratings. However, this
situation does not hold in the case of location prediction: the frequency of check-ins merely
implies the confidence level of users’ preference for the corresponding location, which makes
it barely serve as an explicit rating given by users.
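For contrast, a generic sketch of the kind of collaborative feature extraction discussed here: plain SGD matrix factorization over observed (user, place, count) triples. This is the baseline idea, not the paper's model, and all names and hyperparameters are illustrative:

```python
import random

def factorize(triples, k=2, steps=300, lr=0.05, reg=0.01):
    """Learn k-dimensional latent factors for users and places from
    (user, place, count) triples with stochastic gradient descent."""
    random.seed(0)
    P = {u: [random.random() for _ in range(k)] for u, _, _ in triples}
    Q = {i: [random.random() for _ in range(k)] for _, i, _ in triples}
    for _ in range(steps):
        for u, i, r in triples:
            err = r - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                # gradient step with L2 regularization
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

def predict(P, Q, u, i):
    # reconstructed check-in intensity from the latent factors
    return sum(P[u][f] * Q[i][f] for f in range(len(P[u])))
```

As the surrounding text notes, treating check-in counts as explicit ratings is exactly where this baseline struggles, which motivates the global feature fusion.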

Content Feature:

It refers to the place dimension. Places are not merely visited by users; they have inherent
attributes, i.e., categories, text descriptions and other kinds of annotations. We discuss
how the transition between places reveals users’ interest/preference over time. Further, the
transition patterns benefit the closeness extraction of places for better prediction. In this
paper, we use the categories of a location as its attribute description and discuss a general
way to obtain the content description features. POIs, i.e., places in check-in records, are
usually annotated with categories or attributes. Users’ transition patterns from one place to
another show the interest flow between these places. The closeness extracted from these
transition patterns can be used to predict the potential locations that the user will visit
next.
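The transition-pattern idea can be sketched as a first-order transition count over category sequences; the category names are invented for illustration:

```python
from collections import Counter, defaultdict

def build_transitions(sequences):
    """Count category-to-category transitions across users' check-in
    sequences -- a simple stand-in for the paper's closeness extraction."""
    trans = defaultdict(Counter)
    for seq in sequences:
        for src, dst in zip(seq, seq[1:]):
            trans[src][dst] += 1
    return trans

def predict_next(trans, current):
    # the most frequent successor approximates the "interest flow"
    if current not in trans:
        return None
    return trans[current].most_common(1)[0][0]
```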


SAMPLE CODE

<%@ Master Language="C#" AutoEventWireup="true" CodeFile="Home.master.cs" Inherits="Home" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"


"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns="http://www.w3.org/1999/xhtml">

<head runat="server">

<meta charset="utf-8">

<meta http-equiv="X-UA-Compatible" content="IE=edge">

<meta name="viewport" content="width=device-width, initial-scale=1">

<title>GALLOP</title>

<!-- Bootstrap -->

<link href="css/bootstrap.min.css" rel="stylesheet">

<link rel="stylesheet" href="css/animate.css">

<link rel="stylesheet" href="css/font-awesome.min.css">

<link rel="stylesheet" href="css/font-awesome.css">

<link rel="stylesheet" href="css/jquery.bxslider.css">

<link rel="stylesheet" href="css/jquery.bxslider.css">

<link rel="stylesheet" type="text/css" href="css/normalize.css" />


<link rel="stylesheet" type="text/css" href="css/demo.css" />

<link rel="stylesheet" type="text/css" href="css/set1.css" />

<link href="css/overwrite.css" rel="stylesheet">

<link href="css/style.css" rel="stylesheet">

<script src="js/countries.js" type="text/javascript"></script>

<!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->

<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->

<!--[if lt IE 9]>

<script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script>

<script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>

<![endif]-->

</head>

<body>

<nav class="navbar navbar-default navbar-fixed-top" role="navigation">

<div class="container">

<!-- Brand and toggle get grouped for better mobile display -->

<div class="navbar-header">

<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target=".navbar-collapse.collapse">

<span class="sr-only">Toggle navigation</span>

<span class="icon-bar"></span>


<span class="icon-bar"></span>

<span class="icon-bar"></span>

</button>

<a class="navbar-brand"
href=""><span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;GALLOP: </span><span style="font-
size:25px">GlobAL feature fused LOcation Prediction for Different Check-in Scenarios</span></a>

</div>

<div class="navbar-collapse collapse">

<div class="menu">

<ul class="nav nav-tabs" role="tablist">

<li role="presentation"><a
href="Home.aspx">Home</a></li>

<li role="presentation"><a
href="UserLogin.aspx">Users</a></li>

<li role="presentation"><a
href="AdminLogin.aspx">Admin</a></li>

<li role="presentation"><a href="Register.aspx">Create Account</a></li>

</ul>

</div>

</div>

</div>


</nav>

<div class="container">

<div class="row">

<div class="service">

<div class="">

<div class="">

<asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server">

</asp:ContentPlaceHolder>

</div>

<hr>

</div>

</div>

</div>

</div>

<footer>

<div class="last-div">

<div class="container">


<div class="row">

<div class="copyright">

<a target="_blank"
href="http://bootstraptaste.com"></a>

</div>

<!--

All links in the footer should remain intact.

Licenseing information is available at: http://bootstraptaste.com/license/

You can buy this theme without footer links online at:
http://bootstraptaste.com/buy/?theme=eNno

-->

</div>

</div>

<div class="container">

<div class="row">

<ul class="social-network">

<li><a href="#" data-placement="top"


title="Facebook"><i class="fa fa-facebook fa-1x"></i></a></li>

<li><a href="#" data-placement="top" title="Twitter"><i


class="fa fa-twitter fa-1x"></i></a></li>

<li><a href="#" data-placement="top"


title="Linkedin"><i class="fa fa-linkedin fa-1x"></i></a></li>

<li><a href="#" data-placement="top"


title="Pinterest"><i class="fa fa-pinterest fa-1x"></i></a></li>


<li><a href="#" data-placement="top" title="Google


plus"><i class="fa fa-google-plus fa-1x"></i></a></li>

</ul>

</div>

</div>

<a href="" class="scrollup"><i class="fa fa-chevron-up"></i></a>

</div>

</footer>

<!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->

<script src="js/jquery-2.1.1.min.js"></script>

<!-- Include all compiled plugins (below), or include individual files as needed -->

<script src="js/bootstrap.min.js"></script>

<script src="js/wow.min.js"></script>

<script src="js/jquery.easing.1.3.js"></script>

<script src="js/jquery.isotope.min.js"></script>

<script src="js/jquery.bxslider.min.js"></script>

<script type="text/javascript" src="js/fliplightbox.min.js"></script>

<script src="js/functions.js"></script>

<script type="text/javascript">$('.portfolio').flipLightBox()</script>

</body>

</html>


7.SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, subassemblies, assemblies and/or a finished product. It is the
process of exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of test, and each test type addresses a specific testing requirement.

TYPES OF TESTS:

Unit testing
Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly and that program inputs produce valid outputs. All
decision branches and internal code flow should be validated. It is the testing of individual
software units of the application, and it is done after the completion of an individual unit
and before integration. This is structural testing that relies on knowledge of the unit’s
construction and is invasive. Unit tests perform basic tests at the component level and test
a specific business process, application, and/or system configuration. Unit tests ensure that
each unique path of a business process performs accurately to the documented specifications
and contains clearly defined inputs and expected results.
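A minimal unit-test sketch of the idea above: one hypothetical unit (not a function from this project) with assertions covering its expected paths:

```python
def normalize_place_name(name):
    """Hypothetical unit under test: canonicalize a place name."""
    return name.strip().lower()

def test_normalize_place_name():
    # one assertion per unique path/expectation, as described above
    assert normalize_place_name("  Cafe ") == "cafe"   # trims whitespace
    assert normalize_place_name("PARK") == "park"      # lowercases
    assert normalize_place_name("gym") == "gym"        # already canonical

test_normalize_place_name()
```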


Integration testing

Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of components
is correct and consistent. Integration testing is specifically aimed at exposing the problems
that arise from the combination of components.

Functional test

Functional tests provide systematic demonstrations that the functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.
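The valid/invalid input items above can be sketched as a simple validation check; the username rule here is illustrative, not the project's actual rule:

```python
def validate_username(name):
    # illustrative rule: 3-20 alphanumeric characters
    return name.isalnum() and 3 <= len(name) <= 20

# valid input must be accepted, invalid input must be rejected
accepted = [n for n in ["alice01", "bob"] if validate_username(n)]
rejected = [n for n in ["", "a!", "x" * 25] if not validate_username(n)]
```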


Organization and preparation of functional tests is focused on requirements, key functions,
or special test cases. In addition, systematic coverage pertaining to identifying business
process flows, data fields, predefined processes, and successive processes must be considered
for testing. Before functional testing is complete, additional tests are identified and the
effective value of current tests is determined.

System Test
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.

White Box Testing


White box testing is testing in which the software tester has knowledge of the inner
workings, structure and language of the software, or at least its purpose. It is used to test
areas that cannot be reached from a black box level.

Black Box Testing


Black box testing is testing the software without any knowledge of the inner workings,
structure or language of the module being tested. Black box tests, like most other kinds of
tests, must be written from a definitive source document, such as a specification or
requirements document. It is testing in which the software under test is treated as a black
box: you cannot “see” into it. The test provides inputs and responds to outputs without
considering how the software works.

7.1 Unit Testing

Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted
as two distinct phases.

Test strategy and approach


Field testing will be performed manually and functional tests will be written
in detail.

Test objectives
 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.

Features to be tested
 Verify that the entries are of the correct format
 No duplicate entries should be allowed
 All links should take the user to the correct page.


7.2 Integration Testing

Software integration testing is the incremental integration testing of two or more integrated
software components on a single platform to produce failures caused by interface defects.

The task of the integration test is to check that components or software applications, e.g.
components in a software system or – one step up – software applications at the company level,
interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

7.3 Acceptance Testing

User acceptance testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.


8.OUTPUT SCREENS
HOME PAGE:

Fig 8.1 home page


USER REGISTRATION PAGE:

Fig 8.2 user registration


USER LOGIN PAGE:

Fig 8.3 user login


STATUS UPDATE PAGE:

Fig 8.4 status update


USER FIND FRIENDS:

Fig 8.5 find friend


USER CHECK IN:

Fig 8.6 user checkin


ADMIN LOGIN PAGE:

Fig 8.7 admin page


LOCATION PREDICTION PAGES:

Fig 8.8 location prediction


9.CONCLUSION
This paper presents a new feature fusion method for the location prediction problem.
We systematically analyze the check-in characteristics of different scenarios and propose to
model three categories of features and combine them in a global way. The geographical,
collaborative and categorical information are all utilized. We propose new models that
include more global features to improve the generality and robustness of the prediction
method. Besides, the approach is versatile and easy to extend. It shows an impressive
advantage on different datasets and significantly improves prediction accuracy.

This research has several interesting future directions, for example, better ways to improve
the feature preprocessing stage and to design a compact structure to maintain the extracted
features. It is also valuable to exploit the evolving factors in location prediction.
Additionally, the feature extraction methods proposed in this work can be extended to enable
incremental updating, and a new comprehensive location prediction and update setting can be
utilized.


10.REFERENCES

[1] J. Sang, T. Mei, J.-T. Sun, C. Xu, and S. Li, “Probabilistic sequential POIs recommendation via check-in data,” in Proc. of the 20th International Conference on Advances in Geographic Information Systems, 2012, pp. 402–405.

[2] X. Liu, Y. Liu, K. Aberer, and C. Miao, “Personalized point-of-interest recommendation by mining users’ preference transition,” in Proc. of CIKM, 2013, pp. 733–738.

[3] J.-D. Zhang, C.-Y. Chow, and Y. Li, “LORE: Exploiting sequential influence for location recommendations,” in Proc. of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, 2014, pp. 103–112.

[4] W. Wang, H. Yin, S. Sadiq, L. Chen, M. Xie, and X. Zhou, “SPORE: A sequential personalized spatial item recommender system,” in Proc. of ICDE, 2016, pp. 954–965.

[5] D. Lian, X. Xie, V. W. Zheng, N. J. Yuan, F. Zhang, and E. Chen, “CEPR: A collaborative exploration and periodically returning model for location prediction,” TIST, vol. 6, no. 1, p. 8, 2015.

[6] A. Noulas, S. Scellato, N. Lathia, and C. Mascolo, “Mining user mobility features for next place prediction in location-based services,” in Proc. of ICDM, 2012, pp. 1038–1043.

[7] J. Ye, Z. Zhu, and H. Cheng, “What’s your next move: User activity prediction in location-based social networks,” in Proc. of SDM, 2013.

[8] A. Noulas, S. Scellato, C. Mascolo, and M. Pontil, “An empirical study of geographic user activity patterns in Foursquare,” in Proc. of ICWSM, 2011, pp. 570–573.

[9] E. Cho, S. A. Myers, and J. Leskovec, “Friendship and mobility: User movement in location-based social networks,” in Proc. of SIGKDD, 2011, pp. 1082–1090.

[10] J. Bao, Y. Zheng, D. Wilkie, and M. Mokbel, “Recommendations in location-based social networks: A survey,” GeoInformatica, vol. 19, no. 3, pp. 525–565, Jul. 2015.

[11] D. Lian, X. Xie, F. Zhang, N. J. Yuan, T. Zhou, and Y. Rui, “Mining location-based social networks: A predictive perspective,” IEEE Data Engineering Bulletin, vol. 38, no. 2, pp. 35–46, 2015.

[12] D. Brockmann, L. Hufnagel, and T. Geisel, “The scaling laws of human travel,” Nature, vol. 439, no. 7075, pp. 462–465, 2006.

[13] M. C. Gonzalez, C. A. Hidalgo, and A.-L. Barabasi, “Understanding individual human mobility patterns,” Nature, vol. 453, no. 7196, pp. 779–782, 2008.

[14] D. Lian, C. Zhao, X. Xie, G. Sun, E. Chen, and Y. Rui, “GeoMF: Joint geographical modeling and matrix factorization for point-of-interest recommendation,” in Proc. of SIGKDD, 2014, pp. 831–840.

[15] M. Lichman and P. Smyth, “Modeling human location data with mixtures of kernel densities,” in Proc. of SIGKDD, 2014, pp. 35–44.

[16] J.-D. Zhang and C.-Y. Chow, “GeoSoCa: Exploiting geographical, social and categorical correlations for point-of-interest recommendations,” in Proc. of SIGIR, 2015, pp. 443–452.

[17] Y. Wang, N. J. Yuan, D. Lian, L. Xu, X. Xie, E. Chen, and Y. Rui, “Regularity and conformity: Location prediction using heterogeneous mobility data,” in Proc. of SIGKDD, 2015, pp. 1275–1284.

[18] B. W. Silverman, Density Estimation for Statistics and Data Analysis. CRC Press, 1986.

[19] R. A. Finkel and J. L. Bentley, “Quad trees: A data structure for retrieval on composite keys,” Acta Informatica, vol. 4, no. 1, pp. 1–9, 1974.

[20] Y. Koren, R. Bell, and C. Volinsky, “Matrix factorization techniques for recommender systems,” Computer, vol. 42, no. 8, pp. 30–37, Aug. 2009.

[21] D. Saad, “Online algorithms and stochastic approximations,” Online Learning, 1998.

[22] R. Motwani and P. Raghavan, Randomized Algorithms. Chapman & Hall/CRC, 2010.

[23] T. H. Haveliwala, “Topic-sensitive PageRank,” in Proc. of WWW, 2002, pp. 517–526.

[24] A. Monreale, F. Pinelli, R. Trasarti, and F. Giannotti, “WhereNext: A location predictor on trajectory pattern mining,” in Proc. of SIGKDD, 2009, pp. 637–646.

[25] M. Li, A. Ahmed, and A. J. Smola, “Inferring movement trajectories from GPS snippets,” in Proc. of WSDM, 2015, pp. 325–334.

[26] H. Yin, X. Zhou, Y. Shao, H. Wang, and S. Sadiq, “Joint modeling of user check-in behaviors for point-of-interest recommendation,” in Proc. of CIKM, 2015, pp. 1631–1640.

[27] C. D. Manning, P. Raghavan, H. Schütze et al., Introduction to Information Retrieval. Cambridge University Press, 2008.

[28] J. Lee, J. Han, and X. Li, “A unifying framework of mining trajectory patterns of various temporal tightness,” IEEE Trans. Knowl. Data Eng., vol. 27, no. 6, pp. 1478–1490, 2015.

[29] S. Barua and J. Sander, “Mining statistically significant co-location and segregation patterns,” IEEE Trans. Knowl. Data Eng., vol. 26, no. 5, pp. 1185–1199, 2014.

[30] J. Niedermayer, A. Züfle, T. Emrich, M. Renz, N. Mamoulis, L. Chen, and H. Kriegel, “Probabilistic nearest neighbor queries on uncertain moving object trajectories,” PVLDB, vol. 7, no. 3, pp. 205–216, 2013.

[31] Y. Fang, R. Cheng, W. Tang, S. Maniu, and X. S. Yang, “Scalable algorithms for nearest-neighbor joins on big trajectory data,” IEEE Trans. Knowl. Data Eng., vol. 28, no. 3, pp. 785–800, 2016.

[32] D. Flatow, M. Naaman, K. E. Xie, Y. Volkovich, and Y. Kanza, “On the accuracy of hyper-local geotagging of social media content,” in Proc. of WSDM, 2015, pp. 127–136.

[33] C. Zhang, J. Han, L. Shou, J. Lu, and T. La Porta, “Splitter: Mining fine-grained sequential patterns in semantic trajectories,” PVLDB, vol. 7, no. 9, 2014.

[34] J. Foley, M. Bendersky, and V. Josifovski, “Learning to extract local events from the web,” in Proc. of SIGIR, 2015, pp. 423–432.

[35] J. McGee, J. Caverlee, and Z. Cheng, “Location prediction in social media based on tie strength,” in Proc. of CIKM, 2013, pp. 459–468.

[36] D. Karamshuk, A. Noulas, S. Scellato, V. Nicosia, and C. Mascolo, “Geo-Spotting: Mining online location-based services for optimal retail store placement,” in Proc. of SIGKDD, 2013, pp. 793–801.

[37] A. Magdy, M. F. Mokbel, S. Elnikety, S. Nath, and Y. He, “Venus: Scalable real-time spatial queries on microblogs with adaptive load shedding,” IEEE Trans. Knowl. Data Eng., vol. 28, no. 2, pp. 356–370, 2016.

[38] H. Jeung, Q. Liu, H. T. Shen, and X. Zhou, “A hybrid prediction model for moving objects,” in Proc. of ICDE, 2008, pp. 70–79.

[39] E. Horvitz and J. Krumm, “Some help on the way: Opportunistic routing under uncertainty,” in Proc. of UbiComp, 2012, pp. 371–380.

[40] B. D. Ziebart, A. L. Maas, A. K. Dey, and J. A. Bagnell, “Navigate like a cabbie: Probabilistic reasoning from observed context-aware behavior,” in Proc. of UbiComp, 2008, pp. 322–331.

[41] J. Krumm and E. Horvitz, “Predestination: Inferring destinations from partial trajectories,” in Proc. of UbiComp, 2006, pp. 243–260.

[42] J. McInerney, A. Rogers, and N. R. Jennings, “Improving location prediction services for new users with probabilistic latent semantic analysis,” in Proc. of UbiComp, 2012, pp. 906–910.

[43] W. Wang, H. Yin, L. Chen, Y. Sun, S. Sadiq, and X. Zhou, “Geo-SAGE: A geographical sparse additive generative model for spatial item recommendation,” in Proc. of SIGKDD, 2015, pp. 1255–1264.

[44] Q. Yuan, G. Cong, Z. Ma, A. Sun, and N. M. Thalmann, “Time-aware point-of-interest recommendation,” in Proc. of SIGIR, 2013, pp. 363–372.

[45] F. Figueiredo, B. Ribeiro, J. M. Almeida, and C. Faloutsos, “TribeFlow: Mining & predicting user trajectories,” in Proc. of WWW, 2016, pp. 695–706.
