
Chapter 1

Introduction to Artificial Intelligence

1.1 What is Artificial Intelligence?

Artificial Intelligence is a way of making a computer, a computer-controlled robot, or a piece of software think intelligently, in a manner similar to the way intelligent humans think. AI is accomplished by studying how the human brain thinks and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.
Artificial Intelligence currently encompasses a huge variety of subfields, ranging from general-purpose areas, such as learning and perception, to specific tasks such as playing chess, proving mathematical theorems, writing poetry, and diagnosing diseases. Artificial Intelligence systematizes and automates intellectual tasks.

There are many definitions of artificial intelligence.

One describes it as ‘a scientific and engineering discipline devoted to understanding the principles that make intelligent behaviour possible in natural or artificial systems and to developing methods for the design and implementation of useful, intelligent artefacts.’
Another defines it as ‘the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.’

1.2 History of Artificial Intelligence

Work on artificial intelligence began in the middle of the 20th century.

1950: Alan Turing introduced the Turing Test for evaluating intelligence and published
Computing Machinery and Intelligence. Claude Shannon published a detailed
analysis of chess playing as a search problem.

1956: John McCarthy coined the term Artificial Intelligence. The first running AI
program was demonstrated at Carnegie Mellon University.

1958: John McCarthy invented the LISP programming language for AI.

1973: The Assembly Robotics group at Edinburgh University built Freddy, the Famous
Scottish Robot, capable of using vision to locate and assemble models.

1979: The first computer-controlled autonomous vehicle, Stanford Cart, was built.

1997: The Deep Blue chess program beat the then world chess champion, Garry
Kasparov.

2000: Interactive robot pets became commercially available. MIT displayed Kismet, a robot
with a face that expresses emotions. The robot Nomad explored remote regions of Antarctica
and located meteorites.

Today, in the 21st century, AI has reached a new level, with innovative ideas and products that help people.

1.3 Fields in Artificial Intelligence

Artificial intelligence has many branches, and its domain is enormous in breadth and depth. In what follows, we consider the most common and actively growing research areas in AI. Three main fields of artificial intelligence have a tremendous impact on current life:

1. Machine Learning
2. Natural Language Processing
3. Deep Learning

Fig 1.1: Fields in Artificial Intelligence

1.3.1 Machine Learning

Machine learning is a core sub-area of artificial intelligence, as it enables computers to enter a mode of self-learning without being explicitly programmed. It was born from pattern recognition and the theory that computers can learn without being programmed to perform specific tasks. Researchers interested in artificial intelligence wanted to see whether computers could learn from data. Machine learning systems learn from previous computations to produce reliable, repeatable decisions and results.

Although machine learning is a field within computer science, it differs from traditional computational approaches. In traditional computing, algorithms are sets of explicitly programmed instructions used by computers to calculate or solve problems. Machine learning algorithms instead allow computers to train on data inputs and use statistical analysis to output values that fall within a specific range. Because of this, machine learning helps computers build models from sample data in order to automate decision-making processes based on data inputs.

Fig 1.2: Machine Learning Methods

Machine Learning Methods:

In machine learning, tasks are generally classified into two broad categories, based on how learning is received or how feedback on the learning is given to the system being developed.
The two most widely adopted machine learning methods are supervised learning, which trains algorithms on example input and output data labelled by humans, and unsupervised learning, which provides the algorithm with no labelled data and lets it find structure within its input data on its own. Let’s explore these methods in more detail.

i) Supervised Learning
In supervised learning, the computer is provided with example inputs that are labelled
with their desired outputs. The purpose of this method is for the algorithm to be able to
“learn” by comparing its actual output with the “taught” outputs to find errors, and modify
the model accordingly. Supervised learning therefore uses patterns to predict label values on
additional unlabelled data.
For example, with supervised learning, an algorithm may be fed data with images of
sharks labelled as fish and images of oceans labelled as water. By being trained on this data,
the supervised learning algorithm should be able to later identify unlabelled shark images
as fish and unlabelled ocean images as water.
A common use case of supervised learning is to use historical data to predict statistically
likely future events. It may use historical stock market information to anticipate upcoming
fluctuations, or be employed to filter out spam emails. In supervised learning, tagged photos
of dogs can be used as input data to classify untagged photos of dogs.
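The spam and photo examples above can be made concrete with a short sketch. The following Python code is purely illustrative: the feature values and the choice of a decision-tree classifier are assumptions made for demonstration, not part of any real spam filter.

# A minimal supervised-learning sketch: train on labelled examples,
# then predict labels for new, unlabelled inputs (illustrative data only).
from sklearn.tree import DecisionTreeClassifier

# Each example is [number of links, count of the word "free"],
# labelled 1 for spam and 0 for not spam (values are invented).
X_train = [[8, 5], [6, 4], [7, 6], [0, 0], [1, 1], [0, 2]]
y_train = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)              # learn from the labelled examples

# Classify new, unlabelled examples.
print(model.predict([[9, 7], [0, 1]]))   # expected: [1 0]

The particular classifier is interchangeable; the point is only that supervised learning needs labelled input-output pairs to learn from.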

ii) Unsupervised Learning


In unsupervised learning, data is unlabelled, so the learning algorithm is left to find
commonalities among its input data. As unlabelled data are more abundant than labelled data,
machine learning methods that facilitate unsupervised learning are particularly valuable.
The goal of unsupervised learning may be as straightforward as discovering hidden patterns within a dataset, but it may also have a goal of feature learning, which allows the computational machine to automatically discover the representations that are needed to classify raw data.
Unsupervised learning is commonly used for transactional data. You may have a large
dataset of customers and their purchases, but as a human you will likely not be able to make
sense of what similar attributes can be drawn from customer profiles and their types of
purchases. With this data fed into an unsupervised learning algorithm, it may be determined
that women of a certain age range who buy unscented soaps are likely to be pregnant, and
therefore a marketing campaign related to pregnancy and baby products can be targeted to
this audience in order to increase their number of purchases.
Without being told a “correct” answer, unsupervised learning methods can look at complex, expansive, and seemingly unrelated data in order to organize it in potentially meaningful ways. Unsupervised learning is often used for anomaly detection, such as spotting fraudulent credit card purchases, and for recommender systems that suggest what products to buy next. In unsupervised learning, untagged photos of dogs can be used as input data for the algorithm to find likenesses and group dog photos together.
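A small clustering sketch illustrates the customer-segmentation example above. The data and the choice of k-means are assumptions made only for demonstration.

# A minimal unsupervised-learning sketch using k-means clustering.
# The "customer" rows below are invented for illustration.
from sklearn.cluster import KMeans

# Each row is [age, monthly spend on baby products, monthly spend on electronics].
customers = [
    [28, 120, 10], [31, 150, 5], [26, 90, 20],    # one natural group
    [45, 0, 300], [52, 5, 250], [48, 10, 280],    # another natural group
]

# No labels are given; the algorithm groups rows purely by similarity.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)   # e.g. [0 0 0 1 1 1] (cluster ids, not meanings)

The algorithm returns only cluster membership; interpreting what each cluster means (for example, likely buyers of baby products) is still left to a human analyst.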

1.3.2 Natural Language Processing

Natural language processing (NLP) is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages and, in particular, with programming computers to fruitfully process large natural-language corpora. Challenges in natural language processing frequently involve natural language understanding and natural language generation.
The development of NLP applications is challenging because computers traditionally require humans to "speak" to them in a programming language that is precise, unambiguous and highly structured, or through a limited number of clearly enunciated voice commands. Human speech, however, is not always precise; it is often ambiguous, and its linguistic structure can depend on many complex variables, including slang, regional dialects and social context.
Most of the research being done on natural language processing revolves around search,
especially enterprise search. This involves allowing users to query data sets in the form of a
question that they might pose to another person. The machine interprets the important
elements of the human language sentence, such as those that might correspond to specific
features in a data set, and returns an answer.
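The question-style search described above can be approximated with a very simple sketch: represent the documents and the query as TF-IDF vectors and return the best match. This is a generic technique shown for illustration, with invented documents; it is not how any particular search engine is implemented.

# A hedged sketch of question-style search over a tiny document set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "A debtor may file for bankruptcy under Chapter 7.",
    "Trademarks protect brand names and logos.",
    "A will distributes a person's assets after death.",
]
query = "How does someone declare bankruptcy?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

# Return the document most similar to the natural-language question.
scores = cosine_similarity(query_vector, doc_vectors)[0]
print(documents[scores.argmax()])

Real systems add far deeper linguistic analysis, but the core idea of matching the important elements of a question against features of the data set is the same.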
Google and other search engines base their machine translation technology on NLP deep
learning models. This allows algorithms to read text on a webpage, interpret its meaning and
translate it to another language.

1.3.3 Deep Learning

Deep learning is a class of machine learning algorithms that are based on the (unsupervised)
learning of multiple levels of features or representations of the data. Higher level features are
derived from lower level features to form a hierarchical representation.
Deep learning is a subfield of artificial intelligence concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks.
Neural networks are a set of algorithms, modelled loosely after the human brain, that are
designed to recognize patterns. They interpret sensory data through a kind of machine
perception, labelling or clustering raw input. The patterns they recognize are numerical,
contained in vectors, into which all real-world data, be it images, sound, text or time series,
must be translated.

Neural networks help us cluster and classify. You can think of them as a clustering and
classification layer on top of data you store and manage. They help to group unlabelled data
according to similarities among the example inputs, and they classify data when they have a
labelled dataset to train on.
Deep learning can be described as an artificial intelligence function that imitates the workings of the human brain in processing data and creating patterns for use in decision-making. It is a subset of machine learning in Artificial Intelligence (AI) whose networks are capable of learning, without supervision, from data that is unstructured or unlabelled.
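As a small illustration of a neural network classifying labelled data, the sketch below trains a tiny multi-layer perceptron on synthetic two-dimensional points. The data, the network size, and the use of scikit-learn are assumptions for demonstration; a real deep-learning model would use many more layers and far more data.

# A minimal neural-network sketch: a small multi-layer perceptron
# trained on synthetic, labelled 2-D points.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.RandomState(0)

# Two clusters of points, labelled 0 and 1 (synthetic data).
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
               rng.normal(3.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# One hidden layer of 8 units learns an internal representation of the inputs.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)

print(net.predict([[0.2, -0.1], [2.9, 3.1]]))   # expected: [0 1]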

Chapter 2

Introduction to ROSS

Fig 2.1: ROSS Intelligence


2.1 Concept
ROSS is conceived as a system that helps big and small law firms in the way an attorney would. It takes far less time than a human to identify legal authorities relevant to a particular question; in other words, it must be fast.
The system also has to be highly accurate and reliable.
Because we are talking about an artificial intelligence system, it must be self-learning. As it acts as an attorney, it should monitor the law for changes and look at both the positive and negative sides of a case.

ROSS Intelligence is a legal research engine that uses artificial intelligence to automate legal processes, making them more efficient and less expensive. Leveraging IBM’s Watson, ROSS uses natural language processing to search for and provide legal information, from citations to full legal briefs.

2.2 Background

The AI machine, powered by IBM’s Watson technology, will serve as a legal researcher for
the firm. It will be responsible for sifting through thousands of legal documents to bolster
the firm’s cases. These legal researcher jobs are typically filled by fresh-out-of-school
lawyers early on in their careers.

In May 2016, ROSS Intelligence introduced ROSS to the world. Andrew Arruda is the
CEO of ROSS Intelligence. ROSS has joined the ranks of law firm BakerHostetler, which
employs about 50 human lawyers just in its bankruptcy practice. ROSS’s hiring
at BakerHostetler, a firm with 900 lawyers, represents a huge win for ROSS Intelligence in
securing use of their software within a major player in the legal field.
“Our goal is to have ROSS on the legal team of every lawyer in the world,” said Arruda, adding that they are testing ROSS in practice areas beyond bankruptcy law.
“Until now, lawyers have been using static pieces of software to navigate the law, which are limited and put hours of information-retrieval tasks on a lawyer’s plate.” The software allows the legal team to upvote and downvote excerpts based on the robot’s interpretation of the question. ROSS uses machine learning technology to fine-tune its research methods. The legal robot is accessed via computer and billed as a subscription service.

2.3 Need

Today, a huge number of cases are filed every minute, and the majority of people cannot afford a lawyer. In some small states, 97% of cases show up in court without a lawyer, and of the many millions of people with civil justice issues, 80% go to court without legal representation. Everyone needs a lawyer, but lawyers are expensive, and there is going to come a time in our lives when we have to interact with the legal system, whether it is signing a contract, going through a divorce, or putting a will together. It would be a pity not to be able to afford one. You are 50% more likely to win when you have a lawyer, though even if you do get a lawyer you still have a 50% chance of losing the case. The legal system is constantly changing, so lawyers have to divert time to legal research and their fees go up; in the end you may not be able to afford a good lawyer. A lawyer may need 25 hours of research time to help a client, the average lawyer in America bills out at $361 an hour, and legal research takes up about 20% of a lawyer’s time.
If we could make this work, we could remove one of the biggest barriers people face when they try to hire a lawyer: legal research fees. Lawyers spend hundreds of dollars a month to access legal database systems; they type in keywords, the process is clunky, results come back in the thousands, and working through them takes a very long time.
Using ROSS, lawyers can simply ask their research questions in natural language, as if they were asking a colleague with experience on the matter. ROSS can read hundreds of millions of pages of law in a second, finding the exact passages lawyers need, so they can find everything they need in minutes if not seconds. Because ROSS is artificially intelligent, it is constantly learning and getting smarter every day. ROSS does not replace lawyers; it lets them step away from the legal research task and focus on advising their clients. ROSS is already saving lawyers 20 to 30 hours of research time per bankruptcy case, which is the first area ROSS Intelligence went into.

2.4 Features

ROSS helps lawyers answer complex legal questions with its proprietary ‘Legal Cognition’ framework. Legal cognition analyses the relationships and meanings of words to understand the legal concepts they form, enabling ROSS to read through millions of laws and find precise answers. The more questions ROSS reads, the smarter it gets.

Basically, a user can ask ROSS questions in natural language, in writing or verbally, and ROSS is able to give its answers in the same way.

Chapter 3

More about ROSS and AI

The term Artificial Intelligence itself simply means teaching a machine how to do a task that was normally thought to require a human. It is a very broad term.

Machine Learning, a very exciting and probably the most popular branch, is the ability we now have to let computer systems learn. Previously, you would program a computer and have to tell it exactly what to do; now machines are starting to learn through use, and through working alongside humans, to do things they have not been explicitly programmed to do.

Natural Language Processing is the ability of a computer system to read through sentences, pages, and passages written the way we communicate as humans and understand them. A computer system does that by analysing the words in a sentence, seeing the relations those words have to each other, and deciding the intent and meaning behind the words. Instead of working through a complex system of graphs and rows, you can start to communicate with the machine on the same terms in which we communicate with each other.

Briefly, the last two fields: Visual Recognition is the ability of machines to identify images, and Speech Recognition is the ability of a computer system to understand the way we communicate verbally, out loud, translating our vocal tones into words and understanding them.

3.1 Steps involved in ROSS

The following steps are involved in finding a result; a hedged sketch of this flow is given after Fig 3.1 below.

1. ROSS takes input in natural language.
2. It has a classifier that uses previously stored records.
3. It matches the cases in the classifier using neural networks.
4. It predicts the most relevant answer to the asked question.

Fig 3.1: Prediction in ROSS (input → classifier over stored case studies → output)
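ROSS's actual implementation is proprietary, so the following Python sketch is only a hypothetical illustration of the four steps above. It uses TF-IDF features and a nearest-neighbour lookup in place of the neural-network matching described in the steps, and the stored records are invented.

# Hypothetical sketch of the described flow: question in natural language ->
# match against previously stored records -> return the most relevant answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Step 2: previously stored records (question text paired with an answer).
records = [
    ("Can a debtor keep their house in Chapter 7 bankruptcy?",
     "Exemptions may allow a debtor to keep a primary residence."),
    ("How long does a Chapter 13 repayment plan last?",
     "A Chapter 13 plan typically runs three to five years."),
]

vectorizer = TfidfVectorizer()
record_vectors = vectorizer.fit_transform(q for q, _ in records)
index = NearestNeighbors(n_neighbors=1).fit(record_vectors)

# Step 1: the user's question in natural language.
question = "Can I keep my house if I file for Chapter 7 bankruptcy?"

# Steps 3 and 4: match the question against stored records and return
# the answer attached to the closest one.
_, idx = index.kneighbors(vectorizer.transform([question]))
print(records[idx[0][0]][1])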

Chapter 4

Comparative Studies

4.1 Advantages

1. Using artificial intelligence alongside cognitive technologies can help make faster decisions and carry out actions more quickly. ROSS saves the time of lawyers as well as clients by being fast. ROSS has been highly accurate in recent bankruptcy and divorce cases, which makes it very reliable.

2. Because it is cheap, the ROSS system will be able to help with the cases of poor people.

3. It will open up more possibilities in the future for other Artificial Intelligence based systems.

4.2 Disadvantages

1. Machines do not have emotions or moral values. They perform what is programmed and cannot judge right from wrong; in such situations they either perform incorrectly or break down.

2. Replacing humans with machines can lead to large-scale unemployment, which is a socially undesirable phenomenon. People with nothing to do may put their creative minds to destructive use.

3. Humans may lose their creative power and become lazy. Also, if humans start thinking in a destructive way, they can create havoc with these machines.

Chapter 5

Conclusion and Future Scope


Currently, ROSS is most useful with recent sources. ROSS will certainly be useful in the future for all law-based work, and ROSS Intelligence will try to create more artificial intelligence based products. Artificial Intelligence is a tool that allows us to do our jobs more efficiently; it will be a beautiful paintbrush in our arsenal that lets us do much more than we currently can.

