ABSTRACT
With India's growing population, its rising birth rate and its falling death rate due to
advances in the medical field, the number of doctors has become insufficient to serve the
needs of the population. This scenario is easiest to observe in city government hospitals,
where the shortage of doctors is a major cause of improper treatment of patients and, in
certain cases, of the resulting deaths. Sometimes even doctors make mistakes in providing
the correct treatment, resulting in the death of a patient. To counter such cases, there is a
need for a smart, intelligent chatbot that can advise doctors, and sometimes even patients,
on what to do in such situations, ultimately helping to save hundreds of lives. The AI-based
medical chatbot on which this research is based provides medical advice in such scenarios:
doctors can err while observing symptoms, whereas a machine developed specifically for
this task is far less likely to make such mistakes. This AI-based medical chatbot can take
decisions as per the request of the patient.
CHAPTER 1
INTRODUCTION
Through chatbots one can communicate via a text or voice interface and receive
replies generated by artificial intelligence. Typically, a chatbot communicates with a real
person. Chatbots are used in applications such as e-commerce customer service, call centres
and Internet gaming. They are programs built to engage automatically with received
messages, and can be programmed to respond the same way each time, to respond
differently to messages containing certain keywords, or even to use machine learning to
adapt their responses to fit the situation. A growing number of hospitals, nursing homes,
and even private centres now use online chatbots for healthcare services on their websites.
These bots engage potential patients visiting the site, helping them discover specialists,
book appointments, and get access to the correct treatment. An ML model has to be
created that accepts any text input and, on the basis of its training data, analyzes the
symptoms described. A supervised logistic-regression machine learning algorithm can be
implemented to train the model on datasets containing CSV files for various diseases.
The goal is to compare the outputs of various models and suggest the best model for
handling symptoms in real-world inputs. The dataset is a CSV file with all diseases compiled
together. The logistic-regression algorithm allows the data to be processed efficiently;
the goal here is to model the underlying structure or distribution of the data in order to learn
more from the training set. Nevertheless, the use of artificial intelligence in an industry
where individuals' lives may be at stake still raises misgivings in people, and it raises
questions about whether the tasks mentioned above ought instead to be assigned to human
staff. This healthcare chatbot system will help hospitals provide healthcare support online
24x7; it answers deep as well as general questions. It also helps to generate leads and
automatically delivers lead information to sales. By asking questions in series, it helps
guide patients.
Almost everyone has been kept on hold while operators connect them to a customer-care
executive; on average, people spend around 7 minutes before they are assigned to a person.
Chatbots make the frustrating days of waiting in a queue for the next available operative a
thing of the past. They are replacing live chat and other slower contact methods such as
emails and phone calls. Since chatbots are essentially virtual robots, they never get tired and
continue to obey commands; they can operate every day throughout the year without
needing to take a break.
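As a concrete illustration of the supervised logistic-regression approach described above, the sketch below trains a tiny classifier on made-up binary symptom vectors. The symptom names and data here are invented for illustration; a real system would train on the disease CSV files mentioned in this work.

```python
import math

# Toy training data: each row is [fever, cough, rash] -> 1 if flu, 0 otherwise.
X = [[1, 1, 0], [1, 1, 1], [0, 0, 1], [0, 1, 0]]
y = [1, 1, 0, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the logistic loss.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(2000):
    for xi, yi in zip(X, y):
        p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
        err = p - yi
        w = [wj - 0.1 * err * xj for wj, xj in zip(w, xi)]
        b -= 0.1 * err

def predict(symptoms):
    # Probability that the symptom vector corresponds to the disease.
    return sigmoid(sum(wj * xj for wj, xj in zip(w, symptoms)) + b)

print(round(predict([1, 1, 0]), 2))  # high probability for the flu-like vector
```

In practice a library implementation (e.g. scikit-learn's LogisticRegression) would replace this hand-rolled loop, but the underlying computation is the same.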
Problem Statement:
Hospitals and clinics do not have enough doctors to give every patient timely attention,
and human operators for health queries are slow and available only during working hours.
A system is therefore needed that can accept a patient's symptoms as free text, analyze
them against training data, and suggest the probable disease along with basic guidance.
This work builds such a healthcare chatbot: a supervised logistic-regression model is
trained on CSV datasets of diseases and their symptoms, the outputs of several models are
compared, and the best model is used to answer patient queries online, 24x7.
CHAPTER 2
LITERATURE REVIEW
This unit is the front end of the system. It is responsible for collecting the user queries from
the user which are the input to the system. It is also responsible for displaying the system
generated results to the user. Therefore, it can be said that the chat interface is the face of the
system through which the entire communication takes place. It is the mediator of
conversation between the system and the user. The query that user fires on the chat interface
is passed on to the chatting backend which acts as a message delivering system between the
Chat interface and the Machine Learning Layer. This interface can be accessed either as a
website or as a smart phone app. The type of interface depends on the requirements of the
user that are to be satisfied by the system. If the system is accessed from a smartphone, the
interface will be in the form of an app and if the system is accessed from a website, then the
interface will be in the form of a website. Building apps for smartphones requires Android
development for Android phones or Swift for iOS. In this case, only the interfacing platform
will be programmed on Android, and the complete backend processing of the system will take
place on a server on which the system is deployed. For making a website, either Java or
Python web frameworks can be used. Java provides Spring and Struts as the most advanced
and latest web frameworks. Similarly, Python allows usage of Django and Flask frameworks
for building a website. The criteria for selecting the programming language depend
upon the functionalities that the system intends to provide, the requirements of the users that
will use the system, the algorithms that are to be used by the system, etc. Selection of an
appropriate programming language makes it simpler for developers to develop a system
which provides maximum functionality to the user with high accuracy and minimum
complexity.
The NLU (Natural Language Understanding) Engine enables the system to understand the natural language, or conversational language, spoken
by the users. The conversational language used by humans for day to day conversations is not
as perfect as the formal language. It does not focus much on the vocabulary and the grammar.
Hence, it becomes difficult for a system to understand the intent of the sentence. The input
received from the user is in unstructured text format which cannot be understood by the
system directly. It understands input only in structured formats. The unstructured text
received from the user is converted to structured format by extracting important words and
patterns from the user text using the NLU techniques. Humans are capable of understanding
any mispronunciations, homophones, swapped words, shortened forms of words (like 'it's'
for 'it is'), slang words or phrases, and also words which are not used in formal vocabulary
but exist in regular conversations. NLU techniques enable the system to identify these
quirks if the user makes use of them while conversing with the chatbot, so as to make the
user feel that the conversation is taking place between two humans and not between a human
and a bot. NLU systems do not directly understand the meaning of the user sentences. It
involves a sequence of processes to derive the actual intent of the sentence. To understand a
complete sentence, the NLU system needs to understand each word of that sentence. It means
that the initial task is the segmentation of the sentences into individual words. Next, to
understand the word, the system needs to understand the grammar of the sentence. This can
be done by knowing the parts of speech of each word in that sentence. Here comes the POS
(Parts-Of-Speech) tagger into picture. After knowing the grammatical weightage of each
word, all of them are parsed to know the dependency among them. This is the most important
step wherein the word with the highest dependency is extracted, from which the intent of the
system can be known. It is not possible that the knowledge base would contain the exact
sentence that the user has sent. It might contain a sentence with the same intent but with
different words used in it. To match these types of synonymic sentences, synonym
determination and sentence matching are required. The different tasks to be implemented
under the NLU Engine and the methods to do the same have been discussed further.
Segmentation, also referred to as tokenization is the process of splitting text into smaller and
meaningful units. These units could be paragraphs, sentences, clauses, phrases, words or
letters. The smallest units are the letters. Word segmentation is the splitting of sentences into
individual words separated by blank spaces; the tokenized units are called tokens. The
tokenizers split the sentences into words and punctuation marks as independent
units. The most commonly used tokenizer is of the space type, i.e. it splits the sentences into
words at the blank spaces. It is also required that the tokenizer consider abbreviations,
acronyms, dates, numbers in decimal formats, etc., which should not be split at punctuation
marks. One early approach calculates the character spaces in the sentences: the gaps
between letters, punctuation marks and words. The algorithm works on the amount of gap,
or character space, between each unit in the sentence. After the character spaces are
calculated, they are averaged to obtain the mean gap between characters in the sentence.
This average gap distance is then applied to the sentence which is to be segmented: the
places at which the character space is more than the average character space are taken as
the points of tokenization. The gap between words is always more than the average gap,
and hence tokenization takes place at the blank spaces.
Naeun Lee et al. [2] [2017] proposed the implementation of word segmentation using NLTK.
The Natural Language ToolKit (NLTK) is a Python package which provides services for
NLP. It has inbuilt tokenizers: users need only import the package and use the required type
of tokenizer, which is available in the form of functions. The NLTK includes a wide range of
tokenizers: standard, letter, word, classic, lowercase, N-gram, pattern, keyword, path, etc.
The most commonly used is the word-punct tokenizer, which splits sentences at the blank
spaces. The accuracy, speed and efficiency of the NLTK tokenizers are commendable. Also,
NLTK does not require any algorithm implementation, as everything is handled by the
package itself.
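A word-punct style tokenizer of the kind described above can be sketched in a few lines of plain Python. This is a simplified stand-in for NLTK's `wordpunct_tokenize`, not the NLTK implementation itself:

```python
import re

def wordpunct_tokenize(text):
    # Split the text into runs of alphanumeric characters and runs of
    # punctuation, discarding the blank spaces between them.
    return re.findall(r"\w+|[^\w\s]+", text)

print(wordpunct_tokenize("Hello, world!"))
```

As the literature above notes, such a simple splitter mishandles abbreviations and decimal numbers, which is why trained tokenizers exist.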
Tao Jiang [3] [2011] explains the usage of the CRF (Conditional Random Fields) algorithm for
word segmentation. This algorithm trains the system on the spaces between characters. Using
this training, the system identifies the gaps between characters in the test sentence. The system
keeps a threshold value for the gap distance; if a gap in the test sentence is larger than the
specified threshold, the sentence is split at that point. CRF requires a lot of training to be
given to the system, which makes the process time consuming. Comparing the three methods
illustrated above, NLTK proves to be more efficient in all respects than the other two: its
usage does not require the implementation of any algorithm, as everything is taken care of by
the package itself, and its accuracy, speed and efficiency are commendable.
POS Tagging is the process of assigning grammatical annotations to individual words in the
sentences. These annotations include the Parts-Of-Speech Tags. They denote the grammatical
importance of the word in the sentence based on the dependency of that word with other
words in that phrase, clause, sentence, paragraph, etc. The common POS tags are noun, verb,
pronoun, etc. There are a number of ways to perform POS Tagging.
Jerome R. Bellegarda [4] [2010] proposed a method called latent analogy for POS Tagging.
In this algorithm, latent semantic mapping (LSM) technique is used. It requires the training
on the available corpus. The LSM maintains a feature space of the trained corpus which has
been tagged. Now, new sentences are provided to the LSM for tagging and the analysis is
performed so as to determine the sentences from the training data which are closest to the test
sentence. This is called the sentence neighbourhood; it holds for two sentences if they share
the same intent. Once the intent-matching sentences are found in the trained data, the POS
tags attached to those sentences are mapped onto the test sentence. Another work
[5] [2018] put forth the technique of implementing the POS tagger using neural networks.
This algorithm consists of 'n' hidden layers, where n is determined by the number of
iterations or combinations required to tag the sentence correctly. At each layer, each word in
the sentence is tagged with an appropriate POS tag and then passed to the next layer to check
the correctness of the tags. This continues until the next layer produces the same tags as the
previous layer. Another
technique to implement the POS tagger is following the traditional approach i.e. of
maintaining a dictionary of tags for the given language. Python NLTK provides an inbuilt
Tagger which can be used just by importing the NLTK package. The NLTK has a predefined
set of tags and a trained data of its own. It tests the sentence and applies an appropriate tag to
it. Comparing the above three algorithms, the NLTK tagger proves to be the most efficient in
speed and ease of use, but the highest accuracy is provided by the neural-network algorithm,
as it undergoes many iterations.
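The traditional dictionary approach mentioned above can be sketched as a small lookup tagger. This is an illustrative toy with a hand-made tag dictionary, not NLTK's trained tagger:

```python
# Toy dictionary-based POS tagger: look each token up in a hand-made tag
# dictionary and fall back to 'NOUN' for unknown words.
TAG_DICT = {
    "the": "DET", "a": "DET",
    "fever": "NOUN", "patient": "NOUN",
    "has": "VERB", "is": "VERB",
    "high": "ADJ", "severe": "ADJ",
}

def pos_tag(tokens):
    return [(tok, TAG_DICT.get(tok.lower(), "NOUN")) for tok in tokens]

print(pos_tag(["The", "patient", "has", "high", "fever"]))
```

A real dictionary tagger would cover the full vocabulary of the language and resolve ambiguous words from context, which is exactly what the trained NLTK and neural taggers add.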
Dependency parsing identifies the relationships between words in a sentence based on the
grammatical tags attached to them. It is the next step after POS tagging. A dependency tree
or graph is created for every sentence; this tree is called the parsing tree or the dependency
tree. There are a number of ways by which the dependency tree can be built.
Bo Chen [6] [2011] proposed a method for implementing the dependency tree. It initially
finds out the dependencies among the words in the sentence. Each word is checked for its
relationship or dependency with the other word. The word with the highest dependency is
selected to be the root. The other words with a relation with the root node are attached to it as
the child nodes. This keeps on continuing until all the words are placed in the tree. The tree
form of the sentence is called the dependency parse tree; the dependencies among the words are represented by its edges.
Zhenghua Li [7] [2014] provided a further improvised model of the dependency parser. In
the traditional method mentioned above the parser creates a parsed tree for the required
sentence. In the graph-based dependency parser, the tree created is converted to a graph
where the words in the sentence are the vertices and the dependencies between the words are
represented by the edges. This data structure gives a better representation of the parsed
sentence.
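The tree-building procedure described above, attaching each word under the word it depends on, can be sketched as follows. The list of head/dependent pairs here is hand-made, standing in for the output of a real dependency analysis:

```python
# Build a dependency tree from (head, dependent) pairs.
# The root is the one word that never appears as a dependent.
def build_tree(pairs):
    children = {}
    dependents = set()
    words = set()
    for head, dep in pairs:
        children.setdefault(head, []).append(dep)
        dependents.add(dep)
        words.update([head, dep])
    root = (words - dependents).pop()  # the word with the highest dependency
    return root, children

# "The patient has fever": 'has' heads both 'patient' and 'fever'.
pairs = [("has", "patient"), ("patient", "The"), ("has", "fever")]
root, children = build_tree(pairs)
print(root)
print(children["has"])
```

Converting this to the graph form of Zhenghua Li is then just a matter of treating the words as vertices and the pairs as edges.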
3.2.4 A review on Synonym and Pattern Recognition
For information retrieval, no matter
how big our data is, no sentence sent by the user can be perfectly same to any sentence in the
database. But there can be sentences with the same intent. After understanding the intent of
the user sentence, the database is checked for a sentence with the same intent. The matched
sentences have difference of words which are used to express the same content. They use
alternative words or synonyms. This makes synonym detection necessary for the system.
Synonyms for a particular word may be domain independent or domain dependent. Domain
independent synonyms are synonyms for a word in the entire vocabulary. But domain-
dependent synonyms are synonyms for a word in that respective domain only. There are
various algorithms used for the detection and extraction of synonyms, some of which are
Another work [8] [2018] explains the traditional dictionary method of synonym extraction. In this method,
the system database maintains a dataset of synonyms for important keywords in that domain.
The sentence sent by the user is then mapped on to that synonym dataset. The keywords
detected from the sentence are then checked in that synonym set to check for same intent. All
possible synonyms of that keyword are then looked out for a match in the main database. The
sentence which is closest to the user sentence is extracted. This method is time consuming.
In another approach, among all the parts-of-speech tags, words having the tags noun, verb and adjective are
marked as positive tags and the others as negative tags. The polarity for each feature (word)
is then carried out by using the POS tags. If the overall feature polarity is positive, then it can
be identified categorically. All the positive features are then grouped together and the
synonyms detection for the group of features will be relatively strong, as an entire clause is
checked for its synonymic meaning. The synonym sets extracted for that clause of features
are then scored by information gain, and the one with the highest information gain is selected.
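The dictionary method of synonym matching described above can be sketched with a small domain synonym set. The synonym lists here are illustrative placeholders, not a real medical thesaurus:

```python
# Map each known domain keyword to its domain-dependent synonyms.
SYNONYMS = {
    "fever": {"temperature", "pyrexia"},
    "pain": {"ache", "soreness"},
}

def canonical(word):
    # Return the canonical keyword for a word, if it is a known synonym.
    w = word.lower()
    for key, syns in SYNONYMS.items():
        if w == key or w in syns:
            return key
    return w

def same_intent(query_tokens, stored_tokens):
    # Two token lists match if their canonical keyword sets are equal.
    return ({canonical(w) for w in query_tokens}
            == {canonical(w) for w in stored_tokens})

print(same_intent(["high", "temperature"], ["high", "fever"]))
```

Scoring candidate synonym sets by information gain, as in the clause-based method above, would then pick among several such matches.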
Chatbots without ML have a fixed set of replies to be given. They provide replies to the user from a set of predefined replies
categorized on the basis of the query given by the user. Inclusion of ML in chatbots enables it
to compute the replies from scratch. It is used to make predictions to predict the responses for
the user queries and also to update the system from its experiences. It keeps updating the
databases as and when it encounters something new from the user. This engine uses
supervised or unsupervised or both techniques to analyze what the user requires. It further
uses a model to interpret the intent of the user and provides the appropriate results. The
results may be in the form of predictions or any form of analysis which are based on the
execution and analysis of mathematical models. Most of the machine learning models are
based on statistical and probabilistic evaluations of the occurring instances, and these
calculations make a prediction for the test instance. The decision engine not only includes
models for predictions, but also includes algorithms for information retrievals like entity
extractions, multiple text classifications, etc. Also, the inclusion of a machine learning layer
in a chatbot system, is used to create an ontological relationship for entities extracted, and
also associate them with context-specific questions along with their alternatives, synonyms
and machine-enabled classes. These features of machine learning convert a static, basic
FAQ system into a smart and more personalized communicating experience. For chatbots that
provide services in diverse domains, the machine learning layer adds on to the services that it
can provide. It intends to increase the accuracy of the responses provided by the system to the
users and also extends the scope of the system. The system is enabled to update itself by
learning from its experiences. This makes the system less prone to false predictions. The
chatbots that are used in healthcare domain for disease predictions can use a wide range of
algorithms, some of which are clustering, Bayesian networks, decision trees, etc. The
methods of their execution and the comparison of the algorithms for the appropriate selection
of the same is briefed here. A decision engine is the brain of the system: it includes the
prediction models and the information-retrieval algorithms described above. ML also
enables the system to learn from its past experiences, so as to provide better and
revised results. The chatbots for health care domain require disease predictions algorithm.
Prediction can be carried out in many ways some of which are reviewed below. Sachin S.
Gavankar et al.
[10] [2017] proposed the eager decision tree algorithm for prediction. This type of decision
tree is an improvised version of the traditional decision tree: it creates the tree at runtime,
based on the user's queries, and keeps updating the tree on new user messages. Consider its
working for disease prediction. In this algorithm, the symptoms detected in the user query are
added as child nodes to the root node, and nodes keep getting added for new symptoms
detected. Further, for every symptom, the algorithm checks for the second symptom which
has the highest co-occurrence with the earlier symptom and asks the user about that symptom.
If the user confirms it, the system traces that path to check for the associated disease.
This keeps iterating for all users, and the tree keeps getting updated with new entries and
symptoms.
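A much-simplified, illustrative version of such a symptom tree can be sketched as follows. The symptom/disease table is a toy, not clinical data, and the tree is static rather than updated at runtime as in the eager decision tree:

```python
# Toy symptom tree: walk from a first symptom to a follow-up symptom,
# asking the user to confirm, and report the disease reached on that path.
TREE = {
    "fever": {"cough": "flu", "rash": "measles"},
    "headache": {"nausea": "migraine"},
}

def predict(first_symptom, confirm):
    # confirm(symptom) -> bool stands in for asking the user a question.
    branches = TREE.get(first_symptom, {})
    for next_symptom, disease in branches.items():
        if confirm(next_symptom):
            return disease
    return "unknown"

# Simulated user who confirms having a cough.
print(predict("fever", lambda s: s == "cough"))
```

The eager variant would additionally insert any newly seen symptom into `TREE` and reorder branches by co-occurrence counts.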
CHAPTER 3
HARDWARE DESCRIPTION
RASPBERRY PI
LCD DISPLAY
POWER SUPPLY
RASPBERRY PI:
The Raspberry Pi 3 Model B+ is the latest product in the Raspberry Pi 3 range, boasting a 64-
bit quad core processor running at 1.4GHz, dual-band 2.4GHz and 5GHz wireless LAN,
Bluetooth 4.2/BLE, faster Ethernet, and PoE capability via a separate PoE HAT. The dual-
band wireless LAN comes with modular compliance certification, allowing the board to be
designed into end products with significantly reduced wireless LAN compliance testing,
improving both cost and time to market. The Raspberry Pi 3 Model B+ maintains the same
mechanical footprint as both the Raspberry Pi 2 Model B and the Raspberry Pi 3 Model B.
The Raspberry Pi Compute Module 3+ (CM3+), in turn, is a range of DDR2-SODIMM-mechanically-
compatible System on Modules (SoMs) containing processor, memory, eMMC Flash (on
non-Lite variants) and supporting power circuitry. These modules allow a designer to
leverage the Raspberry Pi hardware and software stack in their own custom systems and form
factors. In addition these modules have extra IO interfaces over and above what is available
on the Raspberry Pi model A/B boards, opening up more options for the designer. The CM3+
contains a BCM2837B0 processor (as used on the Raspberry Pi 3B+), 1Gbyte LPDDR2
RAM and eMMC Flash. The CM3+ is currently available in 4 variants, CM3+/8GB,
CM3+/16GB, CM3+/32GB and CM3+ Lite, which have 8, 16 and 32 Gigabytes of eMMC
Flash, or no eMMC Flash, respectively. The CM3+ Lite product is the same as CM3+ except
the eMMC Flash is not fitted, and the SD/eMMC interface pins are available for the user to
connect their own SD/eMMC device. Note that the CM3+ is electrically identical and, with
the exception of higher CPU z-height, physically identical to the legacy CM3 products.
Hardware
• Low cost
• Low power
• High availability
• High reliability – Tested over millions of Raspberry Pis produced to date

2.2 Peripherals
• 48x GPIO
• 2x I2C
• 2x SPI
• 2x UART
• 2x SD/SDIO
• 1x HDMI 1.3a
• 1x USB2 HOST/OTG
• 1x 2-lane DSI Display Interface (up to 1Gbps per lane)

2.3 Software
• Mature and stable Linux software stack – Latest Linux Kernel support – Many drivers
upstreamed – Stable and well supported userland – Full availability of GPU functions using
standard APIs
Mechanical Specification
The CM3+ modules conform to the JEDEC MO-224 mechanical
specification for 200-pin DDR2 (1.8V) SODIMM modules and therefore should work with
the many DDR2 SODIMM sockets available on the market. (Please note that the pinout of
the Compute Module is not the same as a DDR2 SODIMM module; they are not electrically
compatible.) The SODIMM form factor was chosen as a way to provide the 200 pin
connections using a standard, readily available and low cost connector compatible with low
cost PCB manufacture. The maximum component height on the underside of the Compute
Module is 1.2mm. The maximum component height on the top side of the Compute Module
is 2.5mm. The Compute Module PCB thickness is 1.0mm +/- 0.1mm. Note that the location
and arrangement of components on the Compute Module may change slightly over time due
to revisions, but the maximum component heights and PCB thickness will be kept as
specified. Figure 2 gives the CM3+ mechanical dimensions.
The Compute Module 3+ has six separate supplies that must be present and powered at all
times; you cannot leave any of them unpowered, even if a specific interface or GPIO bank is
unused. The six supplies are as follows:
1. VBAT powers the BCM2837 processor core. It feeds the SMPS that generates the chip core voltage.
2. 3V3 powers various BCM2837 PHYs, IO and the eMMC Flash.
3. 1V8 powers various BCM2837 PHYs, IO and SDRAM.
4. VDAC powers the composite (TV-out) DAC.
5. GPIO0-27 VREF powers the GPIO 0-27 IO bank.
6. GPIO28-45 VREF powers the GPIO 28-45 IO bank.
Product features: The I2C 1602 LCD module is a 2-line by 16-character display interfaced
through an I2C daughter board, so the host needs only the two I2C data connections plus
+5 VDC and ground. Arduino LCD display projects usually run out of pin resources easily,
especially with an Arduino Uno, and the wire soldering and connection are complicated.
Because this 16x2 LCD screen uses an I2C communication interface, it needs only 4 pins:
VCC, GND, SDA and SCL, saving at least 4 digital/analog pins on the Arduino. All
connectors are standard XH2.54 (breadboard type). To avoid I2C address conflicts with
other I2C devices, such as ultrasonic sensors, IMUs, accelerometers and gyroscopes, the
I2C address of the module is configurable, starting from 0x20.
The I2C LCD component is used in applications that require a visual or textual display, and
also where a character display is needed but seven consecutive GPIOs on a single GPIO port
are not available. The component is a wrapper around an I2C Master component: if the
project already includes an I2C Master component, the LCD component makes use of the
existing one; if not, an I2C Master component is required for it to operate. When one of the
API functions is called, that function calls the corresponding I2C Master functions.
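To make the I2C traffic concrete: the common PCF8574 backpack on these 1602 modules drives the HD44780 controller in 4-bit mode, so each byte is sent as two nibbles, each strobed with the enable bit. The sketch below assumes the usual backpack bit layout, which should be checked against the actual module:

```python
# Assumed PCF8574 backpack bit layout: P0=RS, P1=RW, P2=EN,
# P3=backlight, P4..P7 = HD44780 data lines D4..D7.
RS, EN, BACKLIGHT = 0x01, 0x04, 0x08

def encode_byte(value, is_data):
    """Return the I2C byte sequence for one HD44780 byte in 4-bit mode."""
    flags = (RS if is_data else 0) | BACKLIGHT
    seq = []
    for nibble in (value & 0xF0, (value << 4) & 0xF0):
        seq.append(nibble | flags | EN)   # enable high: latch the nibble
        seq.append(nibble | flags)        # enable low
    return seq

# Command 0x28: 4-bit mode, 2 display lines, 5x8 font.
print([hex(b) for b in encode_byte(0x28, is_data=False)])
```

On a Raspberry Pi, each byte of the returned sequence would be written to the module's I2C address over SMBus; the encoding itself is pure logic and can be verified without hardware.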
POWER SUPPLY:
TRANSFORMER:
This section presents the solution for a 12V, 1A flyback converter based on the Infineon
TLE8386-2EL controller. The reader is guided through the component selections and the
circuit design, and finally an overview of the experimental results is presented. The
TLE8386-2EL is part of the Automotive OPTIREG™ family and implements a low-side-sense
current-mode controller with built-in protection features. The switch is an 80V N-channel
enhancement-mode MOSFET from the OptiMOS™-T2 family. This material is intended for
power supply design engineers, application engineers, students, etc., who need to design a
flyback converter for automotive power applications. In particular, the focus is on a
battery-connected flyback that delivers up to 12W at a 12V output voltage; the intention is
to provide the user with all of the information needed to fully design and characterize the
SMPS, bringing it from an engineering concept to production. Specific features and
applications: 48V to 12V automotive applications, isolated current-mode control, output
current 1A.
A center-tapped transformer is normally used for rectifier circuits. When a digital project has
to work with AC mains, a transformer is used to step down the voltage (in our case, to 24V or
12V) and then convert it to dc. Since the center tap allows the secondary voltage to be used
twice per cycle, as in a bridge rectifier, this transformer is commonly used in full wave
rectifier circuits.
The operation and theory behind a center-tapped transformer are very similar to those of a
normal transformer. A voltage applied across the primary coil (I1 and I3) is transferred to
the secondary coil by magnetic induction. Here, in the
secondary coil of a centre tapped transformer, there will be an additional wire (T2) which will
be placed exactly at the center of the secondary coil, hence the voltage here will always be
zero.
If we combine this zero-potential wire (T2) with either end of the secondary (T1 or T3), we
get a voltage of 12V AC. If the center tap is ignored and the voltage across T1 and T3 is
taken, we get 24V AC. This feature is very useful for the function of a full wave rectifier.
Let us consider the voltage given by the first half of the secondary coil as Va and the voltage given by the second half as Vb.
RECTIFIER CIRCUIT:
We have learnt in rectifier circuits about converting a sinusoidal ac voltage into its
corresponding pulsating dc. Apart from the dc component, this pulsating dc voltage will have
unwanted ac components like the components of its supply frequency along with its
harmonics (together called ripples). These ripples will be the highest for a single-phase half
wave rectifier and will reduce further for a single-phase full wave rectifier. The ripples will
be minimum for 3-phase rectifier circuits. Such a supply is not useful for driving complex
electronic circuits: for most supply purposes a constant dc voltage is required rather than the
pulsating output of the rectifier, and for most applications the raw supply from a rectifier
will make the operation of the circuit poor. If the rectifier output is smoothed into a steady
voltage and then passed on as the supply voltage, the overall operation of the circuit becomes
better.
Thus, the output of the rectifier has to be passed though a filter circuit to filter the ac
components. The filter is a device that passes the dc component on to the load and blocks
the ac components of the rectifier output. Thus the output of the filter circuit will be a
steady dc voltage. The filter circuit can be constructed by the combination of components like
capacitors, resistors, and inductors. Inductor is used for its property that it allows only dc
components to pass and blocks ac signals. Capacitor is used so as to block the dc and allows
ac to pass. All the combinations and their working are explained in detail below.

Series Inductor Filter
The circuit diagram of a full wave rectifier with a series inductor filter is given
below. As the name of the filter circuit suggests, the inductor L is connected in series
between the rectifier circuit and the load. The inductor carries the property of opposing the
change in current that flows through it. In other words, the inductor offers high impedance to
the ripples and no impedance to the desired dc components. Thus the ripple components will
be eliminated. When the rectifier output current increases above a certain value, energy is
stored in it in the form of a magnetic field and this energy is given up when the output current
falls below the average value. Thus all the sudden changes in current that occurs in the circuit
will be smoothened by placing the inductor in series between the rectifier and the load. The
waveform below shows the use of inductor in the circuit. From the circuit, for zero frequency
dc voltage, the choke resistance Ri in series with the load resistance RL forms a voltage
divider circuit, and thus the dc voltage across the load is V'dc = [RL/(Ri + RL)] × Vdc, where
Vdc is the output from a full wave rectifier. In this case, the value of Ri is negligibly small when
compared to RL. The effect of higher harmonic voltages can be easily neglected as better
filtering for the higher harmonic components take place. This is because of the fact that with
the increase in frequency, the reactance of the inductor also increases. It should be noted that
a decrease in the value of load resistance or an increase in the value of load current will
decrease the amount of ripples in the circuit. So, the series inductor filter is mostly used in
cases of high load current or small load resistance. A simple series inductor filter alone may
not be adequate. It is always better to use a shunt capacitor (C) with the series inductor (L)
to form an LC filter.
Shunt Capacitor Filter
As the name suggests, a capacitor is used as the filter, and
this high-value capacitor is shunted or placed across the load impedance. This capacitor,
when placed across a rectifier gets charged and stores the charged energy during the
conduction period. When the rectifier is not conducting, this energy charged by the capacitor
is delivered back to the load. Through this energy storage and delivery process, the time
duration during which the current flows through the load resistor gets increased and the
ripples are decreased by a great amount. Thus, for the ripple component with a frequency of
‘f’ hertz, the capacitor ‘C’ will offer a very low impedance. The value of this impedance can
be written as:

Shunt capacitor impedance XC = 1/(2πfC)

Thus the dc component of the input signal, along with the few residual ripple components, is
only allowed to go through the load
resistance RLoad. The high amount of ripple components of current gets bypassed through
the capacitor C. Now let us look at the working of Half-wave rectifier and Full-wave rectifier
with Capacitor filters, their output filtered waveform, ripple factor, merits and demerits in
detail.
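As a quick numerical check of the shunt capacitor impedance XC = 1/(2πfC), the small sketch below evaluates the formula in Python. The 100 Hz ripple frequency (full-wave rectified 50 Hz mains) and the 1000 µF capacitance are assumed example values, not figures from this design.

```python
import math

def capacitive_reactance(f_hz, c_farads):
    """Reactance of a shunt capacitor at frequency f: Xc = 1/(2*pi*f*C)."""
    return 1.0 / (2.0 * math.pi * f_hz * c_farads)

# Assumed example values: 100 Hz ripple (full-wave rectified 50 Hz mains)
# and a 1000 uF filter capacitor.
ripple_xc = capacitive_reactance(100, 1000e-6)
print(round(ripple_xc, 2))   # ~1.59 ohms: the ripple is effectively bypassed,
                             # while at f = 0 (pure dc) the reactance is infinite
```

The low reactance at the ripple frequency is what lets the capacitor divert the ac components away from the load.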
CHAPTER 4
SOFTWARE DESCRIPTION
PYTHON:
PYTHON 3.7:
Python is an interpreted, high-level, general-purpose programming language. Created
by Guido van Rossum and first released in 1991, Python's design philosophy
emphasizes code readability. Python is an easy to learn, powerful programming
language. It has efficient high-level data structures and a simple but effective
approach to object-oriented programming. Python’s elegant syntax and dynamic
typing, together with its interpreted nature, make it an ideal language for scripting
and rapid application development in many areas on most platforms. The Python
interpreter and the extensive standard library are freely available in source or binary
form for all major platforms from the Python Web site, and may be freely distributed.
The same site also contains distributions of and pointers to many free third party
Python modules, programs and tools, and additional documentation. The Python
interpreter
is easily extended with new functions and data types implemented in C or C++ (or
other languages callable from C). Python is also suitable as an extension language
for customizable applications. This tutorial introduces the reader informally to the
basic concepts and features of the Python language and system. It helps to have a
Python interpreter handy for hands-on experience, but all examples are self-
contained, so the tutorial can be read off-line as well. For a description of standard
objects and modules, see library-index; reference-index gives a more formal definition
of the language, and extending-index and c-api-index cover writing extensions in C or
C++. There are also several books covering Python in depth. This tutorial
does not attempt to be comprehensive and cover every single feature, or even every
commonly used feature. Instead, it introduces many of Python’s most noteworthy
features, and will give you a good idea of the language’s flavor and style. After
reading it, you will be able to read and write Python modules and programs, and you
will be ready to learn more about the various Python library modules described in
library-index. If you do much work on computers, eventually you find that there’s
some task you’d like to automate. For example, you may wish to perform a search-
and-replace over a large number of text files, or rename and rearrange a bunch of
photo files in a complicated way. Perhaps you’d like to write a small custom
database, or a specialized GUI application, or a simple game. If you’re a professional
software developer, you may have to work with several C/C++/Java libraries but find
the usual write/compile/test/re-compile cycle is too slow. Perhaps you’re writing a
test suite for such a library and find writing the testing code a tedious task. Or maybe
you’ve written a program that could use an extension language, and you don’t want to
design and implement a whole new language for your application.
Typing an end-of-file character (Control-D on Unix, Control-Z on Windows) at the
primary prompt causes the interpreter to exit with a zero exit status. If that doesn’t
work, you can exit the interpreter by typing the following command: quit(). The
interpreter’s line-editing features include interactive editing, history substitution and
code completion on systems that support readline. Perhaps the quickest check to see
whether command line editing is supported is typing Control-P to the first Python
prompt you get. If it beeps, you have command line editing; see Appendix
Interactive Input Editing and History Substitution for an introduction to the keys.
The interpreter operates somewhat like the Unix shell: when called with standard input
connected to a tty device, it reads and executes commands interactively; when called
with a file name argument or with a file as standard input, it reads and executes a
script from that file. A second way of starting the interpreter is python -c command
[arg] ..., which executes the statement(s) in command, analogous to the shell’s -c
option. Since Python statements often contain spaces or other characters that are
special to the shell, it is usually advised to quote the command in its entirety. Some
Python modules are also useful as scripts; these can be invoked using
python -m module [arg] ..., which executes the source file for the module as if you had
spelled out its full name on the command line. When a script file is used, it is
sometimes useful to be able to run the script and enter interactive mode afterwards,
which can be done by passing -i before the script.
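The python -c behaviour described above can be demonstrated from within Python itself by spawning a child interpreter; subprocess and sys.executable are standard-library facilities, and the one-line statement is an arbitrary example.

```python
import subprocess
import sys

# Execute a one-line statement in a child interpreter, analogous to
# invoking "python -c" from the shell.
result = subprocess.run(
    [sys.executable, "-c", "print(6 * 7)"],
    capture_output=True, text=True,
)
print(result.stdout.strip())   # 42
print(result.returncode)       # 0: the interpreter exits with a zero status
```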
There are tools which use doc strings to automatically produce online or printed
documentation or to let the user interactively browse through code; it’s good practice
to include doc strings in code that you write, so make a habit of it. The execution of a
function introduces a new symbol table used for the local variables of the function.
More precisely, all variable assignments in a function store the value in the local
symbol table; whereas variable references first look in the local symbol table, then in
the local symbol tables of enclosing functions, then in the global symbol table, and
finally in the table of built-in names. Thus, global variables cannot be directly
assigned a value within a function (unless named in a global statement), although
they may be referenced. The actual parameters (arguments) to a function call are
introduced in the local symbol table of the called function when it is called; thus,
arguments are passed using call by value (where the value is always an object
reference, not the value of the object). When a function calls another function, a
new local symbol table is created for that call. A function definition introduces the
function name in the current symbol table. The value of the function name has a type
that is recognized by the interpreter as a user-defined function.
Annotations are stored in the __annotations__ attribute of the function as a dictionary
and have no effect on any other part of the function. Parameter annotations are defined
by a colon after the parameter name, followed by an expression evaluating to the value
of the annotation. Return annotations are defined by a literal ->, followed by an
expression, between the parameter list and the colon denoting the end of the def
statement.
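The name lookup order and the annotation storage described above can be seen in a few lines; the function and names here are illustrative examples, not part of the project code.

```python
greeting = "Hello"                      # a global name

def annotated(name: str, excited: bool = False) -> str:
    # name and excited are created in this call's local symbol table;
    # greeting is not assigned here, so the reference falls through
    # to the global symbol table.
    suffix = "!" if excited else "."
    return greeting + ", " + name + suffix

print(annotated("world", excited=True))    # Hello, world!
print(sorted(annotated.__annotations__))   # ['excited', 'name', 'return']
```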
The comparison operators in and not in check whether a value occurs (does not
occur) in a sequence. The operators is and is not compare whether two objects are
really the same object; this only matters for mutable objects like lists. All
comparison operators have the same priority, which is lower than that of all
numerical operators. Comparisons can be combined using the Boolean operators and
and or, and the outcome of a comparison (or of any other Boolean expression) may be
negated with not. These have lower priorities than comparison operators; between
them, not has the highest priority and or the lowest, and, as always, parentheses can
be used to express the desired composition. The Boolean operators and and or are
so-called short-circuit operators: their arguments are evaluated from left to
right, and evaluation stops as soon as the outcome is determined. For example, if A
and C are true but B is false, A and B and C does not evaluate the expression C.
When used as a general value and not as a Boolean, the return value of a short-circuit
operator is the last evaluated argument.
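These operators behave as follows in practice; the values below are illustrative.

```python
xs = [1, 2, 3]

print(2 in xs)            # True: membership test
print(9 not in xs)        # True
print(xs is list(xs))     # False: equal contents, but two distinct list objects

def never_called():
    raise RuntimeError("short-circuit prevented this")

# "and" stops at the first false operand, so never_called() is not evaluated
print(False and never_called())    # False

# used as a general value, "or" returns the last evaluated argument
print("" or "default")             # default
```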
Classes provide a means of bundling data and functionality together. Creating a new
class creates a new type of object, allowing new instances of that type to be made.
Each class instance can have attributes attached to it for maintaining its state. Class
instances can also have methods (defined by its class) for modifying its state.
Python’s class mechanism adds classes to the language with a minimum of new syntax
and semantics. It is a mixture of the class mechanisms found in C++ and Modula-3.
Python classes provide all the standard features of object-oriented programming: the
class inheritance mechanism allows multiple base classes, a derived class can override
any methods of its base class or classes, and a method can call the method of a base
class with the same name.
Objects can contain arbitrary amounts and kinds of data. As is true for modules,
classes partake of the dynamic nature of Python: they are created at runtime, and can
be modified further after creation. In C++ terminology, normally class members
(including the data members) are public (except see below Private Variables), and all
member functions are virtual. As in Modula-3, there are no shorthands for
referencing the object’s members from its methods: the method function is declared
with an explicit first argument representing the object, which is provided implicitly
by the call. As in Smalltalk, classes themselves are objects. This provides semantics
for importing and renaming. Unlike C++ and Modula-3, built-in types can be used as
base classes for extension by the user. Also, like in C++, most built-in operators with
special syntax (arithmetic operators, subscripting etc.) can be redefined for class
instances. Lacking universally accepted terminology to talk about classes, I will
make occasional use of Smalltalk and C++ terms. I would use Modula-3 terms, since
its object-oriented semantics are closer to those of Python than C++, but I expect that
few readers have heard of it.
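The inheritance, overriding, and base-class-call features above can be shown with a short example; the class names are invented for illustration.

```python
class Greeter:
    def __init__(self, name):
        self.name = name            # instance attribute keeps state

    def greet(self):
        return "Hello, " + self.name

class LoudGreeter(Greeter):
    def greet(self):
        # override, then call the base-class method of the same name
        return Greeter.greet(self).upper() + "!"

g = LoudGreeter("Ada")
print(g.greet())                    # HELLO, ADA!
print(isinstance(g, Greeter))       # True
```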
Objects have individuality, and multiple names (in multiple scopes) can be bound to
the same object. This is known as aliasing in other languages. This is usually not
appreciated on a first glance at Python, and can be safely ignored when dealing with
immutable basic types (numbers, strings, tuples). However, aliasing has a possibly
surprising effect on the semantics of Python code involving mutable objects such as
lists, dictionaries, and most other types. This is usually used to the benefit of the
program, since aliases behave like pointers in some respects. For example, passing an
object is cheap since only a pointer is passed by the implementation; and if a
function modifies an object passed as an argument, the caller will see the change;
this eliminates the need for two different argument passing mechanisms as in Pascal.
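The aliasing behaviour just described can be demonstrated directly; the list and the helper function are illustrative.

```python
a = [1, 2, 3]
b = a                  # no copy: b is another name for the same list object
b.append(4)
print(a)               # [1, 2, 3, 4]
print(a is b)          # True

def add_item(seq):
    seq.append(5)      # mutates the caller's object: the caller sees this change

add_item(a)
print(a)               # [1, 2, 3, 4, 5]
```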
any way (except for performance), and it may change in the future. Examples of
namespaces are: the set of built-in names (containing functions such as abs(), and
built-in exception names); the global names in a module; and the local names in a
function invocation. The important thing to know about namespaces is that there is
absolutely no relation between names in different namespaces; for instance, two
different modules may both define a function maximize without confusion; users of
the modules must prefix it with the module name. By the way, I use the word attribute
for any name following a dot; for example, in the expression z.real, real is an
attribute of the object z. In a sense, there is a straightforward mapping between the
module’s attributes and the global names defined in the module: they share the same
namespace! Attributes may be read-only or writable. In the latter case, assignment to
attributes is possible; module attributes are writable: you can
write modname.the_answer = 42. Writable attributes may also be deleted with the
del statement. For example, del modname.the_answer will remove the attribute
the_answer from the object named by modname. Namespaces are created at
different moments and have different lifetimes. The namespace containing the built-
in names is created when the Python interpreter starts up, and is never deleted. The
global namespace for a module is created when the module definition is read in;
normally, module namespaces also last until the interpreter quits. The statements
executed by the top-level invocation of the interpreter, either read from a script file
or interactively, are considered part of a module called __main__, so they have their
own global namespace. (The built-in names actually also live in a module; this is
called builtins.) The local namespace for a function is created when the function is
called,
and deleted when the function returns or raises an exception that is not handled
within the function. (Actually, forgetting would be a better way to describe what
actually happens.) Of course, recursive invocations each have their own local
namespace.
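The writable-attribute behaviour above can be tried on a freshly created module object; types.ModuleType just gives us a scratch module to experiment on, and the attribute name follows the modname.the_answer example from the text.

```python
import types

mod = types.ModuleType("demo")     # a scratch module with its own namespace
mod.the_answer = 42                # writable attribute: modname.the_answer = 42
print(mod.the_answer)              # 42

del mod.the_answer                 # del removes the attribute again
print(hasattr(mod, "the_answer"))  # False
```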
To speed up loading modules, Python caches the compiled version of each module in
the __pycache__ directory under a name that includes the Python version. This
convention allows compiled modules from different releases and different versions
of Python to coexist. Python checks the modification date of the source against the
compiled version to see if it’s out of date and needs to be recompiled. This is a
completely automatic process. Also, the compiled modules are platform-
independent, so the same library can be shared among systems with different
architectures. Python does not check the cache in two circumstances. First, it always
recompiles and does not store the result for the module that’s loaded directly from
the command line. Second, it does not check the cache if there is no source module.
To support a non-source (compiled only) distribution, the compiled module must be
in the source directory, and there must not be a source module. Some tips for
experts:
• You can use the -O or -OO switches to reduce the size of a compiled module. The
-O switch removes assert statements, the -OO switch removes both assert
statements and doc strings. Since some programs may rely on having these
available, you should only use this option if you know what you’re doing.
“Optimized” modules have an opt- tag and are usually smaller. Future releases may
change the effects of optimization.
• A program doesn’t run any faster when it is read from a .pyc file than when it is
read from a .py file; the only thing that’s faster about .pyc files is the speed with
which they are loaded.
• The module compileall can create .pyc files for all modules in a directory.
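The caching behaviour described above can be observed directly with the standard py_compile module; the module name demo.py is an arbitrary example.

```python
import pathlib
import py_compile
import tempfile

# Byte-compile a throwaway module by hand and inspect the cached file path;
# the cache lives in __pycache__ and the file name carries an interpreter
# tag, which is what lets compiled modules from different releases coexist.
with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "demo.py"
    src.write_text("ANSWER = 42\n")
    cached = py_compile.compile(str(src))
    print(cached.endswith(".pyc"))        # True
    print("__pycache__" in cached)        # True
```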
THONNY IDE
Thonny was developed to provide a small and fast IDE, which has only a few
dependencies on other packages. It is independent of any special Desktop
Environment like KDE or GNOME, so Thonny only requires the GTK2
toolkit and therefore you only need the GTK2 runtime libraries installed to run it.
For compiling Thonny yourself, you will need the GTK (>= 2.6.0) libraries and
header files. You will also need the Pango, Glib and ATK libraries and header files.
All these files are available at http://www.gtk.org. Furthermore you need, of course,
a C compiler and the Make tool; a C++ compiler is also required for the included
Scintilla library. To compile, run:
% ./configure
% make
% make install
The configure script supports several common options, for a detailed list,
type
% ./configure --help
There are also some compile time options which can be found in src/Thonny.h.
Please see Appendix C for more information. In the case that your system lacks
dynamic linking loader support, you probably want to pass the option --disable-vte to
the configure script. This prevents compiling Thonny with dynamic linking loader
support for the terminal widget. Thonny has been successfully compiled and tested
under Debian 3.1 Sarge, Debian 4.0 Etch, Fedora Core 3/4/5,
Linux From Scratch and FreeBSD 6.0. It also compiles under Microsoft Windows.
At startup, Thonny loads all files from the last time Thonny was launched. You
can disable this feature in the preferences dialog (see Figure 3-4). If you specify
some files on the command line, only these files will be opened, but you can find the
files from the last session in the file menu under the "Recent files" item. By default
this contains the last 10 recently opened files. You can change the amount of recently
opened files in the preferences dialog. You can start several instances of Thonny,
but only the first will load files from the last session. To run a second instance of
Thonny, do not specify any file names on the command-line, or disable opening files
from the last session in the preferences dialog. By default,
Thonny detects an already running instance of itself and opens files from the
command-line in the already running instance. So, Thonny can be used to view and
edit files by opening them from other programs such as a file
manager. If you do not like this for some reason, you can disable using the first
instance with the appropriate command line option. If terminal emulation support is
enabled, you will have a terminal widget in the notebook at the bottom. If Thonny
cannot
find libvte.so at startup, the terminal widget will not be loaded. So there is no need to
install the package containing this file in order to run Thonny. Additionally, you
can disable the use of the terminal widget by a command line option; for more
information see Section 3.2. You can use this terminal (from now on called VTE)
nearly as a usual terminal program like xterm. There is basic clipboard support.
You can paste the contents of the clipboard by pressing the right mouse button to
open the popup menu and choosing Paste. To copy text from the VTE, just select the
desired text and then press the right mouse button and choose Copy from the pop up
menu. On systems running the X Window System you can paste the last selected text
by pressing the middle mouse button in the VTE (on 2-button mice, the middle button
can often be simulated by pressing both mouse buttons together).
As long as a project is open, the Make and Run commands will use the
project’s settings, instead of the defaults. These will be used whichever document is
currently displayed. The current project’s settings are saved when it is closed, or when
Thonny is shut down. When restarting Thonny, the previously opened project file
that was in use at the end of the last session will be reopened.
Execute will run the corresponding executable file, shell script or interpreted
script in a terminal window. Note that the Terminal tool path must be correctly set in
the Tools tab of the Preferences dialog - you can use any terminal program that runs
a Bourne compatible shell and accepts the "-e" command line argument to start a
command. After your program or script has finished executing, you will be prompted
to press the return key. This allows you to review any text output from the program
before the terminal window is closed.
By default the Compile and Build commands invoke the compiler and linker with
only the basic arguments needed by all programs. Using Set Includes and Arguments
you can add any include paths and compile flags for the compiler, any library names
and paths for the linker, and any arguments you want to use when running Execute.
Thonny has basic printing support. This means you can print a file by passing the
filename of the current file to a command which actually prints the file.
SYSTEM DESIGN
BLOCK DIAGRAM
WORKING:
The Raspberry Pi is interfaced with a chatbot activator button. Once the button is
activated, the chatbot starts asking questions; as the user replies to each question, the
Raspberry Pi applies machine learning and compares the answers with the stored
symptom dataset. If any symptom matches, the corresponding symptom is shown
through the LCD display.
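The matching step above can be sketched in a few lines of Python. The symptom list and the match_symptom helper below are illustrative placeholders, not the project's actual dataset or code; on the real device the result would be sent to the LCD rather than printed.

```python
# Hypothetical example dataset mapping symptom keywords to advice strings.
SYMPTOM_DATASET = {
    "fever": "Possible viral infection: monitor temperature, stay hydrated.",
    "headache": "Possible tension headache: rest and reduce screen time.",
    "cough": "Possible respiratory irritation: consider a check-up.",
}

def match_symptom(reply):
    """Compare a user's reply against the stored symptom keywords."""
    for symptom, advice in SYMPTOM_DATASET.items():
        if symptom in reply.lower():
            return symptom, advice
    return None, "No matching symptom found; please consult a doctor."

symptom, advice = match_symptom("I have had a fever since yesterday")
print(symptom)   # fever
```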
METHODOLOGY
The Health-Care Chat Bot System should be written in Python, GUI links and a
simple, accessible network API. The system must provide a capacity for parallel operation
and system design should not introduce scalability issues with regard to the number of
surface computers, tablets or displays connected at any one time. The end system should also
allow for seamless recovery, without data loss, from individual device failure. There must be
a strong audit chain with all system actions logged. While interface requirements are not yet
fixed, it is worth noting that this system is likely to conform to what is available. With that in
mind, the most adaptable and
portable technologies should be used for the implementation. The system has criticality in so
far as it is a live system. If the system is down, then customers must not notice, or notice that
the system recovers quickly (seconds). The system must be reliable enough to run crash- and
glitch-free more or less indefinitely, or facilitate error recovery strong enough that failures go
unnoticed by the user.
In machine learning, support-vector machines (SVMs) are supervised learning models with
associated learning algorithms that analyse data used for classification and regression
analysis. Given a set of training examples, each marked as belonging to one or the other of
two categories, an SVM training algorithm builds a model that assigns new examples to one
category or the other, making it a non-probabilistic binary linear classifier (although methods
such as Platt scaling exist to use SVMs in a probabilistic classification setting). An SVM
model is a representation of the examples as points in space, mapped so that the examples of
the separate categories are divided by a clear gap that is as wide as possible. New examples
are then mapped into that same space and predicted to belong to a category based on the side
of the gap on which they fall. In addition to performing linear classification, SVMs can
efficiently perform a non-linear classification using what is called the kernel trick, implicitly
mapping their inputs into high-dimensional feature spaces.
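To make the idea of a separating line concrete, the toy sketch below trains a plain perceptron, a simpler linear classifier than a true SVM (which would maximise the margin with a dedicated solver such as scikit-learn's SVC). The four 2-D points are invented example data.

```python
# Toy linearly separable data: (x1, x2) points with labels -1 and +1.
points = [((0.0, 0.0), -1), ((1.0, 0.0), -1), ((3.0, 3.0), 1), ((4.0, 2.0), 1)]

w = [0.0, 0.0]
b = 0.0
for _ in range(50):                                   # a few passes over the data
    for (x1, x2), label in points:
        if label * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified: update
            w[0] += label * x1
            w[1] += label * x2
            b += label

def predict(x1, x2):
    """The side of the separating line decides the predicted class."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1

print(predict(3.5, 2.5))   # 1
print(predict(0.5, 0.2))   # -1
```

Unlike an SVM, the perceptron finds *a* separating line rather than the widest-margin one, but the prediction rule (which side of the line a point falls on) is the same idea.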
In recent years, a great variety of hardware solutions for real-time TSR has been proposed,
including application-specific integrated circuit (ASIC) chips, field programmable gate
arrays (FPGAs) and digital signal processors (DSPs). A software-based solution running on
a Linux or Windows system with a 2.4-GHz dual-core CPU has also been presented.
CHAPTER 7
RESULTS
CHAPTER 8
Conclusion:
Thus, we can conclude that this system gives accurate results. As we are using a large
dataset, better performance is ensured. Thus, we have built up a system which is useful for
providing medical advice to patients.
Future Scope:
Chat bots are a technology of the future which is yet to uncover its full potential, but with
their rising popularity and craze among companies, they are bound to stay here for long.
Machine learning has changed the way companies communicate with their customers. With
new platforms to build various types of chat bots being introduced, it is of great excitement to
witness the growth of a new domain in technology while surpassing the previous threshold.