Program Highlights
• 180 hours of industry boot-camp style training
• Expert-led classroom sessions with 70% practical content
• Top-class faculty from Microsoft, Google, Amazon, Deloitte etc.
• 24x7 lifetime access to online learning content & videos
• Create a full-fledged AI product as your capstone project
• Certificate from TCS iON ProCert & Edvancer, with job assistance
• Get a huge hike in your salary on becoming a data scientist
• We will work closely with you to help build your data science portfolio and start a data science career
• Learn data science from India's top data science training institute, as ranked by industry & students
Technologies Covered
Pre-requisites
1. Education: B.E/B.Tech/M.Tech/MCA/MS/MBA, or B.Sc/M.Sc in Statistics, Maths, Physics, Economics or any quantitative field
2. Experience: 0-10 years of work experience
3. You should be comfortable learning maths and programming, though a prior programming or maths background is not mandatory.
Scholarship Process
1. Enter your application details on our site & upload your CV/profile.
2. Your application will be reviewed by our experts; if shortlisted, you will be eligible for the scholarship.
3. Enrol for the course and get the scholarship of Rs. 30,000.
Fees
Final Course fee: Rs. 75,000 + 18% GST after a 20% discount and scholarship of Rs. 30,000.
Pay the fee in 6 interest-free instalments* after a 10% down-payment. Effectively, you will pay just Rs. 14,750 per month for 6 months after the down-payment, if you clear the scholarship application process and get the discount.
(*instalment offer subject to approval from our financing partner based on Aadhar card and 4 months bank statements)
Payments can be made online through credit cards, debit cards or net-banking.
About Edvancer
Edvancer is India's leading data science training institute, providing a range of courses on data science for learners at all levels. We have trained over 5,000 students and delivered over 10,00,000 hours of learning. Our alumni work in data science with some of India's top companies, and globally. Our corporate clients include PwC, E&Y, L&T, HP, JP Morgan, Cognizant, Accenture, TCS, Microsoft etc.
Full Curriculum
Module 1: Predictive Analytics in R
What is this module about?: Predictive analytics is the scientific process of deriving insights from raw data to support decision making, and is the core of data science. Through this module you will learn how to use analytical techniques and the R language to solve business problems. This comprehensive module takes you from the basics of statistical techniques and the R language right up to building predictive models.
Tools to be learnt: R
Class Duration: 64 hours
Tools to be learnt: Python (Libraries like pandas, numpy, scipy, scikit-learn, bokeh, beautifulsoup)
Class Duration: 56 hours
Tree Models using Python
In this module we learn a very popular class of machine learning models: rule-based tree structures, also known as decision trees. We examine their high-variance nature and learn how to use bagging methodologies to arrive at a new technique, Random Forest. We then extend the idea of randomness further with the ExtraTrees algorithm. In addition, we learn about GridSearchCV and RandomizedSearchCV, powerful tuning tools used with all kinds of machine learning algorithms.
• Introduction to decision trees
• Tuning tree size with cross-validation
• Introduction to the bagging algorithm
• Random Forests
• Grid search and randomized grid search
• ExtraTrees (Extremely Randomised Trees)
• Partial dependence plots
• Case studies
• Home exercises
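As a taste of how a decision tree picks its rules, here is a minimal sketch (plain Python, illustrative only, not the course material) that scores candidate split points on one feature with Gini impurity, the default splitting criterion in scikit-learn's trees:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum(p_k^2)."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(xs, ys):
    """Find the threshold on a single feature that minimises the
    weighted Gini impurity of the two child nodes."""
    best_t, best_score = None, float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Perfectly separable toy data: x <= 2 is class 0, x > 2 is class 1
print(best_split([1, 2, 3, 4], [0, 0, 1, 1]))  # -> (2, 0.0)
```

A real tree applies this greedy search recursively over all features; Random Forest then trains many such trees on bootstrapped samples and averages their votes.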
Boosting Algorithms using Python
Want to win a data science contest on Kaggle or a data hackathon, or be known as a top data scientist? Then learning boosting algorithms is a must, as they provide a very powerful way of analysing data and solving hard-to-crack problems.
• Concept of weak learners
• Introduction to boosting algorithms
• Adaptive Boosting (AdaBoost)
• Extreme Gradient Boosting (XGBoost)
• Case study
• Home exercise
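The core idea of Adaptive Boosting is reweighting: after each weak learner, misclassified samples gain weight so the next learner focuses on them. A minimal sketch of one AdaBoost reweighting round (illustrative plain Python, not course code):

```python
import math

def adaboost_round(weights, correct):
    """One AdaBoost reweighting step.

    weights : current sample weights (sum to 1)
    correct : True where the weak learner classified the sample correctly
    Returns the learner's vote weight alpha and the new sample weights.
    Assumes 0 < weighted error < 1.
    """
    err = sum(w for w, c in zip(weights, correct) if not c)
    alpha = 0.5 * math.log((1 - err) / err)  # learner's say in the final vote
    # misclassified samples are up-weighted, correct ones down-weighted
    new = [w * math.exp(alpha if not c else -alpha) for w, c in zip(weights, correct)]
    z = sum(new)
    return alpha, [w / z for w in new]

w = [0.25] * 4
alpha, w = adaboost_round(w, [True, True, True, False])
print(round(alpha, 3))  # a learner with 25% error gets alpha = 0.5*ln(3) ~ 0.549
print(w)                # the one misclassified sample now carries half the weight
```

XGBoost builds on the same sequential idea but fits each new tree to the gradient of the loss instead of reweighting samples.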
Support Vector Machines (SVM) and KNN in Python
We step into the powerful world of "observation-based learning algorithms", which can capture patterns in the data that otherwise go undetected. We start with KNN, which is fairly simple, and then move to SVM, which is very powerful at capturing non-linear patterns in the data.
• Introduction to the idea of observation-based algorithms
• Distances and similarities
• K Nearest Neighbours (KNN) for classification
• Introduction to SVM for classification
• Regression with KNN and SVM
• Case study
• Home exercises
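KNN is simple enough to fit in a few lines: classify a query point by majority vote among its k nearest training points. A minimal sketch (plain Python, illustrative only; in the course you would use scikit-learn's `KNeighborsClassifier`):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training observations, using Euclidean distance."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"), ((6, 5), "b")]
print(knn_predict(train, (1, 1)))  # -> 'a': its nearest neighbours are class "a"
```

For regression, the same neighbour search returns the average of the neighbours' target values instead of a vote.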
Unsupervised Learning in Python
Many machine learning algorithms become difficult to work with when the data has many variables. PCA comes to the rescue, solving the problems that arise from highly correlated variables. The same idea can be extended to uncover hidden factors in the data with Factor Analysis, which is used extensively in surveys and marketing analytics. We also learn about two very important segmentation algorithms, K-means and DBSCAN, and understand their differences and strengths.
• Need for dimensionality reduction
• Introduction to Principal Component Analysis (PCA)
• Difference between PCAs and latent factors
• Introduction to Factor Analysis
• Patterns in the data in the absence of a target
• Segmentation with hierarchical clustering and K-means
• Measures of goodness of clusters
• Limitations of K-means
• Introduction to density-based clustering (DBSCAN)
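K-means is the most approachable of these algorithms: alternate between assigning points to their nearest centroid and moving each centroid to its cluster's mean. A bare-bones sketch (plain Python with naive initialisation; real implementations such as scikit-learn's `KMeans` use k-means++ initialisation and multiple restarts):

```python
import math

def kmeans(points, k, iters=20):
    """Bare-bones k-means. Naive init: first k points as centroids."""
    centroids = list(points[:k])
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[i].append(p)
        # update step: move each centroid to the mean of its cluster
        centroids = [
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return sorted(centroids)

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(kmeans(pts, 2))  # two centroids, one per obvious cluster
```

DBSCAN, by contrast, grows clusters from dense neighbourhoods, so it needs no preset k and can find non-spherical clusters, one of the limitations of k-means covered in this module.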
Neural Networks
Artificial neural networks are the building blocks of artificial intelligence. Learn techniques which replicate how the human brain works and create machines that can solve problems like humans.
• Introduction to neural networks
• Single-layer neural networks
• Multi-layer neural networks
• Backpropagation algorithm
• Momentum and decaying learning rate in the context of gradient descent
• Neural network implementation in Python
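The weight-update idea behind backpropagation can be seen on the smallest possible network: a single sigmoid neuron trained by gradient descent. A minimal sketch (plain Python, illustrative only; multi-layer networks propagate the same error term backwards through their layers):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Learn the OR function with one sigmoid neuron and plain gradient descent.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.5

for _ in range(2000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # gradient of the log-loss w.r.t. the pre-activation is (p - y);
        # backpropagation pushes exactly this error term to earlier layers
        err = p - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(preds)  # -> [0, 1, 1, 1]
```

Momentum and learning-rate decay, covered in this module, modify the `w -= lr * grad` step to smooth and stabilise exactly this kind of update.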
Text Mining in Python
Unstructured text data accounts for more and more interaction records as our daily lives move online. In this module we start by looking at ways to collect all that data. In addition to scraping simple web data, we'll learn to use data APIs, with the Twitter API as an example, right from the point of creating a Twitter developer account. We then discuss one of the most powerful algorithms for text data, Naive Bayes, and see how to mine text data.
• Quick recap of string functions
• Gathering text data using web scraping with urllib
• Processing raw web data with BeautifulSoup
• Interacting with Google search using urllib with a custom user agent
• Collecting Twitter data with the Twitter API
• Introduction to Naive Bayes
• Feature engineering for text data
• Feature creation with TF-IDF for text data
• Case studies
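TF-IDF, the feature-creation step above, weights each word by how often it appears in a document and discounts words that appear everywhere. A minimal sketch of the computation (plain Python, illustrative only; in practice you would use scikit-learn's `TfidfVectorizer`, whose smoothing and normalisation differ slightly from this textbook formula):

```python
import math

def tfidf(docs):
    """Term frequency x inverse document frequency for a tiny corpus
    of tokenised documents. Words present in every document get
    idf = 0, so common filler words stop dominating the features."""
    n = len(docs)
    vocab = {w for d in docs for w in d}
    idf = {w: math.log(n / sum(w in d for d in docs)) for w in vocab}
    return [{w: d.count(w) / len(d) * idf[w] for w in set(d)} for d in docs]

docs = [
    "the cat sat".split(),
    "the dog sat".split(),
    "the dog barked".split(),
]
vecs = tfidf(docs)
print(vecs[0]["the"])  # 0.0 -- "the" occurs in every document
print(vecs[0]["cat"])  # positive -- "cat" is distinctive to document 0
```

Naive Bayes then treats these per-word features as conditionally independent given the class, which is why it works so well on high-dimensional text data.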
Ensemble Methods in Machine Learning
Individual machine learning models extract patterns from the data in different ways, which at times results in them finding different patterns. In this module we move past relying on a single algorithm and learn to combine multiple ML models to make our predictive modelling solutions even more powerful.
• Making use of multiple ML models taken together
• Simple majority vote and weighted majority vote
• Blending
• Stacking
• Case study
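The two simplest combination schemes in the list above, majority vote and weighted majority vote, can be sketched in a few lines (plain Python, illustrative only; scikit-learn offers the same idea as `VotingClassifier`):

```python
from collections import Counter

def majority_vote(predictions):
    """Each row holds one model's predictions; the ensemble picks
    the most common label for each sample (column)."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

def weighted_vote(predictions, weights):
    """Same idea, but models we trust more get a bigger say."""
    out = []
    for col in zip(*predictions):
        score = Counter()
        for label, w in zip(col, weights):
            score[label] += w
        out.append(score.most_common(1)[0][0])
    return out

model_preds = [
    ["spam", "ham", "spam"],  # model A
    ["spam", "spam", "ham"],  # model B
    ["ham",  "ham",  "ham"],  # model C
]
print(majority_vote(model_preds))             # -> ['spam', 'ham', 'ham']
print(weighted_vote(model_preds, [3, 1, 1]))  # model A now outvotes B and C
```

Blending and stacking go one step further: instead of fixed weights, a second-level model learns how to combine the base models' predictions.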
Bokeh
For making quick prototypes of your solutions, which can later be scaled into interactive visualisations as standalone or hosted web pages, we introduce you to Bokeh, an evolving Python library which has all the tools you'll need.
• Introduction to Bokeh charts and plotting
Version Control using Git and Interactive Data Products
We finish this module with a discussion of two very important aspects of a data scientist's work. First is version control, which enables you to work on large projects with multiple team members scattered across the globe. We learn about Git and GitHub, the most widely used public version-control platform.
• Need and importance of version control
• Setting up Git and GitHub accounts on a local machine
• Creating and uploading GitHub repos
• Push and pull requests with the GitHub app
• Merging and forking projects
• Examples of static and interactive data products
Creating Visualizations
In this module you will learn how to create a number of charts, each with its own specific utility, through multiple case studies.
• Bar in Bar Chart
• Scatter Plots
• Histogram
• Heat Maps
• Highlighting in Tables
• Motion Charts
• Pie Chart
• Bullet Chart
• Box & Whisker Plot
• KPI Chart
• Market Basket Analysis Chart
• Pareto Chart
• Waterfall Chart
• Best Practices for Selecting Chart Type
• Case Study 1
• Case Study 2
Adding Statistics to Data
Understand how to add statistics to charts and tables.
• Reference Lines
• Reference Bands
• Distribution Bands
• Trend Lines
• Forecasting
• Clustering
• Summary Card
• Case Study 1
Formatting & Annotation
Format your visualizations and make them more informative.
• Add Titles, Captions & Annotations
• Formatting Options - Fonts, Shading, Borders etc.
• Formatting Axes, Mark Labels and Legends
Dashboards & Stories
In this module you will learn how to create the end output of visualization in Tableau: entire dashboards and data-based storyboards to present to clients and management.
• What are Dashboards?
• Why and How are Dashboards Useful?
• Creating an Interactive Dashboard
• Adding Actions to a Dashboard
• Best Practices for Dashboard Design
• What is a Story?
• Creating a Story
• Adding a Background Image to a Story
• Case Study