
Amarnath
E-MAIL: n.amarnath96@gmail.com
Mobile: +91-8919512704.

Professional Summary:
 Over 6 years of professional experience in the IT industry, working with the Hadoop ecosystem and data
warehousing tools.
 Technically accomplished professional with 3.5 years of Big Data ecosystem experience in the
ingestion, storage, querying, processing and analysis of data on Hadoop and AWS.
 Five months of experience with Spark SQL.
 In-depth knowledge of, and hands-on experience with, Apache Hadoop components such as
HDFS, Hive (HiveQL), HBase, Pig, Sqoop and Oozie.
 Hands-on experience in writing Hive scripts.
 Experience in importing and exporting data between relational database systems and HDFS
using Sqoop.
 Experience across all project stages, including requirements gathering, architecture design and
documentation, development, performance optimization, data cleansing and reporting.
 Strong experience with AWS EMR, Spark installation, and HDFS and MapReduce architecture,
along with good knowledge of Spark, Scala and Hadoop distributions such as Hortonworks and
Apache Hadoop.
 Strong experience across the Hadoop and Spark ecosystems, including Hive, Pig, shell scripting,
Sqoop, Oozie and Spark SQL.
 Strong experience with ETL tools (Informatica).
 Experience in building Pig scripts to extract, transform and load data onto HDFS for processing.
 Experience in writing HiveQL queries to store processed data into Hive tables for analysis.
 Excellent understanding and knowledge of NoSQL databases such as HBase.
 Basic knowledge of Oozie, Flume and Spark.
 Writing Pig Latin scripts and loading data into Hive tables.
 Understanding and taking backups of the DataNode, NameNode, JobTracker, Secondary NameNode
and TaskTracker.
 Well versed in scripting with Hive and Pig.
 Good knowledge of Flume.
 Wrote a client-side script that stages data pushed to HDFS in a temporary location and then loads
it into Hive tables (a brief HiveQL sketch of this load step follows this list).
 Acquired good knowledge of HBase while working on a major migration project.
 Worked extensively with the Teradata utilities BTEQ, FastExport, FastLoad and MultiLoad to export
data and to load data from flat files.
 Extensively used ETL methodology to support data extraction, transformation and loading in a
corporate-wide ETL solution built with Informatica.
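
A minimal HiveQL sketch of the staging-then-load step mentioned above. The database, table, path
and partition names are illustrative assumptions, not taken from any of the projects below.

    -- Target table for the client feed; files first land in a temporary HDFS staging directory.
    CREATE TABLE IF NOT EXISTS sales_db.sales_raw (
      txn_id  STRING,
      txn_amt DECIMAL(18,2))
    PARTITIONED BY (load_dt STRING)
    STORED AS TEXTFILE;

    -- Scheduled Hive step: move the staged files into the managed, partitioned table.
    LOAD DATA INPATH '/data/staging/sales/2018-09-01/'
    INTO TABLE sales_db.sales_raw
    PARTITION (load_dt = '2018-09-01');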

Education:
 Master of Computer Applications (MCA) from KMM Institute of PG Studies, affiliated to SV University.
Work History:
 Working as a Hadoop Developer at Indecomm Global Services, Bangalore, from September 2018
to date.
 Worked as a Hadoop Developer at L&T Infotech, Bangalore, from June 2017 to August 2018.
 Worked as an ETL Developer at ASICS Technologies Pvt. Ltd., Hyderabad, from November 2012
to May 2017.

Technical Skills:
 Programming Skills : SQL, Teradata.
 Big Data Tools : HDFS, Pig, Hive, Sqoop, Oozie, Flume, Tidal.
 Databases : Teradata, Oracle 11g, MySQL.
 NoSQL Databases : HBase.
 Operating Systems : Linux (Red Hat), Windows 2000/XP.
 IDEs : Eclipse.

Professional Experience:
#Project 1 : Telecommunications.
Client : Inovvo.
Duration : Sep-2018 – till date.
Technology Landscape : Hadoop, Hive, Sqoop, Spark SQL, HDFS, Oozie, Storm, Teradata.
Role : Senior Software Engineer.

Summary:
The SAX (Subscriber Analytix) application provides a 360° view that enables network
operators to enhance operational efficiency and prioritize capital expenditure based on
customer demand in a constantly changing environment: data on preferred applications and
websites, interest categories, devices used, location and even roaming usage. The Subscriber
Analytix products provide analytics that operators can use internally to build smarter networks
and improve customer care.
Subscriber Analytix processes the following information from multiple data sources and presents
it both graphically and in reports:
 Broadband logs
 Polystar user-plane Gn/Gi data (probe), i.e. htpxi, floxi, rtsxi
 Subscriber profile
 Tariff plan
Subscriber Analytix offers providers a complete analytics solution that:
 Converts raw data into revenue by monetizing user-behaviour categories
 Arms operators with accurate, actionable, timely information
 Improves the customer experience
 Drives reductions in customer churn

Roles and Responsibilities:
 Working on Hive partitions and buckets.
 Creating RDDs in Spark SQL.
 Creating Hive tables using Spark SQL (see the sketch after this list).
 Writing Hive queries in Spark.
 Creating workflows/schedules such as Oozie jobs.
 Good knowledge of MapReduce.
 Writing Hive scripts and loading data into target tables.
 Loading staged HDFS files into target tables.
 Working with HDFS commands.
 Extensively used Sqoop to import/export data between RDBMS and Hive tables; set up
incremental imports and created Sqoop jobs that track the last saved value.
 Created Hive queries to compare raw data with EDW reference tables and perform aggregations.
 Managing and scheduling jobs on a Hadoop cluster.
 Analysed the data with Hive queries and Pig scripts to understand user behaviour.
 Optimized MapReduce jobs to use HDFS efficiently by applying various compression
mechanisms.
 Experienced in managing and reviewing Hadoop log files.
 Expertise in using the ORC and Parquet file formats in Hive.
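
A short Spark SQL sketch of the table creation and querying listed above, runnable through the
spark-sql shell or spark.sql(). The sax database, table and column names are assumptions for
illustration, not the project's actual schema.

    -- Partitioned, ORC-backed Hive table created from Spark SQL.
    CREATE DATABASE IF NOT EXISTS sax;
    CREATE TABLE IF NOT EXISTS sax.subscriber_usage (
      msisdn     STRING,
      app_name   STRING,
      bytes_up   BIGINT,
      bytes_down BIGINT)
    PARTITIONED BY (event_dt STRING)
    STORED AS ORC;

    -- Hive-style query run through Spark SQL: per-application traffic for one day.
    SELECT app_name,
           SUM(bytes_up + bytes_down) AS total_bytes
    FROM   sax.subscriber_usage
    WHERE  event_dt = '2019-01-15'
    GROUP BY app_name
    ORDER BY total_bytes DESC
    LIMIT 20;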

#Project 2 : Treasury Data Foundation


Client : ABSA (Pune).
Duration : Jan-2018 – Aug-2018
Technology Landscape : Hadoop, Hive, Sqoop, HDFS, Oozie, Teradata, Oracle, LUM.
Role : Hadoop Developer.
Summary:
The scope of the project is to fetch data from the cheque_mohtly table of the EDW using Sqoop,
process it through the different stages of the Hadoop infrastructure (landing area, raw area, publish
area) and finally store it in the TDF data mart.

Recommendations Summary
 Fetch cheque_mohtly data from the EDW using Sqoop.
 Store the data in the Hadoop landing area.
 Process the data through the Hadoop raw area and publish area (Hive).
 Store the data in the Treasury Data Mart (Hive).
 Reconcile the source data with the Treasury Data Mart (Hive).
Roles and Responsibilities:
 Creating folders in HDFS and storing large files in them.
 Creating Hive tables and loading data into them.
 Implemented SCD Type 2 in Hive (see the sketch after this list).
 Working on Hive partitions and buckets.
 Creating workflows/schedules such as Oozie jobs.
 Good knowledge of MapReduce.
 Writing BTEQ scripts and loading data into target tables.
 Loading staged HDFS files into target tables.
 Working with HDFS commands.
 Good knowledge of Pig commands.
 Working with Sqoop commands (import/export).
 Creating HBase tables corresponding to the existing Oracle tables.
 Working with HBase table alters and updates.
 Identifying proper primary keys and duplicate records.
 Working on creating Teradata tables.
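
A minimal HiveQL sketch of the SCD Type 2 pattern referred to above, assuming hypothetical
dim_account and stg_account tables that share their business columns and track history with
eff_start_dt, eff_end_dt and is_current; the project's real schema is not shown here. For brevity the
sketch versions every account present in the staging feed; a real job would compare attribute values
or hashes to skip unchanged rows.

    -- Assumed columns: account_id, account_type, eff_start_dt, eff_end_dt, is_current.
    -- Rebuild the dimension: expire current rows that reappear in staging,
    -- then append a new current version for every staged row.
    INSERT OVERWRITE TABLE dim_account
    SELECT * FROM (
      SELECT d.account_id,
             d.account_type,
             d.eff_start_dt,
             CASE WHEN s.account_id IS NOT NULL AND d.is_current = 'Y'
                  THEN CURRENT_DATE ELSE d.eff_end_dt END AS eff_end_dt,
             CASE WHEN s.account_id IS NOT NULL AND d.is_current = 'Y'
                  THEN 'N' ELSE d.is_current END AS is_current
      FROM dim_account d
      LEFT JOIN stg_account s
        ON d.account_id = s.account_id
      UNION ALL
      SELECT s.account_id,
             s.account_type,
             CURRENT_DATE               AS eff_start_dt,
             CAST('9999-12-31' AS DATE) AS eff_end_dt,
             'Y'                        AS is_current
      FROM stg_account s
    ) merged;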

#Project 3 : J&J Global BI Support.


Client : Johnson and Johnson.
Duration : May-2017 – Dec-2017
Technology Landscape : Hadoop, Pig, Hive, Sqoop, HDFS, Oozie, Teradata, Informatica.
Role : Hadoop Developer.

Summary:
The engagement covered the detailed system requirements identified for the effort, together with
all of the business requirements necessary to implement the identified solution:
 Integrate OTIF (On Time In Full) and inventory data from the PRMS, JDE 8.12, USROTC and
SAP P01 source systems into the existing Plan Analytics framework.
 Integrate inventory data from the PRMS, JDE Spine, JDE 7.3, USROTC, SAP P01, JDE 8.12,
BTB and MARS source systems into the existing Plan Analytics framework.
 Integrate forecast data from JDE 7.3 and SAP P01 into the existing Plan Analytics framework.

Roles and Responsibilities:
 Creating service requests in IRS.
 Resolving service request tickets.
 Working on Hive tables and Pig scripts.
 Working on Sqoop jobs.
 Monitoring workflows/schedulers such as Oozie.
 Working on HDFS commands.
 Writing BTEQ scripts and loading data into target tables.
 Creating the SLT three times daily.
 Creating the weekly dashboard.
 Good knowledge of Pig commands.
 Monitoring Sqoop commands (import/export).
 Identifying proper primary keys and duplicate records.
 Working on creating Teradata tables.

#Project 4 : RSDS Remediation - Hadoop Implementation.


Client : ANZ Operations.
Duration : June-2016 – March-2017
Technology Landscape : Hadoop, Pig, Hive, Sqoop, HDFS, Oozie, Teradata, MySQL.
Role : Hadoop Developer
Summary:
The engagement covered the detailed system requirements identified for the effort, together with
all of the business requirements necessary to implement the identified solution.
Recommendations Summary:
 One-time history clean-up on 186 tables.
 Correcting the primary keys in the load process for 67 tables to avoid recurrence in future.
 Enhancing the ANZ audit and control framework with additional data quality governance to
avoid recurrence in future.
 Benchmarking Teradata checkpoints to identify the tables/queries to be fine-tuned, which will
optimize performance.

Roles and Responsibilities:


 Creating folders in HDFS and storing large files in them.
 Creating Hive tables and loading data into them.
 Working on Hive partitions and buckets.
 Creating workflows/schedules such as Oozie jobs.
 Good knowledge of MapReduce.
 Writing BTEQ scripts and loading data into target tables.
 Loading staged HDFS files into target tables.
 Working with HDFS commands.
 Good knowledge of Pig commands.
 Working with Sqoop commands (import/export).
 Creating HBase tables corresponding to the existing Oracle tables.
 Working with HBase table alters and updates.
 Identifying proper primary keys and duplicate records (see the query sketch after this list).
 Working on creating Teradata tables.
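
A small HiveQL sketch of the duplicate-record check behind the primary-key work above; the
raw_area.transactions table and its columns are illustrative assumptions, not the project's actual
names.

    -- Candidate-key check: if (acct_no, txn_dt) really is a proper primary key,
    -- this query returns no rows; anything it does return is a duplicate to investigate.
    SELECT acct_no,
           txn_dt,
           COUNT(*) AS row_cnt
    FROM   raw_area.transactions
    GROUP BY acct_no, txn_dt
    HAVING COUNT(*) > 1
    ORDER BY row_cnt DESC;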

#Project 5 : Insurance Portal


Client : AMICA, USA.
Duration : Nov-2012 – May-2016
Technology Landscape : Teradata 14, Informatica 8.5, MySQL.
Role : ETL Developer.

Summary:
This project has the facility to take insurance policy other than life insurance through Internet. This gives
all the information about insurance, like policies, agents, offices, and glossary of insurance terminology,
online services to some of the policies. Information regarding expiry, renewal, status inquiry, chatting
with experts, asking questions which will be answered after one day if not possible by chatting.
.
Roles and Responsibilities:
 Developed this project from scratch and successfully implemented it in the live environment.
 Writing scripts with the BTEQ, FastLoad, MultiLoad and FastExport utilities.
 Working with primary indexes (PI) and secondary indexes (SI) in Teradata.
 Working with joins in Teradata.
 Extensively worked with BTEQ utilities to export and load data to/from source systems.
 Developed and tested all the BTEQ scripts and prepared unit test plans for them.
 Extensively worked on data extraction, transformation and loading from source to target
systems using BTEQ scripts.
 Understanding high-level design documents such as the MLPD and TDD documents, along with
customer requirements.
 Creating tables, views, collect-statistics statements and DDLs for the SQL scripts from the TDD
documents (see the Teradata sketch after this list).
 Involved in query tuning and performance improvements for some existing queries.
 Prepared the unit test case and unit test result documents; appreciated by the onshore team for
the quality of the work.
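
A brief Teradata SQL sketch of the table creation and collect-statistics work mentioned above; in the
project these statements ran inside BTEQ scripts. The ins_db database, table and column names are
assumptions for illustration, and ins_db.policy_fct is assumed to already exist with the same four
columns.

    -- Staging table with an explicit primary index, as specified in the design documents.
    CREATE MULTISET TABLE ins_db.policy_stg ,NO FALLBACK
    (
      policy_no VARCHAR(20) NOT NULL,
      holder_id INTEGER,
      premium   DECIMAL(12,2),
      eff_dt    DATE FORMAT 'YYYY-MM-DD'
    )
    PRIMARY INDEX (policy_no);

    -- Statistics on the join/filter column so the optimizer picks a sensible plan.
    COLLECT STATISTICS COLUMN (policy_no) ON ins_db.policy_stg;

    -- Load from staging into the reporting table.
    INSERT INTO ins_db.policy_fct (policy_no, holder_id, premium, eff_dt)
    SELECT policy_no, holder_id, premium, eff_dt
    FROM   ins_db.policy_stg
    WHERE  eff_dt >= DATE '2016-01-01';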
