
Mohit Kumar

+44 7469673758 mohit.nigania@outlook.com Hatfield

Big Data Analytics Professional


5 years of total IT work experience across enterprise software, Hadoop, Spark, Hive and big data
analytics, with the ability to provide innovative solutions

KEY SKILLS
• Data Analysis • Data Visualization • Optimization Techniques • HDFS • SQL
AWS Certified Solutions Architect - Associate
TECHNICAL SKILLS

Languages: Python, SQL
AWS Services: EC2, Lambda, Auto Scaling, ELB, S3, EBS, RDS, DynamoDB, VPC, Route 53, CloudWatch,
CloudFormation, IAM, EMR, Data Pipeline, SQS and SNS
Tools: PyCharm, SQL*Plus, Eclipse
Databases: Oracle 10g, MongoDB
Big Data Ecosystem: Hadoop, HDFS, Hive, Spark (RDD, SQL), YARN, Sqoop, Hue, Cloudera Sandbox
Domain Knowledge: Healthcare, FinTech, Telecom, E-Commerce

PROFESSIONAL EXPERIENCE

Accenture Co Ltd

Working at Accenture as Digital Data Tech Developer Mumbai, IN | Feb '20 - Present

Project: Unify
Client: NBN (National Broadband Network, Australia)
Responsibilities:
· Collaborated with various teams and management to understand requirements and design the complete system
· Guided the classification, planning, implementation, growth, adoption of, and compliance with enterprise
architecture strategies, processes and standards
· Designed and developed highly scalable and highly available systems
· Worked with services such as EC2, Lambda, SES, SNS, VPC, CloudFront and CloudFormation (a minimal sketch
follows this list)
· Demonstrated expertise in creating architecture blueprints and detailed documentation; created bills of
materials, including required cloud services (such as EC2 and S3) and tools
· Involved in the end-to-end deployment process
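An illustrative sketch of the Lambda/SNS pattern referenced above: a small handler that publishes a deployment notification. The topic ARN and event fields are hypothetical placeholders, not the project's actual configuration.

```python
# Illustrative Lambda handler publishing a deployment notification to SNS.
# The topic ARN and event fields are hypothetical placeholders.
import json

import boto3

sns = boto3.client("sns")

# Hypothetical topic; a real deployment would inject its own ARN.
TOPIC_ARN = "arn:aws:sns:ap-southeast-2:123456789012:deployment-events"


def handler(event, context):
    """Publish a short notification for each incoming deployment event."""
    message = {
        "service": event.get("service", "unknown"),
        "status": event.get("status", "unknown"),
    }
    sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(message))
    return {"statusCode": 200}
```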

Anand Rathi IT Pvt. Ltd.

Working at ARIT as Hadoop Developer Mumbai, IN | April '19 - Jan '20

Project: PWM Portal (Private Wealth Management)
Client: Anand Rathi Financial Services & Anand Rathi Share and Stock Brokers Ltd.
Responsibilities:
· Gathered and processed raw data at scale (including writing scripts, calling APIs, writing SQL queries
and building applications)
· Cleaned, transformed and aggregated unorganized data into databases from different sources such as NSE,
BSE and Anand Rathi stock portals
· Worked closely with the engineering team to integrate new innovations and algorithms into production
systems
· Used the Spark API over Cloudera Hadoop YARN to perform analytics on data in Hive
· Developed PySpark scripts, using both DataFrames/SQL and RDDs in Spark 2.0, for data aggregation and
queries, writing results back into the OLTP system through Sqoop (a minimal sketch follows this list)
· Performance-tuned Spark applications by setting the right batch interval, the correct level of
parallelism and appropriate memory settings
· Optimized existing Hadoop algorithms using SparkContext, Spark SQL, DataFrames and pair RDDs
· Performed advanced procedures such as text analytics and processing using Spark's in-memory computing
capabilities
· Handled large datasets using partitioning, Spark in-memory capabilities, broadcast variables, and
effective and efficient joins and transformations during the ingestion process itself
· Processed large amounts of structured data (HDFS, Hive, Sqoop, Spark)
· Worked directly with business users to gather requirements
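A minimal sketch of the DataFrame aggregation pattern referenced above, assuming a Hive table of trade records; the database, table and column names (pwm.trades, client_id, etc.) are hypothetical placeholders.

```python
# Minimal PySpark aggregation in the style described above; database, table
# and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("pwm-aggregation")
    .enableHiveSupport()
    .getOrCreate()
)

# Read raw trade records from a Hive table (hypothetical name).
trades = spark.table("pwm.trades")

# Aggregate daily traded value per client using the DataFrame API.
daily_value = (
    trades.withColumn("trade_value", F.col("quantity") * F.col("price"))
    .groupBy("client_id", "trade_date")
    .agg(F.sum("trade_value").alias("total_value"))
)

# Write to a Hive staging table; Sqoop would export this to the OLTP system.
daily_value.write.mode("overwrite").saveAsTable("pwm.daily_client_value")
```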

Kiwi Technology Pvt. Ltd.

Working at KiwiTech as Data Engineer Noida, IN | Nov '17 - March '19

Project: Health Care Portal Data Analysis
Client: 7mb, Qualivis
Responsibilities:
· Developed ETL processes in Python to send large data sets into Hadoop and bring summarized results back
into the SQL data warehouse
· Developed and maintained the big data pipeline that transfers and processes several terabytes of data
using Apache Spark, Python, Apache Kafka and Hive (a minimal sketch follows this list)
· Worked across multiple teams in a highly visible role and owned solutions end-to-end
· Wrote well-abstracted, reusable code components
· Found and fixed issues without direction as a self-starter
· Expert-level fluency in Python with Spark
· Experience with Hadoop, Hive, HDFS, Sqoop and Oozie
· Learned new technologies and implemented them in a production environment
· Worked both individually and across teams in a fast-paced environment
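A minimal sketch of one Kafka-to-storage step in the pipeline referenced above, using Spark Structured Streaming; the broker address, topic name and event schema are hypothetical placeholders.

```python
# Minimal Kafka-to-Parquet step with Spark Structured Streaming; the broker
# address, topic name and event schema are hypothetical placeholders.
# Requires the spark-sql-kafka connector on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("healthcare-etl").getOrCreate()

schema = StructType([
    StructField("patient_id", StringType()),
    StructField("metric", StringType()),
    StructField("value", DoubleType()),
])

# Parse JSON events arriving on a Kafka topic.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "health-events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append parsed events to a Parquet directory that Hive can query.
query = (
    events.writeStream.format("parquet")
    .option("path", "/warehouse/health_events")
    .option("checkpointLocation", "/tmp/health_events_ckpt")
    .start()
)
```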

NextGenVision Technology Pvt. Ltd.

Working at NextGen as Quality Analyst (Hadoop) Noida, IN | Nov '15 - Nov '17

Project: Pearson System of Courses
Client: Pearson
Responsibilities:
· Prepared and reviewed test/business scenarios and functional test cases
· Tested Hadoop ecosystem components (Hive, Sqoop)
· Performed ad hoc data analysis, data processing and data visualization using SQL
· Gained familiarity with SQL and NoSQL technologies
· Worked with a production Hadoop ecosystem
· Deployed and troubleshot ETL jobs; created reports and dashboards using structured and unstructured data
· Managed large data movements with partitioning and data management (a minimal sketch follows this list)
· Troubleshot SQL Server performance issues and optimized SQL statements
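A minimal sketch of the partitioned data movement referenced above, issued through PySpark's Hive support; the database, table and column names are hypothetical placeholders.

```python
# Minimal sketch of a partitioned Hive load via PySpark; database, table and
# column names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("qa-partition-load")
    .enableHiveSupport()
    .getOrCreate()
)

# Create a date-partitioned target table if it does not exist.
spark.sql("""
    CREATE TABLE IF NOT EXISTS courses.events (
        user_id STRING,
        action  STRING
    )
    PARTITIONED BY (event_date STRING)
    STORED AS PARQUET
""")

# Move one day's data into its partition, overwriting any previous load.
spark.sql("""
    INSERT OVERWRITE TABLE courses.events PARTITION (event_date = '2017-01-15')
    SELECT user_id, action FROM staging.raw_events
    WHERE event_date = '2017-01-15'
""")
```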
EDUCATION
· MSc Data Science & Analytics (in progress), University of Hertfordshire (Jan 2021 - May 2022)
· B.Tech with a 7.37 CGPA from Bhagwant University, Ajmer, in 2014
· Intermediate with 66.31% from the R.B.S.E. Board in 2010
· High School with 80.67% from the R.B.S.E. Board in 2008

Declaration

I hereby declare that all the details furnished above are true to the best of my knowledge. If given an
opportunity, I will perform to the best of your expectations.

Place: Hatfield    Date:
