Tanmay Shankar

E-mail: tanmay.shankar@gmail.com Phone: 412-537-1968

Research Interests

I am interested in discovering insights into and building connections between classical structured robotics and machine learning techniques of deep, reinforcement, and imitation learning.

Education

Carnegie Mellon University, Pittsburgh, USA.

2016 - 2018

Master's in Robotics, Robotics Institute. Thesis Advisors: Katharina Muelling & Kris Kitani. Grade Point Average: 4.05/4.00

Indian Institute of Technology Guwahati, Guwahati, India.

2012 - 2016

B.Tech. in Mechanical Engineering, minor in Electronics and Communication Engineering. Cumulative Performance Index: 8.76/10

Work Experience

Facebook AI Research, Pittsburgh, USA

2018 - Present

Research Engineer, working with Abhinav Gupta and Shubham Tulsiani

Research Experience

Learning Robot Skills with Causal Variational Inference

Research Project, FAIR

2019 - Present

Advisors: Shubham Tulsiani & Abhinav Gupta

Unsupervised Hierarchical Policy Learning from Demonstrations

Introduced a framework for hierarchical policy learning from demonstrations, using ideas of causality and variational inference.

Represented options as continuous and discrete latent variables, and formulated causal variational inference to infer options as latent variables.

Reparameterized latent variables to facilitate learning via gradient descent.
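The reparameterization step above can be illustrated with a minimal generic sketch (an assumption for illustration, not the project's actual model): a Gaussian latent is rewritten as mu + sigma * eps with external noise eps, and a discrete latent is relaxed via Gumbel-Softmax, so that sampling stays differentiable and gradient descent can flow through the latent variables.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize_gaussian(mu, log_sigma):
    # z = mu + sigma * eps, with eps ~ N(0, I): the randomness is
    # external to the parameters, so gradients flow through mu, sigma.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

def reparameterize_gumbel_softmax(logits, temperature=1.0):
    # Continuous relaxation of a discrete (categorical) latent:
    # perturb logits with Gumbel noise, then take a tempered softmax.
    gumbel = -np.log(-np.log(rng.uniform(1e-12, 1.0, size=logits.shape)))
    y = (logits + gumbel) / temperature
    y = np.exp(y - y.max())
    return y / y.sum()
```

At low temperature the Gumbel-Softmax sample approaches a one-hot vector, recovering (approximately) discrete option selection while remaining differentiable.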

Discovering Motor Programs by Recomposing Demonstrations

Research Project, FAIR

2018 - 2019

Advisors: Shubham Tulsiani & Abhinav Gupta

Unsupervised Skill Discovery from Robot Demonstrations by Recomposition

Introduced a framework to discover the space of motor primitives underlying a set of robot demonstrations, using insights of recomposition to formulate an unsupervised loss.

Used ideas of simplicity, parsimony, and plannability to regularize learnt motor primitives.

Accelerated downstream task learning with discovered primitives, and showcased feasibility of primitive execution on Baxter robot.

Learning Neural Parsers via Deterministic Differentiable Imitation Learning

Graduate Research Thesis, CMU

2016 - 2018

Advisors: Katharina Muelling & Kris Kitani

Learning to Parse via Hybrid Imitation-Reinforcement Learning

Introduced a framework to learn to hierarchically decompose objects into segments by parsing, motivated by the problem of a painting robot covering an object.

Learnt a neural parser that generalizes to unseen object images by imitating a parsing oracle, treating a decision-tree-style information-gain-maximizing algorithm as a clairvoyant oracle.

Introduced a novel deterministic policy gradient update, DRAG, in the form of a deterministic actor-critic variant of AggreVaTeD, or an imitation learning variant of DDPG.

Reinforcement Learning via Recurrent Convolutional Neural Networks

Bachelor's Thesis, IIT Guwahati

2015 - 2016

Advisors: S. K. Dwivedy & Prithwijit Guha

Reinforcement Learning Networks - Fusing Learning and Planning

Introduced the Value Iteration RCNN, a neural approximation to value iteration, representing the expectation of the Bellman backup as convolutions, and iterations as temporal recurrence.

Combined the VI-RCNN with an analogous differentiable Bayesian filtering network into the QMDP-RCNN, a learnable approximation to planning under partial observability.
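The core idea behind the VI-RCNN can be sketched in plain numpy (a toy illustration under assumed dynamics, not the thesis code): on a gridworld, the expectation over next states in the Bellman backup is a 2-D convolution of the value map with each action's transition kernel, and value iteration simply repeats this backup.

```python
import numpy as np

def conv2d_same(V, k):
    # 'same'-padded 2-D correlation of the value map V with a 3x3
    # transition kernel k: out[i, j] = sum over neighbours of V * k.
    padded = np.pad(V, 1)
    out = np.zeros_like(V)
    for i in range(V.shape[0]):
        for j in range(V.shape[1]):
            out[i, j] = (padded[i:i + 3, j:j + 3] * k).sum()
    return out

def value_iteration(reward, kernels, gamma=0.9, iters=60):
    # Each action's transition model is a small kernel over neighbouring
    # cells, so E[V(s')] is a convolution; the max over actions is the
    # greedy Bellman backup, repeated as a recurrence.
    V = np.zeros_like(reward)
    for _ in range(iters):
        q = [reward + gamma * conv2d_same(V, k) for k in kernels]
        V = np.max(np.stack(q), axis=0)
    return V

# Hypothetical 5x5 world: reward at one corner, four deterministic
# move actions encoded as one-hot 3x3 kernels.
reward = np.zeros((5, 5))
reward[0, 0] = 1.0
kernels = []
for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
    k = np.zeros((3, 3))
    k[1 + dr, 1 + dc] = 1.0
    kernels.append(k)
V = value_iteration(reward, kernels)
```

Stochastic dynamics would simply spread each kernel's mass over several cells; the backup remains a single convolution per action, which is what makes the whole computation expressible as a recurrent convolutional network.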

Research Intern, Stanford Intelligent Systems Lab, Stanford University

May - July 2015

Advisor: Mykel Kochenderfer, Department of Aeronautics & Astronautics

Visual SLAM, Cooperative and Collision Avoidance Behaviors for UAVs

Set up Visual SLAM for localization across multiple UAVs, equipping the UAVs with autonomous capabilities for cooperative tasks such as follow-the-leader and collision avoidance.

Research Intern, Biorobotics Lab, Carnegie Mellon University

May - July 2014

Advisor: Howie Choset, Robotics Institute

IWAMP: Interior Wing Assembly Mobile Platform

Worked on monocular vision and 3D CAD model based localization, using a ray-tracing edge approach, on a mobile robot for automating interior assembly of aircraft wings.

HyAWET: Hybrid Assistive Wheelchair Exoskeleton Transformer

2013 - 2015

Designed a hybrid exoskeleton-wheelchair device for assisted mobility of patients with motor/nerve conditions on various terrain, employing an underactuated gait generation mechanism.

InViSyBlE: Intelligent Vision System for Blind Enablement

2014 - 2015

Developed a head-mountable stereo vision system for assistance of visually impaired individuals, capable of object manipulation guidance, face detection, and text recognition.

Papers in Preparation

T. Shankar, S. Tulsiani, A. Gupta, “Learning Robot Skills with Causal Variational Inference”, to be submitted to the International Conference on Machine Learning, ICML 2020.

Papers under Review

T. Shankar, S. Tulsiani, L. Pinto, A. Gupta, “Discovering Motor Programs by Recomposing Demonstrations”, International Conference on Learning Representations, ICLR 2020.

Conference Publications

T. Shankar, N. Rhinehart, K. Muelling, K. Kitani, “Learning Neural Parsers with Deterministic Differentiable Imitation Learning”, Conference on Robot Learning, CoRL 2018.

T. Shankar, S. K. Dwivedy and P. Guha, “Reinforcement Learning via Recurrent Convolutional Neural Networks”, International Conference on Pattern Recognition, ICPR 2016.

T. Shankar and S. K. Dwivedy, “A Hybrid Assistive Wheelchair Exoskeleton”, International Convention on Rehabilitation Engineering and Assistive Technology, i-CREATe 2015.

T. Shankar, A. Biswas and V. Arun, “Development of an Assistive Stereo Vision System”, International Convention on Rehabilitation Engineering and Assistive Technology, i-CREATe 2015.

Teaching Experience

Teaching Assistant, Deep Reinforcement Learning, CMU

Spring 2018

Instructor: Dr. Ruslan Salakhutdinov, Machine Learning Department

Reviewing Experience

Reviewer, International Conference on Learning Representations, ICLR 2020

Technical Skills

Languages Known: Python, C/C++, MATLAB.

Software Packages: TensorFlow, PyTorch, OpenCV, PCL, MATLAB, LaTeX, Rviz, Gazebo, ROS.

Hardware: Rethink Baxter & Sawyer, Intel NUC, Odroid XU3, Pixhawk Autopilot.

Graduate Coursework

Deep Learning, Language Grounding to Vision and Control, Deep Reinforcement Learning, Computer Vision, Kinematics Dynamics and Controls, Machine Learning, Math Fundamentals for Robotics.