me

Dhruv Srikanth

TL;DR: I am a researcher at the Auton Lab (part of the Robotics Institute at Carnegie Mellon University), advised by Artur Dubrawski. I created and lead AutonFeat, an automatic featurization library for lightning-fast feature extraction and selection. My research focuses on learning fair and useful representations in the absence of supervision: unsupervised, self-supervised, semi-supervised, weakly supervised, representation, and meta learning. Recently, I have applied my work to computer vision, robotics, and time series in healthcare and medicine, and I enjoy building high-performance implementations of my research.

As a CS graduate student specializing in Machine Learning and High Performance Computing at the University of Chicago, I spent time as an RA and researcher at the UChicago Booth Center for Applied Artificial Intelligence, advised by Sendhil Mullainathan. My research focused on uncovering and proving the presence of inductive biases, along with their causes and effects, in popular deep learning models such as AlexNet, ResNet, and DenseNet. I also spent time at the Toyota Technological Institute at Chicago (TTIC) working on fairness and adversarial robustness in deep learning models, advised by Matthew Turk. Though this work started in grad school, I continue to collaborate with students and professors at TTIC on fairness and interpretability in deep learning. In a past life, I studied Electronics and Communication Engineering, specializing in Signals and Systems. The IEEE published part of my undergraduate thesis on 2D-3D Single-View Reconstruction.

Research Interests

[Long-Term] - I am most interested in the science of learning: how neural activities are computed, cached, and propagated through the brain efficiently and effectively, and the path toward more general forms of intelligence. Toward understanding this, my work explores Deep Learning (signals) and High-Performance Computing (synapses) problems.

[Shorter Time Horizon] - More specifically, I am interested in generative networks (GANs, VAEs, DDPMs), representation learning (not reinforcement learning, the other RL), and self-supervised learning (SSL). I developed several GAN experimentation packages as visual learning tools. More on my current work can be found in my publications and open source projects.

My passion for HPC lies primarily in accelerating simulations through distributed-memory parallelism and GPU programming, for which I have several systems projects that can be found here.

Broader Interests

Though I am most interested in the science of learning, I often take an engineering approach to understanding it. This has led me to develop several open source tools and libraries that can be found here. The interdisciplinary nature of Deep Learning has kept me reading: Reinforcement Learning, Game Theory, Computational Learning Theory, and Neuroscience have all been fascinating to learn about. Here's my brief foray into Learning Theory: an analysis of "Boosting and Boostability". What excites me about Deep Learning is its infancy and novelty, and a little fun with math.

Fun

In order to better understand the software engineering aspect of Deep Learning, I developed the following frameworks inspired by PyTorch:
  • PyNN - A Deep Learning library for Python using purely NumPy.
  • CUDANN - A Deep Learning library for C++ accelerated by CUDA on NVIDIA GPUs.
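To give a flavor of what a pure-NumPy deep learning library involves, here is a minimal sketch of a dense layer with a hand-derived backward pass and SGD update. The class and method names are illustrative only, not PyNN's actual API:

```python
import numpy as np

class Dense:
    """A fully connected layer implemented purely in NumPy (illustrative, not PyNN's API)."""

    def __init__(self, in_features, out_features, seed=0):
        rng = np.random.default_rng(seed)
        # Small random weights, zero bias
        self.W = rng.standard_normal((in_features, out_features)) * 0.01
        self.b = np.zeros(out_features)

    def forward(self, x):
        # Cache the input for use in the backward pass
        self.x = x
        return x @ self.W + self.b

    def backward(self, grad_out, lr=0.1):
        # Gradients via the chain rule, followed by a plain SGD step
        grad_W = self.x.T @ grad_out
        grad_b = grad_out.sum(axis=0)
        grad_x = grad_out @ self.W.T
        self.W -= lr * grad_W
        self.b -= lr * grad_b
        return grad_x

layer = Dense(3, 2)
y = layer.forward(np.ones((4, 3)))   # batch of 4, 3 features in, 2 out
print(y.shape)                       # (4, 2)
```

Writing the backward pass by hand like this, rather than relying on autograd, is exactly the kind of exercise that clarifies what frameworks such as PyTorch do under the hood.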
Most recently, I've spent time developing GoLLUM, a compiler for GoLite, a simple mix between Go and C/C++. The compiler uses LLVM for its intermediate representation (IR) and targets a 64-bit ARM backend.

Contact