I am a Founding AI Researcher at Weco AI, where I work on building AI agents for software engineering, data science, and machine learning tasks. Most of my work is around alignment of LLM and VLM agents. Prior to this, I was a researcher at the Auton Lab (Robotics Institute, Carnegie Mellon University), advised by Artur Dubrawski. While there, I worked on aligning large multimodal models and on computer vision, robotics, and time-series problems for healthcare. I also created AutonFeat, an automatic featurization library for lightning-fast feature extraction and selection.
As a grad student at the University of Chicago, I specialized in Machine Learning and High Performance Computing, and I spent time as an RA at the UChicago Booth Center for Applied AI with Sendhil Mullainathan. My research focused on uncovering and proving the presence of inductive biases, along with their causes and effects, in popular deep learning models such as AlexNet, ResNet, and DenseNet. I also spent time at the Toyota Technological Institute at Chicago (TTIC), working on fairness and adversarial robustness in deep learning models, advised by Matthew Turk. In a past life, I studied Electronics and Communication Engineering, specializing in Signals and Systems. The IEEE published part of my undergraduate thesis on 2D-3D Single-View Reconstruction.
I continue to do independent research with past and present collaborators at TTIC, Stanford, and CMU. My research interests center on Responsible AI: learning fair and useful representations, particularly in the absence of supervision, and efficient implementations of deep learning methods applied to scientific problems. I enjoy building high-performance implementations of my research. More on my current work can be found in my publications and open source projects.
My passion for HPC lies primarily in accelerating simulation through distributed-memory parallelism and GPU programming; several of my systems projects in this area can be found here.
Though I am most interested in the science of learning, I often take an engineering approach to understanding it, which has led me to develop several open source tools and libraries that can be found here. The interdisciplinary nature of Deep Learning has kept me reading: Reinforcement Learning, Game Theory, Computational Learning Theory, and Neuroscience have all been fascinating to learn about. Here's my brief foray into Learning Theory: an analysis of "Boosting and Boostability". What excites me about Deep Learning is its infancy and novelty, and the chance to have a little fun with math.