Shrinivas Ramasubramanian

I am an incoming graduate student at Carnegie Mellon University. Currently, I am a research engineer at Fujitsu Research in Bangalore, where I work in collaboration with Fujitsu Research Japan and the Vision and AI Lab at IISc on optimizing non-decomposable objectives and on person re-identification. I also collaborate with Fujitsu Research India, where I primarily focus on graph neural networks for large-scale data. I did my undergraduate degree in Electrical Engineering at IIT Bombay.

Email  /  CV  /  Scholar  /  Github  /  Linkedin

profile photo

Research

My primary area of focus is optimizing non-decomposable objectives, i.e., objectives that cannot be expressed as an average of a function of label-prediction pairs. Non-decomposable objectives often arise in fairness-motivated objectives and constrained optimization problems. My secondary areas of research are classification on class-imbalanced data, robust person re-identification, and self-supervised pre-training.
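As a toy illustration of non-decomposability (my own sketch, not code from any of the papers below): worst-case recall couples all predictions of a class together, so unlike accuracy it cannot be written as a per-sample average.

```python
def min_recall(labels, preds, num_classes):
    """Worst-case (minimum) recall across classes.

    Accuracy is decomposable: it is the average of a per-sample 0/1
    function of (label, prediction). Minimum recall is not, since each
    class's recall jointly depends on all samples of that class.
    """
    recalls = []
    for c in range(num_classes):
        total = sum(1 for y in labels if y == c)
        if total == 0:
            continue  # class absent from this batch
        correct = sum(1 for y, p in zip(labels, preds) if y == c and p == c)
        recalls.append(correct / total)
    return min(recalls)

labels = [0, 0, 0, 1, 1, 2]
preds  = [0, 0, 1, 1, 1, 0]
print(min_recall(labels, preds, 3))  # class 2 is never predicted: prints 0.0
```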

Selective Mixup Fine-Tuning for Optimizing Non-Decomposable Metrics
Shrinivas Ramasubramanian*, Harsh Rangwani*, Sho Takemori*, Kunal Samanta, Umeda Yuhei, Venkatesh Babu Radhakrishnan,
ICLR, 2024 (Spotlight)

The growth of internet usage generates massive amounts of data, prompting the adoption of supervised and semi-supervised machine learning. Evaluating models before deployment on objectives such as worst-case recall and fairness is crucial, but current techniques fall short on non-decomposable objectives, and theoretical methods demand building a new model for each objective. We introduce SelMix, a selective mixup-based, cost-effective fine-tuning technique for pre-trained models. SelMix optimizes a specified objective through feature mixup between samples of particular classes, and significantly outperforms existing methods on imbalanced classification benchmarks for practical, non-decomposable objectives.
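A minimal sketch of the core mixup step, with illustrative names and the class pair assumed given (the actual SelMix sampling distribution over class pairs is metric-dependent and not reproduced here):

```python
import random

def mixup_features(feat_a, feat_b, alpha=0.6, rng=random.Random(0)):
    """Convex combination of two feature vectors.

    In SelMix-style fine-tuning, feat_a and feat_b would come from a
    class pair (i, j) selected for its estimated effect on the target
    metric; here the pair and alpha are illustrative assumptions.
    """
    lam = rng.betavariate(alpha, alpha)  # mixing coefficient in (0, 1)
    return [lam * a + (1 - lam) * b for a, b in zip(feat_a, feat_b)]
```

Each mixed feature lies on the segment between the two inputs, so fine-tuning on such features adjusts the decision boundary between the chosen pair of classes.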

SelMix: Selective Mixup Fine Tuning for Optimizing Non-Decomposable Metrics
Shrinivas Ramasubramanian*, Harsh Rangwani*, Sho Takemori*, Kunal Samanta, Umeda Yuhei, Venkatesh Babu Radhakrishnan,
ICML Workshop on Differentiable Almost Everything, 2023

SelMix is a fine-tuning technique designed to enhance machine learning models' performance on imbalanced data with non-decomposable objectives, such as fairness criteria. It optimizes feature mixup between specific classes to address class imbalance and outperforms existing methods on benchmark datasets.

Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics
Harsh Rangwani*, Shrinivas Ramasubramanian*, Sho Takemori*, Kato Takashi, Umeda Yuhei, Venkatesh Babu Radhakrishnan,
NeurIPS, 2022

This work introduces the Cost-Sensitive Self-Training (CSST) framework, which extends self-training-based methods to optimize non-decomposable metrics in practical machine learning systems. CSST proves effective at optimizing non-decomposable metrics using unlabeled data, leading to better results on various vision and NLP tasks compared to state-of-the-art methods.

In Submission
Pang Zhanzong, Fadime Sener, Shrinivas Ramasubramanian, Angela Yao
CVPR, 2024

In Submission

In Submission
Chaitanya Devegupta, Sumukh Aithal, Shrinivas Ramasubramanian*, Yamada Moyuru, Manohar Koul,
CVPR, 2024

In Submission

Patents

INFORMATION PROCESSING APPARATUS AND MACHINE LEARNING METHOD
Shrinivas Ramasubramanian, Harsh Rangwani, Sho Takemori, Kato Takashi, Umeda Yuhei, Venkatesh Babu Radhakrishnan
US Patent No. 20230376846

An information processing apparatus includes one or more memories; and one or more processors coupled to the one or more memories, the one or more processors being configured to decide a gain matrix based on an input metric, perform selection of first training data from a plurality of unlabeled training data, to be used for training a machine learning model, based on the gain matrix, and perform training of the machine learning model based on the first training data, a predicted label that is predicted from the first training data, and a loss function including the gain matrix.
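A rough sketch of the selection step described above, in plain Python with an illustrative gain matrix (the entries, threshold, and function names are my assumptions, not values from the patent):

```python
def select_unlabeled(probs, gain_matrix, threshold):
    """Pick unlabeled samples whose pseudo-labels look beneficial.

    probs: per-sample class confidences, shape (N, C) as nested lists.
    gain_matrix: gain_matrix[i][j] = assumed gain of predicting class j
    when the true class is i, decided from the input metric.
    Returns (indices, pseudo_labels) for samples whose best expected
    gain exceeds the threshold.
    """
    kept, pseudo = [], []
    for n, p in enumerate(probs):
        # Expected gain of each candidate pseudo-label j, averaging
        # over the model's belief p[i] about the true class.
        gains = [sum(p[i] * gain_matrix[i][j] for i in range(len(p)))
                 for j in range(len(p))]
        best = max(range(len(gains)), key=gains.__getitem__)
        if gains[best] > threshold:
            kept.append(n)
            pseudo.append(best)
    return kept, pseudo

# Illustrative: an identity gain matrix reduces to confidence thresholding.
probs = [[0.9, 0.1], [0.4, 0.6]]
gain = [[1.0, 0.0], [0.0, 1.0]]
print(select_unlabeled(probs, gain, threshold=0.5))  # ([0, 1], [0, 1])
```

With a non-identity gain matrix, e.g. one that rewards recall on a minority class, the same routine preferentially selects pseudo-labels that improve the input metric rather than raw confidence.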

MACHINE LEARNING METHOD AND INFORMATION PROCESSING APPARATUS
Shrinivas Ramasubramanian, Harsh Rangwani, Sho Takemori, Kunal Samanta, Umeda Yuhei, Venkatesh Babu Radhakrishnan
(Pending) Indian Patent Application No. 202331050473

An information processing apparatus includes one or more memories; and one or more processors coupled to the one or more memories, the one or more processors being configured to decide a gain matrix based on an input metric, perform selection of first training data from a plurality of unlabeled training data, to be used for training a machine learning model, based on the gain matrix, and perform training of the machine learning model based on the first training data, a predicted label that is predicted from the first training data, and a loss function including the gain matrix.

Academic Service

1. Served as a reviewer for NeurIPS'23, AAAI'24, ICML'23, and ICLR'24.
2. Served as a Teaching Assistant for the Fall 2022 offering of DS 265: Deep Learning for Computer Vision.

Notable projects

See in the dark: Adversarial training for image exposure correction
Shrinivas Ramasubramanian*, Srivatsan Sridhar*,
Course project EE:710, IIT Bombay , 2018

The project aims at transforming a low-exposure image taken in a dimly lit environment into one resembling a long-exposure capture. The dataset used is SID, prepared by C. Chen et al. For a detailed analysis, please refer to the project report named report.pdf.

Semantic Segmentation for autonomous vehicles
Shrinivas Ramasubramanian*,
Project under SeDriCa: Autonomous Vehicle Development, IIT Bombay, 2019

This work uses an encoder-decoder CNN architecture for semantic segmentation of images. We took inspiration from LinkNet and trained our model on both the Mapillary dataset and the Berkeley DeepDrive dataset.

Image Super Resolution
Shrinivas Ramasubramanian,
Course project, IIT Bombay , 2018

This project implements a supervised super-resolution scheme that progressively super-resolves the image through slightly lower intermediate resolutions, allowing the model to learn the necessary features at each scale. This work is heavily inspired by the similar progressive super-resolution approach of Yifan Wang et al.


Feel free to steal this website's source code. Do not scrape the HTML from this page itself, as it includes analytics tags that you do not want on your own website — use the GitHub code instead. Also, consider using Leonid Keselman's Jekyll fork of this page.