Shrinivas Ramasubramanian

I am an incoming graduate student at Carnegie Mellon University. Previously, I was a research engineer at Fujitsu Research in Bangalore, where I worked in collaboration with Fujitsu Research Japan and the Vision and AI Lab at IISc on optimizing non-decomposable objectives and on person re-identification. I also collaborated with Fujitsu Research India, where I focused primarily on graph neural networks for representation learning. I did my undergrad in Electrical Engineering at IIT Bombay.

Email  /  CV  /  Scholar  /  GitHub  /  LinkedIn

profile photo

Research

My primary focus is on fairness and constrained optimization in deep neural networks. My secondary research areas are classification on class-imbalanced data, semi-supervised learning, and representation learning. I am currently hoping to branch out into areas such as large language models, video generation, and data-centric machine learning for unsupervised and semi-supervised learning.

Selective Mixup Fine-Tuning for Optimizing Non-Decomposable Metrics
Shrinivas Ramasubramanian*, Harsh Rangwani*, Sho Takemori*, Kunal Samanta, Umeda Yuhei, Venkatesh Babu Radhakrishnan,
ICLR, 2024 (spotlight presentation); also presented at the ICML workshop on Differentiable Almost Everything

The growth of internet usage generates massive amounts of data, prompting the adoption of supervised and semi-supervised machine learning. Evaluating models before deployment against objectives such as worst-case recall and fairness is crucial. Current techniques fall short on such non-decomposable objectives, and theoretical methods require building a new model for each one. We introduce SelMix, a cost-effective, selective mixup-based fine-tuning technique for pre-trained models. SelMix optimizes a specified objective through feature mixup between samples of particular classes. It outperforms existing methods on imbalanced classification benchmarks, significantly improving practical non-decomposable objectives.
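The core idea can be illustrated with a toy sketch: standard feature mixup, plus a sampling step that picks which class pair to mix according to its estimated payoff for the target metric. The `gains` matrix and `select_class_pair` helper here are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def feature_mixup(feat_a, feat_b, lam=0.6):
    """Convex combination of two feature vectors (standard mixup)."""
    return lam * feat_a + (1.0 - lam) * feat_b

def select_class_pair(gains, rng):
    """Sample a class pair (i, j) with probability proportional to the
    estimated gain in the target metric from mixing class i with class j.
    `gains` is a hypothetical (K, K) matrix of non-negative gain estimates."""
    probs = gains.ravel() / gains.sum()
    idx = rng.choice(gains.size, p=probs)
    return np.unravel_index(idx, gains.shape)

rng = np.random.default_rng(0)
gains = np.array([[0.0, 2.0], [1.0, 0.0]])  # prefer mixing class 0 with 1
i, j = select_class_pair(gains, rng)
mixed = feature_mixup(np.ones(4), np.zeros(4), lam=0.6)
```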

Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics
Harsh Rangwani*, Shrinivas Ramasubramanian*, Sho Takemori*, Kato Takashi, Umeda Yuhei, Venkatesh Babu Radhakrishnan,
NeurIPS, 2022

This work introduces the Cost-Sensitive Self-Training (CSST) framework, which extends self-training-based methods to optimize non-decomposable metrics in practical machine learning systems. The CSST framework proves effective in improving non-decomposable metric optimization using unlabeled data, leading to better results in various vision and NLP tasks compared to state-of-the-art methods.
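A minimal sketch of the cost-sensitive self-training idea, under simplifying assumptions: confident predictions on unlabeled data become pseudo-labels, and each retained sample's loss contribution is reweighted by a per-class weight (standing in for weights derived from the non-decomposable metric). The `class_weights` vector and threshold are hypothetical, not the paper's actual scheme.

```python
import numpy as np

def cost_sensitive_pseudo_labels(logits, class_weights, threshold=0.7):
    """Assign pseudo-labels to unlabeled samples and reweight each retained
    sample's loss contribution by a per-class weight."""
    # Numerically stable softmax over class logits.
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    mask = conf >= threshold                 # keep only confident samples
    weights = class_weights[labels] * mask   # cost-sensitive weighting
    return labels, weights
```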

Long-Tail Temporal Action Segmentation with Group-wise Temporal Logit Adjustment
Pang Zhanzong, Fadime Sener, Shrinivas Ramasubramanian, Angela Yao
ECCV, 2024

Temporal action segmentation assigns labels to each frame in untrimmed videos, often facing a long-tailed action distribution due to varying action frequencies and durations. However, current methods overlook this issue and struggle with recognizing rare actions. Existing long-tail methods, which make class-independent assumptions, also fall short in this context. To address these challenges, we propose a novel framework called Group-wise Temporal Logit Adjustment (G-TLA). G-TLA leverages activity information and action order to improve tail action recognition. Our approach shows significant improvements on five temporal segmentation benchmarks.
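The building block G-TLA extends is classic logit adjustment for long-tailed recognition: subtracting a scaled log-prior from each class logit so tail classes are not drowned out by head classes. The sketch below shows plain (class-wise) logit adjustment only; the group-wise, activity-aware variant in the paper is not reproduced here.

```python
import numpy as np

def logit_adjust(logits, class_priors, tau=1.0):
    """Subtract tau * log(prior) from each class logit so that rare (tail)
    classes are not dominated by frequent (head) classes at prediction time."""
    return logits - tau * np.log(class_priors)

priors = np.array([0.9, 0.1])    # head vs. tail class frequencies
logits = np.array([1.0, 0.9])    # raw scores slightly favor the head class
adjusted = logit_adjust(logits, priors)
```

After adjustment the tail class can win despite a lower raw logit, since its small prior yields a large positive correction.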

Semantic Graph Consistency: Going Beyond Patches for Regularizing Self-Supervised Vision Transformers
Chaitanya Devegupta, Sumukh Aithal, Shrinivas Ramasubramanian, Yamada Moyuru, Manohar Koul,
CVPR, 2024

Self-supervised learning (SSL) with vision transformers (ViTs) excels in representation learning but often underutilizes ViT patch tokens. We introduce the Semantic Graph Consistency (SGC) module, which enhances ViT-based SSL by treating images as graphs, with patches as nodes. This approach uses Graph Neural Networks for message passing and regularizes SSL by enforcing consistency between graph features across different image views.
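A toy numpy sketch of the patches-as-graph idea: patch features are nodes, one round of mean-aggregation message passing mixes neighboring patches, and a consistency loss penalizes disagreement between pooled graph features of two augmented views. This is an illustrative stand-in for the SGC objective, not the paper's GNN architecture.

```python
import numpy as np

def message_pass(adj, feats, weight):
    """One round of mean-aggregation message passing over patch nodes."""
    deg = adj.sum(axis=1, keepdims=True)
    return (adj / deg) @ feats @ weight

def graph_consistency_loss(feats_v1, feats_v2, adj, weight):
    """Pool node features after message passing and penalize disagreement
    between two augmented views of the same image."""
    g1 = message_pass(adj, feats_v1, weight).mean(axis=0)
    g2 = message_pass(adj, feats_v2, weight).mean(axis=0)
    return float(((g1 - g2) ** 2).mean())
```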

Patents

INFORMATION PROCESSING APPARATUS AND MACHINE LEARNING METHOD
Shrinivas Ramasubramanian, Harsh Rangwani, Sho Takemori, Kato Takashi, Umeda Yuhei, Venkatesh Babu Radhakrishnan
US Patent No. 20230376846

MACHINE LEARNING METHOD AND INFORMATION PROCESSING APPARATUS
Shrinivas Ramasubramanian, Harsh Rangwani, Sho Takemori, Kunal Samanta, Umeda Yuhei, Venkatesh Babu Radhakrishnan
(Pending) Indian Patent Application No. 202331050473

Academic Service

1. Served as a reviewer for NeurIPS'23, AAAI'24, ICML'23, and ICLR'24.
2. Served as a Teaching Assistant for the Fall 2022 offering of DS 265: Deep Learning for Computer Vision.

Notable projects

See in the dark: Adversarial training for image exposure correction
Shrinivas Ramasubramanian*, Srivatsan Sridhar*,
Course project EE:710, IIT Bombay, 2018

The project aims at transforming a short-exposure image taken in a dimly lit environment into one resembling a long-exposure capture. We used the SID dataset prepared by C. Chen et al. For a detailed analysis, please refer to the project report (report.pdf).

Semantic Segmentation for autonomous vehicles
Shrinivas Ramasubramanian*,
Project under SeDriCa: Autonomous Vehicle Development, IIT Bombay, 2019

This work involves the use of an encoder-decoder architecture CNN for semantic segmentation of the image. We took inspiration from LinkNet and trained our model on both the Mapillary dataset and the Berkeley Deep Drive dataset.

Image Super Resolution
Shrinivas Ramasubramanian,
Course project, IIT Bombay, 2018

This project implements a supervised super-resolution scheme that super-resolves the image progressively, through a sequence of intermediate resolutions rather than in a single large jump. This allows the model to learn the necessary features at each scale. The work is heavily inspired by Yifan Wang et al., who followed a similar progressive super-resolution approach.
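The progressive scheme can be sketched as repeated 2x upsampling stages; here each learned stage is replaced by plain nearest-neighbour interpolation, so this only illustrates the staging, not the trained model.

```python
import numpy as np

def upsample2x(img):
    """Nearest-neighbour 2x upsampling (placeholder for a learned stage)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def progressive_sr(img, stages=2):
    """Super-resolve in successive 2x steps rather than one large jump."""
    for _ in range(stages):
        img = upsample2x(img)
    return img
```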


Feel free to steal this website's source code. Do not scrape the HTML from this page itself, as it includes analytics tags that you do not want on your own website; use the GitHub code instead. Also, consider using Leonid Keselman's Jekyll fork of this page.