I am a graduate student in Computer Science at the University of California, Los Angeles (UCLA). I am passionate about bridging the gap between cutting-edge machine learning research and its practical application, ensuring that deployed solutions are not only reliable but also secure and impactful.
In August 2023, I graduated from the Indian Institute of Technology Jammu (IIT Jammu) with a B.Tech. in Electrical Engineering and a minor in Computer Science and Engineering. I worked on "Low-light Action Recognition" for my B.Tech. thesis under the guidance of Prof. Badri N. Subudhi. I had the privilege of working on the project "Adversarially Robust and Efficient Neural Networks" with mentors at TU Dresden, as well as on "Drill Core Analysis" guided by Prof. Rohitash Chandra at the University of New South Wales Sydney (UNSW). I spent the summer of 2022 at WorldQuant as a Quantitative Research intern. My experience also includes an internship at Schneider Electric, where I focused on "Solder Joint Reliability Prediction for Printed Circuit Boards (PCBs)". Prior to these roles, I was an undergraduate researcher at the IC-ResQ Lab under the supervision of Dr. Ambika Prasad Shah, where I worked on developing a True Random Number Generator (TRNG) utilizing Quantum Cellular Automata (QCA).
My research interests lie in Multimodal Machine Learning, Computer Vision, Reinforcement Learning and Natural Language Processing.
Email / CV / Resume / GitHub / Google Scholar / OpenReview / LinkedIn
We propose a novel hardware security primitive, a True Random Number Generator (TRNG), using Quantum Cellular Automata (QCA) technology. An AND gate, an XOR gate, and a gate with irregular behavior are used to generate random output depending on the metastability of the QCA structure. Furthermore, the structure is cross-looped and asymmetrically inverted to induce additive randomness.
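The intuition behind cross-coupling additional randomness into the structure can be illustrated in software. This is only an analogy, not a QCA model: the bit streams and bias values below are made up, but they show why XOR-combining a biased source with an independent uniform source removes the bias.

```python
import random

def xor_combine(stream_a, stream_b):
    """Combine two bit streams with XOR; if either stream is uniform
    and independent of the other, the result is uniform."""
    return [a ^ b for a, b in zip(stream_a, stream_b)]

def monobit_bias(bits):
    """Fraction of ones minus 0.5 -- a crude randomness check."""
    return sum(bits) / len(bits) - 0.5

random.seed(0)
# A deliberately biased source (70% ones) and a fair source.
biased = [1 if random.random() < 0.7 else 0 for _ in range(10_000)]
fair = [random.getrandbits(1) for _ in range(10_000)]
combined = xor_combine(biased, fair)
```

After combining, the monobit bias of `combined` is close to zero even though `biased` alone deviates by roughly 0.2.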
We propose a two-stream action recognition technique for recognizing human actions in dark videos. The proposed network consists of an image enhancement stage with a Self-Calibrated Illumination (SCI) module, followed by a two-stream action recognition network. We use R(2+1)D as the feature extractor for both streams, with shared weights. A Graph Convolutional Network (GCN), serving as a temporal graph encoder, enhances the obtained features, which are then fed to a classification head to recognize the actions in a video. Experimental results are presented on the recent benchmark "ARID" dark-video dataset.
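The shared-weight, two-stream shape of this pipeline can be sketched in plain Python. The `make_extractor` and `fuse` functions below are hypothetical toy stand-ins for the real R(2+1)D backbone and fusion step; they only illustrate that one set of weights serves both the enhanced and the raw stream before fusion.

```python
def make_extractor(weights):
    """Stand-in for the shared R(2+1)D backbone: one weight set, two streams."""
    def extract(clip):
        # Toy "feature": element-wise weighting of frame values.
        return [w * x for w, x in zip(weights, clip)]
    return extract

def fuse(feat_a, feat_b):
    """Simple late fusion: element-wise average of the two stream features."""
    return [(a + b) / 2 for a, b in zip(feat_a, feat_b)]

shared = make_extractor([0.5, 1.0, 1.5])  # same weights for both streams
enhanced_clip = [2.0, 2.0, 2.0]           # SCI-enhanced stream (toy values)
raw_clip = [1.0, 1.0, 1.0]                # original dark stream (toy values)
features = fuse(shared(enhanced_clip), shared(raw_clip))
```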
Existing literature does not focus on evaluating different aspects of persona injection. We employ prompt tuning techniques, specifically in-context and chain-of-thought methods, to generate open-ended dialogues that closely align with the assigned personas. The performance of our model is then evaluated using a robust framework that includes metrics such as LLMEval (using 5 LLMs) for fluency and coherency, and the MMLU score for reasoning ability. We also achieve better fluency and coherency than human-generated dialogues. Additionally, we evaluate toxicity scores to ensure the generated responses are devoid of harmful content, bias, and dishonesty, further enhancing the chatbot's performance. The results demonstrate visible improvements in the chatbot's ability to generate coherent, contextually relevant, and persona-aligned dialogues.
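At its core, an LLMEval-style panel reduces to a simple aggregation: each judge LLM returns per-metric scores, and the panel reports the mean per metric. The sketch below is a minimal illustration of that aggregation only; the scores are made up (a 1–5 scale is assumed), and it is not the actual evaluation framework.

```python
def llm_eval(judge_scores):
    """Aggregate per-judge {metric: score} dicts by averaging each metric,
    mimicking an LLMEval-style panel of judge LLMs."""
    metrics = judge_scores[0].keys()
    return {m: sum(j[m] for j in judge_scores) / len(judge_scores) for m in metrics}

# Hypothetical scores from 5 judge LLMs on a 1-5 scale.
scores = [
    {"fluency": 4.0, "coherency": 4.5},
    {"fluency": 4.5, "coherency": 4.0},
    {"fluency": 5.0, "coherency": 4.5},
    {"fluency": 4.0, "coherency": 4.0},
    {"fluency": 4.5, "coherency": 4.5},
]
panel = llm_eval(scores)
```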
Limited representation of darker skin tones in current dermatology datasets and benchmarks hinders the performance of, and the ability to ethically and safely deploy, Artificial Intelligence for dermatology in the clinical setting. We present DiverseDermDiff, an image generation pipeline for diversity evaluation in dermatology datasets that reformulates the evaluation as a classification task. DiverseDermDiff combines InstructPix2Pix and LLaVA for image generation and prompt verification, respectively. A ResNet-18 classifier fine-tuned on the diverse set of images achieved accuracy gains of 21.21%. We evaluated the diversity of the HAM10k and DDI datasets.
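One simple way to quantify the skin-tone diversity of a dataset is the normalized Shannon entropy of its label distribution. The sketch below is a minimal illustration under that assumption; the Fitzpatrick-style class labels are made up, and this is not the paper's exact metric.

```python
from collections import Counter
from math import log2

def diversity_score(labels):
    """Normalized Shannon entropy of the label distribution:
    1.0 = perfectly balanced over observed classes, 0.0 = single class."""
    counts = Counter(labels)
    if len(counts) == 1:
        return 0.0
    n = len(labels)
    entropy = -sum((c / n) * log2(c / n) for c in counts.values())
    return entropy / log2(len(counts))  # divide by max possible entropy

balanced = ["I", "II", "III", "IV", "V", "VI"] * 10  # even skin-tone coverage
skewed = ["I"] * 50 + ["VI"] * 2                     # mostly lighter tones
```

A balanced dataset scores near 1.0, while a heavily skewed one scores close to 0.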
I participated in the SAE eBAJA nationwide design competition. As Co-Head of the Powertrain Department, I designed and implemented the electric powertrain of our All-Terrain Vehicle (E-ATV). We used a 4 kW 48 V PMSM motor, a 48 V 150 A PMSM controller, and a 110 Ah 48 V Li-ion battery. We also installed a motor controller to regulate the speed and torque of the motor and a DC-DC converter to power auxiliary units, and integrated Hall sensors. Our team secured All India Rank (AIR) 8 among 82 teams nationwide in the eBAJA design competition.
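The component ratings above allow a quick back-of-envelope sizing check. These are ideal figures that ignore efficiency losses and real drive cycles, not measured vehicle data.

```python
def full_load_current(power_w, voltage_v):
    """DC bus current at rated motor power (ideal, losses ignored)."""
    return power_w / voltage_v

def pack_energy_wh(voltage_v, capacity_ah):
    """Nominal battery pack energy in watt-hours."""
    return voltage_v * capacity_ah

def runtime_hours(energy_wh, power_w):
    """Runtime at a constant power draw (ideal)."""
    return energy_wh / power_w

current = full_load_current(4000, 48)  # 4 kW PMSM on the 48 V bus -> ~83 A
energy = pack_energy_wh(48, 110)       # 48 V, 110 Ah Li-ion pack -> 5280 Wh
runtime = runtime_hours(energy, 4000)  # at continuous rated power
```

The ~83 A full-load draw also shows why a 150 A controller rating leaves headroom for transient torque demands.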
This was a high-prep problem in Inter IIT Tech Meet 10.0. We created a dashboard to measure growth and profitability for investors. I fine-tuned word embeddings generated using a pre-trained BERT model from Hugging Face. I scraped SEC filings from 10-K, 10-Q, and 8-K forms for 292 companies using BeautifulSoup.
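A typical downstream check on such embeddings is cosine similarity between related financial terms. The vectors below are hypothetical 4-dimensional stand-ins (real BERT embeddings are much higher-dimensional), so this only sketches the comparison, not the actual model outputs.

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings: related terms should score higher than unrelated ones.
revenue = [0.8, 0.1, 0.3, 0.5]
income = [0.7, 0.2, 0.4, 0.4]
unrelated = [-0.5, 0.9, -0.1, 0.0]
```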