Astronaut AI —
AI-powered medical assistant for space missions that guides astronauts through symptom-based queries using a space-adapted medical knowledge base. Built with Python, Streamlit, and the Gemini AI API.
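The symptom-query flow over a knowledge base can be sketched roughly as below; the knowledge-base entries and the `lookup_guidance` helper are illustrative placeholders, not the project's real space-medicine data:

```python
# Minimal sketch of symptom-based retrieval over a small medical
# knowledge base; entries are made-up examples, not real medical advice.
SPACE_MEDICAL_KB = {
    "headache": "Check cabin CO2 levels; hydrate and rest.",
    "nausea": "Possible space adaptation syndrome; follow anti-emetic protocol.",
    "dizziness": "Possible fluid shift; monitor blood pressure.",
}

def lookup_guidance(query: str) -> list[str]:
    """Return guidance for every known symptom mentioned in the query."""
    q = query.lower()
    return [advice for symptom, advice in SPACE_MEDICAL_KB.items() if symptom in q]
```

In the actual app, matched entries would presumably be passed to the Gemini API as grounding context rather than shown verbatim.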
MindRescue AI —
AI-powered mental health support assistant that detects distress levels and escalates high-risk cases to emergency services. Built with Flask for the backend, DeepSeek API for AI-generated responses, and Leaflet to map nearby mental health centers.
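The distress-detection-and-escalation logic might look something like the following sketch; the keyword lists and thresholds are invented stand-ins (the real project relies on the DeepSeek API for this):

```python
# Illustrative keyword-based distress triage; terms and tiers are
# placeholders, not the project's actual risk model.
HIGH_RISK_TERMS = {"suicide", "self-harm", "hurt myself"}
MODERATE_TERMS = {"hopeless", "panic", "can't cope"}

def assess_distress(message: str) -> str:
    """Classify a message as 'high', 'moderate', or 'low' risk."""
    text = message.lower()
    if any(term in text for term in HIGH_RISK_TERMS):
        return "high"      # escalate: emergency services + nearby centers map
    if any(term in text for term in MODERATE_TERMS):
        return "moderate"  # surface coping resources
    return "low"           # normal supportive conversation
```

A "high" result is what would trigger the escalation path and the Leaflet map of nearby mental health centers.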
SpeakFlow —
A multilingual business communication assistant leveraging IBM’s Granite models to deliver real-time voice translation and sentiment analysis. The UI and dashboard are built with React.
Voice Emotion Detection —
AI-powered voice emotion detection system built with Flutter for the front end and TensorFlow/Keras for the deep learning backend. Used the Gradio web interface for audio input, enabling users to upload or record speech for analysis.
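The final classification step of such a system can be sketched as mapping the model's raw logits to an emotion label; the label set here is an assumed example, not necessarily the one the project uses:

```python
import math

EMOTIONS = ["angry", "happy", "neutral", "sad"]  # assumed example label set

def softmax(logits: list[float]) -> list[float]:
    """Convert raw logits to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_emotion(logits: list[float]) -> tuple[str, float]:
    """Return the top emotion and its probability from model logits."""
    probs = softmax(logits)
    i = max(range(len(probs)), key=probs.__getitem__)
    return EMOTIONS[i], probs[i]
```

In the real app, the logits would come from the TensorFlow/Keras model's output layer after the Gradio interface captures and preprocesses the audio.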
Generative AI
Kaggle & Gen AI Competition —
Legalight AI Assistant: RAG-powered legal document analysis using Streamlit and the Gemini API.
Hugging Face Practice Projects —
Python tutorials and practice notebooks on Google Colab, deployed as interactive Hugging Face apps.
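RAG pipelines like the Legalight assistant typically start by splitting documents into overlapping chunks before embedding and retrieval; a minimal sketch (the sizes and helper name are illustrative, not from the project):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character chunks for embedding.

    Overlap preserves context across chunk boundaries so retrieval
    doesn't miss clauses that straddle a split point.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Retrieved chunks would then be passed to the Gemini API as context alongside the user's legal question.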
Generative AI Application Developer Top Performer Award —
Recognized as a Top Performer in the Application Developer track of PEC’s Generative AI course (PEC x PakAngels, Silicon Valley, USA, 2025). View LinkedIn Announcement
Computer Vision Practice Projects
YOLOv5 Object Detector —
Developed a web application for real-time object detection using the pre-trained YOLOv5s model from Ultralytics. Integrated with a Gradio interface for user-friendly image uploads and leveraged PyTorch for efficient inference.
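After inference, YOLO-style detectors return many candidate boxes that need filtering by confidence before display; a small sketch of that post-processing step (the tuple layout and threshold here are illustrative):

```python
def filter_detections(detections, conf_threshold: float = 0.25):
    """Keep detections above a confidence threshold, highest first.

    Each detection is assumed to be (label, confidence, (x1, y1, x2, y2)),
    an illustrative layout rather than YOLOv5's exact output format.
    """
    kept = [d for d in detections if d[1] >= conf_threshold]
    return sorted(kept, key=lambda d: d[1], reverse=True)
```

The filtered boxes would then be drawn onto the uploaded image and returned through the Gradio interface.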
Context-Aware Multimodal Assistant (bart-large-cnn) —
Built a multimodal assistant that detects user stress from voice recordings and facial images to simplify tasks and messages during cognitive overload. Utilizes the facebook/bart-large-cnn model from Hugging Face Transformers for text summarization and Gradio for the interactive interface. The stress detection modules are placeholders that can be swapped for real pretrained models in the future.
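The placeholder stress-detection interface described above might be sketched like this; the feature inputs, threshold, and truncation stand-in are all invented for illustration (the real summarization would go through bart-large-cnn):

```python
# Placeholder stress-detection interface; a real system would swap in
# pretrained audio and vision models behind the same function signature.
def detect_stress(voice_scores: list[float], face_scores: list[float]) -> bool:
    """Flag stress when either modality's mean score exceeds a threshold."""
    def mean(xs: list[float]) -> float:
        return sum(xs) / len(xs) if xs else 0.0
    return mean(voice_scores) > 0.5 or mean(face_scores) > 0.5

def route_message(message: str, stressed: bool) -> str:
    """When stressed, the message would be summarized; truncation here
    is only a stand-in for the bart-large-cnn summarization call."""
    if stressed and len(message) > 50:
        return message[:50] + "..."
    return message
```

Keeping stress detection behind a single boolean-returning function is what makes the placeholder modules "easily replaceable" later.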
MobileNetV2 Animal Classifier —
Developed an animal image classification app using a MobileNetV2 deep learning model trained on a labeled dataset. Deployed as an interactive web app with Gradio, utilizing TensorFlow/Keras for model training and Python libraries like Pillow and NumPy for image processing.
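One concrete preprocessing detail for MobileNetV2 is that input pixels must be rescaled from 0–255 to the [-1, 1] range the network expects; a minimal sketch of that step (operating on a flat list for simplicity, where the real app works on NumPy arrays):

```python
def scale_pixels(pixels: list[float]) -> list[float]:
    """Rescale 0-255 pixel values to [-1, 1], as MobileNetV2 expects.

    Mirrors the behavior of Keras's preprocess_input for MobileNetV2,
    but on a plain list rather than a NumPy array.
    """
    return [p / 127.5 - 1.0 for p in pixels]
```

In the app itself, Pillow handles decoding/resizing the upload and NumPy applies this scaling over the whole image array at once.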
Plant Disease Identifier (in progress) —
A YOLOv5 model custom-trained on the Ultralytics plant diseases dataset. The app analyzes uploaded leaf images to identify diseases, classify severity, and report confidence scores. Built with Python and Gradio. Also experimenting with an ImageNet-pretrained ViT model for plant disease classification.
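Severity classification from detections could work by aggregating how much of the leaf the detected lesions cover; the cutoffs and labels below are assumed for illustration, not the project's actual scheme:

```python
def severity_from_detections(lesion_areas: list[float]) -> str:
    """Map per-lesion leaf-area fractions (0-1) to a severity label.

    Cutoffs are illustrative placeholders, not agronomic standards.
    """
    total = min(sum(lesion_areas), 1.0)  # cap at full leaf coverage
    if total == 0:
        return "healthy"
    if total < 0.1:
        return "mild"
    if total < 0.3:
        return "moderate"
    return "severe"
```

Each lesion's area fraction would come from the YOLOv5 bounding box relative to the segmented leaf region.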