Hi There! 👋🏻

I'M ABHISHEK SAGAR SANDA

Computer Vision Engineer

LET ME INTRODUCE MYSELF

Building the Future Through AI Innovation

Welcome to my portfolio! I'm Abhishek Sagar Sanda, a Graduate AI Engineer specializing in LLM applications and computer vision, passionate about building intelligent systems that transform how we interact with technology.

Currently, I'm pursuing my Master's in Information Systems at Northeastern University (GPA: 3.85) and serving as a Teaching Assistant for advanced generative AI coursework. My journey is marked by recognition as a Top-10 Finalist in the Murf.AI Coding Challenge and as Winner of Northeastern's Roli.AI Hackathop.

My expertise lies in fine-tuning state-of-the-art models like YOLOv8 and GPT-4 for multimodal applications, building production-ready RAG pipelines, and developing full-stack AI systems. I've engineered solutions that achieve 85% detection accuracy on 70,000+ images, reduce response latency by up to 50%, and process 80,000+ documents for real-time semantic search.

What drives me is creating AI solutions that enhance human capabilities—from building voice-based interview coaching systems with intelligent feedback, to developing RAG-powered chatbots that understand context across massive knowledge bases, to implementing reinforcement learning trading systems that make complex financial strategies accessible. Each project balances cutting-edge research with practical, scalable implementation.

With professional experience spanning research at Virtual Presenz, enterprise development at HCL Technologies, and now teaching at Northeastern, I've learned to transform complex AI concepts into intuitive, impactful products. I'm always excited to explore new challenges in generative AI, computer vision, and NLP.

Explore my work below, and let's connect to discuss how AI can solve your next challenge!

FIND ME ON

Feel free to connect with me

Know Who I Am

Hi Everyone, I am Abhishek Sagar Sanda, currently based in Boston, MA, and pursuing my Master's at Northeastern University.

I'm a Graduate AI Engineer specializing in LLM applications and computer vision, with hands-on experience fine-tuning YOLOv8 and GPT-4 for multimodal detection and building production-ready RAG pipelines. Currently serving as a Teaching Assistant at Northeastern University, where I facilitate advanced generative AI coursework and mentor 50+ graduate students on cutting-edge AI concepts.

My journey in AI innovation is marked by significant achievements: I'm a Top-10 Finalist in the Murf.AI Coding Challenge and Winner of Northeastern's Roli.AI Hackathon. My work spans from developing full-stack AI-powered systems like an Interview Coaching IVR platform with 40% reduced latency, to building RAG-powered chatbots that index and query 80,000+ web pages in real time.

As a Research Software Engineer Intern at Virtual Presenz, I fine-tuned YOLOv8 and GPT-4 for multimodal applications, achieving 85% detection accuracy on 70,000+ training images and reducing manual review time by 60% through automated labeling pipelines. I also built context-aware chatbot systems that reduced response latency by 50%, demonstrating my ability to deliver impactful, production-grade AI solutions.

Previously at HCL Technologies, I delivered secure, scalable enterprise applications achieving 90%+ project efficiency while reducing support tickets by 10% and security vulnerabilities by 20%. My technical expertise spans Python, PyTorch, HuggingFace Transformers, LangChain, and modern web technologies, enabling me to build complete, end-to-end AI systems that solve real-world problems.

"Strive to build things that make a difference!"

Abhishek Sagar

Professional Skillset

AI/ML

NLP

Computer Vision

Reinforcement Learning

Transformer Models

LLMs

PyTorch

TensorFlow

OpenCV

YOLO

Neural Translation

MediaPipe

C

JavaScript

Python

Node.js

SQL

MongoDB

Next.js

Django

Angular

C#

Arduino

Raspberry Pi

Data Visualization

Java

Tools I use

Days I Code

537 contributions in the last year


My Recent Works

Showcasing cutting-edge AI projects spanning LLM applications, computer vision, and full-stack systems. Each project demonstrates production-ready solutions with measurable impact.

AI-Powered Interview Coaching IVR System

Developed a production-grade, full-stack AI-powered IVR platform using Node.js, Express, PostgreSQL, and Twilio, enabling real-time, voice-based interview simulations and automated feedback for mock interview sessions. The system features secure JWT authentication, RESTful APIs, and seamless integration of OpenAI GPT-4 and MurfAI services for intelligent question generation and text-to-speech capabilities. Optimized API performance to reduce average response latency by 40%, resulting in a scalable, production-ready system deployed on Railway. The platform demonstrates advanced capabilities in voice AI, natural language understanding, and automated assessment, providing candidates with realistic interview practice and constructive feedback.

 GitHub  Demo
Northeastern University Assistant v2.0 🚀

**Live Demo Available!** An advanced AI-powered university assistant featuring RAG (Retrieval-Augmented Generation) over 80,000+ scraped web pages, built with Python, FastAPI, Scrapy, and ChromaDB for real-time natural-language Q&A.

**Key Features from Demo:**
• 🎓 Academic Programs & Admissions Guidance
• 💼 Co-op & Career Opportunities Support
• 🏠 Campus Life & Housing Information
• 💰 Financial Aid & Scholarships Help
• 🌍 International Student Services
• 📊 Real-time System Status & Analytics
• 💬 Interactive Chat with Popular Questions
• 📝 User Feedback & Rating System

**Technical Implementation:** Engineered a robust data pipeline for semantic document indexing, automated data management, and production-ready deployment. The modern glassmorphism UI delivers an intuitive conversational AI experience, making vast university resources instantly accessible to students, faculty, and prospective applicants.
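The retrieval half of a RAG pipeline like this can be sketched without the full stack: rank indexed documents by vector similarity to the query and hand the top hits to the LLM. A minimal illustration with toy 3-dimensional vectors (the document names and embeddings here are invented for the example; the real system uses ChromaDB with learned embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    """Rank indexed documents by cosine similarity to the query vector."""
    ranked = sorted(index, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

# Toy 3-dim "embeddings"; a real pipeline would use a sentence encoder + a vector DB.
index = [
    {"text": "Co-op program overview", "vec": [0.9, 0.1, 0.0]},
    {"text": "Housing options near campus", "vec": [0.0, 0.2, 0.9]},
    {"text": "Co-op application deadlines", "vec": [0.8, 0.3, 0.1]},
]

hits = retrieve([1.0, 0.2, 0.0], index, k=2)
```

The retrieved snippets would then be stuffed into the LLM prompt as grounding context for the answer.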

 GitHub  Demo
AI-Powered Richard Wyckoff Trading Assistant

Built a full-stack Wyckoff Trading Assistant using Flask, PyTorch, and Chart.js, featuring a transformer-based chatbot (8-head, 6-layer model trained on 1,189 Q&A pairs), real-time market analytics, 6+ technical indicators, and 3 REST APIs with robust error handling and responsive Bootstrap UI. Implemented a Q-learning reinforcement learning backtesting engine with ε-greedy strategy over 1,000 training episodes, achieving 15% improved ROI prediction accuracy. Optimized performance through lazy loading and caching, reducing API response time by 40% for scalable trading experiments. The system enables traders to backtest strategies across any stock symbol, visualize performance metrics, and analyze trading signals through an intuitive dashboard with interactive charts.
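The backtesting engine's core, as described, is a tabular Q-learning update driven by an ε-greedy policy. A minimal sketch of those two pieces (the two-state market encoding and the hyperparameters below are illustrative, not the project's actual features):

```python
import random

def epsilon_greedy(q_row, epsilon, rng):
    """Pick a random action with probability epsilon, else the greedy (max-Q) action."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_row))
    return max(range(len(q_row)), key=q_row.__getitem__)

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state])
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Two toy states ("uptrend", "downtrend") x two actions (hold, buy), all-zero start.
q = [[0.0, 0.0], [0.0, 0.0]]
q_update(q, state=0, action=1, reward=1.0, next_state=1)
```

A backtest loops this update over historical price episodes, decaying ε so the agent shifts from exploration to exploiting its learned table.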

 GitHub
Neural Machine Translation for Low-Resource Languages

Trained a PyTorch-based Transformer model to translate low-resource language pairs (English-Manipuri) with custom tokenization using BPE (Byte Pair Encoding) and SentencePiece algorithms. The system addresses the unique challenges of working with non-Latin scripts (Meetei Mayek) and limited training data. Employed advanced techniques including prompt tuning and dataset augmentation to handle ambiguous source structures and improve translation quality. The model demonstrates the practical applications of transformer architectures for low-resource language pairs, showcasing how custom tokenization strategies can overcome challenges in multilingual NLP systems.
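The BPE side of the tokenization can be illustrated in a few lines: count the most frequent adjacent symbol pair in the corpus and merge it into a new symbol, repeating until the vocabulary is built. A toy single-merge sketch (the corpus and frequencies are invented; the project itself trains with SentencePiece on English-Manipuri data):

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across the corpus; words maps symbol tuples to counts."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    """Replace every adjacent occurrence of `pair` with a single merged symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Toy corpus: character-level words with frequencies.
words = {tuple("hug"): 10, tuple("pug"): 5, tuple("hugs"): 4}
pair = most_frequent_pair(words)   # ("u", "g") appears 19 times, the unique maximum
words = merge_pair(words, pair)
```

Repeating this merge loop a few thousand times yields the subword vocabulary; for non-Latin scripts like Meetei Mayek, operating on raw characters this way avoids hand-built word segmentation.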

 GitHub
LinguaVision - Multilingual Chatbot with Image Generation

LinguaVision is an innovative educational platform that revolutionizes language acquisition by seamlessly integrating neural machine translation with real-time image generation. This cutting-edge system transforms the traditional language learning experience by providing immediate visual representations of translated content, creating a powerful cognitive connection between verbal concepts and their visual counterparts. The application leverages state-of-the-art AI technologies including Helsinki-NLP's multilingual translation model (opus-mt-mul-en) and Stability AI's diffusion-based image generation capabilities (stable-diffusion-2-1). Built with Gradio and Hugging Face libraries, LinguaVision offers an intuitive interface that makes advanced AI technology accessible to language learners across proficiency levels. By bridging the gap between textual understanding and visual context, LinguaVision creates a more immersive, engaging, and effective language learning environment that accommodates diverse learning styles and accelerates comprehension of new linguistic concepts.

 GitHub  Demo
Abhishek Sagar's Dynamic Chatbot (Roli.AI Hackathon Winner)

Developed an award-winning dynamic chatbot system that represents an innovative approach to personalized AI interaction through adaptive customization. As the Winner of Northeastern's Roli.AI Hackathon, this project demonstrates cutting-edge conversational AI capabilities. The system employs a three-question configuration process to establish user preferences, creating a tailored conversational experience within defined parameters. Key innovations include preference-based customization, intelligent boundary recognition, and seamless customer support integration. Built using the Roli.ai development environment with JavaScript and ChatGPT API integration, the solution balances personalization with practical functionality. The chatbot adapts its behavior, specialization area, and tone dynamically, creating responsive, user-centric conversational experiences that showcase effective application of contemporary AI technologies.

 GitHub  Demo
DispatchGenius

DispatchGenius is a specialized logistics platform built on the JavaFX framework, designed specifically for international students to manage deliveries and gift-sending services. The technical architecture comprises:
• Frontend Development: JavaFX UI components styled with CSS
• Development Environment: Eclipse IDE for Java development
• Modular Architecture: Functional modules for core operations, user management, and database interactions
• Data Management: Database-backed storage for user data, delivery information, and scheduling details
• Scheduling System: A scheduling algorithm for optimizing delivery timelines
• Feedback Management: Infrastructure for collecting and processing user feedback

The application follows design patterns that prioritize scalability and efficiency while maintaining a lightweight footprint. The implementation focused on a cost-effective solution with minimal technical overhead, with plans for future integration of real-time tracking through API connections.

 GitHub  Demo
EatWise

EatWise is a dietary management system built on robust computer science principles, with an architecture optimized for efficient nutritional data processing. The technical implementation includes:
• Data Structure Integration: Balanced binary search trees (BSTs), hash maps, and specialized graphs for O(log n) food-item retrieval and relationship mapping
• Object-Oriented Architecture: Modular design using inheritance and polymorphism to separate nutritional analysis, user data management, and recommendation systems
• Specialized Algorithms: Custom implementations of Nutri Match (similarity metrics and nearest-neighbor search) and Nutri Sort (comparison-based sorting with multiple weighted attributes)
• Performance Optimization: Caching mechanisms and efficient memory management for real-time nutritional calculations over large nutritional databases
• Testing Framework: Unit and integration testing with documented improvements in data retrieval speed and recommendation accuracy

The system architecture prioritizes extensibility, allowing for the planned integration of machine learning components while maintaining the performance characteristics necessary for responsive dietary management.
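The Nutri Match idea, nearest-neighbor search over nutrient vectors with attribute weighting, can be sketched in a few lines. The foods, nutrient tuples, and weights below are invented for illustration, not EatWise's actual data:

```python
import math

def similarity_score(a, b, weights):
    """Weighted Euclidean distance between two nutrient vectors (lower = more similar)."""
    return math.sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)))

def nutri_match(target, foods, weights, k=1):
    """Return the names of the k foods whose nutrient profiles are nearest to `target`."""
    ranked = sorted(foods, key=lambda f: similarity_score(target, f["nutrients"], weights))
    return [f["name"] for f in ranked[:k]]

# Nutrient vectors: (calories/100, protein_g, fat_g); weights emphasize protein.
foods = [
    {"name": "chicken breast", "nutrients": (1.65, 31.0, 3.6)},
    {"name": "white rice",     "nutrients": (1.30, 2.7, 0.3)},
    {"name": "tofu",           "nutrients": (0.76, 8.0, 4.8)},
]
match = nutri_match((1.5, 28.0, 4.0), foods, weights=(1.0, 2.0, 1.0), k=1)
```

Swapping the linear scan for a k-d tree or similar index is what would give the O(log n) retrieval the description mentions at scale.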

 GitHub  Demo
Cyber Cuisine Ordering Solution

Cyber Cuisine Ordering Solution is a full-stack web application built with modern JavaScript technologies for online food ordering and delivery management. The technical implementation features:
• Frontend Framework: React.js with React Router for client-side routing and navigation between multiple pages
• Authentication System: User authentication with role-based access control, providing different interfaces for customers, administrators, and delivery partners
• State Management: React's Context API (or a state-management library) to maintain user session information across the application
• Backend Integration: MongoDB for persistent storage of user data, orders, menu items, and delivery information
• Responsive Design: Modern CSS frameworks or custom styling for cross-device compatibility
• Multi-Role Architecture: Specialized dashboards with different functionality for administrators and delivery partners
• RESTful API Communication: API services handling data exchange between the frontend and the MongoDB backend

The application follows modern web development practices with a component-based architecture for maintainability and scalability, providing specialized interfaces tailored to each user role within the food-delivery ecosystem.

 GitHub
Acadamify

Led the creation of an intuitive Java-based education platform, focusing on object-oriented design for top-notch software quality. Boosted user satisfaction and registrations by enhancing UI and functionality. Introduced a secure authentication system, elevating user satisfaction by 20% and platform usability. The comprehensive platform attracted 40% more users, simplifying professor registration and course management for a 25% boost in administrative efficiency.

 GitHub
Weapon Detection

The Weapon Detection Project is a computer-vision security solution that leverages deep learning for real-time identification of weapons in video streams. The technical implementation includes:
• Deep Learning Framework: PyTorch for neural-network implementation and training
• Computer Vision Processing: OpenCV for video capture, preprocessing, and image manipulation
• Object Detection Model: YOLO (You Only Look Once) architecture for real-time object detection, providing both classification and localization of weapons
• Real-time Processing Pipeline: Optimized frame processing for immediate threat detection with minimal latency
• Inference Optimization: Techniques such as model quantization or hardware acceleration to improve processing efficiency
• Python Backend: Python as the core language for both model development and application integration

The single-pass detection approach analyzes each video frame in one forward pass, allowing prompt identification of potential weapons across diverse environmental conditions while maintaining processing efficiency. This architecture makes the system suitable for integration with existing security infrastructure in public spaces, educational institutions, or commercial venues.
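The post-processing stage of a YOLO-style pipeline is compact enough to sketch directly: filter raw detections by confidence, then apply non-maximum suppression (NMS) to collapse overlapping boxes onto a single detection. The boxes and thresholds below are toy values, not the project's actual configuration:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, conf_thresh=0.5, iou_thresh=0.45):
    """Drop low-confidence boxes, then greedily suppress overlapping duplicates."""
    boxes = sorted((d for d in detections if d["conf"] >= conf_thresh),
                   key=lambda d: d["conf"], reverse=True)
    kept = []
    for det in boxes:
        if all(iou(det["box"], k["box"]) < iou_thresh for k in kept):
            kept.append(det)
    return kept

# Two overlapping "weapon" detections plus one below the confidence threshold.
dets = [
    {"box": (10, 10, 60, 60), "conf": 0.9},
    {"box": (12, 12, 62, 62), "conf": 0.8},      # near-duplicate of the first
    {"box": (200, 200, 250, 250), "conf": 0.3},  # filtered out by confidence
]
kept = nms(dets)
```

In a live system this runs per frame on the model's raw output, so tuning the two thresholds trades missed weapons against duplicate alerts.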
